Dataset schema (field, dtype, observed string-length range):

| field | dtype | min length | max length |
|---|---|---|---|
| id | string | 10 | 10 |
| title | string | 5 | 246 |
| abstract | string | 42 | 3.32k |
| authors | string | 5 | 21.5k |
| published_date | timestamp[s] | n/a | n/a |
| link | string | 33 | 34 |
| markdown | string | 140 | 1.08M |
| abstract_ja | string | 0 | 1.35k |
2309.03189
Polarization of Thermal Dilepton Radiation
The spectra of dileptons radiated from the fireballs formed in high-energy heavy-ion collisions have been successfully used to investigate key properties of hot and dense QCD matter. In this paper we study polarization observables which have thus far received little attention. Microscopic calculations of in-medium electromagnetic spectral functions have mostly focused on integrated yields which are proportional to the sum of the longitudinal and transverse components of the virtual photon's selfenergy. Photon polarization results from the difference of these components, which in general does not vanish for lepton pairs at finite three-momentum relative to the heat bath (and is maximal for fully transverse real photons). Using a model that successfully describes dilepton spectra in heavy-ion collisions, with hadronic emission via medium-modified vector mesons and quark-antiquark annihilation constrained by lattice QCD, we compute polarization observables in different dilepton mass bins and confront them with data of the HADES and NA60 experiments.
Florian Seck, Bengt Friman, Tetyana Galatyuk, Hendrik van Hees, Enrico Speranza, Ralf Rapp, Jochen Wambach
2023-09-06T17:52:12
http://arxiv.org/abs/2309.03189v1
# Polarization of Thermal Dilepton Radiation ###### Abstract The spectra of dileptons radiated from the fireballs formed in high-energy heavy-ion collisions have been successfully used to investigate key properties of hot and dense QCD matter. In this paper we study polarization observables which have thus far received little attention. Microscopic calculations of in-medium electromagnetic spectral functions have mostly focused on integrated yields which are proportional to the sum of the longitudinal and transverse components of the virtual photon's selfenergy. Photon polarization results from the difference of these components, which in general does not vanish for lepton pairs at finite three-momentum relative to the heat bath (and is maximal for fully transverse real photons). Using a model that successfully describes dilepton spectra in heavy-ion collisions, with hadronic emission via medium-modified vector mesons and quark-antiquark annihilation constrained by lattice QCD, we compute polarization observables in different dilepton mass bins and confront them with data of the HADES and NA60 experiments. Measurements of electromagnetic (EM) radiation from high-energy heavy-ion collisions have provided unprecedented insights into the properties of Quantum Chromodynamics (QCD) matter formed in these reactions. Over the last decade, a rather consistent picture has emerged in interpreting the observed dilepton spectra. At low invariant masses, commonly taken as \(M\lesssim 1\,\)GeV, thermal radiation mostly emanates from the hadronic medium of the fireball evolution, with a strongly broadened \(\rho\)-meson peak indicating an ultimate melting and transition into a continuum of partonic degrees of freedom [1; 2]. Similar findings have also been reported at the higher energies of the Relativistic Heavy-Ion Collider (RHIC) [3] and the lower energies of the Schwerionensynchrotron (SIS18) [4]. On the other hand, in the intermediate-mass region (IMR), \(1\,\text{GeV}\lesssim M\lesssim 3\,\)GeV, the radiation contribution is strongly weighted toward early phases, which, at least for collision energies of \(\sqrt{s}\gtrsim\!\!10\,\text{GeV}\), has been associated with partonic radiation sources [5], with temperatures well above the pseudocritical one obtained from lattice QCD, \(T_{\text{pc}}\simeq\!155\)-\(160\,\text{MeV}\)[6; 7]. Pertinent transverse-momentum (\(p_{T}\)) spectra corroborate these findings: The NA60 collaboration established that the well-known blue-shift effect due to a collectively expanding source is much less pronounced in the IMR compared to the low-mass region (LMR), implying earlier emission for the former compared to the latter. Successful model descriptions of dilepton data have largely relied on hadronic many-body theory, where the predicted melting of the \(\rho\)-meson rather seamlessly transits into a structureless quark-antiquark continuum [8], albeit with substantial enhancements over the free \(q\bar{q}\) rate toward low masses. However, the precise micro-physics underlying the strongly coupled QCD liquid in the transition regime remains a matter of debate. Therefore, further tests of the existing model calculations would be very valuable. In this letter we provide such a test in a first quantitative application to spin-polarization observables of low-mass dileptons in heavy-ion experiments. 
The key quantity in our study is the EM emissivity of thermal QCD matter which is determined by the correlator of the EM current, schematically written as a thermal expectation value, \(\Pi^{\mu\nu}_{\text{EM}}=\langle j^{\mu}_{\text{EM}}j^{\nu}_{\text{EM}}\rangle_{T}\), which can also be interpreted as the in-medium photon selfenergy. The pertinent spectral function, \(\varrho^{\mu\nu}_{\text{EM}}=-2\text{Im}\Pi^{\mu\nu}_{\text{EM}}\), figures in the dilepton emission rate as \[\frac{dN_{ll}}{d^{4}x\,d^{4}q}=\frac{\alpha^{2}L(M)}{6\pi^{3}M^{2}}f^{B}(q_{0};T)g_{\mu\nu}\varrho^{\mu\nu}_{\text{EM}}(M,|\vec{q}|;T,\mu_{B}) \tag{1}\] where \(M=\sqrt{q_{0}^{2}-\vec{q}^{2}}\) denotes the dilepton invariant mass, \(f^{B}(q_{0};T)=1/(e^{q_{0}/T}-1)\) the thermal Bose function, \(\alpha\simeq 1/137\) the fine-structure constant and \(L(M)\) a lepton phase-space factor (\(L(M)=1\) for lepton masses \(m_{l}\ll M\)). Using the standard 4D projectors for a spin-1 particle, \(P^{\mu\nu}_{L,T}\), one can decompose the spectral function into its longitudinal and transverse components as [9; 10] \[\varrho^{\mu\nu}_{\text{EM}}=\varrho_{L}P^{\mu\nu}_{L}+\varrho_{T}P^{\mu\nu}_{T}\;, \tag{2}\] rendering \(g_{\mu\nu}\varrho^{\mu\nu}_{\text{EM}}=\varrho_{L}+2\varrho_{T}\). At vanishing 3-momentum in the heat bath one has \(\varrho_{T}=\varrho_{L}\), but at finite \(|\vec{q}|\) this no longer holds as spherical symmetry is broken. Angular dependencies in the dilepton production rate can be unravelled by resolving the angle, \(\Omega_{l}=(\phi_{l},\theta_{l})\)
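The rate formula above lends itself to a quick numerical illustration. The sketch below is a minimal evaluation of Eqs. (1)-(2) in natural units (everything in GeV); the Breit-Wigner-like spectral components and the mild transverse-longitudinal splitting are purely hypothetical placeholders standing in for the paper's microscopic in-medium spectral functions.

```python
import numpy as np

ALPHA = 1.0 / 137.0  # fine-structure constant

def bose(q0, T):
    """Thermal Bose function f^B(q0; T) appearing in Eq. (1)."""
    return 1.0 / np.expm1(q0 / T)

def dilepton_rate(M, q, rho_L, rho_T, T, L=1.0):
    """dN_ll/(d^4x d^4q) of Eq. (1), with g_mn rho^mn_EM = rho_L + 2 rho_T (Eq. 2)."""
    q0 = np.hypot(M, q)  # q0 = sqrt(M^2 + |q|^2)
    return ALPHA**2 * L / (6.0 * np.pi**3 * M**2) * bose(q0, T) * (rho_L + 2.0 * rho_T)

# Hypothetical rho-meson-like placeholder for the spectral components (GeV units),
# with a small ad hoc T-L splitting at finite |q|; a real calculation must take
# rho_L and rho_T from the in-medium photon selfenergy.
M, q, T = 0.77, 0.5, 0.160
bw = M**2 * 0.77 * 0.15 / ((M**2 - 0.77**2) ** 2 + (0.77 * 0.15) ** 2)
rho_L, rho_T = 0.95 * bw, 1.05 * bw
print(dilepton_rate(M, q, rho_L, rho_T, T))
print(rho_T - rho_L)  # the T-L difference that drives photon polarization
```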
Dilepton spectra radiated from the fireballs formed in high-energy heavy-ion collisions have been used successfully to investigate the properties of hot and dense QCD matter. This paper studies polarization observables, which have so far received little attention. Microscopic calculations of in-medium electromagnetic spectral functions have concentrated on integrated yields, which are proportional to the sum of the longitudinal and transverse components. Polarization arises from the difference of these components, which in general does not vanish for lepton pairs at finite three-momentum relative to the heat bath, and is maximal for fully transverse real photons. Using a model that successfully describes dilepton spectra in heavy-ion collisions, with hadronic emission via medium-modified vector mesons and quark-antiquark annihilation constrained by lattice QCD, we compute polarization observables in different dilepton mass bins and confront them with data from the HADES and NA60 experiments.
2306.00205
Kinetic Friction of Structurally Superlubric 2D Material Interfaces
The ultra-low kinetic friction F_k of 2D structurally superlubric interfaces, connected with the fast motion of the incommensurate moir\'e pattern, is often invoked for its linear increase with velocity v_0 and area A, but never seriously addressed and calculated so far. Here we do that, exemplifying with a twisted graphene layer sliding on top of bulk graphite -- a demonstration case that could easily be generalized to other systems. Neglecting quantum effects and assuming a classical Langevin dynamics, we derive friction expressions valid in two temperature regimes. At low temperatures the nonzero sliding friction velocity derivative dF_k/dv_0 is shown by Adelman-Doll-Kantorovich type approximations to be equivalent to that of a bilayer whose substrate is affected by an analytically derived effective damping parameter, replacing the semi-infinite substrate. At high temperatures, friction grows proportional to temperature as analytically required by fluctuation-dissipation. The theory is validated by non-equilibrium molecular dynamics simulations with different contact areas, velocities, twist angles and temperatures. Using 6^{\circ}-twisted graphene on Bernal graphite as a prototype we find a shear stress of measurable magnitude, from 25 kPa at low temperature to 260 kPa at room temperature, yet only at high sliding velocities such as 100 m/s. However, it will linearly drop many orders of magnitude below measurable values at common experimental velocities such as 1 {\mu}m/s, a factor 10^{-8} lower. The low but not ultra-low "engineering superlubric" friction measured in existing experiments should therefore be attributed to defects and/or edges, whose contribution surpasses by far the negligible moir\'e contribution.
Jin Wang, Ming Ma, Erio Tosatti
2023-05-31T21:52:40
http://arxiv.org/abs/2306.00205v2
# Kinetic Friction of Structurally Superlubric 2D Material Interfaces ###### Abstract The ultra-low kinetic friction \(F_{\mathrm{k}}\) of 2D structurally superlubric interfaces, connected with the fast motion of the incommensurate moire pattern, is often invoked for its linear increase with velocity \(v_{0}\) and area \(A\), but never seriously addressed and calculated so far. Here we do that, exemplifying with a twisted graphene layer sliding on top of bulk graphite - a demonstration case that could easily be generalized to other systems. Neglecting quantum effects and assuming a classical Langevin dynamics, we derive friction expressions valid in two temperature regimes. At low temperatures the nonzero sliding friction velocity derivative \(\mathrm{d}F_{\mathrm{k}}/\mathrm{d}v_{0}\) is shown by Adelman-Doll-Kantorovich type approximations to be equivalent to that of a bilayer whose substrate is affected by an analytically derived effective damping parameter, replacing the semi-infinite substrate. At high temperatures, friction grows proportional to temperature as analytically required by fluctuation-dissipation. The theory is validated by non-equilibrium molecular dynamics simulations with different contact areas, velocities, twist angles and temperatures. Using \(6^{\circ}\)-twisted graphene on Bernal graphite as a prototype we find a shear stress of measurable magnitude, from 25 kPa at low temperature to 260 kPa at room temperature, yet only at high sliding velocities such as 100 m/s. However, it will linearly drop many orders of magnitude below measurable values at common experimental velocities such as 1 \(\mu\)m/s, a factor \(10^{-8}\) lower. The low but not ultra-low "engineering superlubric" friction measured in existing experiments should therefore be attributed to defects and/or edges, whose contribution surpasses by far the negligible moire contribution. keywords: structural superlubricity, twisted graphene, moire pattern, sliding friction Footnote †: journal: Journal of Low Temperature Physics ## 1 Introduction Structural superlubricity (SSL) is the phenomenon where the low temperature static friction \(F_{\mathrm{s}}\) - the minimal force required to start sliding - of a theoretically infinite crystal-crystal contact is identically zero [1; 2; 3]. In that state the kinetic friction \(F_{\mathrm{k}}\) - the force required to maintain steady-state sliding with velocity \(v_{0}\) - is nonzero, and believed to grow proportionally to \(v_{0}\) as \(v_{0}\to 0\). Incommensurate interfaces between 2D materials (incommensurability guaranteeing perfect cancellation of lateral forces) are good candidates for SSL, owing to their high in-plane stiffness, weak interlayer interaction, and small corrugations [4; 5; 6]. Since the first "experimental verification" in nanoscale graphite flakes at room temperature [7], experiments on SSL have been pursued in various directions with diverse foci, including large scales [8; 9], high speed [10; 11], low temperature [12], and hetero-structures [13; 14; 15; 16; 17]. Besides their intrinsic physical interest, superlubricity phenomena are of potential relevance in mechanical engineering, data storage and aerospace [18]. A first striking dichotomy one encounters in the literature is that while most experimental work reports kinetic friction, theoretical studies of SSL concentrate on static friction [19; 20], which is distinctly different [21]. 
As a result, kinetic friction, despite the abundance of data and its all-important energy dissipation significance, is only modestly modeled and analytically understood [22; 23; 24]. Early studies of incommensurate Frenkel-Kontorova (FK) chains showed that the kinetic friction of this one dimensional SSL system, whose static friction is nominally zero, may turn especially large when exciting phonons whose wavevector is the 1D equivalent of the inverse moire wavelength [25], a finding likely to carry over to 2D. Although general formulations of kinetic friction are long established [26; 27], there are currently no explicit calculations of \(F_{\rm k}\) for realistic SSL systems at a generic velocity. In particular, predicting kinetic friction from the structural and physical properties of an incommensurate, structurally lubric 2D material interface, where the swift "surfing" motion of the moire pattern is the only source of the extremely small dissipation, stands as an open question of basic significance. Here, using a twisted graphene layer on semi-infinite Bernal graphite as a demonstrative example, we derive analytical expressions for the kinetic friction force of structurally superlubric sliding, expected to be valid without any fitting or fudge parameters. Following a preliminary part, where the geometry and mechanical bases of the 2D twisted material interface are described (Sect. 2), our approach consists of four steps. In the first step, starting from a Langevin equation of motion for a bilayer, consisting of a sliding layer and a substrate layer mimicking a semi-infinite bulk, we analytically connect friction to the dissipation incurred by the moving moire pattern (Sect. 3). Besides its correct proportionality to velocity and to area, the result is also proportional to the substrate layer's Langevin damping constant \(\zeta\) - alas an arbitrary and uncontrolled parameter. In the second step (Sect. 4), we specialize the Adelman-Doll-Kantorovich basic surface scattering approach [26; 27] to calculate precisely the Langevin damping \(\zeta\), making that no longer an uncontrolled parameter. In the third step, we consider in Sect. 5 the opposite limit of high temperatures where friction is analytically determined by fluctuation-dissipation, and where a very useful heuristic interpolation from low to high temperatures is proposed. In Sect. 6 we validate these theoretical results by direct comparison with non-equilibrium molecular dynamics (MD) sliding simulations, first of a twisted graphene bilayer, and eventually of the twisted monolayer sliding on multilayer graphite, where the multilayer extrapolation technique of Benassi _et al._ [28] is implemented to obtain the true, damping-independent kinetic friction. A discussion concludes the paper in Sect. 7. As one could physically anticipate, the true, defect-and-edge-free structurally superlubric friction of a 2D material interface, whose frictional stress is due exclusively to the gossamer-like moire flight, is generally irrelevant. That is, it is of such small magnitude as to be practically inaccessible to measurement, at least in the commonest low-velocity sliding experiments. Probably for this reason the pure moire kinetic friction is often invoked but never calculated, let alone subjected to a theory-simulation comparison and discussion. That however does not diminish its importance. Calculating the SSL kinetic friction will provide an element of clarity, conceptual and practical. 
Conceptually, it is instructive to estimate the tiny friction elicited by moving discommensurations, quasiparticle-like entities that fly and dissipate energy in Stokes-like fashion inside a viscous medium. Practically, these results are required to substantiate (or to deny) the hunch that much of the supposedly superlubric kinetic friction data reported in the literature, of very measurable magnitude and not of linear velocity and area dependence, must in reality be of different origin, usually connected with stick-slip caused by a multiplicity of defects and edges rather than with the Stokes flight of the perfect moire [21]. As it will turn out, the hunch is vastly confirmed. ## 2 Modeling the moire corrugation pattern and mechanics A twisted graphene bilayer - lower layer the "substrate", upper layer the "slider" - is shown in Fig. 1, depicted at a twist angle \(\theta=6^{\circ}\) as an example. The relaxed structure (b) and (c) shows the large out-of-plane deformation of both layers concentrated at discommensurations - the most important tribological characteristics, stemming from the low bending stiffness of membrane-like 2D materials [29; 30]. Figure 1: Structure of twisted bilayer graphene. (a) Unrelaxed structure. The lattice constant \(\lambda\) and the direction of reciprocal vectors \(\mathbf{K}_{n}\) of the moiré lattice are illustrated. (b) The relaxed structure (\(\theta=6^{\circ}\)), where upper and lower layers are colored according to the out-of-plane displacement. (c) The cross-section along the AA-to-2nd nearest AA direction (dashed line in b). For clarity, the out-of-plane deformation in (b) and (c) is enlarged and the interlayer spacing is reduced. The real values are marked in the figure. The potential field experienced by an atom at point \((\mathbf{r},d)\) in the sliding layer can be modeled as \[U(\mathbf{r},d)=\frac{2U_{0}e^{\alpha(1-d/d_{0})}}{9}\sum_{n=1}^{3}\cos(\mathbf{K}_{n}\cdot\mathbf{r})+[-\varepsilon+k(d-d_{0})^{2}] \tag{1}\] The first term on the right-hand side describes the moire-modulated corrugation potential, where \(d\) is the interlayer distance at \(\mathbf{r}\), \(d_{0}\) is the equilibrium distance for the unrelaxed bilayer, \(U_{0}\) is the sliding energy barrier when \(d=d_{0}\), \(\alpha\) represents the decay rate of that barrier as \(d\) is increased, and \(\mathbf{K}_{n}\) is the reciprocal vector of the moire superlattice (\(n=1,2,3\)). Specifically, \(\mathbf{K}_{n}=\mathbf{k}_{n}-\mathbf{q}_{n}\), where \(\mathbf{k}_{n}\) and \(\mathbf{q}_{n}\) are the reciprocal vectors of the substrate and the upper slider. With rotation tensor \(\mathbf{R}(\theta)\), they are connected by \(\mathbf{q}_{n}=\mathbf{R}(\theta)\cdot\mathbf{k}_{n}\), where \(\theta\) is the twist angle. The second term represents interlayer adhesion, with \(\varepsilon\) the average adhesion energy per atom, and \(k\) its \(d\)-dependence. Using, purely for convenience, a 6-12 Lennard-Jones-type interlayer interaction with depth parameter \(\varepsilon\) and range \(d_{0}\) for our parameter representation, we have \(k=36\varepsilon/d_{0}^{2}\). A potential expression such as Eq. (1), generic but perfectly adequate for relatively small twist angles, was also validated in previous work [23]. Besides neglecting the very high harmonics connected with non-sinusoidality of the potential exerted by each layer, the additional assumptions made, both in agreement with simulations, are 1. The in-plane displacement of each atom away from the layer's center-of-mass is zero. 
That is justified by the large in-plane stiffness of 2D materials. 2. The maximum out-of-plane displacement, i.e., the moire height modulation amplitude \(H\), is much smaller than the interlayer distance \(d_{0}\). In graphene for example, \(H\leq(d_{\rm AA}-d_{\rm AB})/2\), whence \(H/d_{0}\leq 0.1/3.4\sim 3\%\)[31]. Based on Eq. (1), the \(z\)-direction force field exerted by the substrate on the slider \(F_{z}(\mathbf{r},d)=-\partial U/\partial d\) can be written as \[F_{z}(\mathbf{r},d)=\frac{2\alpha U_{0}e^{\alpha(1-d/d_{0})}}{9d_{0}}\sum_{n=1}^{3}\cos(\mathbf{K}_{n}\cdot\mathbf{r})+2k(d_{0}-d) \tag{2}\] Since \(|d-d_{0}|\ll d_{0}\), the contribution from the second (adhesion) term is negligible. Thus the monolayer out-of-plane deformation field can be approximated by \[w(\mathbf{r})=\frac{2\alpha U_{0}e^{\alpha(1-d/d_{0})}}{9k^{\prime}d_{0}}\sum_{n=1}^{3}\cos(\mathbf{K}_{n}\cdot\mathbf{r})=\frac{2H}{9}\sum_{n=1}^{3}\cos(\mathbf{K}_{n}\cdot\mathbf{r}) \tag{3}\] where \(k^{\prime}\) is an effective out-of-plane stiffness and \(H\) is the moire corrugation height. Thus, the interlayer distance between two layers (of opposite corrugation) is \[d(\mathbf{r})=\langle d\rangle+\frac{4H}{9}\sum_{n=1}^{3}\cos(\mathbf{K}_{n}\cdot\mathbf{r}) \tag{4}\] where \(\langle d\rangle\) is the equilibrium distance for the relaxed bilayer. For convenience, we define \(\delta=d_{0}-\langle d\rangle\) (\(0\leq\delta\ll d_{0}\)). In the next step, an explicit expression for \(H\) will be obtained from the balance between the interfacial adhesive work and the bending deformation energy during the structural relaxation (from the unrelaxed flat to the relaxed corrugated structure). ### Adhesive energy The adhesive energy per atom of the corrugated graphene sheet is \[E_{\rm adh}=\frac{1}{A_{\rm m}}\int_{\rm moire}U(\mathbf{r},d)\,{\rm d}A \tag{5}\] The integral is over the moire cell, of area \(A_{\rm m}=\sqrt{3}\lambda^{2}/2\), where the moire lattice constant is \(\lambda=\sqrt{3}a/\sqrt{2-2\cos\theta}\)[32], and \(a\) the bond length of graphene. Owing to the sixfold angular periodicity, the twist angle \(\theta\) can be extended from the domain \((0,\pi/6]\) to any angle. Based on the small deformation assumption, we obtain \[E_{\rm adh}\approx-\varepsilon-\frac{4\alpha U_{0}H}{27d_{0}}+\frac{32\varepsilon H^{2}}{3d_{0}^{2}} \tag{6}\] The first term is the adhesive energy for flat bilayers, the remaining ones describe the perturbation introduced by the out-of-plane moire corrugation. ### Bending energy The (per atom) bending energy cost of the monolayer moire corrugation is \[E_{\rm bend}=\frac{\kappa}{2N(1-\nu^{2})}\int_{\rm moire}\{(\frac{\partial^{2}w}{\partial x^{2}}+\frac{\partial^{2}w}{\partial y^{2}})^{2}+2(1-\nu)[(\frac{\partial^{2}w}{\partial x\partial y})^{2}-\frac{\partial^{2}w}{\partial x^{2}}\frac{\partial^{2}w}{\partial y^{2}}]\}\,{\rm d}A \tag{7}\] where \(N\), \(\kappa\) and \(\nu\) are the atom number, bending rigidity and Poisson's ratio of the monolayer. Substituting in Eq. (3), the above equation simplifies to \[E_{\rm bend}=\frac{64\pi^{4}\kappa a^{2}H^{2}}{27\sqrt{3}(1-\nu^{2})\lambda^{4}} \tag{8}\] ### Bulk slider elastic energy Most experimental studies and applications involving sliding of 2D material interfaces require them to be deposited or encapsulated [14; 15], rather than freestanding. The perpendicular substrate and sliding stage elasticity also limits out-of-plane deformations. 
To model this effect, also needed for our successive connection with sliding on thick graphite, we attach perpendicular springs (with spring constant \(k_{z}\)) to all atoms of substrate and slider to limit their out-of-plane deformation. The total elastic energy of monolayer graphene normalized to the area of a single atom is \[E_{\rm spring}=\frac{1}{A_{\rm m}}\int_{\rm moire}\frac{1}{2}k_{z}w^{2}\,{\rm d}A=\frac{1}{27}k_{z}H^{2} \tag{9}\] ### Total potential energy The energy per slider atom of the model bilayer can thus be written as \[\begin{split}U_{\rm tot}&=2\cdot(E_{\rm bend}+E_{\rm spring}+E_{\rm adh})\\ &=\frac{128\pi^{4}\kappa a^{2}H^{2}}{27\sqrt{3}(1-\nu^{2})\lambda^{4}}+\frac{2}{27}k_{z}H^{2}-2\varepsilon-\frac{8\alpha U_{0}H}{27d_{0}}+\frac{64\varepsilon H^{2}}{3d_{0}^{2}}\end{split} \tag{10}\] where the prefactor 2 accounts for the two layers. The right-hand side can be regarded as a quadratic function of \(H\), \(U_{\rm tot}=AH^{2}+BH+C\), where \[\begin{split}A&=\frac{128\pi^{4}a^{2}\kappa}{27\sqrt{3}(1-\nu^{2})\lambda^{4}}+\frac{2}{27}k_{z}+\frac{64\varepsilon}{3d_{0}^{2}}\\ B&=-\frac{8\alpha U_{0}}{27d_{0}}\\ C&=-2\varepsilon\end{split} \tag{11}\] Thus, the (real) moire height \(H\) of the slider corresponding to the minimum total potential energy is \[H=-\frac{B}{2A}=\frac{\alpha U_{0}}{\frac{32\pi^{4}\kappa a^{2}d_{0}}{\sqrt{3}(1-\nu^{2})\lambda^{4}}+\frac{k_{z}d_{0}}{2}+\frac{144\varepsilon}{d_{0}}} \tag{12}\] Here, the only twist angle-dependent parameter is the in-plane moire size \(\lambda(\theta)\). At large twist angles, \(\lambda\) is small and so is the corresponding corrugation. As \(\theta\to 0\), \(\lambda\) grows and eventually diverges as \(\theta^{-1}\), whence \(H\) saturates to \(\frac{\alpha U_{0}}{k_{z}d_{0}/2+144\varepsilon/d_{0}}\). We underline that Eq. (12) does not contain fudge or unknown quantities, all parameters being determined by the mechanical or structural properties of the material. For twisted bilayer graphene, the interlayer parameters are: sliding energy barrier \(U_{0}=14\) meV/atom, decay rate \(\alpha\approx 8.8\), interlayer distance \(d_{0}=3.4\) Å, interlayer attraction parameter \(\varepsilon=24.5\) meV/atom [33]. The intralayer parameters are: bending stiffness and Poisson's ratio \(\kappa=1.4\) eV and \(\nu=0.2\)[34], and bond length \(a=1.42\) Å [35]. The \(z\)-direction supporting spring stiffness \(k_{z}\), which reproduces the moire height in twisted bulk graphite, is \(k_{z}=0.33\) N/m. (That for a freestanding bilayer would of course be \(k_{z}=0\).) A direct comparison between our analytical moire corrugation \(H\) and molecular simulation results is shown in Fig. 2a (details for the simulation given in SI). Figure 2: Dependence of moiré corrugation amplitude on the twist angle in a graphene bilayer. (a) Comparison of the simulation (red) and theory (blue). The levelling off below \(\sim 5^{\circ}\) reflects the saturation of the corrugation magnitude to its maximum extent \((d_{\rm AA}-d_{\rm AB})/2\sim 0.11\) Å. (b) Theoretical corrugation for systems with 10 times larger bending stiffness (red), 8 times larger supporting stiffness (yellow), and 2 times larger interlayer adhesion energy (green). The comparison confirms that our theoretical model is quantitatively applicable to a wide range of twist angles, from 0.3 to 30 degrees (and of course beyond 30 degrees, owing to the sixfold angular periodicity). 
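As a numerical cross-check, Eq. (12) can be evaluated directly with the parameters just listed; the following is a minimal sketch (the eV/Å unit conversion of \(k_{z}\) is ours), whose output can be compared with the \(\sim 0.11\) Å saturation visible in Fig. 2a.

```python
import numpy as np

# Interlayer/intralayer graphene parameters quoted in the text (eV, Angstrom units)
U0, alpha = 0.014, 8.8        # sliding barrier (eV/atom), barrier decay rate
d0, eps   = 3.4, 0.0245       # interlayer distance (A), adhesion (eV/atom)
kappa, nu = 1.4, 0.2          # bending rigidity (eV), Poisson's ratio
a         = 1.42              # C-C bond length (A)
kz        = 0.33 * 0.062415   # 0.33 N/m converted to eV/A^2

def moire_size(theta):
    """Moire lattice constant lambda = sqrt(3) a / sqrt(2 - 2 cos(theta)), Sect. 2."""
    return np.sqrt(3.0) * a / np.sqrt(2.0 - 2.0 * np.cos(theta))

def corrugation_height(theta):
    """Moire corrugation height H of Eq. (12)."""
    lam = moire_size(theta)
    denom = (32.0 * np.pi**4 * kappa * a**2 * d0
             / (np.sqrt(3.0) * (1.0 - nu**2) * lam**4)
             + kz * d0 / 2.0 + 144.0 * eps / d0)
    return alpha * U0 / denom

print(corrugation_height(np.radians(6.0)))          # ~0.11 A at 6 degrees
print(alpha * U0 / (kz * d0 / 2 + 144 * eps / d0))  # theta -> 0 saturation, ~0.11 A
```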
It is worth noting that the sinusoidal deformation assumption is not strictly applicable to systems with a twist angle \(\theta\lesssim 3^{\circ}\), where the moire deformation strongly deviates from sinusoidal [36; 21], and the in-plane deformation becomes non-negligible [37]. Nevertheless, since the saturated out-of-plane corrugation at small twist angles originates from reduced interlayer attraction in the so-called AA regions, the in-plane deformation has a negligible effect, and our simplified model still provides a good estimate for \(H\). The result (Eq. 12) can also describe the corrugation magnitude of 2D materials other than twisted graphene. For different materials/constraints, e.g., larger bending stiffness \(\kappa\) (for transition-metal dichalcogenides), higher \(k_{z}\) (for adsorbed or multilayer systems), stronger binding energy \(\varepsilon\), the corrugation height and its relation to the twist angle are predicted and shown in Fig. 2b. It can be seen from the figure that higher bending stiffness reduces the moire corrugation at large twist angles, while larger \(k_{z}\) and \(\varepsilon\) reduce the corrugation at small twists. ## 3 Kinetic friction of a surfing moire: Langevin equation Considering a sliding process with velocity \(\mathbf{v}_{s}\), the moire out-of-plane displacement field becomes \[w(\mathbf{r})=\frac{2H}{9}\sum_{n=1}^{3}\cos[\mathbf{K}_{n}\cdot(\mathbf{r}-\mathbf{v}_{\rm moire}t)] \tag{13}\] where the moire surfing velocity can be written as [32]: \[\mathbf{v}_{\rm moire}=-\frac{2}{3}\sum_{n=1}^{3}\frac{(\mathbf{k}_{n}-\mathbf{q}_{n})\otimes\mathbf{q}_{n}}{|\mathbf{k}_{n}-\mathbf{q}_{n}|^{2}}\cdot\mathbf{v}_{s} \tag{14}\] The magnitude of the moire surfing velocity is related to the sliding velocity as \(|\mathbf{v}_{\rm moire}|=v_{0}\lambda/a_{\rm Gr}\), with \(v_{0}=|\mathbf{v}_{s}|\) the sliding speed of the upper layer and \(a_{\rm Gr}\) the layer's atomic lattice constant. As illustrated in Fig. 3, the angle \(\beta\) between the moire surfing direction and the atomic sliding direction is twist angle-dependent, \(\beta=(\pi-\theta)/2\). For \(\theta\to 0\), the moire moves nearly perpendicular to the sliding direction of the upper flake. We begin by describing the friction energy dissipated from the interface to the substrate monolayer by the empirical - yet fundamentally motivated - Langevin equation, which contains a phenomenological damping coefficient \(\zeta\). For each substrate atom, \[m_{i}\ddot{r}_{i,\alpha}=-\nabla_{i,\alpha}V_{\rm tot}-\zeta_{\alpha}\dot{r}_{i,\alpha}+R_{i,\alpha} \tag{15}\] where \(\alpha=x,y,z\), \(m_{i}\) and \(r_{i,\alpha}\) are the mass and position (along the \(\alpha\) direction) of the \(i\)-th substrate atom, \(V_{\rm tot}\) is the total potential energy of the system, and \(\zeta_{\alpha}\) and \(R_{i,\alpha}\) are the damping coefficient and random force along direction \(\alpha\). In the Markov approximation, the random force term satisfies the fluctuation-dissipation relation \(\langle R_{\alpha}(t)\rangle=0\) and \(\langle R_{\alpha}(t)R_{\alpha^{\prime}}(0)\rangle=2\zeta k_{\rm B}T\delta_{\alpha\alpha^{\prime}}\delta(t)\). Aiming first at the energy dissipation from the interface to the substrate, we assume zero temperature (thus the thermal noise term \(R=0\)). The influence of finite temperature and the neglect of quantum effects will be discussed in Sect. 5. Figure 3: (a) Twisted graphene. Sliding direction of the layer center-of-mass (red arrow) and of the moiré pattern (black arrow). 
For a center-of-mass sliding distance equal to the interatomic distance \(a\) (1.42 Å along \(\mathbf{v}_{s}\)), the larger displacement of the moiré is shown in (b). Here the twist angle is \(\theta=6^{\circ}\), \(\lambda=2.35\) nm, and \(\beta=87^{\circ}\). At steady state sliding, the kinetic friction of the system satisfies power conservation \[F_{\rm k}v_{0}=\sum_{\alpha}^{x,y,z}\zeta_{\alpha}\sum_{i=1}^{N}m_{i}\langle v_{i,\alpha}^{2}\rangle \tag{16}\] where the left hand side represents the input power and the right hand side the dissipated power (each \(\langle v_{i,\alpha}^{2}\rangle\) being proportional to \(v_{0}^{2}\) at \(T=0\)); \(N\) is the number of slider atoms. Assuming for a 2D material bilayer a highly anisotropic damping coefficient, \(\zeta_{z}\gg\zeta_{x}\sim\zeta_{y}\) (the reason for such an assumption will be further addressed in the next section), this can be simplified to \[F_{\rm k}=\frac{\zeta_{z}}{v_{0}}\sum_{i=1}^{N}m_{i}\langle v_{i,z}^{2}\rangle=\frac{m\zeta_{z}}{v_{0}A_{\rm C}\tau}\int_{\rm moire}\langle v_{z}^{2}\rangle\,\mathrm{d}A=\frac{m\zeta_{z}}{v_{0}A_{\rm C}\tau}\int_{\rm moire}\int_{0}^{\tau}v_{z}^{2}\,\mathrm{d}t\,\mathrm{d}A \tag{17}\] where \(A_{\rm C}=3\sqrt{3}a^{2}/4\) is the area per atom. Based on Eq. (13), the out-of-plane velocity is \(v_{z}=\mathrm{d}w/\mathrm{d}t\), and Eq. (17) can be further simplified to \[F_{\rm k}=\frac{c_{1}Nm\zeta_{z}H^{2}v_{0}}{a_{\rm Gr}^{2}} \tag{18}\] where \(c_{1}\) is a prefactor which depends on the out-of-plane moire structural corrugation shape. For the sinusoidal structure (Eq. 3), a good approximation for \(\theta\gtrsim 3^{\circ}\) systems, \(c_{1}=16\pi^{2}/81\). The result (Eq. 18) shows that the friction of a 2D superlubric slider is proportional to the atom number \(N\), therefore to the slider's contact area \(A=3\sqrt{3}Na^{2}/4\); proportional to the sliding velocity \(v_{0}\) (i.e., it is viscous); proportional to the square of the moire corrugation \(H(\theta)\); and finally, proportional to the damping \(\zeta_{z}\). This is as far as a traditional Langevin formulation takes us. ## 4 Fundamental derivation of damping parameter The analytical Eq. (18) appears to solve the problem. Yet, the presence in the result of the so far arbitrary damping coefficient \(\zeta\) is far from satisfactory - in real life there is no damper. Simply, frictional phonons propagate away from the interface and never come back. This situation, described by many authors [26, 25, 27, 28, 38, 39], is physically clear but still needs a solution analytically connecting the effective bilayer damping to the real damping-free system. Another way out is to do away with damping. For example, the multilayer extrapolation technique of Ref. [28] implies that application of a Langevin damping \(\zeta_{N_{L}}\), \(N_{L}\) layers away from the sliding interface, will yield the correct friction for _any_ \(\zeta_{N_{L}}\) in the \(N_{L}\rightarrow\infty\) limit. A welcome feature of that approach is also that for any finite \(N_{L}\) the bottom dissipation parameter \(\zeta_{N_{L}}\) can be _variationally optimized_. On the other hand, linear response, single-phonon friction (see e.g., [40]), free from any arbitrary damping parameters and therefore conceptually more satisfactory, generally yields a Born approximation friction formula whose applicability does not include the low velocity limit and whose result is dimension dependent. 
That suggests that viscous friction, the velocity-linear friction generically expected in SSL, should result from multi-phonon processes, beyond Born perturbative theory. An effective, microscopically generated and analytically derived damping coefficient \(\zeta\) can be obtained from basic principles of surface scattering, as formulated long ago [26; 27] for a harmonic system. Figure 4: Schematic diagram of the SSL system and energy dissipation. (a) The (real) multilayer system with phonon dissipation. (b) The reduced effective bilayer system with random force \(R_{i}\) and phonon dissipation described by the effective damping \(\zeta\). The slider (region A), topmost layer of the substrate (region B) and half-infinite remaining substrate (region C) are colored red, blue, and gray respectively. Consider a model 2D material friction geometry as illustrated in Fig. 4a, where region A is the slider, B is the interfacial layer of the substrate, and C is a (half-infinite) substrate. The generalized Langevin equation for the atoms in region B reads \[m_{i}\ddot{r}_{i}=f_{i}-\int_{0}^{t}\Gamma(t,\tau)\dot{r}_{i}\,\mathrm{d}\tau+R_{i} \tag{19}\] where \(m_{i}\) and \(r_{i}\) are the mass and position of the \(i\)-th atom, \(f_{i}\) represents the interaction between atom \(i\) and the rest of the system, \(R_{i}\) represents the random force due to the motion of atoms in region C, and the integral term describes the frictional force with the friction kernel \[\begin{split}\Gamma(t,\tau)&=\beta\langle R_{i}(t)R_{i}^{\dagger}(\tau)\rangle\\ &=m_{i}D_{\mathrm{BC}}(t)\Pi_{\mathrm{CC}}(t-\tau)D_{\mathrm{CB}}(\tau)\end{split} \tag{20}\] where \(\beta=1/k_{\mathrm{B}}T\), \(D\) is the dynamical matrix, and \(\Pi_{\mathrm{CC}}=\sum_{\lambda}e_{\lambda}\otimes e_{\lambda}^{\dagger}\cos(\omega_{\lambda}t)/\omega_{\lambda}^{2}\), where \(e_{\lambda}\) and \(\omega_{\lambda}\) are the eigenvector and eigenvalue of the substrate (C region). By neglecting the long-range interactions and in a Markov approximation [41] (checked to be reasonable for our systems with short relaxation times from fs to ps), a simplified damping term \(m_{i}\zeta\dot{r}_{i}\) may replace the friction term in Eq. (19), yielding the effective damping coefficient \[\zeta_{\alpha}=\int_{0}^{t}D_{\mathrm{BC}}\Pi_{\mathrm{CC}}(\tau)D_{\mathrm{CB}}\,\mathrm{d}\tau \tag{21}\] Since direct interaction between atoms in region B and atoms deep down in region C is negligible, the effective frictional kernel can be reduced to a first-neighbour one, so that the above formula can be simplified as \[\zeta_{\alpha}\simeq\frac{|\Phi_{12}^{\alpha\alpha}|^{2}}{m^{2}}\int_{0}^{t}\Pi_{\mathrm{CC}}(\tau)\,\mathrm{d}\tau \tag{22}\] where \(\Phi_{12}^{\alpha\alpha}=\frac{\partial^{2}V}{\partial r_{1,\alpha}\partial r_{2,\alpha}}\) is the force constant between atom 1 (in B) and atom 2 (in C) along the \(\alpha\) direction (which can be estimated from its definition or, equivalently, from the elastic constant; see SI for details), and \(m\) is the mass of an atom in region B. For a half-infinite isotropic substrate, this further simplifies by using \[\begin{split}\Pi_{\rm CC}(\tau)&\simeq\int_{0}^{\omega_{\rm D}}\rho(\omega)\frac{\cos(\omega\tau)}{\omega^{2}}\mathrm{d}\omega\\ &=\frac{3}{\omega_{\rm D}^{2}}\frac{\sin(\omega_{\rm D}\tau)}{\omega_{\rm D}\tau}\end{split} \tag{23}\] where \(\rho(\omega)=3\omega^{2}/\omega_{\rm D}^{3}\) is the phonon density of states and \(\omega_{\rm D}\) is the Debye frequency. Thus, for \(t\to\infty\), Eq. 
(21) finally becomes \[\zeta_{\alpha}\simeq\frac{3\pi}{2\omega_{\rm D}^{3}}\frac{|\Phi_{12}^{\alpha\alpha}|^{2}}{m^{2}} \tag{24}\] This Eq. (24), similar to earlier single-molecule [42] and bulk solid [27] formulas, is our desired result for the damping coefficient, whose insertion into Eq. (18) of the previous section should lead to the friction of an SSL 2D material interface, approximate but now free of arbitrary parameters. Simple as it is, it is controlled by two fundamental quantities, both connected to the lattice dynamics of the substrate. The first is the denominator \(\omega_{\rm D}^{3}\), indicating that a softer substrate will cause a much higher viscous sliding friction. This is actually a very general result, also obtained earlier for surface vibrating molecules [42]. The second is the numerator \(|\Phi_{12}^{\alpha\alpha}|^{2}\), a dynamical matrix term measuring how effectively the first substrate layer transmits a friction-generated vertical vibration to the second (now Bernal commensurate) layer, the second to the third, and so on. Again as in the vibrating molecule theory, this is proportional to the fourth power of the sliding-induced \(z\)-oscillation frequency and therefore to the second power of the bulk phonon density of states - a higher density of states corresponds to a higher decay rate of the interface-generated phonon, which translates into a higher friction. At the cost of ruining its simplicity, Eq. (24) could in principle be improved by taking into account dissipation due to \((x,y)\) polarizations, vibrational anisotropy, anharmonicity, and quantum effects. As regards including all phonon polarizations, it is physically clear and proven by many simulations [14, 24, 21] that the main contribution to the friction of a bilayer should be due to the part of Eq. (21) from the vertically polarized 2D material substrate vibrations, whose \(\omega_{\rm D}\) is the softest. To test the second point, we actually extended our estimate to an anisotropic Debye model, more appropriate for layered materials. That done, results yield damping coefficients of the same order of magnitude (see SI for details). Anharmonicity will only play a role at high temperatures, a regime where, as will be discussed later, a totally different approach is called for, since thermal fluctuations exceed the moire corrugation \(H\). In the opposite extreme low temperature limit, below a temperature \(T_{q}\) to be introduced below, quantum effects will in fact become important, but will require a treatment that is beyond our present scope. Within the present approximations, whose range of validity we have thus qualified, it is interesting to look at the actual orders of magnitude predicted for moire friction. Considering the interlayer interaction of graphene, we estimate that \(\Phi_{12}^{zz}=2.7\) N/m (neglecting in-plane terms like \(|\Phi_{12}^{xx}|^{2}\) that are much smaller). Inserting the \(z\)-Debye frequency of graphite \(\omega_{\rm D}=1.2\times 10^{14}\) rad/s [43], we obtain an effective damping coefficient \(\zeta_{z}=0.05\) ps\({}^{-1}\). Even if omission of anharmonicity inevitably entails a slight underestimate - as we shall see later by comparison with true values from simulations - this damping coefficient is considerably smaller than empirical values currently used in 2D materials simulations, e.g., 4.5 ps\({}^{-1}\)[14; 23; 24], 1 ps\({}^{-1}\)[33; 44], or 2 ps\({}^{-1}\)[45]. 
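Eq. (24), and its insertion into Eq. (18), are easy to verify numerically. In the sketch below the quoted \(\Phi_{12}^{zz}\) and \(\omega_{\rm D}\) are used in SI units; the corrugation \(H\approx 0.11\) Å for \(\theta=6^{\circ}\) is the Eq. (12) value computed above, so the printed stresses are our estimates, not figures from the paper.

```python
import numpy as np

m_C   = 12 * 1.66054e-27   # carbon atom mass (kg)
Phi12 = 2.7                # interlayer zz force constant (N/m), as quoted
w_D   = 1.2e14             # z-polarized Debye frequency of graphite (rad/s)

zeta_z = 1.5 * np.pi * Phi12**2 / (m_C**2 * w_D**3)   # Eq. (24)
print(zeta_z * 1e-12)      # ~0.05 ps^-1, the value quoted in the text

# Low-temperature shear stress from Eq. (18): F_k/A = c1 (m/A_C) zeta_z H^2 v0 / a_Gr^2
a, a_Gr = 1.42e-10, 2.46e-10                 # bond length, lattice constant (m)
A_C = 3.0 * np.sqrt(3.0) * a**2 / 4.0        # area per atom (m^2)
H, c1 = 0.11e-10, 16.0 * np.pi**2 / 81.0     # corrugation at 6 deg, sinusoidal prefactor

def stress(v0):
    return c1 * (m_C / A_C) * zeta_z * H**2 * v0 / a_Gr**2

print(stress(100.0))   # ~1.5e4 Pa: the tens-of-kPa low-T scale at 100 m/s
print(stress(1e-6))    # ~1.5e-4 Pa at 1 um/s: far below anything measurable
```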
Such large empirical values may have been adopted as conveniently similar to \(10^{3}\) simulation time steps (the latter typically \(\sim 1\) fs), or adjusted by requiring independence of the friction from the damping, or chosen just so as to yield a friction that is comparable to experiments. That kind of choice is particularly alarming because the typical simulation and experimental velocities differ by six orders of magnitude, and true superlubric friction should be, and we confirm it is, linear with velocity. Contrary to that, the experimental 2D friction generally depends logarithmically and not linearly upon velocity [46; 14; 12]. All that implies that the choice of effective Langevin damping \(\zeta\) should be completely reconsidered in SSL simulations, and that validation of analytical results should be sought through comparison with genuinely superlubric PBC simulations, rather than with experiments, where friction is "engineering superlubric" and not structurally superlubric [21]. ## 5 High temperature SSL kinetic friction It is now necessary to connect the low temperature results presented so far, Eqs. (18) and (24), with realistic finite and high temperatures. Conceptually, as the temperature increases, the random thermal flexural corrugations with amplitude \(\langle|H_{T}|\rangle\propto\sqrt{k_{\rm B}T}\) become larger [47], surpassing the original moire corrugation \(H\), eventually making it irrelevant. In addition, and more importantly, thermal agitation and anharmonicity will gradually suppress the phonon lifetimes and mean free paths, reducing and eventually invalidating single phonon approximations. We can address this limiting regime in the low velocity, high temperature "diffusive" limit through fluctuation-dissipation, predictably leading to a linear increase of friction with \(T\). In this limit, kinetic friction is given by \[F_{\rm k}=\frac{1}{\mu}v_{0} \tag{25}\] where \(\mu\) is the drift mobility, connected to the diffusion coefficient \(D\) and temperature \(T\) by Einstein's relation \(\mu=D/k_{\rm B}T\). The connection between the diffusion coefficient and temperature is generally Arrhenius-like, i.e., \(D(T)=D_{0}\exp(-E_{0}/k_{\rm B}T)\), where \(D_{0}\) is the maximum diffusion coefficient and \(E_{0}\) the activation energy barrier for diffusion. Thus, we expect the temperature dependence of kinetic friction at finite temperature and infinitesimal velocity to follow linear response \[F_{\rm k}(T)=\frac{k_{\rm B}v_{0}}{D_{0}}T \tag{26}\] To substantiate this result, we must still specify what \(D_{0}\) actually is. One way to do that, actually applicable only in the SSL case, is to make a connection between low velocity friction, just described, and the opposite high velocity limit of "ballistic friction" - described for example by Guerra _et al._ [48] - where additional resistance to sliding arises from collisions with large thermal fluctuations causing the surfing moire to dissipate more work - such as a raft would when surfing a rough sea. The low and high velocity regimes are in general quite different. That is because energy barriers dominate ordinary friction at most velocities, only becoming irrelevant at extremely low ("thermolubric") [38, 49] and extremely high ("ballistic") velocities [48]. The peculiarity of SSL systems is precisely the absence of energy barriers against sliding, and that merges the two regimes into a single one. 
In the ballistic regime, kinetic friction can be described by replacing the moire out-of-plane corrugation with thermal fluctuations: \[F_{\rm k}\simeq\frac{Nm\zeta_{z}\langle H_{T}^{2}\rangle v_{0}}{a_{\rm Gr}^{2}}=\frac{c_{2}Nm\zeta_{z}v_{0}k_{\rm B}T}{\kappa} \tag{27}\] where \(c_{2}\) is a dimensionless prefactor of order 1 which reflects the proportionality between temperature and mean square vertical fluctuations. Even if straightforward to formulate in a harmonic approximation, the actual calculation of \(c_{2}\) for our target geometry - a twisted/incommensurate monolayer on top of a substrate monolayer or semi-infinite bulk - implies summing contributions from a cumbersome variety and number of modes. In addition, a harmonic description of frictional phonons would be wrong at high temperatures. In place of that calculation we therefore simply use the \(c_{2}\) value independently obtained by an equilibrium MD simulation. (For a twisted and \(k_{z}\)-harnessed graphene bilayer with \(\theta=6^{\circ}\), for example, the actual value is \(c_{2}\approx 2.8\).) By equating the two expressions, we find that \(D_{0}\sim\kappa(Nm\zeta_{z})^{-1}\). (Note once again that the atom number \(N\) is proportional to the contact area \(A\), as it should be.) The overall picture of the viscous kinetic friction \(F_{\rm k}=v_{0}({\rm d}F_{\rm k}/{\rm d}v_{0})\) in SSL systems is thus clear. At high temperatures friction grows linearly with \(T\). At lower temperatures, where the thermal fluctuation amplitude \(\langle|H_{T}|\rangle\) is smaller than the moire corrugation \(H\), friction becomes temperature independent, leveling off to a value determined by the out-of-plane moire distortion \(H\). Reflecting this physical crossover, the total kinetic friction force can be heuristically approximated by \[F_{\rm k}\simeq Nm\zeta_{z}v_{0}\sqrt{(\frac{c_{1}H^{2}}{a_{\rm Gr}^{2}})^{2}+(\frac{c_{2}k_{\rm B}T}{\kappa})^{2}} \tag{28}\] This formula, with damping \(\zeta_{z}\) given by Eq. (24), and with \(H=H[\lambda(\theta)]\) determined by the twist angle \(\theta\) through Eq. (12), represents the main result of this paper. By equating the two terms in the above formula, the crossover temperature between the two regimes can be estimated as \[T_{c}\simeq\frac{\kappa H^{2}}{a_{\rm Gr}^{2}k_{\rm B}} \tag{29}\] For the \(\theta=6^{\circ}\) case (\(H=0.12\) Å), \(T_{c}\simeq 30\) K. One last remark is about quantum effects, which are ignored here but must become important at sufficiently low temperatures. Considering the equipartition energy of the classical flexural mode with the moire wavelength, and the quantum zero-point energy, one can estimate a further crossover temperature \(T_{q}\) below which the classical theory fails [50]: \[T_{q}=\frac{\hbar\omega_{\rm moire}}{2k_{\rm B}}=\frac{\hbar}{2k_{\rm B}}\sqrt{\frac{\kappa}{\rho_{\rm 2D}}}(\frac{2\pi}{\lambda})^{2} \tag{30}\] where \(\rho_{\rm 2D}\) is the area density and \(\lambda\) is the moire size. For twisted graphene with \(\theta=6^{\circ}\), this transition temperature is estimated to be \(T_{q}\approx 15\) K - below the other classical crossover \(T_{c}\), but not much lower. This indicates that our theoretical formula (Eq. 28) should apply for \(T>T_{q}\) (and below the \(z\)-Debye temperature, \(T_{\rm D}=900\) K [43]), but it will require additional quantum modifications in the true low temperature limit. 
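For orientation, Eqs. (28)-(30) can be evaluated with the same parameter set; in this sketch the graphene areal mass density \(\rho_{\rm 2D}\) is the standard literature value (our assumption, it is not quoted in the text), and small differences from the quoted \(T_{c}\simeq 30\) K reflect only the rounding of \(H\).

```python
import numpy as np

kB, hbar = 1.380649e-23, 1.054572e-34     # SI units throughout
kappa  = 1.4 * 1.602177e-19               # bending rigidity, 1.4 eV in J
a_Gr   = 2.46e-10                         # graphene lattice constant (m)
H      = 0.12e-10                         # moire corrugation at 6 deg (m)
lam    = 2.35e-9                          # moire size at 6 deg (m)
rho2D  = 7.6e-7                           # areal mass density of graphene (kg/m^2)

Tc = kappa * H**2 / (a_Gr**2 * kB)                                        # Eq. (29)
Tq = hbar / (2.0 * kB) * np.sqrt(kappa / rho2D) * (2.0 * np.pi / lam)**2  # Eq. (30)
print(Tc, Tq)   # ~40 K and ~15 K, cf. the quoted T_c ~ 30 K and T_q ~ 15 K

# Eq. (28) at room temperature, v0 = 100 m/s, per unit area (c1, c2 from Sects. 3, 5)
m_C, A_C = 12 * 1.66054e-27, 3.0 * np.sqrt(3.0) * (1.42e-10)**2 / 4.0
zeta_z, v0, c1, c2 = 0.05e12, 100.0, 16.0 * np.pi**2 / 81.0, 2.8
stress = (m_C / A_C) * zeta_z * v0 * np.hypot(c1 * H**2 / a_Gr**2,
                                              c2 * kB * 300.0 / kappa)
print(stress)   # ~2e5 Pa, the few-hundred-kPa room-temperature scale of the abstract
```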
## 6 Validation by MD simulations Our theoretical results for structurally superlubric friction of a twisted 2D interface on top of a semi-infinite 2D crystal need to be validated. We cannot do that by comparison with experiments, because a) the experimental velocity and area dependence, generally much weaker than linear, show that their friction is not strictly structurally superlubric; b) the friction of SSL systems at such low velocities would be far too small to be measured by any available technique. We can nonetheless validate our result by comparing our theoretical friction to a non-equilibrium MD multilayer simulation where, following Benassi _et al._ [28], we can obtain an approximately parameter-free friction. In this section, we first compare our analytical results (Eq. 28) with damping-based MD graphene bilayer simulations at low and high temperatures. Then, finite temperature "realistic" friction is obtained using the parameter-free simulation to test our prediction of the damping coefficient (Eq. 24). ### Damping-based bilayer simulation at low and high temperature The validity of the analytical low temperature result, Eq. (18), can be tested by MD simulations of our model twisted graphene bilayer (Fig. 4b). Realistic model geometries with twist angles \(\theta\) ranging from \(0.3^{\circ}\) to \(30^{\circ}\) are created by optimization of the total energy at rest with periodic boundary conditions (PBC). The interlayer and intralayer interactions are described by the registry-dependent interlayer potential (ILP) and the REBO force field, respectively [35; 33]. Perpendicular springs (with spring constant \(k_{z}=0.33\) N/m) are attached to each atom in both substrate and slider layers to mimic the real confinement effect between the driving stage and the semi-infinite substrate. The center-of-mass of the slider is connected to a dragging spring with spring constant \(K_{p}=100\) N/m, pulled with a constant velocity \(v_{0}\) along \(x\). A Langevin thermostat with \(T=0\) K and damping \(\zeta_{z}\), which from Eq. (24) is approximately 0.05 ps\({}^{-1}\), is attached to the substrate layer. The kinetic friction \(F_{\rm k}\) is calculated as the time-averaged force experienced by the dragging spring. Simulation results shown in Fig. 5a-c agree quantitatively with the theoretical prediction (dashed lines), confirming the linear dependence of SSL friction upon area (\(A\propto N\)), and the twist angle dependence predicted by \(H[\lambda(\theta)]\) of Eq. (12). Figure 5: Size (a), velocity (b), twist angle (c), and temperature (d) dependence of twisted graphene bilayer sliding friction from MD simulations with \(\zeta_{z}\) given by Eq. (24) (data points) and the theory of Eq. (28) (dashed lines). The values of the parameters are given in each plot. The arrow in (d) marks the crossover temperature \(T_{c}\), below which (blue region) friction is dominated by moiré corrugation and saturates to a constant, and above which (red region) it is dominated by flexural fluctuations and grows linearly with temperature. The use of unusually high sliding velocities permits shorter simulation times, and is allowed by the completely linear dependence of friction upon sliding velocity. To validate the high temperature analytical result, we thus perform MD simulations with the same model (with twist angle \(\theta=6^{\circ}\)), and compare the frictional result with the theory of Eq. (28). 
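The dragging-spring measurement protocol itself is easy to demonstrate on a deliberately minimal one-degree-of-freedom toy model: a Langevin-damped slider pulled through a spring whose stage advances at constant velocity, with the friction read off as the time-averaged spring force. All numbers below are arbitrary toy units, not the graphene parameters of the actual simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

m, zeta, Kp, v0 = 1.0, 0.05, 100.0, 1.0   # toy mass, damping, spring, stage velocity
kB_T, dt, nsteps = 0.0, 1e-3, 400_000     # T = 0, as in the low-T validation runs

x, v, f_samples = 0.0, 0.0, []
for i in range(nsteps):
    stage = v0 * i * dt
    f_spring = Kp * (stage - x)                       # dragging-spring force
    noise = np.sqrt(2.0 * m * zeta * kB_T / dt) * rng.standard_normal()
    v += dt * (f_spring - m * zeta * v + noise) / m   # Langevin step, cf. Eq. (15)
    x += dt * v                                       # semi-implicit Euler update
    if i > nsteps // 2:                               # discard the transient
        f_samples.append(f_spring)

print(np.mean(f_samples))   # -> m * zeta * v0 = 0.05: viscous, velocity-linear
```

Setting kB_T > 0 adds thermal scatter that averages out over long steady-sliding windows, which is why the MD friction values are quoted as time averages.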
The finite temperature (thermal noise) is introduced by a Gaussian-distributed random force \(R_{z}\) with \(\langle R_{z}(t)R_{z}(0)\rangle=2\zeta_{z}k_{\rm B}T\delta(t)\). The damping coefficient \(\zeta_{z}\) is given by Eq. (24). The MD simulation results are shown in Fig. 5d (solid points). To compare with our effective bilayer theory, we extract the dimensionless prefactor \(c_{2}\), which describes the linear dependence of \(\langle H_{T}^{2}\rangle\) on temperature \(T\), from simulations (details in SI). Substituting its value \(c_{2}\approx 2.8\) into Eq. (28), we find excellent agreement between our theory and simulations. ### Comparison with parameter-free simulations To obtain realistic kinetic friction in SSL systems, we built a multi-layer simulation model as shown in Fig. 6a, and adopted the parameter-free variational method - applying damping to a far-away boundary layer to correctly absorb phonons generated at the sliding interface [28]. The optimal boundary damping minimizes the back-reflected energy by that remote boundary, and the corresponding maximal friction force is an approximation to the real friction that can be made as accurate as desired by increasing the layer number. The simulation model consists of one layer of graphene slider and \(N_{L}\) layers of Bernal graphite substrate. In our current simulation we use \(N_{L}=10\), but we verified in a few test cases that the results are nearly the same as for \(N_{L}=20\), once temperatures are not too low. PBCs are applied to the \(x\) and \(y\) directions. The twist angle between the slider and the substrate is \(\theta=6^{\circ}\), and the inter/intra-layer force field and the sliding set-ups are the same as in Sect. 6.1. Perpendicular "confining" springs (\(k_{z}=0.33\) N/m) are attached to each slider atom to limit the out-of-plane deformation. It can be expected that the deformation of the slider will increase for smaller \(k_{z}\), resulting in higher sliding friction. The limited effects of \(k_{z}\) on the out-of-plane deformation and friction are discussed in the SI. The center-of-mass of the slider is connected to the driving stage by a horizontal dragging spring (\(K_{p}=100\) N/m), moving with a constant velocity \(v_{0}\) along \(x\). The bottom layer of the substrate is fixed. The next layer up is connected to a Langevin thermostat with temperature \(T\) and a boundary damping \(\zeta_{N_{L}}\), which is the parameter that is variationally optimized by minimizing the back-reflected phonon energy [28]. The kinetic friction in each simulation is calculated from the time-averaged lateral force during 10 ns of steady sliding; multiple independent simulations are performed to obtain a converged value. Before coming to the results, we should note that results are more and more reliable the higher the temperature, whereas for lower temperatures the multilayer thickness becomes insufficient and the optimal variational boundary damping parameter \(\zeta_{N_{L}}\) increases to the maximum tolerable value, until optimization is lost, reflecting the excessive increase of the phonon mean free path. That makes the multilayer variational approach essentially a high temperature one, not for practical use below \(T_{c}\) where friction saturates. Simulation results for friction in the range \(T=50\)-\(400\) K are shown in Fig. 6b. There is good qualitative agreement between damping-based bilayer simulations with \(\zeta_{z}\) from Eq. (24) and parameter-free multilayer simulations. 
The friction stress at each temperature is of similar magnitude and both scale linearly with temperature (note that here \(T>T_{c}\)). Quantitatively, we find that with the theoretical \(\zeta_{z}\sim 0.05\) ps\({}^{-1}\) of Eq. (24), the kinetic friction is underestimated by a factor \(\sim 2\) with respect to the reference simulation, considered to be reliable. Figure 6: Parameter-free simulation. (a) Schematic diagram of the model and set-ups. (b) Comparison of friction from parameter-free twisted graphene/multilayer graphite simulations (red) and effective bilayer graphene simulations with the theoretical damping \(\zeta_{z}\) from Eq. (24) (black). The solid line is a guide to the eye. Thus, for our particular exemplification of 2D materials-based SSL, the recommended damping coefficient of the effective bilayer should have been of the order of 0.1 ps\({}^{-1}\). Given the simplifications and approximations we employ, particularly the harmonic phonon assumption, this discrepancy is not surprising and not at all fatal, finally justifying, with good theoretical control, the use of an effective bilayer with damping. It permits estimating the friction stress for an SSL system at all temperatures above a very low \(T_{q}\sim 15\) K, at any velocity. With a typical experimental velocity (\(v_{0}\sim 1\) \(\mu\)m/s), we predict \(\tau\sim 10^{-6}\) kPa - an utterly negligible frictional stress compared with current experimental values for graphene, graphite and other 2D sliders, typically \(1\)-\(100\) kPa. This huge gap confirms, as already suggested by the less-than-linear area and velocity dependence of friction, that real, finite-size experimental contacts _are not strictly structurally superlubric_, and the friction is instead dominated by the edges of the slider and/or defects at the interface [21]. ## 7 Discussion and Conclusions We presented analytical predictions for the kinetic friction of structurally superlubric 2D material interfaces, accompanied by exemplificative sliding simulations of a twisted graphene bilayer, and of twisted graphene on a semi-infinite bulk graphite substrate. At low temperatures, we analytically derive from basic Langevin formulations an effective damping coefficient that yields a bilayer sliding friction equal to that on a semi-infinite bulk. At high temperatures, kinetic friction is directly obtained by fluctuation-dissipation, and an overall formula is proposed covering all temperatures. These analytical results compare very well with the simulated kinetic friction of the bilayer, once the theoretically obtained Langevin damping is used. Finally, the equivalence of the theoretical effective bilayer to a realistic 2D monolayer sliding on a semi-infinite bulk is validated by variational multilayer simulations. The theoretical and realistic kinetic frictions agree to within a factor 2, which can be considered a very good result in view of the harmonic approximations used, and of the lack of adjustable parameters. Numerical values of the frictional stress obtained and validated, of about \(10^{-6}\) kPa for a realistic sliding velocity \(v_{0}\sim 1\) \(\mu\)m/s, provide the very first quantitative measure of the tiny Stokes frictional dissipation connected with the surfing of the gossamer-like moire pattern at an incommensurate 2D material interface. Hard to detect as this predicted ultra-low friction clearly is, the actual unveiling of its nature and its value in a real case as presented here is nonetheless important on several accounts. 
First, it physically formulates the problem, leading to clear, parameter-free results. Second, these results translate into numbers and temperature dependencies that were so far unknown. Third, they provide an element of clarity, showing that much of the supposedly superlubric kinetic friction reported in the literature, as much as six orders of magnitude larger and with improper nonlinear velocity and area dependence, must be of a different origin than SSL, being probably connected with stick-slip caused by the presence of edges, defects, and third bodies, whose eventual mitigation provides the directions for the future realization of superlubricity.

## Acknowledgments

E.T. and J.W. acknowledge support from ERC ULTRADISS Contract No. 834402. Support by the Italian Ministry of University and Research through PRIN UTFROM N. 20178PZCB5 is also acknowledged. M.M. acknowledges the financial support from the National Natural Science Foundation of China (No. 11890673 and 51961145304). J.W. acknowledges the computing resources support from the National Supercomputer Center in Tianjin. We are grateful for discussions with A. Khosravi, A. Silva, and A. Vanossi.

## References

* [1] Kazumasa Shinjo and Motohisa Hirano. Dynamics of friction: superlubric state. _Surface Science_, 283(1):473-478, 1993.
* [2] Jean Michel Martin and Ali Erdemir. Superlubricity: Friction's vanishing act. _Physics Today_, 71(4), 2018.
* [3] Andrea Vanossi, Nicola Manini, Michael Urbakh, Stefano Zapperi, and Erio Tosatti. Colloquium: Modeling friction: From nanoscale to mesoscale. _Rev. Mod. Phys._, 85:529-552, 2013.
* [4] Yiming Song, Cangyu Qu, Ming Ma, and Quanshui Zheng. Structural superlubricity based on crystalline materials. _Small_, 16(15):1903018, 2020.
* [5] Andrea Vanossi, Clemens Bechinger, and Michael Urbakh. Structural lubricity in soft and hard matter systems. _Nature Communications_, 11(1):4657, 2020.
* [6] Jiahao Yuan, Rong Yang, and Guangyu Zhang. Structural superlubricity in 2d van der waals heterojunctions. _Nanotechnology_, 33(10):102002, 2021.
* [7] Martin Dienwiebel, Gertjan S. Verhoeven, Namboodiri Pradeep, Joost W. M. Frenken, Jennifer A. Heimberg, and Henny W. Zandbergen. Superlubricity of graphite. _Phys. Rev. Lett._, 92:126101, Mar 2004.
* [8] Ze Liu, Jiarui Yang, Francois Grey, Jefferson Zhe Liu, Yilun Liu, Yibing Wang, Yanlian Yang, Yao Cheng, and Quanshui Zheng. Observation of microscale superlubricity in graphite. _Phys. Rev. Lett._, 108:205503, May 2012.
* [9] Rufan Zhang, Zhiyuan Ning, Yingying Zhang, Quanshui Zheng, Qing Chen, Huanhuan Xie, Qiang Zhang, Weizhong Qian, and Fei Wei. Superlubricity in centimetres-long double-walled carbon nanotubes under ambient conditions. _Nature Nanotechnology_, 8(12):912-916, Dec 2013.
* [10] Jiarui Yang, Ze Liu, Francois Grey, Zhiping Xu, Xide Li, Yilun Liu, Michael Urbakh, Yao Cheng, and Quanshui Zheng. Observation of high-speed microscale superlubricity in graphite. _Phys. Rev. Lett._, 110:255504, Jun 2013.
* [11] Deli Peng, Zhanghui Wu, Diwei Shi, Cangyu Qu, Haiyang Jiang, Yiming Song, Ming Ma, Gabriel Aeppli, Michael Urbakh, and Quanshui Zheng. Load-induced dynamical transitions at graphene interfaces. _Proceedings of the National Academy of Sciences_, 117(23):12618-12623, 2020.
* [12] Yanmin Liu, Kang Wang, Qiang Xu, Jie Zhang, Yuanzhong Hu, Tianbao Ma, Quanshui Zheng, and Jianbin Luo. Superlubricity between graphite layers in ultrahigh vacuum. _ACS Applied Materials & Interfaces_, 12(38):43167-43172, 2020.
* [13] Dirk Dietzel, Michael Feldmann, Udo D.
Schwarz, Harald Fuchs, and Andre Schirmeisen. Scaling laws of structural lubricity. _Phys. Rev. Lett._, 111:235502, Dec 2013. * [14] Yiming Song, Davide Mandelli, Oded Hod, Michael Urbakh, Ming Ma, and Quanshui Zheng. Robust microscale superlubricity in graphite/hexagonal boron nitride layered heterojunctions. _Nature Materials_, 17(10):894-899, Oct 2018. * [15] Mengzhou Liao, Paolo Nicolini, Luojun Du, Jiahao Yuan, Shuopei Wang, Hua Yu, Jian Tang, Peng Cheng, Kenji Watanabe, Takashi Taniguchi, Lin Gu, Victor E. P. Claerbout, Andrea Silva, Denis Kramer, Tomas Polcar, Rong Yang, Dongxia Shi, and Guangyu Zhang. Ultra-low friction and edge-pinning effect in large-lattice-mismatch van der waals heterostructures. _Nature Materials_, 21(1):47-53, Jan 2022. * [16] Emanuele Panizon, Andrea Silva, Xin Cao, Jin Wang, Clemens Bechinger, Andrea Vanossi, Erio Tosatti, and Nicola Manini. Frictionless nanohighways on crystalline surfaces. _Nanoscale_, pages -, 2023. * [17] Xin Cao, Andrea Silva, Emanuele Panizon, Andrea Vanossi, Nicola Manini, Erio Tosatti, and Clemens Bechinger. Moire-pattern evolution couples rotational and translational friction at crystalline interfaces. _Phys. Rev. X_, 12:021059, 2022. * [18] Oded Hod, Ernst Meyer, Quanshui Zheng, and Michael Urbakh. Structural superlubricity and ultralow friction across the length scales. _Nature_, 563(7732):485-492, 2018. * [19] A. S. de Wijn. (in)commensurability, scaling, and multiplicity of friction in nanocrystals and application to gold nanocrystals on graphite. _Phys. Rev. B_, 86:085429, Aug 2012. * [20] E. Koren and U. Duerig. Moire scaling of the sliding force in twisted bilayer graphene. _Phys. Rev. B_, 94:045401, Jul 2016. * [21] Jin Wang, Ali Khosravi, Andrea Vanossi, and Erio Tosatti. Colloquium: Sliding and pinning in structurally lubric 2d material interfaces. _Rev. Mod. Phys._, under review, 2022. * [22] Ming Ma, Andrea Benassi, Andrea Vanossi, and Michael Urbakh. Critical length limiting superlow friction. _Phys. Rev. Lett._, 114:055501, Feb 2015. * [23] Jin Wang, Wei Cao, Yiming Song, Cangyu Qu, Quanshui Zheng, and Ming Ma. Generalized scaling law of structural superlubricity. _Nano Letters_, 19(11):7735-7741, 2019. * [24] Davide Mandelli, Wengen Ouyang, Oded Hod, and Michael Urbakh. Negative friction coefficients in superlubric graphite-hexagonal boron nitride heterojunctions. _Phys. Rev. Lett._, 122:076102, 2019. * [25] L. Consoli, H. J. F. Knops, and A. Fasolino. Onset of sliding friction in incommensurate systems. _Phys. Rev. Lett._, 85:302-305, Jul 2000. * [26] S. A. Adelman and J. D. Doll. Generalized langevin equation approach for atom/solid-surface scattering: General formulation for classical scattering off harmonic solids. _The Journal of Chemical Physics_, 64(6):2375-2388, 1976. * [27] L. Kantorovich. Generalized langevin equation for solids. i. rigorous derivation and main properties. _Phys. Rev. B_, 78:094304, Sep 2008. * [28] A. Benassi, A. Vanossi, G. E. Santoro, and E. Tosatti. Parameter-free dissipation in simulated sliding friction. _Phys. Rev. B_, 82:081401, 2010. * [29] Guorui Wang, Zhaohe Dai, Junkai Xiao, ShiZhe Feng, Chuanxin Weng, Luqi Liu, Zhiping Xu, Rui Huang, and Zhong Zhang. Bending of multilayer van der waals materials. _Phys. Rev. Lett._, 123:116101, 2019. * [30] Edmund Han, Jaehyung Yu, Emil Annevelink, Jangyup Son, Dongyun A. Kang, Kenji Watanabe, Takashi Taniguchi, Elif Ertekin, Pinshane Y. Huang, and Arend M. van der Zande. Ultrasoft slip-mediated bending in few-layer graphene. 
_Nature Materials_, 19(3):305-309, 2020. * [31] M M van Wijk, A Schuring, M I Katsnelson, and A Fasolino. Relaxation of moire patterns for slightly misaligned identical lattices: graphene on graphite. _2D Materials_, 2(3):034010, 2015. * [32] Klaus Hermann. Periodic overlayers and moire patterns: theoretical studies of geometric properties. _Journal of Physics: Condensed Matter_, 24(31):314210, 2012. * [33] Wengen Ouyang, Davide Mandelli, Michael Urbakh, and Oded Hod. Nanoserpents: Graphene nanoribbon motion on two-dimensional hexagonal materials. _Nano Letters_, 18(9):6009-6016, 2018. * [34] Qiang Lu, Marino Arroyo, and Rui Huang. Elastic bending modulus of monolayer graphene. _Journal of Physics D: Applied Physics_, 42(10):102002, 2009. * [35] Donald W Brenner, Olga A Shenderova, Judith A Harrison, Steven J Stuart, Boris Ni, and Susan B Sinnott. A second-generation reactive empirical bond order (rebo) potential energy expression for hydrocarbons. _Journal of Physics: Condensed Matter_, 14(4):783, 2002. * [36] Kuan Zhang and Ellad B. Tadmor. Structural and electron diffraction scaling of twisted graphene bilayers. _Journal of the Mechanics and Physics of Solids_, 112:225-238, 2018. * [37] Nathanael P. Kazmierczak, Madeline Van Winkle, Colin Ophus, Karen C. Bustillo, Stephen Carr, Hamish G. Brown, Jim Ciston, Takashi Taniguchi, Kenji Watanabe, and D. Kwabena Bediako. Strain fields in twisted bilayer graphene. _Nature Materials_, 20(7):956-963, 2021. * [38] Sergey Yu. Krylov and Joost W. M. Frenken. The physics of atomic-scale friction: Basic considerations and open questions. _physica status solidi (b)_, 251(4):711-736, 2014. * [39] Jeong Young Park and Miquel Salmeron. Fundamental aspects of energy dissipation in friction. _Chemical Reviews_, 114(1):677-711, 2014. * [40] Emanuele Panizon, Giuseppe E. Santoro, Erio Tosatti, Gabriele Riva, and Nicola Manini. Analytic understanding and control of dynamical friction. _Phys. Rev. B_, 97:104104, Mar 2018. * [41] L. Kantorovich and N. Rompotis. Generalized langevin equation for solids. ii. stochastic boundary conditions for nonequilibrium molecular dynamics simulations. _Phys. Rev. B_, 78:094305, Sep 2008. * [42] B. N. J. Persson, E. Tosatti, D. Fuhrmann, G. Witte, and Ch. Woll. Low-frequency adsorbate vibrational relaxation and sliding friction. _Phys. Rev. B_, 59:11777-11791, 1999. * [43] J. Krumhansl and H. Brooks. The lattice vibration specific heat of graphite. _The Journal of Chemical Physics_, 21(10):1663-1669, 1953. * [44] Xiang Gao, Wengen Ouyang, Michael Urbakh, and Oded Hod. Superlubric polycrystalline graphene interfaces. _Nature Communications_, 12(1):5694, Sep 2021. * [45] L Gigli, N Manini, A Benassi, E Tosatti, A Vanossi, and R Guerra. Graphene nanoribbons on gold: understanding superlubricity and edge effects. _2D Materials_, 4(4):045003, aug 2017. * [46] Wen Wang and Xide Li. Interlayer motion and ultra-low sliding friction in microscale graphite flakes. _EPL (Europhysics Letters)_, 125(2):26003, feb 2019. * [47] A. Fasolino, J. H. Los, and M. I. Katsnelson. Intrinsic ripples in graphene. _Nature Materials_, 6(11):858-861, 2007. * [48] Roberto Guerra, Ugo Tartaglino, Andrea Vanossi, and Erio Tosatti. Ballistic nanofriction. _Nature Materials_, 9(8):634-637, 2010. * [49] Yalin Dong, Ajay Vadakkepatt, and Ashlie Martini. Analytical models for atomic friction. _Tribology Letters_, 44(3):367, Sep 2011. * [50] Juraj Hasik, Erio Tosatti, and Roman Martonak. Quantum and classical ripples in graphene. _Phys. Rev. B_, 97:140301, 2018. 
**Supplementary Information**

**Kinetic Friction of Structurally Superlubric 2D Material Interfaces**

Jin Wang,\({}^{1,\,2}\) Ming Ma,\({}^{3}\) and Erio Tosatti\({}^{1,\,2,\,4,\,*}\)

\({}^{1}\)_International School for Advanced Studies (SISSA), I-34136 Trieste, Italy_
\({}^{2}\)_International Centre for Theoretical Physics, I-34151 Trieste, Italy_
\({}^{3}\)_State Key Laboratory of Tribology, Department of Mechanical Engineering, Tsinghua University, Beijing 100084, China_
\({}^{4}\)_CNR-IOM, Consiglio Nazionale delle Ricerche - Istituto Officina dei Materiali, c/o SISSA, 34136, Trieste, Italy_

###### Contents

* I Methods for structural optimization
* II Negligible effects from in-plane damping
* III Damping coefficient from anisotropic Debye model
* IV Thermal fluctuations from simulations
* V Corrugations with different \(k_{z}\)

## I Methods for structural optimization

The energy optimization is performed with the open-source code LAMMPS [1; 2]. The bilayer simulation models, with twist angles \(\theta\) ranging from \(0.3^{\circ}\) to \(30^{\circ}\), are created with periodic boundary conditions (PBC) along the \(x\) and \(y\) directions. The interlayer and intralayer interactions are described by the registry-dependent interlayer potential (ILP) and the REBO force field, respectively [3; 4]. Each carbon atom is tethered by a linear \(z\)-directional spring to its original position to mimic the normal-direction elasticity. The spring constant, which reproduces the moire height in twisted bulk graphite, is \(k_{z}=0.33\) N/m. During the structural optimization, the in-plane stress is kept at zero, \(p_{xx}=p_{yy}=p_{xy}=0\). The optimization is performed by FIRE [5] together with CG algorithms over several loops. The convergence criterion is that the largest single-atom force satisfies \(F_{i}<10^{-6}\) eV/Å.

## II Negligible effects from in-plane damping

The in-plane damping \(\zeta_{x}\) (and \(\zeta_{y}\)) is negligible (compared to the out-of-plane \(\zeta_{z}\)) in the damping-based bilayer graphene simulations. This has been realized before (e.g., Ref. [6]) and can be understood here by defining an anisotropy factor:

\[r_{aniso}=\frac{\zeta_{z}}{\zeta_{x}}=\frac{|\Phi_{12}^{zz}|^{2}}{|\Phi_{12}^{xx}|^{2}}\] (S1)

The \(z\)-direction force constant is \(\Phi_{12}^{zz}=2.7\) N/m (it could also be estimated from the elastic constant \(C_{33}=38.7\) GPa via \(\Phi_{12}^{zz}=C_{33}A_{\mathrm{C}}/d_{0}\)), and the force constant \(\Phi_{12}^{xx}\) can be estimated from the interlayer shear constant \(C_{44}=5.0\) GPa [7]. Since \(\Phi_{12}^{zz}/\Phi_{12}^{xx}=C_{33}/C_{44}\), the anisotropy factor is on the order of \(10^{2}\); the friction contribution from in-plane damping is thus negligible.

## III Damping coefficient from anisotropic Debye model

Consider an anisotropic Debye model with dispersion relation [8]:

\[\omega^{2}=v_{\rm in}^{2}(q_{x}^{2}+q_{y}^{2})+v_{\rm out}^{2}q_{z}^{2}\] (S2)

where \(v_{\rm in}\) and \(v_{\rm out}\) are the in-plane and out-of-plane sound speeds. The first Brillouin zone of this model is assumed to be an ellipsoid:

\[\left(\frac{q_{x}}{q_{x0}}\right)^{2}+\left(\frac{q_{y}}{q_{y0}}\right)^{2}+\left(\frac{q_{z}}{q_{z0}}\right)^{2}=1\] (S3)

Here, for 2D materials, we take \(q_{x0}=q_{y0}\). The Debye frequencies along the in-plane and out-of-plane directions can be defined as \(\omega_{Dx}=\omega_{Dy}=v_{\rm in}q_{x0}\) and \(\omega_{Dz}=v_{\rm out}q_{z0}\).
The density of states is [9]:

\[\begin{split}\rho(\omega)&=\frac{V}{8\pi^{3}}\int\frac{{\rm d}S}{v_{g}}\\ &=2\times\frac{V}{8\pi^{3}}\iint\frac{1}{v_{g}}\sqrt{1+\left(\frac{\partial q_{z}}{\partial q_{x}}\right)^{2}+\left(\frac{\partial q_{z}}{\partial q_{y}}\right)^{2}}\,{\rm d}q_{x}{\rm d}q_{y}\end{split}\] (S4)

where \(V\) is the volume of the unit cell, \(v_{g}=|\nabla_{q}\omega|\) is the magnitude of the group velocity of a phonon, and \(S\) represents the surface "area" of the zone boundary. By implementing the polar-coordinate substitution \(q_{x}=r\cos\varphi\) and \(q_{y}=r\sin\varphi\) (\(r\in[0,\omega/v_{\rm in}]\), \(\varphi\in[0,2\pi]\)), one can simplify the above equation to

\[\rho(\omega)=\frac{V\omega^{2}}{2\pi^{2}v_{\rm in}^{2}v_{\rm out}}\] (S5)

This result holds for \(\omega<\omega_{Dz}\), consistent with our work. Substituting Eq. (S5) back into Eq. (23) of the main text, we obtain the damping coefficient:

\[\zeta_{z}=\frac{|\Phi_{12}^{zz}|^{2}}{m^{2}}\frac{V}{4\pi v_{\rm in}^{2}v_{\rm out}}\] (S6)

Using the sound speeds \(v_{\rm in}=22\) km/s and \(v_{\rm out}=1.48\) km/s [7], one obtains \(\zeta_{z}\approx 0.02\) ps\({}^{-1}\), in qualitative agreement with the estimate given in the main text.

## IV Thermal fluctuations from simulations

In this section, we give details on the mean-square (out-of-plane) thermal fluctuation \(\langle H_{T}^{2}\rangle\) of the interfacial layer (region B) and its temperature dependence. From the bilayer graphene simulations (described in Sect. 6.1 of the main text), we can obtain the trajectories of all atoms, \(x_{i}(t)\), \(y_{i}(t)\), and \(z_{i}(t)\). By definition,

\[\langle H_{T}^{2}\rangle=\sum_{q}\langle|h_{q}|^{2}\rangle\] (S7)

where \(\langle|h_{q}|^{2}\rangle\) can be estimated from the 2D Fourier transform of \(z(x,y)\):

\[\langle|h_{q}|^{2}\rangle=\langle|\text{FFT}_{q}|^{2}\rangle\] (S8)

Here \(|...|\) denotes the complex modulus, and

\[\text{FFT}(q_{x},q_{y})=\frac{4}{N_{x}N_{y}}\sum_{j=0}^{N_{x}-1}\sum_{k=0}^{N_{y}-1}\exp\left(-\frac{2\pi iq_{x}j}{N_{x}}\right)\exp\left(-\frac{2\pi iq_{y}k}{N_{y}}\right)z(x_{j},y_{k})\] (S9)

where \(z(x_{j},y_{k})\) is the out-of-plane position of the substrate (or slider), remapped from the original honeycomb lattice onto a square lattice with spatial resolution \(N_{x}\times N_{y}\). The temperature dependence of \(\langle H_{T}^{2}\rangle\) (for \(T>T_{c}\)) is shown in Fig. S1a. Note that in the low-temperature limit (\(T\to 0\)) the mean-square corrugation reduces to \(H_{\text{moire}}^{2}\); for \(\theta=6^{\circ}\) it is approximately \(10^{-2}\) Å\({}^{2}\). From the simulation results, we determine the value of \(c_{2}\) by:

\[c_{2}=\frac{\kappa\langle H_{T}^{2}\rangle}{k_{\text{B}}Ta_{\text{Gr}}^{2}}\approx 2.8\] (S10)

where \(\kappa\) is the bending stiffness of monolayer graphene and \(a_{\text{Gr}}\) is the lattice constant of graphene. Substituting this \(c_{2}\) back into Eq. (28) of the main text, we find that the friction force estimated from our theory is in good agreement with the simulation results (Fig. 5d of the main text). The origin of this temperature dependence of friction (at high temperatures \(T>T_{c}\)) is \(\langle H_{T}^{2}\rangle\), as formulated in Eq. (27) of the main text. This can also be demonstrated directly from our simulations, as shown in Fig. S1b. The error bar of the friction force is the standard deviation of three independent simulations.
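As a concrete illustration of the estimator in Eqs. (S7)-(S10), the following is a minimal NumPy sketch (not part of the original analysis scripts). The array layout, the frame averaging, the removal of the rigid-body offset, and the parameter values in the usage lines are illustrative assumptions; the Fourier normalization simply mirrors the prefactor of Eq. (S9).

```python
import numpy as np

def mean_square_corrugation(z_frames):
    """<H_T^2> = sum_q <|h_q|^2> from out-of-plane maps z(x_j, y_k), Eqs. (S7)-(S9).

    z_frames: array of shape (n_frames, Nx, Ny), atom heights remapped onto an
    Nx x Ny square grid for each stored MD frame (an assumed layout).
    """
    nx, ny = z_frames.shape[1], z_frames.shape[2]
    z_frames = z_frames - z_frames.mean(axis=(1, 2), keepdims=True)  # drop rigid offset (q = 0)
    hq = (4.0 / (nx * ny)) * np.fft.fft2(z_frames, axes=(1, 2))      # normalization of Eq. (S9)
    return np.sum(np.abs(hq) ** 2, axis=(1, 2)).mean()               # thermal average over frames

def c2_prefactor(msq, kappa_eV, T, a_gr=2.46):
    """Dimensionless c2 of Eq. (S10); lengths in Angstrom, kappa in eV."""
    kB = 8.617e-5  # Boltzmann constant, eV/K
    return kappa_eV * msq / (kB * T * a_gr ** 2)

# Illustrative use with synthetic data (real input would come from MD trajectories;
# kappa ~ 1.5 eV is an assumed round number for graphene's bending stiffness):
rng = np.random.default_rng(0)
z = 0.05 * rng.standard_normal((200, 64, 64))   # fake height maps, in Angstrom
print(c2_prefactor(mean_square_corrugation(z), kappa_eV=1.5, T=300.0))
```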
Simulation results show that \(F_{\text{norm}}=F_{\text{k}}a_{\text{Gr}}^{2}(Nm\zeta_{z}v_{0})^{-1}\simeq\langle H_{T}^{2}\rangle\) at high temperatures, which leads immediately to:

\[F_{\text{k}}\simeq\frac{Nm\zeta_{z}\langle H_{T}^{2}\rangle v_{0}}{a_{\text{Gr}}^{2}}\] (S11)

## V Corrugations with different \(k_{z}\)

In the main text, we apply the same out-of-plane restraining spring \(k_{z}\) to the slider and the substrate. This set-up naturally gives rise to the same moire height and thermal corrugations. In nature there are cases where \(k_{z}\) differs between slider and substrate, e.g., graphite/hBN heterostructures, twisted monolayer graphene on Bernal graphite, etc. Fluctuations and sliding friction for this variety of systems with "asymmetric" \(k_{z}\) are discussed in this section.

In the low-temperature limit, the sliding friction is dominated by the moire corrugation, i.e., Eq. (12) of the main text. With different \(k_{z}\) for the substrate and slider, the moire corrugation changes correspondingly. Thus, Eq. (12) can be generalized by defining \(H_{\text{sub}}\) and \(H_{\text{sli}}\), whose values can be determined by finding the global minimum of the potential energy. Here, instead, we show the simulated moire corrugation for different values of the slider's \(k_{z}\). Adopting the same optimization protocol, with the substrate's \(k_{z}=0.33\) N/m and the slider's \(k_{z}\) ranging from 0 to 3 N/m, the moire corrugations of the slider, \(H_{\text{sli}}\), and of the substrate, \(H_{\text{sub}}\), for a test case of \(6^{\circ}\)-twisted bilayer graphene are shown in Fig. S2. From the simulations, whether the slider's \(k_{z}\) is zero or equal to that of the substrate has a weak influence on \(H_{\text{sub}}\) and a negligible effect on the average moire height \((H_{\text{sub}}+H_{\text{sli}})/2\). Therefore, it is safe to continue using Eq. (12) of the main text to approximate the moire corrugations of more general SSL systems.

At high temperatures, where moire fluctuations become negligible, the sliding friction is dominated by the mean-square fluctuation \(\langle H_{T}^{2}\rangle\). A larger \(k_{z}\) results in a higher deformation energy, which leads to a decrease of \(\langle H_{T}^{2}\rangle\) (and of the sliding friction) at the same temperature. We test this on the parameter-free multilayer simulation system with two cases: the slider's \(k_{z}\) equal to 0 and to 0.33 N/m. From the simulation results shown in Fig. S3, the frictional stress at room temperature for the \(k_{z}=0\) case (green) is increased by \(\approx 30\%\) compared to the finite-\(k_{z}\) case (red). Simulation and theoretical results for the bilayer system with equal \(k_{z}\) are also shown in the figure.
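As a closing cross-check of the headline estimate in the main text (\(\tau\sim 10^{-6}\) kPa at \(v_{0}\sim 1\)\(\mu\)m/s), here is a minimal back-of-envelope evaluation of Eq. (S11) per unit area. The bending stiffness \(\kappa\approx 1.5\) eV and the per-atom area are assumed round numbers, while \(c_{2}=2.8\) comes from Eq. (S10) and \(\zeta_{z}\sim 0.1\) ps\({}^{-1}\) is the effective damping recommended in the main text.

```python
import numpy as np

# Inputs in SI units; kappa and the per-atom area are assumed round numbers
kB     = 1.381e-23                 # J/K
T      = 300.0                     # K
kappa  = 1.5 * 1.602e-19           # graphene bending stiffness, ~1.5 eV (assumed)
a_gr   = 2.46e-10                  # graphene lattice constant, m
c2     = 2.8                       # dimensionless prefactor, Eq. (S10)
zeta_z = 0.1e12                    # recommended effective damping, 0.1 ps^-1
m_C    = 12 * 1.661e-27            # carbon atom mass, kg
A_atom = np.sqrt(3) / 4 * a_gr**2  # area per carbon atom, ~2.62e-20 m^2
v0     = 1e-6                      # sliding velocity, m/s

H2  = c2 * kB * T * a_gr**2 / kappa                   # <H_T^2> from Eq. (S10), ~0.3 A^2
tau = m_C * zeta_z * H2 * v0 / (a_gr**2 * A_atom)     # Eq. (S11) divided by N * A_atom
print(f"frictional stress ~ {tau/1e3:.1e} kPa")       # ~4e-6 kPa, i.e. the 1e-6 kPa scale
```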
The ultralow kinetic friction \(F_{k}\) of structurally superlubric 2D material interfaces, connected with the fast-surfing incommensurate moire pattern, is commonly stated to grow in proportion to the sliding velocity \(v_{0}\) and to the contact area \(A\), but this has so far not been seriously examined and calculated. Here, to clarify this phenomenon, we work out the demonstrative model case of a graphene layer sliding on bulk graphite, a case readily generalizable to other systems. Neglecting quantum effects and assuming classical Langevin dynamics, friction expressions are derived that are valid in two temperature regimes. At low temperatures, the velocity derivative of the sliding friction, \(dF_{k}/dv_{0}\), is shown to be equivalent to that of an effective bilayer with a damping parameter that accounts for the semi-infinite substrate.
2306.17779
Ambiguities in Partial Wave Analysis of Two Spinless Meson Photoproduction
We describe the formalism to analyze the mathematical ambiguities arising in partial-wave analysis of two spinless mesons produced with a linearly polarized photon beam. We show that partial waves are uniquely defined when all accessible observables are considered, for a wave set which includes $S$ and $D$ waves. The inclusion of higher partial waves does not affect our results, and we conclude that there are no mathematical ambiguities in partial-wave analysis of two mesons produced with a linearly polarized photon beam. We present Monte Carlo simulations to illustrate our results.
JPAC Collaboration, W. A. Smith, D. I. Glazier, V. Mathieu, M. Albaladejo, M. Albrecht, Z. Baldwin, C. Fernández-Ramírez, N. Hammoud, M. Mikhasenko, G. Montaña, R. J. Perry, A. Pilloni, V. Shastry, A. P. Szczepaniak, D. Winney
2023-06-30T16:33:20
http://arxiv.org/abs/2306.17779v1
# Ambiguities in Partial Wave Analysis of Two Spinless Meson Photoproduction

###### Abstract

We describe the formalism to analyze the mathematical ambiguities arising in partial-wave analysis of two spinless mesons produced with a linearly polarized photon beam. We show that partial waves are uniquely defined when all accessible observables are considered, for a wave set which includes \(S\) and \(D\) waves. The inclusion of higher partial waves does not affect our results, and we conclude that there are no mathematical ambiguities in partial-wave analysis of two mesons produced with a linearly polarized photon beam. We present Monte Carlo simulations to illustrate our results.

+ Footnote †: preprint: JLAB-THY-23-3873

## I Introduction

In hadron spectroscopy, the extraction and interpretation of data from scattering experiments typically employ partial-wave analyses to isolate resonant contributions. However, these partial-wave expansions need not be unique, and, depending on the reaction, one may find multiple wave sets which produce mathematically equivalent predictions for the observables. This causes significant problems in the analysis and interpretation of data. These mathematical ambiguities have been extensively studied for various processes [1; 2; 3; 4] and there is no generic prescription to remedy them. Hence, the issue must be addressed on a case-by-case basis (see Refs. [5; 6; 7; 8] for some recent examples). To remedy ambiguities, typically one must generate all possible ambiguous wave sets and select one of them by enforcing additional constraints like global continuity [9] or unitarity [10]. Most previous analyses of mathematical ambiguities for partial-wave analysis examine nucleon or pion-beam production processes. In this work, we introduce the formalism for the examination of mathematical ambiguities in two-pseudoscalar-meson photoproduction processes with a linearly polarized photon beam, such as those present in the GlueX experiment at Jefferson Lab [11]. The physics program for the GlueX experiment focuses on the search for light exotic mesons. Some of the final states under consideration involve the two pseudoscalar mesons \(\eta^{(\prime)}\pi\), for which odd waves have exotic quantum numbers incompatible with a \(q\bar{q}\) assignment [12]. The dominant non-exotic signal in these final states is the \(a_{2}(1320)\) resonance which populates the \(D\) waves [13]. It is essential to first accurately identify all relevant \(D\)-wave components before extracting the weaker exotic signal in the \(P\) waves [5; 14; 15; 16]. In this paper, we address the issue of ambiguous solutions in partial-wave analyses relevant to the extraction of the \(D\)-wave components, but our work is applicable to the general case of photoproduction of any two spinless mesons. Our methods are based on the concept of Barrelet zeros, which we review in Appendix A for completeness. In Section II we introduce our notation and formalism for the photoproduction of two spinless mesons with a linearly polarized photon beam. In Section III we demonstrate, using wave sets with two or three \(D\)-wave components accompanied by an \(S\) wave, that there are no mathematical ambiguities; we also provide arguments supporting the absence of ambiguous solutions in more general cases. In Section IV we present results of numerical simulations, which show that there is indeed a unique solution with the highest likelihood.
However, the likelihood function contains many local maxima that may lead to false solutions if appropriate care is not taken when performing fits. The summary and conclusions are given in Section V.

## II Formalism

We consider the photoproduction on a nucleon target of a meson resonance decaying into two spinless mesons, _e.g._\(\gamma p\to p\,\eta\,\pi^{0}\). We follow Ref. [11], writing

\[I(\Omega,\Phi) =\frac{\mathrm{d}\sigma}{\mathrm{d}t\,\mathrm{d}m_{\eta\pi^{0}}\,\mathrm{d}\Omega\,\mathrm{d}\Phi} =\kappa\sum_{\begin{subarray}{c}\lambda_{\gamma}\lambda_{\gamma}^{\prime}\\ \lambda_{1}\lambda_{2}\end{subarray}}A_{\lambda_{\gamma};\lambda_{1}\lambda_{2}}(\Omega)\rho_{\lambda_{\gamma}\lambda_{\gamma}^{\prime}}^{\gamma}(\Phi)A_{\lambda_{\gamma}^{\prime};\lambda_{1}\lambda_{2}}^{*}(\Omega), \tag{1}\]

where \(\Omega=(\theta,\phi)\) are the decay angles of the resonance in the Gottfried-Jackson or helicity frame, and \(\Phi\) is the polarization angle with respect to the production plane. The spin density matrix is given by \(\rho_{\gamma}(\Phi)=\frac{1}{2}\left(1-P_{\gamma}\cos 2\Phi\,\sigma_{x}-P_{\gamma}\sin 2\Phi\,\sigma_{y}\right)\), and \(P_{\gamma}\) indicates the degree of polarization. Since the analysis of ambiguities is performed independently in each bin of \(t\) and \(\eta\pi^{0}\) invariant mass, these dependences are understood. The phase space factor \(\kappa\) does not depend on angular variables and will be absorbed into the amplitudes. We neglect the dependence on the nucleon spin\({}^{1}\) and write for the helicity amplitudes

Footnote 1: For the complete discussion including nucleon spin, see [11].

\[A_{\lambda_{\gamma}}(\Omega)=\sum_{\ell m}[\ell]_{\lambda_{\gamma};m}Y_{\ell}^{m}(\Omega), \tag{2}\]

where \([\ell]_{\lambda_{\gamma};m}\) refers to the partial wave with angular momentum \(\ell\) and spin projection \(m\), produced with photon helicity \(\lambda_{\gamma}\). One can construct partial waves with definite reflectivity as linear combinations of the partial waves, in such a way that in the high-energy limit positive (negative) reflectivity corresponds to natural (unnatural) parity exchanges in the Gottfried-Jackson frame represented in Fig. 1 [11; 2],

\[[\ell]_{m}^{(\epsilon)}=\frac{1}{2}\left([\ell]_{+1;m}-\epsilon(-1)^{m}[\ell]_{-1;-m}\right). \tag{3}\]

In doing so, we have essentially traded the photon helicity \(\lambda_{\gamma}\) for the reflectivity \(\epsilon\). For convenience, we define the amplitudes \(U^{(\epsilon)}\) and \(\tilde{U}^{(\epsilon)}\) in the reflectivity basis:

\[U^{(\epsilon)}(\Omega) =\sum_{\ell m}[\ell]_{m}^{(\epsilon)}Y_{\ell}^{m}(\Omega), \tag{4a}\] \[\tilde{U}^{(\epsilon)}(\Omega) =\sum_{\ell m}[\ell]_{m}^{(\epsilon)}\left[Y_{\ell}^{m}(\Omega)\right]^{*}. \tag{4b}\]

We write the intensity of the final products from Eq. (1) as

\[I(\Omega,\Phi)=I^{0}(\Omega)-P_{\gamma}I^{1}(\Omega)\cos(2\Phi)-P_{\gamma}I^{2}(\Omega)\sin(2\Phi), \tag{5}\]

where \(I^{0}\) is the unpolarized intensity, and \(I^{1,2}\) are polarized intensities. The intensities are quadratic in the partial waves and can be expressed in terms of the amplitudes in Eq. (4):
\[I^{0}(\Omega) =\phantom{-}\sum_{\epsilon}\left\{|U^{(\epsilon)}(\Omega)|^{2}+|\tilde{U}^{(\epsilon)}(\Omega)|^{2}\right\}\,, \tag{6a}\] \[I^{1}(\Omega) =-2\sum_{\epsilon}\epsilon\,\mathrm{Re}\left\{U^{(\epsilon)}(\Omega)\,\left[\tilde{U}^{(\epsilon)}(\Omega)\right]^{*}\right\}\,, \tag{6b}\] \[I^{2}(\Omega) =-2\sum_{\epsilon}\epsilon\,\mathrm{Im}\left\{U^{(\epsilon)}(\Omega)\,\left[\tilde{U}^{(\epsilon)}(\Omega)\right]^{*}\right\}\,. \tag{6c}\]

The dependence on the polar angle \(\theta\) can be written explicitly by expanding the intensities in a Fourier series in the azimuthal decay angle \(\phi\):

\[I^{0}(\Omega) =\phantom{-}\frac{1}{2\pi}\Big{[}h_{0}^{0}(\theta)+h_{1}^{0}(\theta)\cos(\phi)+\ldots\Big{]}, \tag{7a}\] \[I^{1}(\Omega) =-\frac{1}{2\pi}\Big{[}h_{0}^{1}(\theta)+h_{1}^{1}(\theta)\cos(\phi)+\ldots\Big{]}, \tag{7b}\] \[I^{2}(\Omega) =-\frac{1}{2\pi}\Big{[}0+h_{1}^{2}(\theta)\sin(\phi)+\ldots\Big{]}. \tag{7c}\]

Figure 1: Definition of the angles in the Gottfried-Jackson frame. In the two-meson rest frame, the \(z\) axis is given by the photon beam (\(\gamma\)), and the \(xz\) reaction plane also contains the nucleon target (\(p\)) and recoiling nucleon (\(p^{\prime}\)) momenta. \(\theta\) and \(\phi\) are the polar and azimuthal angles of the \(\eta\). The polarization vector of the photon (\(\overline{\epsilon}_{\gamma}\)) forms an angle \(\Phi\) with the reaction plane.

Here the ellipses denote terms of higher-order harmonics in \(\phi\). The functions \(h^{\alpha}_{M}(\theta)\), which we will refer to as (un)polarized moments, are quadratic in the partial waves and relate them to the measurable angular distribution of the two mesons in their center-of-mass frame. We note that positive- and negative-reflectivity contributions sum up incoherently, and one can decompose \(h^{\alpha}_{M}(\theta)\) into an explicit sum of reflectivity components, _i.e._\(h^{\alpha}_{M}(\theta)=\ ^{(+)}h^{\alpha}_{M}(\theta)+\ ^{(-)}h^{\alpha}_{M}(\theta)\). The two reflectivities can be distinguished from each other due to the dependence on the polarization angle \(\Phi\). We can therefore deal with each reflectivity independently, noting that the mathematical treatment of ambiguities is identical for each. To pursue our analysis of Barrelet zeros, we need to express the observables \(h^{\alpha}_{M}(\theta)\) as polynomials of \(\tan\frac{\theta}{2}\), and then extract their roots. We first employ Eqs. (4) and (6) and rewrite Eq. (7) as:

\[I^{0}(\Omega) =\frac{1}{2\pi}\sum_{\epsilon mm^{\prime}}f^{(\epsilon)}_{m}(\theta)f^{(\epsilon)*}_{m^{\prime}}(\theta)\cos[(m-m^{\prime})\phi], \tag{8a}\] \[I^{1}(\Omega) =\frac{-1}{2\pi}\sum_{\epsilon mm^{\prime}}\epsilon f^{(\epsilon)}_{m}(\theta)f^{(\epsilon)*}_{m^{\prime}}(\theta)\cos[(m+m^{\prime})\phi], \tag{8b}\] \[I^{2}(\Omega) =\frac{-1}{2\pi}\sum_{\epsilon mm^{\prime}}\epsilon f^{(\epsilon)}_{m}(\theta)f^{(\epsilon)*}_{m^{\prime}}(\theta)\sin[(m+m^{\prime})\phi], \tag{8c}\]

where

\[f^{(\epsilon)}_{m}(\theta) =\sum_{\ell}\sqrt{4\pi}\,[\ell]^{(\epsilon)}_{m}Y^{m}_{\ell}(\theta,0) =\sum_{\ell}\sqrt{2\ell+1}\,[\ell]^{(\epsilon)}_{m}d^{\ell}_{m0}(\theta). \tag{9}\]

The Wigner \(d\)-function, \(d^{\ell}_{m0}(\theta)\),\({}^{2}\) is a polynomial in \(\cos\theta\) only for \(m=0\). For \(m\neq 0\) it is a polynomial in \(\cos\theta\) of order \(\ell-|m|\) multiplied by a factor \(\sin^{|m|}(\theta)\).
We thus represent the \(d\)-functions in terms of \(u=\tan\theta/2\) by [2]:

Footnote 2: We use the Wigner \(d\)-function with the convention \(d^{j}_{m^{\prime}m}(\theta)=\langle jm^{\prime}|e^{-i\theta J_{y}}|jm\rangle\).

\[d^{\ell}_{m0}(\theta)=\left(\frac{u}{1+u^{2}}\right)^{\ell}(-1)^{m}\varepsilon^{\ell}_{m}(u)\,, \tag{10}\]

with the polynomial \(\varepsilon^{\ell}_{m}(u)\) defined as:

\[\varepsilon^{\ell}_{m}(u)=\sum_{k}(-1)^{k}\frac{u^{2k+m-\ell}\,\ell!\,[(\ell-m)!(\ell+m)!]^{1/2}}{(\ell-m-k)!(\ell-k)!(m+k)!k!}\,. \tag{11}\]

The summation over \(k\) is restricted to the range \(k\in[\max(0,-m),\min(\ell,\ell-m)]\). By matching Eqs. (7) and (8), we obtain a relation between the observable quantities and the reflectivity partial waves:

\[{}^{(\epsilon)}h^{0}_{M} =\sum_{mm^{\prime}}f^{(\epsilon)}_{m}f^{(\epsilon)*}_{m^{\prime}}\delta_{M,|m-m^{\prime}|}\,, \tag{12a}\] \[{}^{(\epsilon)}h^{1}_{M} =\epsilon\sum_{mm^{\prime}}f^{(\epsilon)}_{m}f^{(\epsilon)*}_{m^{\prime}}\delta_{M,|m+m^{\prime}|}\,, \tag{12b}\] \[{}^{(\epsilon)}h^{2}_{M} =\epsilon\sum_{mm^{\prime}}f^{(\epsilon)}_{m}f^{(\epsilon)*}_{m^{\prime}}\delta_{M,|m+m^{\prime}|}\,\text{sign}(m+m^{\prime})\,. \tag{12c}\]

Since each \(f^{(\epsilon)}_{m}(\theta)\) is a complex function, and each \(h^{\alpha}_{M}(\theta)\) is a real observable expressible as a sum of products of \(f\)-functions, one may simplify the problem by expressing \(h^{\alpha}_{M}(\theta)\) as a sum of squares of complex functions,

\[h^{\alpha}_{M}(u)=\sum_{i}|g_{i}(u)|^{2}. \tag{13}\]

Here, each \(g_{i}(u)\) is a linear combination of the \(f^{(\epsilon)}_{m}(\theta)\), and therefore is also a rational function in \(u\). Hence, conjugation of the roots of each \(g_{i}(u)\) may generate ambiguities of the partial waves. We note that it is most convenient to express every moment in terms of a single basis set of \(g\)'s. Eqs. (12) represent bilinear matrix equations which connect the coefficients of the intensity, Eq. (7), to the partial-wave amplitudes, while Eq. (13) represents a diagonalization of the same equations. Since the moments can be extracted directly from experimental data, the presence of mathematical ambiguities is determined by whether or not replacing roots of the basis functions \(g_{i}\) with their conjugates provides alternate solutions to these matrix equations. To address this we will consider a few examples explicitly, showing that for several sets of partial waves \(\{[\ell]^{(\epsilon)}_{m}\}\) the relations in Eq. (12) are uniquely determined and no ambiguities exist. In other words, there is no way to construct a different set \(\{[\widehat{\ell}]^{(\epsilon)}_{m}\}\) which will yield the same moments.

## III Case studies

Ambiguities in partial-wave analysis with a high-energy pion beam were studied in Ref. [2], where several wave sets with different combinations of waves up to the \(G\) wave were considered. In all cases the spin projections were limited to \(m=0,1\).\({}^{3}\) With this restriction, the intensity only includes three terms in the azimuthal expansion of Eq. (7). Since for a pion beam there is no \(S\) wave with positive reflectivity, there is one relevant \(g\)-function for the positive-reflectivity components, and two for the negative-reflectivity components. The polynomials which generate ambiguities in the negative-reflectivity components are not independent, so the ambiguities for the waves in each reflectivity component are obtained using the roots of a single polynomial.
That is, there are ambiguous solutions in the partial-wave extraction because the observable depends on only two independent polynomials, one for each reflectivity component, and transformations built from combinations of conjugations of the roots of each polynomial produce the same intensity profile. In this section, we will argue that there are no ambiguities in the extraction of partial waves from an experiment using a linearly polarized photon beam. First, we note that any possible ambiguities arising from switching contributions between the two different reflectivity waves may be resolved by making use of the \(\Phi\) dependence of the linearly polarized photon beam. We will thus only consider one reflectivity component and suppress all reflectivity superscripts for convenience. We first consider the simplest non-trivial case by including only the waves \(\{S_{0},D_{0},D_{1}\}\). This case is analogous to Ref. [2]; however, as we will see, the polarized intensity allows us to determine the partial waves without ambiguity. We will then consider the wave set \(\{S_{0},D_{-1},D_{0},D_{1}\}\). These \(D\) waves dominate the production of the \(a_{2}(1320)\) resonance in the \(\eta\pi\) final state _via_ pion exchange [17]. We will not find any ambiguous solution for the extraction of this wave set, once the polarized moments are taken into account. The \(a_{2}(1320)\) is also produced by vector exchanges; in this case, the dominant \(D\) waves are \(\{D_{0},D_{1},D_{2}\}\) [17]. We have confirmed that this wave set is also free of ambiguities, although we omit the calculation for brevity. Our key result is that there are at least two unique \(g\)'s which appear in the Fourier series of the polar angle when two or more spin projections are allowed. These polynomials are independent and have distinct roots. Consequently, these Fourier moments are enough to uniquely determine the partial waves. No transformation on the partial waves leaves every observable invariant, and the observables uniquely define the partial waves for linearly polarized meson photoproduction. We illustrate this fact only with \(S\) and \(D\) waves, but the addition of other waves should not change our results. Adding more waves increases the number of roots of each \(g\)-function, and hence the number of possible ambiguities, but in general we argue that there is no relation between the roots, and therefore partial waves can be unambiguously extracted from the polarized observables.

### \(\mathbf{S}\) and \(\mathbf{D}\) waves with \(\mathbf{m=0,1}\)

We start by analyzing the wave set with \(S\) and \(D\) waves with \(m\) projections \(0,1\) and positive reflectivity, as this set has been analyzed explicitly for a pion-beam production process [2]. Suppose that we have obtained one set of partial waves, \(\{S_{0},D_{0},D_{1}\}\), from an experiment. We can then attempt to generate an ambiguous set of partial waves, \(\left\{\tilde{S}_{0},\tilde{D}_{0},\tilde{D}_{1}\right\}\), from the original set. We start by writing the \(f\)'s from Eq. (9):

\[f_{0}(u) =\frac{\sqrt{5}\left(u^{4}-4u^{2}+1\right)D_{0}}{\left(u^{2}+1\right)^{2}}+S_{0}\,, \tag{14a}\] \[f_{1}(u) =\frac{\sqrt{30}\,u\left(u^{2}-1\right)D_{1}}{\left(u^{2}+1\right)^{2}}\,. \tag{14b}\]

With this wave set, there are seven non-zero functions \(h_{M}^{\alpha}(\theta)\), though they are not all linearly independent.
When the wave set includes only positive \(m\)-projections, there is a simple relation between the polarized moments, \(h_{M}^{2}=h_{M}^{1}\) for \(M>0\) [11]. (For \(M=0\), one has \(h_{0}^{2}=0\); see Eq. (7c).) In addition, we find the relations \(h_{1}^{1}=h_{1}^{0}\) and \(h_{2}^{1}=h_{0}^{0}-h_{0}^{1}\) for this particular wave set. We are thus left with three linearly independent \(h_{M}^{\alpha}(\theta)\). We rewrite the conditions relating the \(h\)'s to the \(f\)'s in matrix form:

\[h_{M}^{0}(\theta)=F^{\dagger}H_{M}^{0}F,\qquad h_{M}^{1}(\theta)=F^{\dagger}H_{M}^{1}F\,, \tag{15}\]

where \(F=(f_{0},f_{1})^{T}\). The three matrices are:

\[H_{0}^{0}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix},\quad H_{1}^{0}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\quad H_{0}^{1}=\begin{pmatrix}1&0\\ 0&0\end{pmatrix}\,. \tag{16}\]

Since the matrices \(H_{0}^{0}\) and \(H_{1}^{0}\) commute, we can simultaneously diagonalize them and simplify the unpolarized moments, obtaining:

\[g_{0}(u) \equiv\frac{1}{\sqrt{2}}\left[f_{1}(u)+f_{0}(u)\right]\,, \tag{17a}\] \[g_{1}(u) \equiv\frac{1}{\sqrt{2}}\left[f_{1}(u)-f_{0}(u)\right]\,. \tag{17b}\]

Since \(f_{0}(u)\) is even and \(f_{1}(u)\) is odd, the new functions fulfill \(g_{1}(-u)=-g_{0}(u)\). Thus, their roots, and the ambiguities from complex conjugation of the roots, are the same. The three independent moments read:

\[h_{0}^{0}= |g_{0}|^{2}+|g_{1}|^{2}, \tag{18a}\] \[h_{1}^{0}= |g_{0}|^{2}-|g_{1}|^{2}, \tag{18b}\] \[h_{0}^{1}= \frac{1}{2}|g_{0}-g_{1}|^{2}. \tag{18c}\]

We note that the moments \(h_{M}^{\alpha}\) simply change by a sign \((-1)^{M}\) under the substitution \(g_{0}\to g_{1}\). It is necessary and sufficient to require that any prospective ambiguity transformation leaves \(|g_{0}|^{2}\) and \(|g_{0}-g_{1}|^{2}\) independently invariant. In terms of the partial waves, these functions can be written:

\[g_{0}= \sqrt{\frac{5}{2}}\frac{1}{(u^{2}+1)^{2}}\Big{[}D_{0}(u^{4}-4u^{2}+1)+\sqrt{6}D_{1}(u^{3}-u)\Big{]}+\frac{1}{\sqrt{2}}S_{0}, \tag{19a}\] \[g_{0}-g_{1}= \sqrt{10}\frac{(u^{4}-4u^{2}+1)D_{0}}{(u^{2}+1)^{2}}+\sqrt{2}S_{0}, \tag{19b}\]

which can be simplified by defining \(v=u-1/u=-2\cot\theta\):

\[g_{0}= \sqrt{\frac{5}{2}}\frac{1}{v^{2}+4}\Big{[}Av^{2}+\sqrt{6}D_{1}v-2B\Big{]}, \tag{20a}\] \[g_{0}-g_{1}= \frac{\sqrt{10}}{v^{2}+4}\Big{[}Av^{2}-2B\Big{]}, \tag{20b}\]

where \(A=D_{0}+S_{0}/\sqrt{5}\) and \(B=D_{0}-2S_{0}/\sqrt{5}\). Recalling that the ambiguous waves should be generated by conjugating roots of these polynomials, we start by considering the first polynomial in Eq. (20a), and factorize it into its Barrelet zeros \(r_{1,2}\):

\[g_{0}\propto(v-r_{1})(v-r_{2}), \tag{21}\]

where we have dropped the irrelevant factors. The roots read:

\[r_{1,2}=\frac{-\sqrt{3}D_{1}\pm\sqrt{4AB+3D_{1}^{2}}}{\sqrt{2}A}. \tag{22}\]

In this case, there are only two Barrelet zeros, and there is thus only one non-trivial independent solution, given by the substitution of one root by its complex conjugate. We invert Eq. (20a) and replace \(r_{1}\) with its conjugate to obtain:

\[\tilde{S}_{0} =\sqrt{5}\frac{A}{6}(2+r_{1}^{*}r_{2}), \tag{23a}\] \[\tilde{D}_{0} =\frac{A}{6}(4-r_{1}^{*}r_{2}), \tag{23b}\] \[\tilde{D}_{1} =-\frac{A}{\sqrt{6}}(r_{1}^{*}+r_{2}). \tag{23c}\]

We note that the new waves obtained by the complex conjugation of \(r_{1}\) and \(r_{2}\) simultaneously lead to the set \(\{S_{0}^{*},D_{0}^{*},D_{1}^{*}\}\), the complex conjugate of the original wave set.
For a given wave set \(\{S_{0},D_{0},D_{1}\}\), the set in Eq. (23) produces the same unpolarized moments \(h_{0,1}^{0}(\theta)\). In the absence of information on the polarized moments, the above wave set would constitute an ambiguous solution. In this example, the use of observables only accessible via a polarized beam is essential to ensure that no mathematical ambiguities can occur. In particular, we must consider the constraints implied by the polarized moment \(h_{0}^{1}=\frac{1}{2}|g_{0}-g_{1}|^{2}\). The combination \(g_{0}-g_{1}\) has only one Barrelet zero, _i.e._\(g_{0}-g_{1}\propto(v-r_{3})(v-r_{3}^{*})\), where \(r_{3}=\sqrt{2}B/A\). This is independent of \(r_{1,2}\), and the only transformation that leaves \(h_{0}^{1}(\theta)\) invariant is the one that replaces each wave by its complex conjugate, since all the waves are defined up to a global phase. Therefore, there is no nontrivial transformation of the partial waves which leaves both the unpolarized moments \(h_{0,1}^{0}(\theta)\) and the polarized moment \(h_{0}^{1}(\theta)\) invariant, and thus there are no ambiguous solutions for this wave set. We illustrate this case for one single energy bin by choosing three random complex numbers for the original waves \(\{S_{0},D_{0},D_{1}\}\),\({}^{4}\) computing the associated ambiguous solution \(\{\tilde{S}_{0},\tilde{D}_{0},\tilde{D}_{1}\}\), and displaying the three moments in Fig. 2. The numerical values of the waves are specified in Table 1.

\begin{table} \begin{tabular}{c|c c} \hline \hline \([\ell]_{m}\) & original & potentially ambiguous \\ \hline \(S_{0}\) & \(0.229\) & \(0.630\) \\ \(D_{0}\) & \(-0.217+0.310i\) & \(0.043+0.056i\) \\ \(D_{1}\) & \(0.770+0.448i\) & \(0.280-0.713i\) \\ \hline \hline \end{tabular} \end{table}

Table 1: Numerical values of our example wave set and the potentially ambiguous wave set generated by the unpolarized moments.

Here again, we see the value of incorporating polarized observables. While the two wave sets produce degenerate solutions for the two unpolarized moments, the incorporation of the polarized moment \(h_{0}^{1}\) breaks the degeneracy.

Footnote 4: We choose \(S_{0}\) to be real positive without loss of generality and rotate the ambiguous solution to bring \(\tilde{S}_{0}\) also to the positive real axis, _i.e._ its phase is zero.

The inclusion of more waves with only the projections \(m=0,1\) will not change our results. Adding more waves with different \(m\) projections could potentially produce ambiguous solutions, each of which leaves invariant one single moment \(h_{M}^{\alpha}(\theta)\), but it would also generate additional nonzero \(h_{M}^{\alpha}(\theta)\) which must remain invariant under each of the ambiguity transformations. One can try to generate other prospective ambiguities, but each potentially ambiguous wave set will be subject to an increasing number of constraints. Hence, we argue that, for most sensible wave sets, the intersection between all these sets of potentially ambiguous waves will be empty.
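To make this construction easy to reproduce, the following is a minimal numerical sketch (Python/NumPy assumed; not from the paper) that takes the original waves of Table 1, builds the Barrelet zeros of Eq. (22), conjugates \(r_{1}\) to obtain the candidate set of Eq. (23), and checks that the unpolarized moments of Eq. (18) are degenerate while \(h_{0}^{1}\) is not. Which root is labeled \(r_{1}\) is a convention, and no global-phase rotation is applied here.

```python
import numpy as np

def moments(S0, D0, D1, v):
    """h_0^0, h_1^0 and h_0^1 of Eq. (18) on a grid of v = -2*cot(theta), via Eq. (20)."""
    A, B = D0 + S0/np.sqrt(5), D0 - 2*S0/np.sqrt(5)
    g0 = np.sqrt(2.5)*(A*v**2 + np.sqrt(6)*D1*v - 2*B)/(v**2 + 4)
    g1 = -np.sqrt(2.5)*(A*v**2 - np.sqrt(6)*D1*v - 2*B)/(v**2 + 4)  # g1(v) = -g0(-v)
    return (np.abs(g0)**2 + np.abs(g1)**2,
            np.abs(g0)**2 - np.abs(g1)**2,
            0.5*np.abs(g0 - g1)**2)

S0, D0, D1 = 0.229, -0.217 + 0.310j, 0.770 + 0.448j       # original waves of Table 1
A, B = D0 + S0/np.sqrt(5), D0 - 2*S0/np.sqrt(5)

disc = np.sqrt(4*A*B + 3*D1**2 + 0j)                       # Barrelet zeros, Eq. (22)
r1 = (-np.sqrt(3)*D1 + disc)/(np.sqrt(2)*A)
r2 = (-np.sqrt(3)*D1 - disc)/(np.sqrt(2)*A)

S0t = np.sqrt(5)*A/6*(2 + np.conj(r1)*r2)                  # Eq. (23): conjugate r1 only
D0t = A/6*(4 - np.conj(r1)*r2)
D1t = -A/np.sqrt(6)*(np.conj(r1) + r2)

v = np.linspace(-4, 4, 9)
h00, h10, h01 = moments(S0, D0, D1, v)
h00t, h10t, h01t = moments(S0t, D0t, D1t, v)
print(np.allclose(h00, h00t), np.allclose(h10, h10t))     # True True: unpolarized moments degenerate
print(np.allclose(h01, h01t))                             # False: h_0^1 resolves the ambiguity
```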
### \(\mathbf{S}\) and \(\mathbf{D}\) waves with \(\mathbf{m=-1,0,1}\)

We now consider the previous example in Section III.1 with the addition of the \(m=-1\) projection. The presence of three different \(m\) projections raises the number of independent \(\cos M\phi\) moments to three (\(M=0,1,2\) in this case), each of them being a function of the polar angle. As we will see, it is impossible to find an ambiguous set leaving all the polar-angle distributions simultaneously invariant. We only consider here the \(m=-1,0,1\) components for the \(D\) wave, but our conclusions can be generalized to any wave set with three (or more) spin projections. It was already noticed by the COMPASS collaboration that no ambiguities are found in the \(\eta\pi\) system once the \(m=2\) component is included in the partial-wave analysis [18].

We again start with a set of partial waves, \(\{S_{0},D_{0},D_{1},D_{-1}\}\), and attempt to generate an ambiguous set \(\{\tilde{S}_{0},\tilde{D}_{0},\tilde{D}_{1},\tilde{D}_{-1}\}\). The \(f\)'s are:

\[f_{0}(u) =\sqrt{5}\frac{\left(u^{4}-4u^{2}+1\right)}{\left(u^{2}+1\right)^{2}}D_{0}+S_{0}\,, \tag{24a}\] \[f_{\pm 1}(u) =\mp\sqrt{30}\frac{u\left(1-u^{2}\right)}{\left(u^{2}+1\right)^{2}}D_{\pm 1}\,. \tag{24b}\]

Our wave set for this example contains all \(|m|\leq 1\) but only positive-reflectivity components. The structure of the moments in Eq. (12) tells us that, when only one reflectivity component is included but all \(m\) projections are allowed, the polarized moments \(h_{M}^{1}\) are not independent of the unpolarized moments \(h_{M}^{0}\). It therefore suffices to study the ambiguities which leave only \(h_{M}^{0}\) and \(h_{M}^{2}\) invariant. Let us first investigate only the unpolarized moments. We rewrite the conditions relating the \(h\)'s to the \(f\)'s in matrix form:

\[h_{M}^{0}(\theta)=F^{\dagger}H_{M}^{0}F\,, \tag{25}\]

where \(F=(f_{-1},f_{0},f_{1})^{T}\) and

\[H_{0}^{0}=\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix},\ \ H_{1}^{0}=\begin{pmatrix}0&1&0\\ 1&0&1\\ 0&1&0\end{pmatrix},\ \ H_{2}^{0}=\begin{pmatrix}0&0&1\\ 0&0&0\\ 1&0&0\end{pmatrix}. \tag{26}\]

Notice here that \(H_{0}^{0}\), \(H_{1}^{0}\) and \(H_{2}^{0}\) are _not_ all simultaneously diagonalizable. Nevertheless, as before, we diagonalize \(H_{1}^{0}\), defining \(g_{0}\), \(g_{1}\), and \(g_{-1}\) as

\[g_{\pm 1}(u) \equiv\frac{1}{2}\left[f_{1}(u)\pm\sqrt{2}f_{0}(u)+f_{-1}(u)\right]\,, \tag{27a}\] \[g_{0}(u) \equiv\frac{-1}{\sqrt{2}}\left[f_{1}(u)-f_{-1}(u)\right]\,. \tag{27b}\]

Again, the parity of the \(f\) functions implies that \(g_{\pm 1}(-u)=g_{\mp 1}(u)\) and \(g_{0}(-u)=-g_{0}(u)\). The two functions \(g_{1}\) and \(g_{-1}\) possess the same Barrelet zeros and therefore the same potential ambiguities. As in the previous example, the moments are even functions of the polar angle and read, in the \(g\) basis,

\[h_{0}^{0}= \left(|g_{-1}|^{2}+|g_{0}|^{2}+|g_{1}|^{2}\right)\,, \tag{28a}\] \[h_{1}^{0}= \sqrt{2}\left(|g_{1}|^{2}-|g_{-1}|^{2}\right)\,, \tag{28b}\] \[h_{2}^{0}= \frac{1}{2}|g_{-1}+g_{1}|^{2}-|g_{0}|^{2}\,. \tag{28c}\]

Again, any transformation on the partial waves which leaves each term above independently unchanged will produce a mathematically ambiguous set of waves. Introducing the change of variables \(v=u-1/u\) as before, the relevant rational fractions are

\[g_{\pm 1} =\pm\sqrt{\frac{5}{2}}\frac{1}{v^{2}+4}\left[Av^{2}\pm\sqrt{6}vD^{-}-2B\right], \tag{29a}\] \[g_{0} =-\sqrt{30}\frac{v}{v^{2}+4}D^{+}, \tag{29b}\]

where \(A,B\) are defined as in the previous subsection and \(D^{\pm}=(D_{1}\pm D_{-1})/\sqrt{2}\). With these definitions, the roots of \(g_{\pm 1}(v)\) are given by Eq. (22) with the substitution \(D_{1}\to\pm D^{-}\). As already noted, the same ambiguous solution will simultaneously leave \(|g_{1}|^{2}\) and \(|g_{-1}|^{2}\) invariant. The new wave set \(\{\tilde{S}_{0},\tilde{D}_{0},\tilde{D}^{-}\}\) is easily obtained from Eq. (23) with the substitution \(D_{1}\to D^{-}\).
There is, in addition, a continuous transformation \(D^{+}\to\exp(i\alpha^{+})D^{+}\) leaving \(|g_{0}|^{2}\) invariant. Since this transformation is independent of the set \(\{\tilde{S}_{0},\tilde{D}_{0},\tilde{D}^{-}\}\), we have, so far, found an ambiguous solution, parametrized by a continuous parameter, leaving the moments \(h_{0}^{0}\) and \(h_{1}^{0}\) invariant. However, the invariance of \(h_{2}^{0}\) requires a continuous transformation of the type \(D^{-}\to\exp(i\alpha^{-})D^{-}\), which contradicts the ambiguous solution \(\{\tilde{S}_{0},\tilde{D}_{0},\tilde{D}^{-}\}\). Therefore the unpolarized moments \(h_{0,1,2}^{0}\) are left invariant only by the one-parameter continuous transformation

\[\{S_{0},D_{0},D^{-},D^{+}\}\to\{S_{0},D_{0},D^{-},e^{i\alpha^{+}}D^{+}\}. \tag{30}\]

Figure 2: _Solid blue lines_, moments obtained from the original waves of Table 1; _dotted red lines_, moments obtained from the ambiguous solution of Table 1. The polarized moment \(h_{0}^{1}\) breaks the ambiguity between the two solutions.

Since the polarized moments \(h^{1}_{0,1,2}\) are related to the unpolarized ones, we only need to consider the moments \(h^{2}_{1,2}\). Their respective matrices, in the form analogous to Eq. (25), are

\[H^{2}_{1}=\begin{pmatrix}0&-1&0\\ -1&0&1\\ 0&1&0\end{pmatrix},\qquad H^{2}_{2}=\begin{pmatrix}-1&0&0\\ 0&0&0\\ 0&0&1\end{pmatrix}. \tag{31}\]

Their expressions in the \(g\) basis are

\[h^{2}_{1} =2\operatorname{Re}\left[(g_{1}-g_{-1})g_{0}^{*}\right]\,, \tag{32a}\] \[h^{2}_{2} =-\sqrt{2}\operatorname{Re}\left[(g_{1}+g_{-1})g_{0}^{*}\right]\,. \tag{32b}\]

The continuous transformation in Eq. (30) changes the phase of \(g_{0}\) and does not leave the polarized moments of Eq. (32) invariant. We thus conclude that there is no ambiguity associated with the extraction of partial waves with a linearly polarized beam for this wave set, other than the trivial ambiguities given by the rotation of all waves by a common phase, or by the complex conjugation of all waves.

## IV Simulations

While in the previous sections we have provided arguments that no mathematical ambiguities exist in partial-wave analysis of two mesons produced with a linearly polarized photon beam, the complicated multidimensional shape of the likelihood, or of other functions used for fitting, can present false solutions, which one might naively label as mathematically ambiguous. In this section, we present some Monte Carlo studies showing this effect. We wish to emphasize that here we only investigate the dependence on statistics for a perfect model. Other factors such as acceptance corrections, resolutions, and other systematic effects are experiment-dependent and may qualitatively alter the results. Studies based on pseudodata or studies involving full experiment simulations will be an important part of subsequent analyses, and might be employed to help discard false solutions or assess the impact of limited statistics. First, pseudodata were generated following the angular intensity given by Eqs. (8) and (9). We used the wave set from Section III.2 and generated the pseudodata using the fixed "true solution" wave set, with the non-zero, positive-reflectivity partial waves shown in Table 2 and a mean linear polarization degree of \(P_{\gamma}=0.85\). We then performed event-by-event fits to extract these four waves, using MINUIT [19] with random initial conditions to minimize the negative log-likelihood (\(-2\log\mathcal{L}\)).
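As an illustrative sketch of this pseudodata setup (Python assumed; this is not the analysis code of the paper), the intensity for a single positive-reflectivity wave set can be built directly from Eqs. (4)-(6) with the Table 2 values. The crude accept-reject envelope and the omission of the normalization integral in the likelihood are simplifying assumptions of the sketch.

```python
import numpy as np
from scipy.special import sph_harm

# Positive-reflectivity waves [l]_m from Table 2, keyed by (l, m)
waves = {(0, 0): 0.499,
         (2, -1): 0.201*np.exp(1j*np.deg2rad(15.4)),
         (2, 0): 0.567*np.exp(1j*np.deg2rad(174.0)),
         (2, 1): 0.624*np.exp(1j*np.deg2rad(-81.6))}
P_gamma = 0.85

def intensity(theta, phi, Phi):
    """Eqs. (4)-(6) for a single epsilon = +1 component; unnormalized."""
    # scipy convention: sph_harm(m, l, azimuthal angle, polar angle)
    U = sum(c*sph_harm(m, l, phi, theta) for (l, m), c in waves.items())         # Eq. (4a)
    Ut = sum(c*np.conj(sph_harm(m, l, phi, theta)) for (l, m), c in waves.items())  # Eq. (4b)
    I0 = np.abs(U)**2 + np.abs(Ut)**2
    I1 = -2.0*np.real(U*np.conj(Ut))
    I2 = -2.0*np.imag(U*np.conj(Ut))
    return I0 - P_gamma*I1*np.cos(2*Phi) - P_gamma*I2*np.sin(2*Phi)              # Eq. (5)

rng = np.random.default_rng(7)
def generate(n, envelope=4.0):
    """Accept-reject sampling with a crude constant envelope (assumed bound)."""
    events = []
    while len(events) < n:
        ct, phi, Phi = rng.uniform(-1, 1), rng.uniform(0, 2*np.pi), rng.uniform(0, 2*np.pi)
        if rng.uniform(0, envelope) < intensity(np.arccos(ct), phi, Phi):
            events.append((np.arccos(ct), phi, Phi))
    return np.array(events)

ev = generate(1000)
nll = -2.0*np.sum(np.log(intensity(ev[:, 0], ev[:, 1], ev[:, 2])))
# A real fit must also include the normalization integral of the intensity over (Omega, Phi).
```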
To explore the effect of differing statistical information on fit results, we examined three different cases with generated data sets of \(10^{2}\), \(10^{4}\), and \(10^{6}\) events. In Fig. 3 we show the resulting negative log-likelihood and amplitude components from 50 fits to the pseudodata with 100 events. Similar results are shown for \(10^{4}\) events in Fig. 5 and \(10^{6}\) events in Fig. 7. For clarity, in these plots we show only a single complex-conjugate solution set, though the fitting procedure did also identify the trivial ambiguities, _i.e._ the set with all phases simultaneously flipped in sign. In Figs. 4, 6 and 8 we show the projections of the intensity onto the polarization angle \(\Phi\) and the decay angles \(\phi\), \(\theta\) for the best five solutions, compared to the distributions generated from the true amplitudes. We observe that for the second-best solution of each simulation (shown in red), the \(\cos\theta\) distribution is almost identical to the best fit's distribution (blue). However, the second-best \(\phi\) distribution is flat (red), while the true distribution has a \(\cos(2\phi)\) component. The reason for the flat distribution is that the magnitude of the \(D_{-1}\) amplitude is zero for this solution (first red triangle in the plots of Figs. 3, 5 and 7). The \(h^{0}_{2}\) moment, which contributes to the \(\cos(2\phi)\) amplitude, requires an interference between \(D_{-1}\) and \(D_{1}\), which is obviously zero when either of these waves has zero magnitude. Note that this solution is found despite MINUIT finishing with successful status. The other solutions do not agree well with the data and therefore clearly do not represent real solutions, but rather artifacts of local maxima in the likelihood. We also note that, for each level of statistics, we observe a similar behavior in the projection of the intensity onto \(\Phi\) for all solutions. The best (blue) and second-best (red) solutions closely match the true solution (dashed black) in each case, while the less-favored solutions cannot be immediately discarded from this projection even at high statistics.

\begin{table} \begin{tabular}{c|c c} \hline \hline \([\ell]_{m}\) & Magnitude & Phase \\ \hline \(S_{0}\) & \(0.499\) & \(0^{\circ}\) \\ \(D_{-1}\) & \(0.201\) & \(15.4^{\circ}\) \\ \(D_{0}\) & \(0.567\) & \(174^{\circ}\) \\ \(D_{1}\) & \(0.624\) & \(-81.6^{\circ}\) \\ \hline \hline \end{tabular} \end{table}

Table 2: Numerical values of our "true" wave set for the simulation studies.

We should emphasize here that, although we have shown explicitly that there are no mathematical ambiguities present, the false solutions found in fits to data or pseudodata must still be addressed. In fits to real data one may not always be able to extract the most favored solution from a fitting procedure and claim that it is the true, mathematically unique, solution, due to detector effects and other systematics. In practice, each solution could be shifted up or down in likelihood, and the 'true' solution could correspond to a local minimum rather than the global one. We do note that, in an environment with no systematics or detector effects, higher statistics allows one to make qualitative judgments about which solution best fits the data by considering projections of the intensity onto the scattering angles. We also note that in these simulations we have relatively few waves. In larger wave sets, the probability of finding the global minimum from fifty random starting points decreases drastically.
These issues are outside the scope of this paper, and we leave methods to address them for future work.

Figure 3: Results of the 5 best (highest-likelihood) fits, out of 50, to 100 events generated with the partial waves given in Table 2, showing the likelihood versus the amplitude magnitude (upper) and phase (lower); the dashed lines show the true values. The wave is indicated by the marker shape (see legend) while the color represents different solutions. The highest likelihood is at the bottom of the plots. The phase of the \(D_{-1}\) (\(D_{1}\)) wave is not shown for the red and orange (purple) fits, as the associated magnitude is zero and, hence, the phase is undetermined. Fits and uncertainties are computed using the HESSE option of MINUIT [19].

Figure 4: Projections of the angular distributions (upper: \(\cos\theta\), center: \(\phi\), lower: \(\Phi\)) as defined in Eqs. (5) and (8). Shown are the data (black circles), the true solution (dashed black), and the different solutions (colored lines), with colors matching the plots in Fig. 3. Bin widths are \(\Delta\cos\theta=0.1\) and \(\Delta\phi=\Delta\Phi=18^{\circ}\).

Figure 5: As Fig. 3 for fits to \(10^{4}\) events. The phase of the \(D_{-1}\) wave is not shown for the red and orange fits, as the associated magnitude is zero and, hence, the phase is undetermined.

Figure 6: As Fig. 4 for fits to \(10^{4}\) events with colors matching the plots in Fig. 5.

## V Summary and Conclusions

In this work, we have presented our formalism for the analysis of mathematical ambiguities for linearly polarized photoproduction of two spinless particles. We demonstrated for two wave sets that, even with a small number of constraints on the partial waves, the partial waves are over-specified by experimental data. We illustrated our results by generating pseudodata and extracting back the partial waves. We found that the best solution matches the input waves. We do not expect larger wave sets to exhibit root-conjugation ambiguities, as the number of constraints increases rapidly with the size of the fitted wave set. Rather, we expect that false solutions which appear in fits to real data come about as artifacts of the complicated multidimensional properties of log-likelihood functions. These may be identified through examination of the angular dependence of the polarized observables.

###### Acknowledgements.

This work was supported by the U.S. Department of Energy contract DE-AC05-06OR23177, under which Jefferson Science Associates, LLC operates Jefferson Lab, and also by the U.S. Department of Energy Grant Nos. DE-FG02-87ER40365, DE-FG02-92ER40735, and DE-FG02-87ER40315, by the Spanish Ministerio de Ciencia e Innovacion (MICINN) Grant Nos. PID2019-106080GB-C21, PID2020-118758GB-I00, and PID2020-112777GB-I00, and by the U.K. Science and Technology Facilities Council under grants ST/P004458/1 and ST/V00106X/1. VM is a Serra Hunter fellow. MA is supported by Generalitat Valenciana under Grant No. CIDEGENT/2020/002. CFR is supported by the Spanish Ministerio de Educacion y Formacion Profesional (MEUFP) under Grant No. BG20/00133. The work of MM is funded by the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy-EXC-2094-390783311. NH is supported by Polish research project Grant No. 2018/29/B/ST2/02576 (National Science Center). DW is supported by National Natural Science Foundation of China Grant No. 12035007 and by the NSFC and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the funds
vided to the Sino-German Collaborative Research Center TRR110 "Symmetries and the Emergence of Structure in QCD" (NSFC Grant No. 12070131001, DFG ProjectID 196253076-TRR 110). This research was supported by the Munich Institute for Astro-, Particle and BioPhysics (MIAPbP) which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094 - 390783311. This work contributes to the aims of the U.S. Department of Energy ExoHad Topical Collaboration, contract DE-SC0023598. ## Appendix A Barrelet zeros for spinless meson scattering We consider the elastic scattering of two spinless mesons [1; 20]. Lorentz invariance allows us to choose the scattering plane as the \(xz\) plane, and to write the intensity as a real positive function of the scattering angle \(z=\cos\theta\). The differential cross section, \[\frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}=|f(s,z)|^{2}, \tag{10}\] is decomposed into partial waves of the decaying resonance, with angular momentum \(\ell\) as \[f(s,z)=\sum_{\ell}(2\ell+1)a_{\ell}(s)P_{\ell}(z). \tag{11}\] The center-of-mass energy \(s\) is a fixed variable in our treatment. In practice, for each bin in \(s\), the sum in Eq. (11) is truncated to \(\ell_{\mathrm{M}}\) and the differential cross section is thus a polynomial of order \(2\ell_{M}\) in the cosine of the scattering angle, \(z\). The \(\ell_{\mathrm{M}}+1\) partial waves are in general complex numbers, but since the intensity is Figure 8: As Fig. 4 for fits to \(10^{6}\) events with colors matching the plots in Fig. 7. Figure 7: As Fig. 3 for fits to \(10^{6}\) events. Uncertainties are negligible and not shown. positive, the cross section can be factorized into its roots, also denoted Barrelet zeros [1, 20], in the following way \[\frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}=C\prod_{i=0}^{\ell_{M}}(z-z_{i})(z-z_{ i}^{*}), \tag{20}\] where the \(s\) dependence of the normalization factor \(C\) and the Barrelet zeros \(z_{i}\) have been omitted. Clearly, the knowledge of a set of partial waves \(\{a_{\ell}\}\) determines the Barrelet zeros \(\{z_{i}\}\), and _vice versa_. However, the differential cross section includes both the roots \(z_{i}\) and their conjugates \(z_{i}^{*}\) while only one of \(\{z_{i},z_{i}^{*}\}\) is used to generate the partial waves; there is no physical distinction between a zero and its complex conjugate, which can lead to ambiguities in the values of the partial waves in Eq. (21). To see this, suppose we know \(\ell_{M}+1\) Barrelet zeros \(\{z_{i}\}\) from which we reconstruct the partial waves: \[a_{\ell}=F_{\ell}(z_{0},z_{1},\ldots,z_{\ell_{M}-1},z_{\ell_{M}}), \tag{21}\] where the functions \(F_{\ell}\) are known for a given \(\ell_{M}\). Alternatively one could choose to use the complex conjugate of any of the \(\ell_{M}+1\) Barrelet zeros. For instance by choosing \[a_{\ell}^{\prime}=F_{\ell}(z_{0}^{*},z_{1},\ldots,z_{\ell_{M}-1}^{*},z_{\ell _{M}}). \tag{22}\] There are \(2^{\ell_{M}+1}\) sets of potentially ambiguous partial waves \(\{a_{\ell}^{\prime}\}\) which lead to the same differential cross section. One can always rotate all the waves with a constant phase (in each bin of energy) such that the \(S\)-wave is real and positive. We are nevertheless left with \(2^{\ell_{M}}\) possibilities for the partial waves in the case of spinless meson scattering.
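The root-conjugation ambiguity of Eqs. (A3)–(A5) can be reproduced in a few lines. The sketch below uses illustrative S-, P- and D-wave values (not taken from any fit): it builds \(f(s,z)\), conjugates one Barrelet zero, and confirms that the reconstructed wave set changes while the differential cross section on the physical region \(z\in[-1,1]\) does not.

```python
import numpy as np
from numpy.polynomial import polynomial as P
from numpy.polynomial.legendre import leg2poly, poly2leg

# Illustrative S-, P-, D-wave amplitudes a_l (not from any fit).
a = np.array([0.5, 0.3 * np.exp(0.7j), 0.4 * np.exp(-1.2j)])
ells = np.arange(len(a))

# f(z) = sum_l (2l+1) a_l P_l(z), converted to a power series in z.
c = leg2poly((2 * ells + 1) * a)

# Barrelet zeros: the complex roots of f(z).
zeros = P.polyroots(c)

# Conjugate one zero, keeping the leading coefficient; on real z,
# |z - z0|^2 = (z - z0)(z - z0*) is unchanged by the swap.
zeros_flip = zeros.copy()
zeros_flip[0] = np.conj(zeros_flip[0])
c_flip = P.polyfromroots(zeros_flip) * c[-1]

# Same intensity on the physical region (agrees to machine precision) ...
z = np.linspace(-1.0, 1.0, 7)
print(np.max(np.abs(np.abs(P.polyval(z, c)) ** 2
                    - np.abs(P.polyval(z, c_flip)) ** 2)))

# ... but a different set of partial waves.
a_flip = poly2leg(c_flip) / (2 * ells + 1)
print(np.round(a, 3))
print(np.round(a_flip, 3))
```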
We describe a formalism to analyze the mathematical ambiguities arising in partial-wave analysis of two spinless mesons produced with a linearly polarized photon beam. We show that the partial waves are uniquely defined when all accessible observables are considered, for a wave set which includes S and D waves. The inclusion of higher partial waves does not affect our results, and we conclude that there are no mathematical ambiguities in partial-wave analysis of two mesons produced with a linearly polarized photon beam. We present Monte Carlo simulations to illustrate our results.
2302.14787
Weyl modules for queer Lie superalgebras
We define global and local Weyl modules for $q \otimes A$, where $q$ is the queer Lie superalgebra and $A$ is an associative commutative unital $\mathbb{C}$-algebra. We prove that global Weyl modules are universal highest weight objects in a certain category, up to the parity reversing functor $\Pi$. Then, assuming that $A$ is finitely generated and using a special technical condition satisfied by the simple root system of $q$, we show that the local Weyl modules are finite dimensional. Further, they are universal highest map-weight objects in a certain category, up to $\Pi$. Finally, we prove a tensor product property for local Weyl modules.
Saudamini Nayak
2023-02-28T17:41:06
http://arxiv.org/abs/2302.14787v1
# Weyl modules for queer Lie superalgebras ###### Abstract. We define global and local Weyl modules for \(q\otimes A\), where \(q\) is the queer Lie superalgebra and \(A\) is an associative commutative unital \(\mathbb{C}\)-algebra. We prove that global Weyl modules are universal highest weight objects in a certain category, up to the parity reversing functor \(\Pi\). Then, assuming that \(A\) is finitely generated and using a special technical condition satisfied by the simple root system of \(q\), we show that the local Weyl modules are finite dimensional. Further, they are universal highest map-weight objects in a certain category, up to \(\Pi\). Finally, we prove a tensor product property for local Weyl modules. Key words and phrases: Queer Lie superalgebra, Weyl module, tensor product 2010 Mathematics Subject Classification: 17B65, 17B10 ## 1. Introduction The theory of Lie superalgebras and their representations has a wide range of applications in many areas of physics and mathematics, such as supersymmetry, string theory, conformal field theory and number theory, to name a few. In 1977, Kac classified the simple Lie superalgebras \(\mathfrak{g}\) over \(\mathbb{C}\) [10]. These are divided into three groups, namely: basic Lie superalgebras (which means the classical and exceptional series), the strange ones (often also called periplectic and queer) and the ones of Cartan type. Kac also classified the simple finite dimensional representations of the basic classical Lie superalgebras in [10, 11, 12]. In recent times there has been much interest in understanding finite dimensional modules for the Lie superalgebra \(\mathfrak{g}\otimes A\), where \(\mathfrak{g}\) is a simple finite dimensional Lie superalgebra and \(A\) is a commutative associative algebra with unit over the complex numbers \(\mathbb{C}\). For example, if we take \(A=\mathbb{C}[X]\), then the Lie superalgebra \(\mathfrak{g}\otimes\mathbb{C}[X]\) is called a current superalgebra. If we take \(A=\mathbb{C}[X,X^{-1}]\), then \(\mathfrak{g}\otimes\mathbb{C}[X,X^{-1}]\) is called a loop superalgebra. If we take \(A=\mathbb{C}[X_{1}^{\pm 1},\dots,X_{n}^{\pm 1}]\), then \(\mathfrak{g}\otimes\mathbb{C}[X_{1}^{\pm 1},\dots,X_{n}^{\pm 1}]\) is called a multiloop superalgebra. The classification of finite dimensional irreducible modules for multiloop superalgebras was obtained in [1, 1]. In a more general setting, the irreducible finite dimensional modules were classified in [1, 10]. Weyl modules play an important role in the representation theory of infinite-dimensional Lie algebras. Chari and Pressley [14] introduced Weyl modules (global and local) for the loop algebra \(\mathfrak{g}\otimes\mathbb{C}[X^{\pm 1}]\), where \(\mathfrak{g}\) is a simple Lie algebra over \(\mathbb{C}\), and proved that these modules are indexed by dominant integral weights of \(\mathfrak{g}\) and are closely related to certain irreducible modules for quantum affine algebras. Feigin and Loktev [13] extended the notion of Weyl modules to the higher-dimensional case, i.e., instead of the loop algebra they worked with the Lie algebra \(\mathfrak{g}\otimes A\), where \(A\) is the coordinate ring of an algebraic variety, and obtained analogs of some of the results of [14]. Later, in [11], Chari et al. considered a more general functorial approach to Weyl modules associated to the algebra \(\mathfrak{g}\otimes A\), where \(A\) is a commutative associative unital algebra over \(\mathbb{C}\).
Also twisted versions of Weyl modules have been defined and investigated in [11, 12]. The Weyl modules for equivariant map algebras has been studied in [12]. However, in super setting the study of Weyl modules has less developed than the corresponding theory in Lie algebras. At first Zhang in [15], define and study the Weyl modules in the spirit of Chari-Pressley for a quantum analogue in the loop case for \(\mathfrak{g}=\mathfrak{sl}(m,n)\). In [16], Calixto, Lemay and Savage study Weyl modules for Lie superalgebras of the form \(\mathfrak{g}\otimes_{\mathbb{C}}A\), where A is an associative commutative unital \(\mathbb{C}\)-algebra and \(\mathfrak{g}\) is a classical Lie superalgebra or \(\mathfrak{sl}(n,n),n\geq 2\). Particularly, they define Weyl modules (global and local) for the Lie superalgebras \(\mathfrak{g}\otimes_{\mathbb{C}}A\) and prove that global Weyl modules are universal highest weight objects in a certain category and local Weyl modules are finite dimensional. Furthermore recently, Bagci, Calixto and Macedo [1] study Weyl modules (global and local) and Weyl functors for the superalgebras \(\mathfrak{g}\otimes A\), where \(\mathfrak{g}\) is either \(\mathfrak{sl}(n,n),\ n\geq 2\), or any finite dimensional simple Lie superalgebra not of type \(\mathfrak{q}(n)\), and \(A\) is an associative, commutative algebra with unit. The goal of this paper is to study global and local Weyl modules for Lie superalgebras \(\mathfrak{g}\otimes_{\mathbb{C}}A\), where A is an associative commutative unital \(\mathbb{C}\)-algebra and \(\mathfrak{g}\) is the _queer Lie superalgebra_. To prove our results, we follow [11]. ## 2. Preliminaries Throughout the paper ground field will be the field of complex numbers \(\mathbb{C}\). By \(\mathbb{Z}_{\geq 0}\) and \(\mathbb{Z}_{>0}\) we denote the nonnegative integers and strictly positive integers, respectively. Also we set \(\mathbb{Z}_{2}=\mathbb{Z}/2\mathbb{Z}\). All supervectorspaces, superalgebras, tensor products etc. are defined over \(\mathbb{C}\). In this section, we review some facts about associative commutative algebras and queer Lie superalgebras that we need in the sequel. ### Basic definitions A vector space \(V\) is called a _supervectorspace_ if \(V\) is \(\mathbb{Z}_{2}\)-graded, i.e., \(V=V_{\bar{0}}\oplus V_{\bar{1}}\). The dimension of the vector space \(V\) is the tuple \((\dim V_{\bar{0}}\mid\dim V_{\bar{1}})\). The parity of a homogeneous element \(v\in V_{i}\) is denoted by \(|v|=i,i\in\mathbb{Z}_{2}\). An element in \(V_{\bar{0}}\) is called even, while an element in \(V_{\bar{1}}\) is called odd. A _subspace_ of \(V\) is a \(\mathbb{Z}_{2}\)-graded vector space \(W=W_{\bar{0}}\oplus W_{\bar{1}}\subseteq V\) with compatible \(\mathbb{Z}_{2}\)-gration, i.e., \(W_{i}\subseteq V_{i}\), for \(i\in\mathbb{Z}_{2}\). We denote by \(\mathbb{C}^{m|n}\) the supervectorspace \(\mathbb{C}^{m}\oplus\mathbb{C}^{n}\), where the first summand is even and the second summand is odd. Given two supervectorspaces \(V\) and \(W\), a linear mapping \(T:V\longrightarrow W\) is _homogeneous of degree \(d\in\mathbb{Z}_{2}\)_ if \(T(V)_{i}\subset W_{i+d}\) for \(i\in\mathbb{Z}_{2}\). The map \(T\) is called even (respectively, odd) if \(d=\bar{0}\) (respectively, \(d=\bar{1}\)). Consider the vector space of all linear transformations from \(V\) to \(W\) denoted as \(\operatorname{Hom}(V,W)\) is \(\mathbb{Z}_{2}\)-graded with \[\operatorname{Hom}(V,W)_{d}=\{T:V\longrightarrow W\mid T\text{ is homogeneous of degree }d\},\] where \(d\in\mathbb{Z}_{2}\). 
Define \(\operatorname{End}(V):=\operatorname{Hom}(V,V)\). The supervectorspaces and homogeneous mappings define a category. If we restrict the mappings to homogeneous even mappings, we obtain an abelian category, say _Vec_. We denote by \(\Pi\) the _parity change functor_ on the category _Vec_, which is defined as \[\Pi(V)=\Pi(V)_{\bar{0}}\oplus\Pi(V)_{\bar{1}},\quad\Pi(V)_{i}=V_{i+\bar{1}},\ \ i\in\mathbb{Z}_{2}\] and \(\Pi f=f\) for \(V\in\operatorname{Vec}\) and \(f:V\longrightarrow W\in\operatorname{Vec}\). For example, if \(V\) has dimension \((1\mid 0)\), then \(\Pi V\) has dimension \((0\mid 1)\). We regard the field \(\mathbb{C}\) as a supervectorspace concentrated in even degree. A supervectorspace \(\mathcal{A}=\mathcal{A}_{\bar{0}}\oplus\mathcal{A}_{\bar{1}}\), equipped with a bilinear associative multiplication satisfying \(\mathcal{A}_{i}\mathcal{A}_{j}\subseteq\mathcal{A}_{i+j}\), for \(i,j\in\mathbb{Z}_{2}\), is called a \(\mathbb{Z}_{2}\)-graded associative algebra or _associative superalgebra_. For instance, \(\operatorname{End}(V)\) is an associative superalgebra. A homomorphism between two superalgebras \(\mathcal{A}\) and \(\mathcal{B}\), i.e., \(f:\mathcal{A}\longrightarrow\mathcal{B}\), is an even linear map (\(f(\mathcal{A}_{i})\subseteq\mathcal{B}_{i}\) for \(i\in\mathbb{Z}_{2}\)) with \(f(ab)=f(a)f(b)\). The tensor product \(\mathcal{A}\otimes\mathcal{B}\) is a superalgebra whose underlying vector space is the tensor product of the supervectorspaces \(\mathcal{A}\) and \(\mathcal{B}\), with the induced \(\mathbb{Z}_{2}\)-grading and multiplication given by \((a_{1}\otimes b_{1})(a_{2}\otimes b_{2})=(-1)^{|a_{2}||b_{1}|}a_{1}a_{2}\otimes b_{1}b_{2}\) for homogeneous elements \(a_{i}\in\mathcal{A}\) and \(b_{i}\in\mathcal{B}\). A _module_ \(M\) over a superalgebra \(\mathcal{A}\) is always understood in the \(\mathbb{Z}_{2}\)-graded sense, that is, \(M=M_{\bar{0}}\oplus M_{\bar{1}}\) such that \(\mathcal{A}_{i}M_{j}\subseteq M_{i+j}\), for \(i,j\in\mathbb{Z}_{2}\). Subalgebras and ideals of superalgebras are \(\mathbb{Z}_{2}\)-graded subalgebras and ideals. A superalgebra that has no non-trivial ideal is called _simple_. A homomorphism between \(\mathcal{A}\)-modules \(M\) and \(N\) is an even linear map \(f:M\longrightarrow N\) (i.e., \(f(M_{i})\subseteq N_{i}\) for \(i\in\mathbb{Z}_{2}\)), with \(f(am)=af(m)\), for all \(a\in\mathcal{A},m\in M\). ### Lie superalgebras **Definition 2.1** (Lie superalgebra).: A _Lie superalgebra_ is a \(\mathbb{Z}_{2}\)-graded vector space \(\mathfrak{g}=\mathfrak{g}_{\bar{0}}\oplus\mathfrak{g}_{\bar{1}}\) with a bilinear multiplication \([\cdot,\cdot]\) satisfying the following axioms: 1. The multiplication respects the grading: \([\mathfrak{g}_{i},\mathfrak{g}_{j}]\subseteq\mathfrak{g}_{i+j}\) for all \(i,j\in\mathbb{Z}_{2}\). 2. Skew-supersymmetry: \([a,b]=-(-1)^{|a||b|}[b,a]\), for all homogeneous elements \(a,b\in\mathfrak{g}\). 3. Super Jacobi identity: \([a,[b,c]]=[[a,b],c]+(-1)^{|a||b|}[b,[a,c]]\), for all homogeneous elements \(a,b,c\in\mathfrak{g}\). **Example 2.2**.: Let \(A\) be any associative superalgebra. Then we can make \(A\) into a Lie superalgebra by defining \([a,b]:=ab-(-1)^{|a||b|}ba\) for all homogeneous elements \(a,b\in A\) and extending \([\cdot,\cdot]\) by linearity. We call this the Lie superalgebra associated with \(A\). A concrete example is the general linear Lie superalgebra \(\mathfrak{gl}(V)\) associated with the associative superalgebra \(\operatorname{End}(V)\) of all linear operators on a \(\mathbb{Z}_{2}\)-graded vector space \(V\).
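As a concrete check of Definition 2.1 and Example 2.2, the following sketch (illustrative only) realizes \(\operatorname{End}(\mathbb{C}^{1|1})\) as \(2\times 2\) matrices, with parity read off from the block structure, and verifies skew-supersymmetry and the super Jacobi identity for the associated bracket on all matrix units.

```python
import numpy as np
from itertools import product

# End(C^{1|1}) as 2x2 matrices: even = block-diagonal, odd = off-diagonal.
def parity(i, j):           # parity of the matrix unit E_{ij}; 0 even, 1 odd
    return (i + j) % 2

def E(i, j):
    m = np.zeros((2, 2))
    m[i, j] = 1.0
    return m

basis = [(E(i, j), parity(i, j)) for i, j in product(range(2), repeat=2)]

def sbracket(x, px, y, py):
    """Super bracket [x,y] = xy - (-1)^{|x||y|} yx for homogeneous x, y."""
    return x @ y - (-1) ** (px * py) * (y @ x)

for (x, px), (y, py), (z, pz) in product(basis, repeat=3):
    # skew-supersymmetry: [x,y] = -(-1)^{|x||y|} [y,x]
    assert np.allclose(sbracket(x, px, y, py),
                       -(-1) ** (px * py) * sbracket(y, py, x, px))
    # super Jacobi: [x,[y,z]] = [[x,y],z] + (-1)^{|x||y|} [y,[x,z]]
    lhs = sbracket(x, px, sbracket(y, py, z, pz), (py + pz) % 2)
    rhs = (sbracket(sbracket(x, px, y, py), (px + py) % 2, z, pz)
           + (-1) ** (px * py) * sbracket(y, py, sbracket(x, px, z, pz),
                                          (px + pz) % 2))
    assert np.allclose(lhs, rhs)
print("gl(1|1): skew-supersymmetry and super Jacobi verified on matrix units")
```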
A homomorphism \(\rho\) between Lie superalgebras is a map which preserves the structure in them. Precisely \(\rho:\mathfrak{g}\longrightarrow\mathfrak{g}_{1}\) is an even linear map with \(\rho([x,y])=[\rho x,\rho y]\) for all \(x,y\in\mathfrak{g}\). **Definition 2.3**.: A representaion of Lie superalgebra \(\mathfrak{g}\) is a Lie superalgebra homomorphism \(\rho:\mathfrak{g}\longrightarrow\mathfrak{gl}(V)\),i.e., \(\rho\) is an even linear with \(\rho[x,y]=\rho(x)\rho(y)-(-1)^{|x||y|}\rho(y)\rho(x)\). Alternatively \(V\) is called \(\mathfrak{g}\)-module and \(V\) is irreducible if there are no submodule other than \(0\) and \(V\) itself. **Lemma 2.4**.: _[_1_]_ _Suppose \(\mathfrak{g}\) is a Lie superalgebra and \(V\) is an irreducible \(\mathfrak{g}\)-module such that \(Iv=0\) for some ideal \(I\) of \(\mathfrak{g}\) and non-zero vector \(v\in V\). Then \(IV=0\)._ Given a Lie superalgebra \(\mathfrak{g}\), we will denote by \(\mathbf{U}(\mathfrak{g})\) its _universal enveloping superalgebra_. The universal enveloping superalgebra \(\mathbf{U}(\mathfrak{g})\) is constructed from the tensor algebra \(T(\mathfrak{g})\) by factoring out the ideal generated by the elements \([u,v]-u\otimes v+(-1)^{|u||v|}v\otimes u\), for homgeneous elements \(u,v\) in \(\mathfrak{g}\). Now we state an analogous of PBW Theorem in super setting, which ensures that \(\mathfrak{g}\mapsto\mathbf{U}(\mathfrak{g})\) is an inclusion by precisely giving a basis for \(\mathbf{U}(\mathfrak{g})\). **Lemma 2.5** ([12], Theorem 6.1.1).: _Let \(\mathfrak{g}=\mathfrak{g}_{\bar{0}}\oplus\mathfrak{g}_{\bar{1}}\) be a Lie superalgebra. If \(x_{1},\dots,x_{m}\) be a basis of \(\mathfrak{g}_{\bar{0}}\) and \(y_{1},\dots,y_{n}\) be a basis of \(\mathfrak{g}_{\bar{1}}\), then the monomials_ \[x_{1}^{a_{1}}\cdots x_{m}^{a_{m}}y_{1}^{b_{1}}\cdots y_{n}^{b_{n}},\quad a_{1},\dots,a_{m}\geq 0,\quad\text{and}\quad b_{1},\dots,b_{n}\in\{0,1\},\] _form a basis of \(\mathbf{U}(\mathfrak{g})\). In particular, if \(\mathfrak{g}\) is finite dimensional and \(\mathfrak{g}_{\bar{0}}=0\), then \(\mathbf{U}(\mathfrak{g})\) is finite dimensional._ ### The queer Lie superalgebra Let \(V=V_{\bar{0}}\oplus V_{\bar{1}}\) be a supervectorspace with \(\dim V_{\bar{0}}=\dim V_{\bar{1}}\). Choose \(P\in\operatorname{End}(V)_{\bar{1}}\) such that \(P^{2}=-1\). The subspace \[\mathfrak{q}(V)=\{T\in\operatorname{End}(V)\mid[T,P]=0\}\] is a subalgebra of \(\mathfrak{gl}(V)\) called the, queer Lie superalgebra. If \(V=\mathbb{C}^{n|n}\), then with a homogeneous basis we identify \(\mathfrak{gl}(V)\) with \(\mathfrak{gl}(n\mid n)\). Now for explicit realization of the queer Lie superalgebra \(\mathfrak{q}(n)\) in terms of matrices, set \[P:=\left(\begin{array}{c|c}0&I_{n}\\ \hline-I_{n}&0\end{array}\right). \tag{2.1}\] Then, for \(X\in\mathfrak{gl}(n\mid n)\), we have \(X\in\mathfrak{q}(n)\) if and only if \(XP-(-1)^{|X|}PX=0\) holds. Hence \(\mathfrak{q}(n)\) consisting of matrices of the form \[\begin{pmatrix}A&B\\ \hline B&A\end{pmatrix} \tag{2.2}\] where \(A\) and \(B\) arbitrary \(n\times n\) matrices with \[\mathfrak{q}(n)_{\bar{0}}=\left\{\begin{pmatrix}A&0\\ \hline 0&A\end{pmatrix}\ |\ A\in\mathfrak{gl}(n)\right\}\quad\mathfrak{q}(n)_{ \bar{1}}=\left\{\begin{pmatrix}0&B\\ \hline B&0\end{pmatrix}\ |\ B\in\mathfrak{gl}(n)\right\}. \tag{2.3}\] From now on we denote \(\mathfrak{q}(n)=:\mathfrak{q}\). A subalgebra of \(\mathfrak{q}\) is called a Cartan subalgebra if it is a self-normalizing nilpotent subalgebra. 
Every such subalgebra has a non-trivial odd part. Denote by \(N^{-},H,N^{+}\) respectively the strictly lower triangular, diagonal and strictly upper triangular matrices in \(\mathfrak{gl}(n)\). Then we define \[\mathfrak{h}_{\bar{0}}=\left\{\begin{pmatrix}A&0\\ \hline 0&A\end{pmatrix}\ |\ A\in H\right\}\quad\mathfrak{h}_{\bar{1}}=\left\{ \begin{pmatrix}0&B\\ \hline B&0\end{pmatrix}\ |\ B\in H\right\}, \tag{2.4}\] \[\mathfrak{n}_{\bar{0}}^{\pm}=\left\{\begin{pmatrix}A&0\\ \hline 0&A\end{pmatrix}\ |\ A\in N^{\pm}\right\}\quad\mathfrak{n}_{\bar{1}}^{\pm}= \left\{\begin{pmatrix}0&B\\ \hline B&0\end{pmatrix}\ |\ B\in N^{\pm}\right\}, \tag{2.5}\] \[\mathfrak{h}=\mathfrak{h}_{\bar{0}}\oplus\mathfrak{h}_{\bar{1}}\quad\text{ and}\quad\mathfrak{n}^{\pm}=\mathfrak{n}_{\bar{0}}^{\pm}\oplus\mathfrak{n}_{\bar{1}}^{\pm}. \tag{2.6}\] **Lemma 2.6** ([12], Lemma 2.4.1).: _We have a vector space decomposition_ \[\mathfrak{q}=\mathfrak{n}^{-}\oplus\mathfrak{h}\oplus\mathfrak{n}^{+}\] _such that \(\mathfrak{n}^{-},\mathfrak{n}^{+},\mathfrak{h}\) are graded subalgebras of \(\mathfrak{q}\) with \(\mathfrak{n}^{\pm}\) nilpotent. The subalgebra \(\mathfrak{h}\) is called the standard Cartan subalgebra of \(\mathfrak{q}\)._ Given any Lie superalgebra \(\mathfrak{a}\), the map \(x\mapsto x\otimes 1+1\otimes x\), \(x\in\mathfrak{a}\), extends to an algebra homomorphism \(\mathbf{U}(\mathfrak{a})\longrightarrow\mathbf{U}(\mathfrak{a})\otimes \mathbf{U}(\mathfrak{a})\). By the PBW Theorem (see Lemma 2.5), we know that if \(\mathfrak{b}\) and \(\mathfrak{c}\) are subalgebras of \(\mathfrak{a}\) such that \(\mathfrak{a}=\mathfrak{b}\oplus\mathfrak{c}\) as vector spaces, then \[\mathbf{U}(\mathfrak{a})\cong\mathbf{U}(\mathfrak{b})\otimes\mathbf{U}( \mathfrak{c}).\] Thus, from Lemma 2.6, we obtain the triangular decomposition of \(\mathbf{U}(\mathfrak{q})\): \[\mathbf{U}(\mathfrak{q})\cong\mathbf{U}(\mathfrak{n}^{-})\otimes\mathbf{U}( \mathfrak{h})\otimes\mathbf{U}(\mathfrak{n}^{+}). \tag{2.7}\] ### Root system for \(\mathfrak{q}\) We fix \(\mathfrak{h}=\mathfrak{h}_{\bar{0}}\oplus\mathfrak{h}_{\bar{1}}\) to be the standard Cartan subalgebra of \(\mathfrak{q}\); it is given by \[\mathfrak{h}_{\bar{0}}=\mathbb{C}k_{1}\oplus\cdots\oplus\mathbb{C}k_{n}\quad \text{and}\quad\mathfrak{h}_{\bar{1}}=\mathbb{C}k_{1}^{\prime}\oplus\cdots \oplus\mathbb{C}k_{n}^{\prime}\] where \[k_{i}=\begin{pmatrix}E_{i,i}&0\\ \hline 0&E_{i,i}\end{pmatrix}\quad\text{and}\quad k_{i}^{\prime}=\begin{pmatrix}0& E_{i,i}\\ \hline E_{i,i}&0\end{pmatrix}\] and \(E_{i,j}\) is the \(n\times n\) matrix having \(1\) at the \((i,j)\)-entry and \(0\) elsewhere. The Cartan subalgebra \(\mathfrak{h}\) has a nontrivial odd part \(\mathfrak{h}_{\bar{1}}\) and hence is not abelian, as \([\mathfrak{h}_{\bar{0}},\mathfrak{h}]=0\) and \([\mathfrak{h}_{\bar{1}},\mathfrak{h}_{\bar{1}}]=\mathfrak{h}_{\bar{0}}\). Note that all Cartan subalgebras of \(\mathfrak{q}\) are conjugate to \(\mathfrak{h}\). For \(1\leq i\neq j\leq n\), we set \[e_{i,j}=\begin{pmatrix}E_{i,j}&0\\ \hline 0&E_{i,j}\end{pmatrix}\quad\text{and}\quad e_{i,j}^{\prime}=\begin{pmatrix} 0&E_{i,j}\\ \hline E_{i,j}&0\end{pmatrix}.\] The set \(\{e_{i,j},e_{i,j}^{{}^{\prime}}\ |\ 1\leq i,j\leq n\}\) is a homogeneous linear basis for \(\mathfrak{q}\). The even subalgebra \(\mathfrak{q}_{\bar{0}}\) is spanned by \(\{e_{i,j}\ |\ 1\leq i,j\leq n\}\) and hence is isomorphic to the general linear Lie algebra \(\mathfrak{gl}(n)\), and the odd space \(\mathfrak{q}_{\bar{1}}\) is isomorphic, as a \(\mathfrak{q}_{\bar{0}}\)-module, to the adjoint module.
Let \(\{\epsilon_{1},\ldots,\epsilon_{n}\}\) be the basis of \(\mathfrak{h}_{\bar{0}}^{*}\) dual to \(\{k_{1},\ldots,k_{n}\}\), defined by \(\epsilon_{i}(\left(\begin{array}{c|c}h&0\\ \hline 0&h\end{array}\right))=a_{i}\), for any diagonal matrix \(h\) with diagonal entries \((a_{1},a_{2},\ldots,a_{n})\). We denote \(h_{i}:=k_{i}-k_{i+1}\) for \(1\leq i\leq n-1\). Given \(\alpha\in\mathfrak{h}_{\bar{0}}^{*}\), let \[\mathfrak{q}_{\alpha}=\{x\in\mathfrak{q}\mid[h,x]=\alpha(h)x\;\text{ for all }h\in\mathfrak{h}_{\bar{0}}\}.\] Note that \(\mathfrak{q}_{0}=\mathfrak{h}\). We call \(\alpha\neq 0\) a root if \(\mathfrak{q}_{\alpha}\neq 0\). The set \(\Phi=\{\alpha\neq 0\mid\mathfrak{q}_{\alpha}\neq 0\}\) is called the root system of \(\mathfrak{q}\). A root \(\alpha\) is called even if \(\mathfrak{q}_{\alpha}\cap\mathfrak{q}_{\bar{0}}\neq 0\) and odd if \(\mathfrak{q}_{\alpha}\cap\mathfrak{q}_{\bar{1}}\neq 0\). The root system \(\Phi=\Phi_{\bar{0}}\cup\Phi_{\bar{1}}\) of \(\mathfrak{q}\) has identical even and odd parts, where \(\Phi_{\bar{0}}\) denotes the set of even roots and \(\Phi_{\bar{1}}\) denotes the set of odd roots. Namely, \(\Phi_{\bar{0}}=\Phi_{\bar{1}}=\{\epsilon_{i}-\epsilon_{j}\mid 1\leq i\neq j\leq n\}\). For each root \(\alpha=\epsilon_{i}-\epsilon_{j},\;1\leq i\neq j\leq n\), the root space has dimension \((1\mid 1)\), \[\mathfrak{q}_{\alpha}=\mathbb{C}e_{i,j}\oplus\mathbb{C}e^{\prime}_{i,j},\] and \[\mathfrak{q}=\bigoplus_{\alpha\in\mathfrak{h}_{\bar{0}}^{*}}\mathfrak{q}_{ \alpha}\] is the root space decomposition of \(\mathfrak{q}\). A root \(\alpha\) is called _positive_ (resp. _negative_) if \(\mathfrak{q}_{\alpha}\cap\mathfrak{n}^{+}\neq 0\) (resp. \(\mathfrak{q}_{\alpha}\cap\mathfrak{n}^{-}\neq 0\)). We denote by \(\Phi^{+}\) (resp. \(\Phi^{-}\)) the subset of positive (resp. negative) roots. Denote by \(\Delta\) the set of simple roots. Thus, \[\Phi^{+}=\{\epsilon_{i}-\epsilon_{j}\mid 1\leq i<j\leq n\},\;\Phi^{-}=-\Phi^{+}, \;\Phi=\Phi^{+}\cup\Phi^{-},\;\Delta=\{\epsilon_{i}-\epsilon_{i+1}\mid 1\leq i \leq n-1\}.\] Hence, \[\mathfrak{n}^{+}=\bigoplus_{\alpha\in\Phi^{+}}\mathfrak{q}_{\alpha}\quad \text{and}\quad\mathfrak{n}^{-}=\bigoplus_{\alpha\in\Phi^{-}}\mathfrak{q}_{ \alpha}.\] A maximal solvable subalgebra of \(\mathfrak{q}\) is called a Borel subalgebra \(\mathfrak{b}\). Every Borel subalgebra of \(\mathfrak{q}\) is conjugate to the standard Borel subalgebra \(\mathfrak{b}_{+}=\mathfrak{h}\oplus\mathfrak{n}^{+}\) of \(\mathfrak{q}\). Set \(\alpha_{i}:=\epsilon_{i}-\epsilon_{i+1}\); the root space \(\mathfrak{q}_{\alpha_{i}}\) is spanned by \[e_{i}:=\left(\begin{array}{c|c}E_{i,i+1}&0\\ \hline 0&E_{i,i+1}\end{array}\right)\quad\text{and}\quad e^{\prime}_{i}=\left( \begin{array}{c|c}0&E_{i,i+1}\\ \hline E_{i,i+1}&0\end{array}\right),\] while \(\mathfrak{q}_{-\alpha_{i}}\) is spanned by \[f_{i}=\left(\begin{array}{c|c}E_{i+1,i}&0\\ \hline 0&E_{i+1,i}\end{array}\right)\quad\text{and}\quad f^{\prime}_{i}=\left( \begin{array}{c|c}0&E_{i+1,i}\\ \hline E_{i+1,i}&0\end{array}\right).\] Hence \(\mathfrak{n}^{+}\) is generated by the \(e_{i},e^{\prime}_{i}\), \(\mathfrak{n}^{-}\) is generated by the \(f_{i},f^{\prime}_{i}\), and the standard Borel subalgebra is generated by \(e_{i},e^{\prime}_{i},h_{i},k^{\prime}_{i}\).
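The block formulas above can be verified mechanically. The sketch below (illustrative, for \(n=2\)) builds \(P\), \(k_{i}\), \(k^{\prime}_{i}\) and the generators \(e_{1},e^{\prime}_{1},f_{1},f^{\prime}_{1}\), checks the membership condition \(XP-(-1)^{|X|}PX=0\), and confirms \([e_{1},f_{1}]=h_{1}\) together with the odd bracket \([e^{\prime}_{1},f^{\prime}_{1}]=k_{1}+k_{2}\), which reappears among the relations of Proposition 2.7 below.

```python
import numpy as np

n = 2
def E(i, j):                      # n x n matrix unit, 1-based indices
    m = np.zeros((n, n))
    m[i - 1, j - 1] = 1.0
    return m

def even(A):  # block-diagonal embedding, the even part of q(n)
    return np.block([[A, np.zeros((n, n))], [np.zeros((n, n)), A]])

def odd(B):   # block-off-diagonal embedding, the odd part of q(n)
    return np.block([[np.zeros((n, n)), B], [B, np.zeros((n, n))]])

P = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])

k1, k2   = even(E(1, 1)), even(E(2, 2))
e1, f1   = even(E(1, 2)), even(E(2, 1))
e1p, f1p = odd(E(1, 2)),  odd(E(2, 1))

def in_q(X, p):                   # membership: X P - (-1)^{|X|} P X = 0
    return np.allclose(X @ P - (-1) ** p * P @ X, 0)

assert all(in_q(X, 0) for X in (k1, k2, e1, f1))
assert all(in_q(X, 1) for X in (e1p, f1p))

# [e_1, f_1] = k_1 - k_2 = h_1  (even-even bracket = commutator)
assert np.allclose(e1 @ f1 - f1 @ e1, k1 - k2)
# [e'_1, f'_1] = k_1 + k_2      (odd-odd bracket = anticommutator)
assert np.allclose(e1p @ f1p + f1p @ e1p, k1 + k2)
print("q(2): membership condition and bracket relations verified")
```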
For \(\alpha=\epsilon_{i}-\epsilon_{j}\in\Phi_{\bar{0}}^{+}\), let \(s_{\alpha}:\mathfrak{h}_{\bar{0}}^{*}\longrightarrow\mathfrak{h}_{\bar{0}}^{*}\) be the corresponding reflection and is defined by \[s_{\epsilon_{i}-\epsilon_{j}}(\epsilon_{i})=\epsilon_{j},\quad s_{\epsilon_{i}- \epsilon_{j}}(\epsilon_{k})=\epsilon_{k},\text{ for }k\neq i,j.\] The Weyl group of \(\mathfrak{q}\) is the Weyl group \(W\) of \(\mathfrak{q}_{\bar{0}}\) generated by \(s_{\alpha}\) where \(\alpha\in\Phi_{\bar{0}}^{+}\)which is the symmetric group \(\mathfrak{S}_{n}\) in \(n\) letters. Let \(I:=\{1,2,\ldots,n-1\}\) and \(J:=\{1,2,\ldots,n\}\). **Proposition 2.7**.: _[_6_]_ _The Lie superalgebra \(\mathfrak{q}\) generated by the elements \(e_{i},e^{\prime}_{i},f_{i},f^{\prime}_{i}\) for \(i\in I\), \(\mathfrak{h}_{\bar{0}}\) and \(k^{\prime}_{j}\) for \(j\in J\) with the following defining relations_ \[[h,h^{\prime}]=0\,\,\,forh,h^{\prime}\in\mathfrak{h}_{\bar{0}},\] \[[h,e_{i}]=\alpha_{i}(h)e_{i},[h,e^{\prime}_{i}]=\alpha_{i}(h)e^{ \prime}_{i}\quad\text{ for }h\in\mathfrak{h}_{\bar{0}},i\in I,\] \[[h,f_{i}]=-\alpha_{i}(h)f_{i},[h,f^{\prime}_{i}]=-\alpha_{i}(h)f^ {\prime}_{i}\quad\text{ for }h\in\mathfrak{h}_{\bar{0}},i\in I,\] \[[h,k^{\prime}_{l}]=0,\,\,\,\text{for }h\in\mathfrak{h}_{\bar{0}} \,\,l\in J,\] \[[e_{i},f_{j}]=\delta_{ij}(k_{i}-k_{i+1}),[e_{i},f^{\prime}_{j}]= \delta_{ij}(k^{\prime}_{i}-k^{\prime}_{i+1})\text{for }i,j\in I,\] \[[e^{\prime}_{i},f_{j}]=\delta_{ij}(k^{\prime}_{i}-k^{\prime}_{i+ 1}),[k^{\prime}_{l},e_{i}]=\alpha_{i}(k_{l})e^{\prime}_{i}\text{ for }i,j\in I,l\in J,\] \[[k^{\prime}_{l},f_{i}]=-\alpha_{i}(k_{l})f^{\prime}_{i},[e^{\prime }_{i},f^{\prime}_{j}]=\delta_{ij}(k_{i}+k_{i+1}),\text{for }i,j\in I,l\in J,\] \[[k^{\prime}_{l},e^{\prime}_{i}]=\begin{cases}e_{i}&\text{if }l=i,i+1 \\ 0&\text{otherwise}\end{cases}\quad\text{ for }i\in I,j\in J,\] \[[k^{\prime}_{l},f^{\prime}_{i}]=\begin{cases}f_{i}&\text{if }l=i,i+1 \\ 0&\text{otherwise}\end{cases}\quad\text{ for }i\in I,j\in J,\] \[[e_{i},e^{\prime}_{j}]=[e^{\prime}_{i},e^{\prime}_{j}]=[f_{i},f^ {\prime}_{j}]=[f^{\prime}_{i},f^{\prime}_{j}]=0\,\,\,\text{for }1\leq i,j\leq n-1,|i-j|\neq 1\] \[[e_{i},e_{j}]=[f_{i},f_{j}]=0,\,\text{for }i,j\in I,|i-j|>1,\] \[[e_{i},e_{i+1}]=[e^{\prime}_{i},e^{\prime}_{i+1}],[e_{i},e^{ \prime}_{i+1}]=[e^{\prime}_{i},e_{i+1}],\] \[[f_{i+1},f_{i}]=[f^{\prime}_{i+1},f^{\prime}_{i}],[f_{i+1},f^{ \prime}_{i}]=[f^{\prime}_{i+1},f_{i}],\] \[[k^{\prime}_{i},k^{\prime}_{j}]=\delta_{ij}2k_{i}\,\,\,\text{ for }j\in J\] \[[e_{i},[e_{i},e_{j}]]=[e^{\prime}_{i},[e_{i},e_{j}]]=0\,\,\,\text{ for }i,j\in I,|i-j|=1\] \[[f_{i},[f_{i},f_{j}]]=[f^{\prime}_{i},[f_{i},f_{j}]]=0\,\,\,\text{ for }i,j\in I,|i-j|=1.\] The simple roots \(\Delta\) of \(\mathfrak{q}\) satisfy the following property: \[\text{For all }\alpha\in\Delta_{\bar{1}},\,\text{there exists }\alpha^{\prime}\in\Phi_{\bar{1}}^{+}\,\text{ such that }\alpha+\alpha^{\prime}\in\Phi. \tag{2.8}\] This is true, as every root of \(\mathfrak{q}\) is even as well as odd. ### Clifford Algebra **Definition 2.8** (Clifford Algebra).: Let \(V\) be a finite dimensional vector space and \(f:V\times V\to\mathbb{C}\) be a symmetric bilinear form. We call the pair \((V,f)\) a quadratic pair. Let \(I\) be the ideal of the tensor algebra \(T(V)\) generated by the elements \[x\otimes x-f(x,x)1,\quad x\in V\] and set \(\text{Cliff}(V,f)=T(V)/I\). The algebra \(\text{Cliff}(V,f)\) is all the Clifford algebra of the pair \((V,f)\) over \(\mathbb{C}\). **Remark 2.9** ([19], Ch. 12, Def. 
4.1 and Theorem 4.2).: For a quadratic pair \((V,f)\), there exists a linear map \(\theta:V\to\text{Cliff}(V,f)\) such that the pair \((\text{Cliff}(V,f),\theta)\) has the following universal property: For all linear maps \(\eta:V\to A\) such that \(\eta(v)^{2}=f(v,v)1_{A}\) for all \(v\in V\), where \(A\) is a unital algebra, there exists a unique algebra homomorphism \(\eta^{\prime}:\text{Cliff}(V,f)\to A\) such that \(\eta^{\prime}\circ\theta=\eta\), in other words, we have the following commutative diagram. Clifford algebra have a natural superalgebra structure. In fact, \(T(V)\) possess a \(\mathbb{Z}_{2}\)-grading such that \(I\) is homogeneous, so the grading descends to \(\operatorname{Cliff}(V,f)\). Thus resulting superalgbera \(\operatorname{Cliff}(V,f)\) is sometimes called the Clifford superalgebra. When \(f\) is known from the context, we shall write \(\operatorname{Cliff}(V)\) instead of \(\operatorname{Cliff}(V,f)\). For \(\lambda\in\mathfrak{h}_{\bar{0}}^{*}\), define an even super antisymmetric bilinear form \(F_{\lambda}\) on \(\mathfrak{h}_{\bar{1}}\), by setting \(F_{\lambda}(u,v)=\lambda([u,v])\) and denote \(E_{\lambda}:=\mathfrak{h}_{\bar{1}}/\ker F_{\lambda}\). Let \(\operatorname{Cliff}(\lambda)\) be the Clifford superalgebra with respect to quadratic pair \((E_{\lambda},F_{\lambda})\) and \(\operatorname{Cliff}(\lambda)\) is endowed with a canonical \(\mathbb{Z}_{2}\)-grading. By definition we have an isomorphism of superalgebras \[\operatorname{Cliff}(\lambda)\cong U(\mathfrak{h})/I_{\lambda}, \tag{2.9}\] where \(I_{\lambda}\) denoted the ideal of \(U(\mathfrak{h})\) generated by \(\ker F_{\lambda}\) and \(a-\lambda(a)\) for \(a\in\mathfrak{h}_{\bar{0}}\). Let \(\mathfrak{h}_{1}^{\prime}\subseteq\mathfrak{h}_{\bar{1}}\) be a maximal isotropic subspace with respect to \(F_{\lambda}\) and define the Lie superalgebra \(\mathfrak{h}^{\prime}:=\mathfrak{h}_{\bar{0}}\oplus\mathfrak{h}_{\bar{1}}^{\prime}\). Let \(\mathbb{C}v_{\lambda}\), be the one-dimensional \(\mathfrak{h}_{\bar{0}}\)-module defined by \(hv_{\lambda}=\lambda(h)v_{\lambda}\) for all \(h\in\mathfrak{h}_{\bar{0}}\), extends to an \(\mathfrak{h}^{\prime}\)-module by setting \(\mathfrak{h}_{\bar{1}}^{\prime}v_{\lambda}=0\). Then the induced module \[\operatorname{Ind}_{\mathfrak{h}^{\prime}}^{\mathfrak{h}}\mathbb{C}v_{\lambda }=\mathfrak{h}\otimes\mathbb{C}v_{\lambda}\] is an irreducible \(\mathfrak{h}\)-module. If \(\operatorname{Ind}_{\mathfrak{h}^{\prime}}^{\mathfrak{h}}\mathbb{C}v_{\lambda}\) is a finite dimensional irreducible \(\mathfrak{h}\)-module, then \(\operatorname{Ind}_{\mathfrak{h}^{\prime}}^{\mathfrak{h}}\mathbb{C}v_{\lambda}\) is a finite dimensional irreducible module over \(\operatorname{Cliff}(\lambda)\) via the pullback through (2.9). We may consider \(\operatorname{Cliff}(\lambda)\) as the associative \(\mathbb{C}\)-algebra generated by the identity \(\mathbf{1}=1+I_{\lambda}\) and \(t_{\bar{i}}:=k_{\overline{i}}+I_{\lambda}\) satisfying the relations \[t_{\bar{i}}t_{\bar{j}}+t_{\bar{j}}t_{\bar{i}}=2\delta_{ij},\quad i,j=1,2,\dots,n. \tag{2.10}\] Let \(S=\oplus_{i=1}^{n}\mathbb{C}t_{\bar{i}}\) and \(\lambda=(\lambda_{1},\dots,\lambda_{n})\in\mathbb{C}^{n}\) and denote by \(B_{\lambda}:S\times S\to\mathbb{C}\) the symmetric bilinear form defined by \(B_{\lambda}(t_{\bar{i}},t_{\bar{j}})=\delta_{ij}\lambda_{i}\). Let \(\operatorname{Cliff}_{S}(\lambda)\) be the unique up to isomorphism Clifford algebra associated to \(S\) and \(B_{\lambda}\). 
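The relations (2.10) can be realized explicitly by tensor products of Pauli matrices; the following sketch (a standard construction, included only as an illustration) builds such generators and verifies the relations numerically.

```python
import numpy as np
from functools import reduce
from itertools import product

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def kron(*ms):
    return reduce(np.kron, ms)

def clifford_generators(n):
    """t_1,...,t_n with t_i t_j + t_j t_i = 2 delta_ij (Jordan-Wigner)."""
    m = (n + 1) // 2              # number of 2x2 tensor factors
    ts = []
    for k in range(m):
        head = [sz] * k           # sigma_z string enforces anticommutation
        tail = [I2] * (m - k - 1)
        ts.append(kron(*head, sx, *tail))
        ts.append(kron(*head, sy, *tail))
    return ts[:n]

ts = clifford_generators(4)
dim = ts[0].shape[0]
for i, j in product(range(4), repeat=2):
    anti = ts[i] @ ts[j] + ts[j] @ ts[i]
    assert np.allclose(anti, 2 * (i == j) * np.eye(dim))
print("t_i t_j + t_j t_i = 2 delta_ij verified for n = 4")
```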
Now define \(S(\lambda):=S/\ker B_{\lambda}\) and denote by \(\beta_{\lambda}\) the restriction of \(B_{\lambda}\) to \(S(\lambda)\). Let \(N_{\lambda}=\{i\mid\lambda_{i}\neq 0\},Z_{\lambda}=\{j\mid\lambda_{j}=0\}\) and \(\ell=\#N_{\lambda}\). Set \[\lambda_{N}:=(\lambda_{i_{1}},\dots,\lambda_{i_{\ell}}),\quad 0_{Z}:=(\lambda_{j_{1}},\dots,\lambda_{j_{n-\ell}})=(0,\dots,0).\] One can see that \(\ker B_{\lambda}=\oplus_{j\in Z_{\lambda}}\mathbb{C}t_{\overline{j}}\) and that \(\operatorname{Cliff}_{S}(\lambda_{N})\), built on \(\oplus_{i\in N_{\lambda}}\mathbb{C}t_{\overline{i}}\), is the Clifford algebra corresponding to \((S(\lambda),\beta_{\lambda})\). Further, \[\operatorname{Cliff}_{S}(\lambda)\cong\operatorname{Cliff}_{S}(\lambda_{N}) \otimes_{\mathbb{C}}\operatorname{Cliff}_{S}(0_{Z})\cong\operatorname{Cliff}_ {S}(\lambda_{N})\otimes_{\mathbb{C}}\bigwedge\ker B_{\lambda}. \tag{2.11}\] Here \(\bigwedge U\) denotes the exterior algebra of the vector space \(U\). Thus, by the isomorphisms in (2.11), every \(\operatorname{Cliff}_{S}(\lambda)\)-module can be considered as a \(\operatorname{Cliff}_{S}(\lambda_{N})\)-module under the embedding \[\operatorname{Cliff}_{S}(\lambda_{N})=\operatorname{Cliff}_{S}(\lambda_{N}) \otimes_{\mathbb{C}}1\to\operatorname{Cliff}_{S}(\lambda_{N})\otimes_{ \mathbb{C}}\operatorname{Cliff}_{S}(0_{Z}).\] Then one can easily prove the following lemma. **Lemma 2.10**.: _Let \(M\) be an irreducible \(\operatorname{Cliff}_{S}(\lambda)\)-module. Then \(M\) is an irreducible \(\operatorname{Cliff}_{S}(\lambda_{N})\)-module and \(t_{\overline{i}}v=0\) for every \(i\in Z_{\lambda}\) and \(v\in M\)._ ### Highest weight modules over \(\mathfrak{q}\) From now on, for a superalgebra \(\mathcal{A}\), an \(\mathcal{A}\)-module will be understood as an \(\mathcal{A}\)-supermodule. A \(\mathfrak{q}\)-module \(M\) is called a weight module if it admits a weight space decomposition \[M=\bigoplus_{\mu\in\mathfrak{h}_{\bar{0}}^{*}}M_{\mu},\;\;\text{where}\;\;M_{\mu}=\{m \in M\mid\text{$hm=\mu(h)m$ for all $h\in\mathfrak{h}_{\bar{0}}$}\}.\] An element \(\mu\in\mathfrak{h}_{\bar{0}}^{*}\) such that \(M_{\mu}\neq 0\) is called a _weight_ of \(M\) and \(M_{\mu}\) is called a weight space. The set of all weights of \(M\) is denoted by \(\operatorname{wt}(M)\). **Definition 2.11**.: A weight module \(M\) is called a _highest weight module_ with highest weight \(\lambda\in\mathfrak{h}_{\bar{0}}^{*}\) if \(M_{\lambda}\) is finite dimensional and satisfies the following conditions: 1. \(M\) is generated by \(M_{\lambda}\), 2. \(e_{i}v=e_{i}^{\prime}v=0\) for all \(v\in M_{\lambda},\;i\in I\). **Definition 2.12**.: Let \(\Lambda_{0}^{+}\) and \(\Lambda^{+}\) be the set of \(\mathfrak{gl}(n)\)-dominant integral weights and the set of \(\mathfrak{q}\)-dominant integral weights respectively, given by \[\Lambda_{0}^{+} :=\{\lambda_{1}\epsilon_{1}+\cdots+\lambda_{n}\epsilon_{n}\in \mathfrak{h}_{\bar{0}}^{*}\mid\lambda_{i}-\lambda_{i+1}\in\mathbb{Z}_{\geq 0}\;\; \text{for all}\;\;i\in I\}\] \[\Lambda^{+} :=\{\lambda_{1}\epsilon_{1}+\cdots+\lambda_{n}\epsilon_{n}\in \Lambda_{0}^{+}\mid\lambda_{i}=\lambda_{i+1}\implies\lambda_{i}=\lambda_{i+1}=0\;\;\text{for all}\;\;i\in I\}.\] **Proposition 2.13** ([6], Prop. 1).: _Let \(\mathbf{v}\) be a finite dimensional simple \(\mathfrak{b}_{+}\)-module:_ 1. _The maximal nilpotent subalgebra_ \(\mathfrak{n}^{+}\) _of_ \(\mathfrak{b}_{+}\) _acts on_ \(\mathbf{v}\) _trivially._ 2. 
_There exists a unique weight_ \(\lambda\in\mathfrak{h}_{\bar{0}}^{*}\) _such that_ \(\mathbf{v}\) _is endowed with a canonical left Cliff_\((\lambda)\)_-module structure and_ \(\lambda\) _determines_ \(\mathbf{v}\) _up to the parity reversing functor_ \(\Pi\)_._ 3. _For all_ \(h\in\mathfrak{h}_{\bar{0}},v\in\mathbf{v}\)_, we have_ \(hv=\lambda(h)v\)_._ **Remark 2.14**.: From Proposition 2.13, we know that the dimension of the highest weight space of a highest weight \(\mathfrak{q}\)-module with highest weight \(\lambda\) is the same as the dimension of an irreducible Cliff(\(\lambda\))-module. On the other hand, all irreducible Cliff(\(\lambda\))-modules have the same dimension (see, for example, [1, Table 2]). Thus the dimension of the highest weight space is constant for all highest weight modules with highest weight \(\lambda\). **Proposition 2.15**.: _Let \(\lambda\in\Lambda^{+}\) and \(V(\lambda)\) be the irreducible highest weight \(\mathfrak{q}\)-module generated by an irreducible finite dimensional \(\mathfrak{b}_{+}\)-module \(\mathbf{v}\). Then \(f_{i}^{\lambda(h_{i})+1}v=0\), for all \(v\in\mathbf{v}\) and \(i\in I\)._ Proof.: Note that one can easily show by induction that for \(k\in\mathbb{Z}_{\geq 0}\), \[e_{i}f_{i}^{k}=f_{i}^{k}e_{i}+kf_{i}^{k-1}(h_{i}-(k-1)).\] Since \(\mathfrak{n}^{+}v=0\) for all \(v\in\mathbf{v}\), we have \[e_{i}f_{i}^{k}v =f_{i}^{k}e_{i}(v)+kf_{i}^{k-1}(h_{i}-(k-1))v\] \[=(\lambda(h_{i})-(k-1))kf_{i}^{k-1}v.\] If \(k=\lambda(h_{i})+1\), one can see that \(e_{i}f_{i}^{\lambda(h_{i})+1}v=0\). For \(i\neq j\), as \([e_{j}^{\prime},f_{i}]=0=[e_{j},f_{i}]\), we have \(e_{j}f_{i}^{\lambda(h_{i})+1}v=0=e_{j}^{\prime}f_{i}^{\lambda(h_{i})+1}v\). Now suppose that \(e_{i}^{\prime}f_{i}^{\lambda(h_{i})+1}v\neq 0\). Since \([e_{i},e_{i}^{\prime}]=0\), we have \[0=[e_{i},e_{i}^{\prime}]f_{i}^{\lambda(h_{i})+1}v=e_{i}(e_{i}^{\prime}f_{i}^{ \lambda(h_{i})+1}v)-e_{i}^{\prime}(e_{i}f_{i}^{\lambda(h_{i})+1}v).\] We get \(e_{i}(e_{i}^{\prime}f_{i}^{\lambda(h_{i})+1}v)=0\) and \(e_{i}^{\prime}(e_{i}f_{i}^{\lambda(h_{i})+1}v)=0\). Similarly, as \([e_{i}^{\prime},e_{i}^{\prime}]=0\), we get \[e_{i}^{\prime}(e_{i}^{\prime}f_{i}^{\lambda(h_{i})+1}v)=0.\] Also, for \(i\neq j\), we have \[e_{j}(e_{i}^{\prime}f_{i}^{\lambda(h_{i})+1}v)=e_{j}^{\prime}(e_{i}^{\prime}f_{i}^ {\lambda(h_{i})+1}v)=0.\] If \(\lambda(h_{i})\geq 1\), then the weight of the weight vector \(e_{i}^{\prime}f_{i}^{\lambda(h_{i})+1}v\) is \(\lambda-\lambda(h_{i})\alpha_{i}<\lambda\). Thus, \(e_{i}^{\prime}f_{i}^{\lambda(h_{i})+1}v\) would generate a nontrivial proper submodule of \(V(\lambda)\), which contradicts the irreducibility of \(V(\lambda)\). If \(\lambda(h_{i})=0\), then \(\lambda_{i}=\lambda_{i+1}=0\) and, since \(v\in\mathbf{v}\), by Lemma 2.10 we get \(k_{i}^{\prime}v=k_{i+1}^{\prime}v=0\). Now \[e_{i}^{\prime}f_{i}v=f_{i}e_{i}^{\prime}v+(k_{i}^{\prime}-k_{i+1}^{\prime})v=0.\] Therefore, in any case \(e_{i}^{\prime}f_{i}^{\lambda(h_{i})+1}v=0\). Similarly, if \(f_{i}^{\lambda(h_{i})+1}v\neq 0\), it would generate a non-trivial proper submodule of \(V(\lambda)\). Hence, \(f_{i}^{\lambda(h_{i})+1}v=0\) for all \(v\in\mathbf{v}\). **Definition 2.16**.: Let \(\mathbf{v}(\lambda)\) be a finite dimensional irreducible \(\mathfrak{b}_{+}\)-module determined by \(\lambda\) up to \(\Pi\). 
The _Weyl module_\(W(\lambda)\) of \(\mathfrak{q}\) with highest weight \(\lambda\) is defined to be \[W(\lambda):=\mathbf{U}(\mathfrak{q})\otimes_{\mathbf{U}(\mathfrak{b}_{+})} \mathbf{v}(\lambda).\] Note that in the above definition, the structure of \(W(\lambda)\) is determined by \(\lambda\) up to \(\mathrm{II}\). **Proposition 2.17** ([6], Theorem 2, 4).: * _For any weight_ \(\lambda\)_,_ \(W(\lambda)\) _has a unique maximal submodule_ \(N(\lambda)\)_._ * _For each finite dimensional simple_ \(\mathfrak{q}\)_-module_ \(M\)_, there exists a unique weight_ \(\lambda\in\Lambda_{0}^{+}\) _and a surjective homomorphism_ \(W(\lambda)\to M\) _(one of the two possible_ \(W(\lambda)\)_)._ * _The irreducible quotient_ \(L(\lambda):=W(\lambda)/N(\lambda)\) _is finite dimensional if and only if_ \(\lambda\in\Lambda^{+}\)_._ **Remark 2.18**.: For \(\lambda\in\Lambda^{+}\), up to isomorphism, there exists two simple finite dimensional modules w.r.t. highest weight \(\lambda\) namely \(L(\lambda)\) and \(\Pi L(\lambda)\), where \(\Pi\) is the parity change functor. Let \(P(\lambda)=\{\mu\in\mathfrak{h}_{\bar{0}}^{*}\mid M_{\mu}\neq 0\}\). Let \(Q\) (resp. \(Q^{+}\)) be the integer span (resp. \(\mathbb{Z}_{>0}\)-span ) of the simple roots. Denote by \(\leq\) the usual partial order on \(P(\lambda)\), \[\mu_{1},\mu_{2}\in P(\lambda),\quad\mu_{1}\leq\mu_{2}\iff\mu_{2}-\mu_{1}\in Q ^{+}.\] Since \(\mathfrak{q}_{\bar{0}}=\mathfrak{gl}(n)\) is reductive Lie algebra, for each even simple root \(\alpha_{i}\) we can choose elements \(e_{i}\in\mathfrak{q}_{\alpha_{i}},f_{i}\in\mathfrak{q}_{-\alpha_{i}}\), and \(h_{i}\in\mathfrak{h}_{\bar{0}}\), such that the subalgebra generated by these elements is isomorphic to \(\mathfrak{sl}(2)\), with these elements satisfying the relations for the standard Chevalley generators. In this case, we say that the set \(\{e_{i},f_{i},h_{i}\}\) is an \(\mathfrak{sl}(2)\)-triple. Denote the irreducible highest weight \(\mathfrak{q}\)-module with highest weight \(\lambda\in\mathfrak{h}_{0}^{*}\), by \(L(\lambda)\) which is unique upto \(\Pi\) and consider the weight space decomposition \(L(\lambda)=\bigoplus_{\mu\in\mathfrak{h}_{0}^{*}}L(\lambda)_{\mu}\). **Definition 2.19** (The module \(\bar{L}(\lambda)\)).: For \(\lambda\in\Lambda^{+}\), we define \(\bar{L}(\lambda)\) (up to \(\Pi\)) to be the \(\mathfrak{q}\)-module generated by \(L(\lambda)_{\lambda}\) with defining relations \[\mathfrak{n}^{+}k_{\lambda}=0,\quad hk_{\lambda}=\lambda(h)k_{\lambda},\quad f _{i}^{\lambda(h_{i})+1}k_{\lambda}=0,\quad\text{for all}\quad k_{\lambda}\in L (\lambda)_{\lambda},h\in\mathfrak{h}_{\bar{0}},\;\;i\in I. \tag{2.12}\] **Proposition 2.20**.: _The module \(\bar{L}(\lambda)\) is finite dimensional for all \(\lambda\in\Lambda^{+}\)._ Proof.: Let \(x_{1},\ldots,x_{m}\) and \(y_{1},\ldots,y_{m}\) be a homogeneous basis of \(\mathfrak{q}_{\bar{0}}\) and \(\mathfrak{q}_{\bar{1}}\), respectively. Then by Lemma 2.5, the monomials \[x_{1}^{a_{1}}\cdots x_{m}^{a_{m}}y_{1}^{b_{1}}\cdots y_{m}^{b_{m}},\quad a_{1}, \ldots,a_{m}\geq 0,\quad\text{and}\quad b_{1},\ldots,b_{m}\in\{0,1\},\] form a basis of \(\mathbf{U}(\mathfrak{q})\). Since \(\{y_{1}^{b_{1}}\cdots y_{m}^{b_{m}}\mid b_{j}=0,1\}\) is a finite set, it is enough to show \(\mathbf{U}(\mathfrak{q}_{\bar{0}})L(\lambda)_{\lambda}\) is finite dimensional. 
Consider irreducible \(\mathfrak{q}_{\bar{0}}\)-module \(V(\lambda)\) with highest weight \(\lambda\in\Lambda^{+}.\) Since \(\mathfrak{q}_{\bar{0}}=\mathfrak{gl}(n)\) is reductive Lie algebra, \(\lambda(h_{i})\in\mathbb{N}\) for each even simple root \(\alpha_{i}\) with \(i\in I\). Hence we have \(V(\lambda)\) is finite dimensional. Note the centre \(Z(\mathfrak{q}_{\bar{0}})\) acts as a scalar on \(V(\lambda)\). Hence \(V(\lambda)\) is isomorphic to \(\mathfrak{q}_{\bar{0}}\)-module generated by a vector \(u_{\lambda}\) with defining relations \[\mathfrak{n}_{\bar{0}}^{+}u_{\lambda}=0,\quad hu_{\lambda}=\lambda(h)u_{ \lambda},\quad f_{\alpha}^{\lambda(h_{i})+1}u_{\lambda}=0,\quad\text{for all} \quad h\in\mathfrak{h}_{\bar{0}},\;\;i\in I.\] Now \(\mathbf{U}(\mathfrak{q}_{\bar{0}})L(\lambda)_{\lambda}\subset\sum_{k_{\lambda} \in L(\lambda)_{\lambda}}\mathbf{U}(\mathfrak{q}_{\bar{0}})k_{\lambda}\subseteq \bar{L}(\lambda)\) be the \(\mathfrak{q}_{\bar{0}}\)-submodule of \(\bar{L}(\lambda)\). For any \(k_{\lambda}\in L(\lambda)_{\lambda}\), we known that \(\mathbf{U}(\mathfrak{q}_{\bar{0}})k_{\lambda}\) is a highest weight module over \(\mathfrak{q}_{\bar{0}}\) with highest weight \(\lambda\) satisfying \(f_{\alpha}^{\lambda(h_{\alpha})+1}k_{\lambda}=0\). Thus, \(\mathbf{U}(\mathfrak{q}_{\bar{0}})k_{\lambda}\) is cyclic and \(k_{\lambda}\) satisfies (2.12) for any \(k_{\lambda}\in L(\lambda)_{\lambda}\). Then there exists a unique surjective homomorphism of \(\mathfrak{q}_{\bar{0}}\)-submodules satisfying \[\psi:V(\lambda)\longrightarrow\mathbf{U}(\mathfrak{q}_{\bar{0}})k_{\lambda}, \quad xu_{\lambda}\mapsto xk_{\lambda}\] for all \(x\in\mathbf{U}(\mathfrak{q}_{\bar{0}})\). Since \(\psi\) is surjective and \(V(\lambda)\) is finite dimensional, it follows that \(\mathbf{U}(\mathfrak{q}_{\bar{0}})k_{\lambda}\) finite dimensional for any \(k_{\lambda}\in L(\lambda)_{\lambda}\) and hence \(\mathbf{U}(\mathfrak{q}_{\bar{0}})L(\lambda)_{\lambda}\) is finite dimensional. **Proposition 2.21**.: _For highest weight \(\lambda\in\Lambda^{+}\), consider finite dimensional highest weight \(\mathfrak{q}\)-module \(V\). Then there exists a surjective homomorphism of \(\mathfrak{q}\)-modules \(\psi_{1}:\bar{L}(\lambda)\longrightarrow V\) up to \(\Pi\). Moreover, there exists a unique upto \(\Pi\) submodule \(W\) of \(\bar{L}(\lambda)\) such that \(V\cong\bar{L}(\lambda)/W\) or \(V\cong\Pi\left(\bar{L}(\lambda)\right)/\Pi\left(W\right)\) as \(\mathfrak{q}\)-modules._ Proof.: Consider the highest weight \(\mathfrak{q}\)-module \(V\), with highest weight \(\lambda\), is generated by an irreducible \(\mathfrak{b}_{+}\)-module \(\mathbf{v}\). For \(v_{\lambda}\in\mathbf{v}\) the first two relations in (2.12) hold. Since \(V\) is finite dimensional, then by Proposition 2.15, \(f_{i}^{\lambda(h_{i})+1}v_{\lambda}=0\), for all \(v_{\lambda}\in\mathbf{v},i\in I\). Also, by Remark 2.14, the dimension of the highest weight space \(\bar{L}(\lambda)_{\lambda}\) is equal to the dimension of the highest weight space \(\mathbf{v}\). Thus the map \(\psi_{1}:\bar{L}(\lambda)\longrightarrow V\) induced by \(\bar{L}(\lambda)_{\lambda}\longrightarrow\mathbf{v}\), is a surjective homomorphism of \(\mathfrak{q}\)-modules up to \(\Pi\). Since module homomorphism preserve weight spaces, the kernel of \(\psi_{1}\) is unique upto \(\Pi\) and say \(\ker(\psi_{1})=W\). 
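The commutation identity \(e_{i}f_{i}^{k}=f_{i}^{k}e_{i}+kf_{i}^{k-1}(h_{i}-(k-1))\) used in the proof of Proposition 2.15 is easy to verify in any finite dimensional \(\mathfrak{sl}(2)\)-representation; the sketch below does so in the \((m+1)\)-dimensional irreducible representation (a numerical illustration only).

```python
import numpy as np

def sl2_irrep(m):
    """Chevalley generators e, f, h of sl(2) on the (m+1)-dim irrep."""
    e = np.zeros((m + 1, m + 1))
    f = np.zeros((m + 1, m + 1))
    for r in range(m):
        # weight basis v_0, ..., v_m with h v_r = (m - 2r) v_r
        e[r, r + 1] = (r + 1) * (m - r)   # e v_{r+1} = (r+1)(m-r) v_r
        f[r + 1, r] = 1.0                 # f v_r = v_{r+1}
    h = e @ f - f @ e                     # h = [e, f]
    return e, f, h

m = 5
e, f, h = sl2_irrep(m)
Id = np.eye(m + 1)
for k in range(1, m + 2):
    fk = np.linalg.matrix_power(f, k)
    fkm1 = np.linalg.matrix_power(f, k - 1)
    assert np.allclose(e @ fk, fk @ e + k * fkm1 @ (h - (k - 1) * Id))
print("e f^k = f^k e + k f^{k-1}(h - (k-1)) verified on the 6-dim irrep")
```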
Since every simple finite dimensional \(\mathfrak{q}\)-module is a highest weight module with highest weight \(\lambda\in\Lambda^{+}\), Proposition 2.21 applies to all simple finite dimensional \(\mathfrak{q}\)-modules. ## 3. Global Weyl modules Let \(A\) denote a finitely generated commutative associative unital algebra and \(\mathfrak{q}(n)=:\mathfrak{q}\) with \(n\geq 2\). Take \(\mathfrak{q}\otimes A\), with \(\mathbb{Z}_{2}\)-grading is given by \((\mathfrak{q}\otimes A)_{j}=\mathfrak{q}_{j}\otimes A,j\in\mathbb{Z}_{2}\). Then \(\mathfrak{q}\otimes A\) with bracket of any two homogeneous elements \[[x\otimes a,y\otimes b]:=[x,y]\otimes ab,\quad x,y\in\mathfrak{q}_{j},a,b\in A\] is a Lie superalgebra. Further, we identify \(\mathfrak{q}\) with a subalgebra of \(\mathfrak{q}\otimes A\) via the isomorphism \(\mathfrak{q}\cong\mathfrak{q}\otimes\mathbb{C}\) and the inclusion \(\mathfrak{q}\otimes\mathbb{C}\subseteq\mathfrak{q}\otimes A\). Let \(\mathcal{I}\) be the full subcategory of the category of \(\mathfrak{q}\) modules whose objects are those modules that are isomorphic to direct sums of irreducible finite dimensional \(\mathfrak{q}_{\bar{0}}\) modules. Note that if \(V\in\mathcal{I}\) then any element of \(V\) lies in a finite dimensional \(\mathfrak{q}_{\bar{0}}\) submodule of \(V\). Let \(\mathcal{I}_{\mathfrak{q}\otimes A,\mathfrak{q}_{0}}\) denote the full subcategory of the category of \(\mathfrak{q}\otimes A\)-modules whose objects are the \(\mathfrak{q}\otimes A\)-modules whose restriction to \(\mathfrak{q}_{\bar{0}}\) lies in \(\mathcal{I}\). **Lemma 3.1**.: _Category \(\mathcal{I}\) is closed under taking submodules, quotients, arbitrary direct sums and finite tensor products._ Regard \(U(\mathfrak{q}\otimes A)\) as a right \(\mathfrak{q}\)-module via right multiplication and given a left \(\mathfrak{q}\)- module \(V\) (up to \(\Pi\)), set \[P_{A}(V):=\mathbf{U}(\mathfrak{q}\otimes A)\otimes_{\mathbf{U}(\mathfrak{q})}V. \tag{3.1}\] Then \(P_{A}(V)\) is left \(\mathfrak{q}\otimes A\)-module by left multiplication and we have an isomorphism of vector spaces \[P_{A}(V)\cong\mathbf{U}(\mathfrak{q}\otimes A_{+})\otimes_{\mathbb{C}}V \tag{3.2}\] where \(A_{+}\) is a vector space complement to \(\mathbb{C}\subseteq A\). Note that \(P_{A}(V)\) is defined up to \(\Pi\). **Lemma 3.2**.: _Let \(V\) be a \(\mathfrak{q}\)-module whose restriction to \(\mathfrak{q}_{\bar{0}}\) lies in \(\mathcal{I}\). Then \(P_{A}(V)\in\mathcal{I}_{\mathfrak{q}\otimes A,\mathfrak{q}_{\bar{0}}}\)._ Proof.: Note that \(\mathfrak{q}\) is finitely semisimple, that is, it is isomorphic to direct sums of irreducible finite dimensional \(\mathfrak{q}_{\bar{0}}\)-modules via adjoint representation. So \(\mathfrak{q}\in\mathcal{I}\) and \(\mathfrak{q}\otimes A\cong\mathfrak{q}^{\oplus\dim(A)}\) as \(\mathfrak{q}_{\bar{0}}\)-modules. By Lemma 3.1, \(\mathfrak{q}\otimes A\) is a finitely semisimple \(\mathfrak{q}_{\bar{0}}\)-module, that is, it is isomorphic to direct sums of irreducible finite dimensional \(\mathfrak{q}_{\bar{0}}\) modules. Again, by Lemma 3.1, as \(\mathcal{I}\) is closed under finite tensor product and arbitrary direct sum so \(\mathbf{U}(\mathfrak{q}\otimes A)\) is finitely semisimple \(\mathfrak{q}_{\bar{0}}\)-module. Also as \(V\in\mathcal{I}\), we have \(\mathbf{U}(\mathfrak{q}\otimes A)\otimes_{\mathbb{C}}V\) is a finitely semisimple \(\mathfrak{q}_{\bar{0}}\)-module. 
Consider the map \[\mathbf{U}(\mathfrak{q}\otimes A)\otimes_{\mathbb{C}}V\to\mathbf{U}( \mathfrak{q}\otimes A)\otimes_{\mathbf{U}(\mathfrak{q})}V,\quad u\otimes v \mapsto u\otimes v. \tag{3.3}\] For every \(u\in\mathbf{U}(\mathfrak{q}\otimes A),\ v\in V\) and every homogeneous element \(x\in\mathfrak{q}\), we have \[x\cdot(u\otimes v)=xu\otimes v=([x,u]+(-1)^{|x||u|}ux)\otimes v=[x,u]\otimes v +(-1)^{|x||u|}u\otimes x\cdot v.\] Hence, the map in (3.3) is a surjective homomorphism of \(\mathfrak{q}\)-modules. This shows that \(P_{A}(V)\) is a quotient of \(\mathbf{U}(\mathfrak{q}\otimes A)\otimes_{\mathbb{C}}V\). Hence the lemma follows from Lemma 3.1. **Proposition 3.3**.: _If \(\lambda\in\Lambda^{+}\), then \(P_{A}(\bar{L}(\lambda))\) (up to \(\Pi\)) is generated as a left \(\mathbf{U}(\mathfrak{q}\otimes A)\)-module by \(L(\lambda)_{\lambda}\) satisfying the following relations:_ \[\begin{split}\mathfrak{n}^{+}p_{\lambda}=0,&\ hp_{ \lambda}=\lambda(h)p_{\lambda},\quad\ f_{i}^{\lambda(h_{i})+1}p_{\lambda}=0, \quad(p_{\lambda}:=1\otimes k_{\lambda})\\ &\text{for all}\quad k_{\lambda}\in L(\lambda)_{\lambda},h\in \mathfrak{h}_{\bar{0}},\ \ \alpha_{i}\in\Delta(\mathfrak{q}_{\bar{0}}),i\in I.\end{split} \tag{3.4}\] Proof.: Note that \(p_{\lambda}=1\otimes k_{\lambda}\in P_{A}(\bar{L}(\lambda))\) when \(k_{\lambda}\in\bar{L}(\lambda)\). Since \(k_{\lambda}\) satisfies the relations in (2.12), \(1\otimes k_{\lambda}\) satisfies the relations (3.4). We have to check that these are all the relations. To do this, suppose that \(M\) is the highest weight \(\mathfrak{q}\otimes A\)-module with highest weight \(\lambda\), generated by an irreducible \(\mathfrak{b}_{+}\)-module \(\mathfrak{m}\) such that \[\mathfrak{n}^{+}m=0,\ \ hm=\lambda(h)m,\quad\ f_{i}^{\lambda(h_{i})+1}m=0,\ \ \text{for all}\quad m\in\mathfrak{m},h\in\mathfrak{h}_{\bar{0}},\ \ i\in I. \tag{3.5}\] By Remark 2.14, the dimension of the highest weight space \(P_{A}(\bar{L}(\lambda)_{\lambda})\) is equal to the dimension of the highest weight space \(\mathfrak{m}\). Then we have a surjective homomorphism (up to \(\Pi\)) of \(\mathfrak{q}\otimes A\)-modules \(\phi:M\longrightarrow P_{A}(\bar{L}(\lambda))\) induced by \(\phi(\mathfrak{m})=P_{A}(\bar{L}(\lambda)_{\lambda})\). In view of (3.5), an element \(m\in\mathfrak{m}\) generates a \(\mathfrak{q}\)-submodule \(M^{\prime}\) of \(M\) which is isomorphic to \(\bar{L}(\lambda)\). Thus, the map \(\psi:P_{A}(\bar{L}(\lambda))\longrightarrow M\) induced by \(\bar{L}(\lambda)\to M^{\prime}\) is a surjective homomorphism (up to \(\Pi\)). Since \(\phi\) and \(\psi\) are mutually inverse, we have \(M\cong P_{A}(\bar{L}(\lambda))\) or \(M\cong\Pi\left(P_{A}(\bar{L}(\lambda))\right)\). For \(\nu\in\Lambda^{+}\) and \(M\in\mathcal{I}_{\mathfrak{q}\otimes A,\mathfrak{q}_{\bar{0}}}\), let \(M^{\nu}\) be the unique maximal \(\mathfrak{q}\otimes A\)-module quotient of \(M\) satisfying \[\operatorname{wt}(M^{\nu})\subset\nu-Q^{+},\] or equivalently, \[M^{\nu}:=M/\sum_{\mu\not\in\nu-Q^{+}}\mathbf{U}(\mathfrak{q}\otimes A)M_{\mu}. \tag{3.6}\] Let \(\mathcal{I}^{\nu}_{\mathfrak{q}\otimes A,\mathfrak{q}_{\bar{0}}}\) be the full subcategory of \(\mathcal{I}_{\mathfrak{q}\otimes A,\mathfrak{q}_{\bar{0}}}\) whose objects are the left \(\mathbf{U}(\mathfrak{q}\otimes A)\)-modules \(M\in\mathcal{I}_{\mathfrak{q}\otimes A,\mathfrak{q}_{\bar{0}}}\) such that \(M^{\nu}=M\). **Definition 3.4** (Global Weyl module).: Let \(\lambda\in\Lambda^{+}\). 
We define the global Weyl module (up to \(\Pi\)) associated to \(\lambda\in\Lambda^{+}\) to be \[W_{A}(\lambda):=P_{A}(\bar{L}(\lambda))^{\lambda}.\] From \[\bar{L}(\lambda)\longrightarrow P_{A}(\bar{L}(\lambda))\longrightarrow P_{A}( \bar{L}(\lambda))/\sum_{\mu\not\in\lambda-Q^{+}}\mathbf{U}(\mathfrak{q}\otimes A )M_{\mu},\] we note that \(w_{\lambda}\) is the image of \(k_{\lambda}\) in \(W_{A}(\lambda)\). The next result gives a description of global Weyl modules by generators and relations. **Proposition 3.5**.: _For \(\lambda\in\Lambda^{+}\), the global Weyl module \(W_{A}(\lambda)\) (up to \(\Pi\)) is generated by \(W_{A}(\lambda)_{\lambda}\) with defining relations_ \[(\mathfrak{n}^{+}\otimes A)w_{\lambda}=0,\quad hw_{\lambda}=\lambda(h)w_{ \lambda},\quad f_{i}^{\lambda(h_{i})+1}w_{\lambda}=0,\quad\text{for all}\;\;h\in \mathfrak{h}_{\bar{0}},\;\;\alpha_{i}\in\Delta(\mathfrak{q}_{\bar{0}}) \tag{3.7}\] _and \(w_{\lambda}\) in \(W_{A}(\lambda)_{\lambda}\)._ Proof.: Let for any \(k_{\lambda}\in\bar{L}(\lambda)_{\lambda}\), \(w_{\lambda}\) is the image of \(k_{\lambda}\) in \(W_{A}(\lambda)\). Note that \((\mathfrak{q}_{\alpha}\otimes A)V_{\mu}\subseteq V_{\mu+\alpha}\) for all \(\alpha\in\Delta,\mu\in\mathfrak{h}_{\bar{0}}^{*}\). Since the weights of \(W_{A}(\lambda)\) lie in \(\lambda-Q^{+}\), it follows that \((\mathfrak{n}^{+}\otimes A)w_{\lambda}=0\). The remaining relations are satisfied by \(w_{\lambda}\) since they are satisfied by \(k_{\lambda}\). To prove that these are the only relations, let \(W^{\prime}(\lambda)\) be the highest weight module generated by an irreducible \(\mathfrak{b}_{+}\)-module \(\mathfrak{m}\) with relations \[(\mathfrak{n}^{+}\otimes A)m=0,\quad hm=\lambda(h)m,\quad f_{i}^{\lambda(h_{i} )+1}m=0,\quad\text{for all}\;\;m\in\mathfrak{m},h\in\mathfrak{h}_{\bar{0}},\; \;i\in I. \tag{3.8}\] Then we have a surjective homomorphism \(\phi:W^{\prime}(\lambda)\to W_{A}(\lambda)\) induced by \(\phi(\mathfrak{m})=W_{A}(\lambda)_{\lambda}\). Note that the relations (3.8) implies \(W^{\prime}(\lambda)\) is a weight module. So \(m\in\mathfrak{m}\) generates \(\mathfrak{q}\)-submodule \(W^{\prime\prime}\) of \(W^{\prime}(\lambda)\) which is isomorphic to \(\bar{L}(\lambda)\). Thus, the map \(\psi:P_{A}(\bar{L}(\lambda))\longrightarrow W^{\prime}\) induced by \(\bar{L}(\lambda)\to W^{\prime\prime}\) is a surjective homomorphism. Further, \(\mathfrak{q}\)-weights of \(W^{\prime}(\lambda)\) are bounded above by \(\lambda\), it follows that \(\psi\) induces a map \(W_{A}(\lambda)\to W^{\prime}(\lambda)\) inverse to \(\phi\). **Theorem 3.6**.: _Any global Weyl module with highest weight \(\lambda\) in the category \(I(\mathfrak{q}\otimes A,\mathfrak{q}_{\bar{0}})\) is isomorphic to \(W_{A}(\lambda)\) or \(\Pi(W_{A}(\lambda))\). Furthermore, if any object \(V(\lambda)\in I(\mathfrak{q}\otimes A,\mathfrak{q}_{\bar{0}})\) is generated by an irreducible \(\mathfrak{b}_{+}\)-module \(\mathbf{v}\) of weight \(\lambda\), then there exists a surjective homomorphism form \(W_{A}(\lambda)\) to \(V(\lambda)\) (up to \(\Pi\))._ Proof.: Let \(V(\lambda)\in I(\mathfrak{q}\otimes A,\mathfrak{q}_{\bar{0}})\) be highest weight \(\mathfrak{q}\otimes A\)-module with highest weight \(\lambda\) is generated by an irreducible \(\mathfrak{b}+\)-module \(\mathbf{v}\). 
Then by definition \[(\mathfrak{n}^{+}\otimes A)v=0,\quad hv=\lambda(h)v,\quad\text{for all}\;\;h\in \mathfrak{h}_{\bar{0}},\;\;v\in\mathbf{v}.\] Since the \(\mathfrak{q}_{\bar{0}}\)-module generated by \(\mathbf{v}\) is finite dimensional, then by Proposition 2.15, \(f_{i}^{\lambda(h_{i})+1}v=0\) for all \(v\in\mathbf{v}\) and \(\alpha_{i}\in\Delta(\mathfrak{q}_{\bar{0}}),i\in I\). Thus, by Proposition 3.5, we have a surjective homomorphism \(W_{A}(\lambda)\to V(\lambda)\) induced by \(W_{A}(\lambda)_{\lambda}\to\mathbf{v}\) (up to \(\Pi\)). Suppose \(W^{\prime}_{A}(\lambda)\) is another object in \(I(\mathfrak{q}\otimes A,\mathfrak{q}_{\bar{0}})\) that is generated by an irreducible \(\mathfrak{b}_{+}\)-module \(\mathbf{m}\) with highest weight \(\lambda\) and admits a surjective homomorphism to any object of \(I(\mathfrak{q}\otimes A,\mathfrak{q}_{\bar{0}})\) which is also generated by highest weight vectors of weight \(\lambda\). In particular, we have a surjective homomorphism \(\phi:W^{\prime}_{A}(\lambda)\to W_{A}(\lambda)\). It follows from PBW theorem that \(W_{A}(\lambda)_{\lambda}=\mathbf{U}(\mathfrak{h}\otimes A_{+})\otimes_{\mathbb{ C}}\mathbf{m}\). Hence the elements of this weight space that generate \(W_{A}(\lambda)\) are the \(\mathbb{C}\)-multiples of \(m\) for all \(m\in\mathbf{m}\). Thus, we have \(\phi(\mathbf{m})=W_{A}(\lambda)_{\lambda}\). Now the relation (3.7) hold for all \(m\in\mathbf{m}\). Thus, there exists a homomorphism \(\psi:W_{A}(\lambda)\to W^{\prime}_{A}(\lambda)\) induced by \(W_{A}(\lambda)_{\lambda}\to\mathbf{m}\) (up to \(\Pi\)) and hence \(W_{A}(\lambda)\cong W^{\prime}_{A}(\lambda)\) or \(\Pi(W_{A}(\lambda))\cong W^{\prime}_{A}(\lambda)\) ## 4. Local Weyl modules An ideal \(I\) of \(A\) is said to be of finite co-dimension if \(\dim A/I\) is finite. Let \[\mathcal{L}(\mathfrak{h}\otimes A)=\{\psi\in\left(\mathfrak{h}_{\bar{0}}\otimes A \right)^{*}\mid\psi(\mathfrak{h}_{\bar{0}}\otimes I)=0,\text{for some finite co-dimensional ideal $I\subseteq A$}\}.\] For any \(\psi\in\mathcal{L}(\mathfrak{h}\otimes A)\), there exists unique, up to \(\Pi\), simple finite dimensional \(\mathfrak{h}\otimes A\)-module \(H(\psi)\) such that \(xv=\psi(x)v\), for all \(x\in\mathfrak{h}_{\bar{0}}\otimes A\) and \(v\in H(\psi)\) (see [13, Th. 4.3]). Define an action of \(\mathfrak{h}\otimes A\) on \(H(\psi)\) by \(\mathfrak{n}^{+}\otimes A\) to act by zero. Then consider the induced module \[\bar{V}(\psi):=\mathbf{U}(\mathfrak{q}\otimes A)\otimes_{\mathbf{U}( \mathfrak{h}\otimes A)}H(\psi),\] which is a highest weight module. Notice that \(\bar{V}(\psi)\) is defined up to the parity reversing functor \(\Pi\). Further, a submodule of \(\bar{V}(\psi)\) is proper if and only if its intersection with \(H(\psi)\) is zero. Moreover, any \(\mathfrak{q}\otimes A\)-submodule of a weight module is also a weight module. Hence, if \(W\subset\bar{V}(\psi)\) is proper \(\mathfrak{q}\otimes A\)-submodule, then \[W=\bigoplus_{\mu\neq\lambda}W_{\mu},\quad\text{where}\;\;\lambda=\psi\mid_{ \mathfrak{h}_{\bar{0}}}.\] Therefore, \(\bar{V}(\psi)\) has a unique maximal proper submodule \(N(\psi)\) \[V(\psi)=\bar{V}(\psi)/N(\psi)\] is an irreducible highest weight \(\mathfrak{q}\otimes A\)-module. So, every finite dimensional irreducible \(\mathfrak{q}\otimes A\)-module is isomorphic to \(V(\psi)\) for some \(\psi\in\mathcal{L}(\mathfrak{h}\otimes A)\)(see [13, Proposition 5.4]). 
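For orientation, the simplest examples of elements of \(\mathcal{L}(\mathfrak{h}\otimes A)\) are evaluation weights. A minimal sketch, assuming \(A=\mathbb{C}[t]\) and the purely illustrative choice \(\psi(h\otimes f)=\lambda(h)f(a)\) for a fixed point \(a\in\mathbb{C}\): such a \(\psi\) kills \(\mathfrak{h}_{\bar{0}}\otimes I\) for the finite co-dimensional ideal \(I=(t-a)\), as required. The weight values below are hypothetical and chosen only to make the computation concrete.

```python
import sympy as sp

t = sp.symbols('t')
a = sp.Rational(2)                   # evaluation point; A = C[t], I = (t - a)
lam = {('k', 1): 3, ('k', 2): 1}     # sample values lambda(k_i), illustrative

def psi(h, f):
    """Evaluation map-weight: psi(h tensor f) = lambda(h) * f(a)."""
    return lam[h] * f.subs(t, a)

# psi vanishes on h_0 tensor I for I = (t - a)C[t], an ideal of
# co-dimension 1 in C[t]:
for g in (sp.Integer(1), t, t**2 + 1):
    assert psi(('k', 1), (t - a) * g) == 0

# ... while it is non-zero off the ideal:
print(psi(('k', 1), t**2))           # lambda(k_1) * a**2 = 12
```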
Note that the highest weight space of \(V(\psi)\) is isomorphic, as an \(\mathfrak{h}\otimes A\)-module, to \(H(\psi)\) or \(\Pi(H(\psi))\). **Definition 4.1**.: Let \(\psi\in\mathcal{L}(\mathfrak{h}\otimes A)\) such that \(\lambda=\psi\mid_{\mathfrak{h}_{\bar{0}}}\in\Lambda^{+}\). We define the _local Weyl module_ \(W^{\text{loc}}_{A}(\psi)\) associated to \(\psi\) (up to \(\Pi\)) to be the \(\mathfrak{q}\otimes A\)-module generated by \(H(\psi)\) with defining relations \[(\mathfrak{n}^{+}\otimes A)w_{\psi}=0,\quad xw_{\psi}=\psi(x)w_{\psi},\quad f _{i}^{\lambda(h_{i})+1}w_{\psi}=0,\quad\text{for all}\quad w_{\psi}\in H(\psi),x \in\mathfrak{h}_{\bar{0}}\otimes A,\;\;i\in I.\] A \(\mathfrak{q}\otimes A\)-module generated by \(H(\psi)\) is called a _highest map-weight module_ with _highest map-weight_ \(\psi\) if \[(\mathfrak{n}^{+}\otimes A)w_{\psi}=0,\quad xw_{\psi}=\psi(x)w_{\psi},\quad \text{for all}\quad w_{\psi}\in H(\psi),x\in\mathfrak{h}_{\bar{0}}\otimes A.\] A vector \(w_{\psi}\in H(\psi)\) is called a _highest map-weight vector_ of highest map-weight \(\psi\). Recall that the even part \(\mathfrak{q}_{\bar{0}}\) of the queer Lie superalgebra \(\mathfrak{q}=\mathfrak{q}_{\bar{0}}\oplus\mathfrak{q}_{\bar{1}}\) is isomorphic to \(\mathfrak{gl}(n+1)\). For each \(\alpha\in\Phi_{\bar{0}}^{+}\) we have an \(\mathfrak{sl}(2)\)-triple \(x_{\alpha},y_{\alpha},h_{\alpha}\). **Lemma 4.2**.: _Suppose \(\psi\in\mathcal{L}(\mathfrak{h}\otimes A)\) such that \(\lambda=\psi\mid_{\mathfrak{h}_{\bar{0}}}\in\Lambda^{+}\). Consider a \(\mathfrak{q}\otimes A\)-module generated by an irreducible module \(H(\psi)\). If \(\alpha_{i}\in\Phi_{\bar{0}}^{+}\), then \(f_{i}^{\lambda(h_{i})+1}w_{\psi}=0\) for all \(w_{\psi}\in H(\psi),i\in I\)._ Proof.: Let \(\mathfrak{h}=\mathfrak{h}_{\bar{0}}\oplus\mathfrak{h}_{\bar{1}}\) be the Cartan subalgebra of \(\mathfrak{q}\) and \(\mathfrak{h}\otimes A\) be the Cartan subalgebra of \(\mathfrak{q}\otimes A\). Since \(\mathfrak{h}\subset\mathfrak{h}\otimes A\) is a subalgebra, \(hw_{\psi}=\lambda(h)w_{\psi}\) for all \(h\in\mathfrak{h}_{\bar{0}},w_{\psi}\in H(\psi)\) and \(\mathfrak{h}_{\bar{1}}w_{\psi}=0\). So \(\lambda\) is the highest weight with highest weight vector \(w_{\psi}\). The vector \(f_{i}^{\lambda(h_{i})+1}w_{\psi}\) has weight \(\lambda-(\lambda(h_{i})+1)\alpha_{i}\). On the other hand, by Theorem 3.6, \(W^{\text{loc}}_{A}(\psi)\) is a quotient of the global Weyl module \(W_{A}(\lambda)\) up to \(\Pi\), hence is a direct sum of finite-dimensional irreducible \(\mathfrak{q}_{\bar{0}}\)-modules. This implies that the weights of \(W^{\text{loc}}_{A}(\psi)\) are invariant under the action of the Weyl group of \(\mathfrak{q}_{\bar{0}}\). Let \(s_{\alpha_{i}}\) denote the reflection associated to the root \(\alpha_{i}\). Then \(s_{\alpha_{i}}(\lambda-(\lambda(h_{i})+1)\alpha_{i})=\lambda+\alpha_{i}\), and this implies \(f_{i}^{\lambda(h_{i})+1}w_{\psi}=0\). Let \(u\) be an indeterminate and for \(a\in A,\alpha\in\Phi_{\bar{0}}^{+}\), define a power series with coefficients in \(\mathbf{U}(\mathfrak{h}\otimes A)\) by \[\mathbf{p}_{a,\alpha}(u)=\exp\left(-\sum_{r=1}^{\infty}\frac{h_{\alpha}\otimes a ^{r}}{r}u^{r}\right).\] For \(i\in\mathbb{N}\), let \(p^{i}_{a,\alpha}\) be the coefficient of \(u^{i}\) in \(\mathbf{p}_{a,\alpha}(u)\). **Lemma 4.3**.: _Suppose \(r\in\mathbb{N}\), \(a\in A\) and \(\alpha\in\Phi^{+}_{\bar{0}}\). Then_ \[(x_{\alpha}\otimes a)^{r}(y_{\alpha}\otimes 1)^{r+1}-(-1)^{r}\sum_{i=0}^{r}(y_ {\alpha}\otimes a^{r-i})p^{i}_{a,\alpha}\in\mathbf{U}(\mathfrak{q}\otimes A)( \mathfrak{n}^{+}\otimes A). 
\tag{4.1}\] Proof.: When \(A=\mathbb{C}[t^{\pm 1}]\), the formula in (4.1) is proved in [10]. Further, since the fact that \(t\) is an invertible element of \(\mathbb{C}[t^{\pm 1}]\) is not used in that proof, the result remains true when \(A=\mathbb{C}[t]\). Applying the Lie algebra homomorphism \[\mathfrak{sl}(2)\otimes\mathbb{C}[t]\rightarrow\mathfrak{sl}(2)\otimes A, \quad x\otimes t^{r}\mapsto x\otimes a^{r},\quad r\in\mathbb{N},\ \ x\in\mathfrak{sl}(2)\] gives the result. From now on we assume that \(A\) is finitely generated (say, with generators \(a_{1},\ldots,a_{m}\)). Using the first and third relations of Definition 4.1 and Lemma 4.3, and then applying induction, one can prove the following. **Lemma 4.4**.: _Suppose \(\psi\in\mathcal{L}(\mathfrak{h}\otimes A)\) such that \(\lambda=\psi\mid_{\mathfrak{h}_{\bar{0}}}\in\Lambda^{+}\). If \(\alpha\in\Phi^{+}_{\bar{0}}\), \(a_{1},a_{2},\ldots,a_{m}\in A\), \(s_{1},s_{2},\cdots,s_{m}\in\mathbb{N}\), and \(w_{\psi}\in H(\psi)\), then_ \[(y_{\alpha}\otimes a_{1}^{s_{1}}\cdots a_{m}^{s_{m}})w_{\psi}\in\mathrm{span} _{\mathbb{C}}\{(y_{\alpha}\otimes a_{1}^{\ell_{1}}\cdots a_{m}^{\ell_{m}})w_{ \psi}\mid 0\leq\ell_{i}<\lambda(h_{\alpha}),i=1,\cdots,m\}. \tag{4.2}\] _In particular, \((y_{\alpha}\otimes A)H(\psi)\) is finite dimensional._ **Lemma 4.5**.: _If \(\psi\not\in\mathcal{L}(\mathfrak{h}\otimes A)\), then \(W^{\text{loc}}_{A}(\psi)=0\)._ Proof.: Let \(\lambda=\psi\mid_{\mathfrak{h}_{\bar{0}}}\in\Lambda^{+}\), let \(\alpha\) be a positive root of \(\mathfrak{q}\) and let \(I_{\alpha}\) be the kernel of the linear map \[A \rightarrow\mathrm{Hom}_{\mathbb{C}}(W^{\text{loc}}_{A}(\psi)_{ \lambda}\otimes\mathfrak{q}_{-\alpha},(\mathfrak{q}_{-\alpha}\otimes A)H(\psi)),\] \[a \mapsto(v\otimes u\mapsto(u\otimes a)v),\quad a\in A,v\in W ^{\text{loc}}_{A}(\psi)_{\lambda},\ \ u\in\mathfrak{q}_{-\alpha}.\] Notice that \(I_{\alpha}=\{a\in A\mid(u\otimes a)v=0,v\in W^{\text{loc}}_{A}(\psi)_{\lambda},\ \ u\in\mathfrak{q}_{-\alpha}\}\). Since \(\mathfrak{q}_{-\alpha}=\mathbb{C}y_{\alpha}\), Lemma 4.4 gives that \((\mathfrak{q}_{-\alpha}\otimes A)H(\psi)\) is finite dimensional. Thus, \(I_{\alpha}\) is a linear subspace of \(A\) of finite co-dimension. We claim that \(I_{\alpha}\) is an ideal of \(A\). Since \(\alpha\neq 0\), we can choose \(h\in\mathfrak{h}_{\bar{0}}\) such that \(\alpha(h)\neq 0\). Then, for all \(b\in A,a\in I_{\alpha},v\in W^{\text{loc}}_{A}(\psi)_{\lambda}\), \(u\in\mathfrak{q}_{-\alpha}\), we have \[0 =(h\otimes b)(u\otimes a)v\] \[=[h\otimes b,u\otimes a]v+(u\otimes a)(h\otimes b)v\] \[=-\alpha(h)(u\otimes ba)v+(u\otimes a)(h\otimes b)v.\] Since \((h\otimes b)v\in W^{\text{loc}}_{A}(\psi)_{\lambda}\) and \(a\in I_{\alpha}\), the last term above is zero. Since we have assumed that \(\alpha(h)\neq 0\), this implies that \((u\otimes ba)v=0\). As this holds for all \(v\in W^{\text{loc}}_{A}(\psi)_{\lambda}\) and \(u\in\mathfrak{q}_{-\alpha}\), we have \(ba\in I_{\alpha}\). Hence \(I_{\alpha}\) is an ideal of \(A\). Let \(I=\bigcap_{\alpha\in\Phi^{+}_{\bar{0}}}I_{\alpha}\). Since \(\mathfrak{q}\) is finite dimensional and hence has only finitely many positive roots, this is a finite intersection, and thus \(I\) is an ideal of \(A\) of finite co-dimension. Then we have \[(\mathfrak{n}_{\bar{0}}^{-}\otimes I)W^{\text{loc}}_{A}(\psi)_{\lambda}=0.\] Since \(\lambda\) is the highest weight of \(W^{\text{loc}}_{A}(\psi)\), we also have \((\mathfrak{n}^{+}\otimes A)W^{\text{loc}}_{A}(\psi)_{\lambda}=0\). 
Further, since \(\mathfrak{h}_{\bar{0}}\otimes I\subseteq[\mathfrak{n}^{+}\otimes A, \mathfrak{n}_{\bar{0}}^{-}\otimes I]\), we have \((\mathfrak{h}_{\bar{0}}\otimes I)W^{\text{loc}}_{A}(\psi)_{\lambda}=0\). In particular, \((\mathfrak{h}_{\bar{0}}\otimes a)w_{\psi}=0\) for all \(a\in I\). Since \(\psi\not\in\mathcal{L}(\mathfrak{h}\otimes A)\), there exist \(h\in\mathfrak{h}_{\bar{0}}\) and \(a\in I\) such that \(\psi(h\otimes a)\neq 0\). So, we must have \(w_{\psi}=0\) and hence \[W_{A}^{\text{loc}}(\psi)=\mathbf{U}(\mathfrak{n}^{-}\otimes A)w_{\psi}=0.\] **Definition 4.6**.: Suppose \(\psi\in\mathcal{L}(\mathfrak{h}\otimes A)\) such that \(\lambda=\psi\mid_{\mathfrak{h}_{\bar{0}}}\in\Lambda^{+}\). Let \(I_{\psi}\) be the sum of all ideals \(I\subseteq A\) such that \((\mathfrak{h}_{\bar{0}}\otimes I)W_{A}^{\text{loc}}(\psi)_{\lambda}=0\). **Remark 4.7**.: It follows from the proof of Lemma 4.5 that \(I_{\psi}\) is a finite co-dimensional ideal in \(A\) and that \((y_{\alpha}\otimes I_{\psi})w_{\psi}=0\) for all \(\alpha\in\Phi_{\bar{0}}^{+}\). Furthermore, since \(I_{\psi}\) has finite co-dimension and \(A\) is assumed to be finitely generated, we have that \(I_{\psi}^{n}\) has finite co-dimension for all \(n\in\mathbb{N}\) (see [12, Lemma 2.1(a), (b)]). **Lemma 4.8**.: _Suppose \(\psi\in\mathcal{L}(\mathfrak{h}\otimes A)\) such that \(\lambda=\psi\mid_{\mathfrak{h}_{\bar{0}}}\in\Lambda^{+}\). Then there exists \(n_{\psi}\in\mathbb{N}\) such that_ \[(\mathfrak{n}^{-}\otimes I_{\psi}^{n_{\psi}})w_{\psi}=0\quad\text{for all}\;\;w_{\psi}\in H(\psi).\] Proof.: For \(\alpha=\sum_{i=1}^{n}a_{i}\alpha_{i}\), with \(a_{i}\in\mathbb{N}\) and where the \(\alpha_{i}\) are the simple roots of \(\mathfrak{q}\), we define the height of \(\alpha\) to be \[\operatorname{ht}(\alpha)=\sum_{i=1}^{n}a_{i}.\] By induction on the height of \(\alpha\), we will show that \((\mathfrak{q}_{-\alpha}\otimes I_{\psi}^{\operatorname{ht}(\alpha)})w_{\psi}=0\) for all \(\alpha\in\Phi^{+},w_{\psi}\in H(\psi)\). Since \(\mathfrak{q}\) is finite dimensional, the heights of elements of \(\Phi^{+}\) are bounded above, and thus the lemma will follow. For the base case, we will show that \[(f_{i}\otimes I_{\psi})w_{\psi}=0,\quad\text{for all}\;\;w_{\psi}\in H(\psi), \alpha_{i}\in\Sigma, \tag{4.3}\] since the set \(\{f_{i}\mid\alpha_{i}\in\Sigma\}\) generates \(\mathfrak{n}^{-}\). By Remark 4.7, it suffices to consider the case \(\alpha_{i}\in\Sigma_{\bar{1}}\). Fix such an \(\alpha_{i}\). By (2.8), there exists \(\alpha_{j}\in\Phi_{\bar{1}}\) such that \(\alpha_{k}:=\alpha_{i}+\alpha_{j}\in\Phi_{\bar{0}}^{+}\). Note that \(\dim\mathfrak{q}_{\alpha}=(1\mid 1)\) for any \(\alpha\in\Phi\), that is, \(\mathfrak{q}_{\alpha}\) is generated by an even vector and an odd vector. Further, since \([h,[e_{j},f_{k}]]=\alpha_{i}(h)[e_{j},f_{k}]\), after rescaling if necessary we can write \[[e_{j},f_{k}]=f_{i}. \tag{4.4}\] Then \[(f_{i}\otimes I_{\psi})w_{\psi} =[e_{j}\otimes A,f_{k}\otimes I_{\psi}]w_{\psi}\] \[\subseteq(e_{j}\otimes A)(f_{k}\otimes I_{\psi})w_{\psi}+(f_{k} \otimes I_{\psi})(e_{j}\otimes A)w_{\psi}.\] Since \((e_{j}\otimes A)w_{\psi}=0\) by Definition 4.1 and \((f_{k}\otimes I_{\psi})w_{\psi}=0\) by Remark 4.7, (4.3) holds. Now suppose that \(\alpha_{i}\in\Phi^{+}\) with \(\operatorname{ht}(\alpha_{i})>1\). Then there exist \(\alpha_{k},\alpha_{\ell}\in\Phi^{+}\) with \(\operatorname{ht}(\alpha_{k}),\operatorname{ht}(\alpha_{\ell})<\operatorname{ ht}(\alpha_{i})\) such that \(f_{i}\in\mathbb{C}[f_{k},f_{\ell}]\). 
Then, by the induction hypothesis, \[(f_{i}\otimes I_{\psi}^{\operatorname{ht}(\alpha_{i})})w_{\psi} =[f_{k}\otimes I_{\psi}^{\operatorname{ht}(\alpha_{k})},f_{\ell}\otimes I _{\psi}^{\operatorname{ht}(\alpha_{\ell})}]w_{\psi}\] \[=(f_{k}\otimes I_{\psi}^{\operatorname{ht}(\alpha_{k})})(f_{\ell}\otimes I _{\psi}^{\operatorname{ht}(\alpha_{\ell})})(w_{\psi})-(f_{\ell}\otimes I_{\psi}^{ \operatorname{ht}(\alpha_{\ell})})(f_{k}\otimes I_{\psi}^{\operatorname{ht}(\alpha_{k})})(w_{ \psi})\] \[=0.\] **Corollary 4.9**.: _Suppose \(\psi\in\mathcal{L}(\mathfrak{h}\otimes A)\) such that \(\lambda=\psi\mid_{\mathfrak{h}_{\bar{0}}}\in\Lambda^{+}\) and let \(n_{\psi}\in\mathbb{N}\) be as in Lemma 4.8. Then_ \[(\mathfrak{q}\otimes I_{\psi}^{n_{\psi}})H(\psi)=0.\] Proof.: By [16, Lemma 2.12], to prove that \((\mathfrak{q}\otimes I_{\psi}^{n_{\psi}})H(\psi)=0\), it is enough to prove that \((\mathfrak{q}\otimes I_{\psi}^{n_{\psi}})w_{\psi}=0\) for any non-zero \(w_{\psi}\in H(\psi)\). By the triangular decomposition \(\mathfrak{q}=\mathfrak{n}^{-}+\mathfrak{h}+\mathfrak{n}^{+}\), it suffices to show that \[(\mathfrak{n}^{-}\otimes I_{\psi}^{n_{\psi}})w_{\psi}=0,\quad(\mathfrak{h} \otimes I_{\psi}^{n_{\psi}})w_{\psi}=0,\quad(\mathfrak{n}^{+}\otimes I_{\psi}^{n_{ \psi}})w_{\psi}=0.\] From the first relation in Definition 4.1 we have \((\mathfrak{n}^{+}\otimes I_{\psi})w_{\psi}=0\) for all \(w_{\psi}\in H(\psi)\), and this implies \((\mathfrak{n}^{+}\otimes I_{\psi}^{n_{\psi}})w_{\psi}=0\). Also, \((\mathfrak{h}_{\bar{0}}\otimes I_{\psi})w_{\psi}=0\) by the definition of \(I_{\psi}\). By [16, Lemma 4.1], \((\mathfrak{h}_{\bar{1}}\otimes I_{\psi})w_{\psi}=0\). From Lemma 4.8, we get \((\mathfrak{n}^{-}\otimes I_{\psi}^{n_{\psi}})w_{\psi}=0\). Now we give a sufficient condition for local Weyl modules to be finite dimensional. **Theorem 4.10**.: _Assume that \(A\) is finitely generated. Then the local Weyl module \(W^{\text{loc}}_{A}(\psi)\) is finite dimensional for all \(\psi\in\mathcal{L}(\mathfrak{h}\otimes A)\) such that \(\lambda=\psi\mid_{\mathfrak{h}_{\bar{0}}}\in\Lambda^{+}\)._ Proof.: By Definition 4.1, we have \(W^{\text{loc}}_{A}(\psi)=\mathbf{U}(\mathfrak{n}^{-}\otimes A)H(\psi)\). Also, by Lemma 4.8, we have \((\mathfrak{n}^{-}\otimes I_{\psi}^{n_{\psi}})H(\psi)=0\). Thus, \[W^{\text{loc}}_{A}(\psi)=\mathbf{U}(\mathfrak{n}^{-}\otimes A/I_{\psi}^{n_{ \psi}})H(\psi).\] Since the set of \(\mathfrak{q}\)-weights of \(W^{\text{loc}}_{A}(\psi)\) is finite, there exists \(N\in\mathbb{N}\) such that \[W^{\text{loc}}_{A}(\psi)=\mathbf{U}_{n}(\mathfrak{n}^{-}\otimes A/I_{\psi}^{n _{\psi}})H(\psi),\ \ \text{for all}\ \ n\geq N,\] where \(\mathbf{U}(\mathfrak{g})=\sum_{n=0}^{\infty}\mathbf{U}_{n}(\mathfrak{g})\) is the usual filtration on the universal enveloping algebra of a Lie superalgebra \(\mathfrak{g}\) induced from the natural grading on the tensor algebra. Since the Lie superalgebra \(\mathfrak{n}^{-}\otimes A/I_{\psi}^{n_{\psi}}\) and the module \(H(\psi)\) are finite dimensional, the local Weyl module \(W^{\text{loc}}_{A}(\psi)\) is finite dimensional. **Theorem 4.11**.: _Let \(\psi\in\mathcal{L}(\mathfrak{h}\otimes A)\) such that \(\lambda=\psi\mid_{\mathfrak{h}_{\bar{0}}}\in\Lambda^{+}\). Then any finite dimensional local Weyl module with highest map-weight \(\psi\) in the category \(I(\mathfrak{q}\otimes A,\mathfrak{q}_{\bar{0}})\) is isomorphic to \(W^{\text{loc}}_{A}(\psi)\) or \(\Pi(W^{\text{loc}}_{A}(\psi))\). 
Furthermore, if any finite dimensional object \(V(\psi)\in I(\mathfrak{q}\otimes A,\mathfrak{q}_{\bar{0}})\) is generated by an irreducible module \(H(\psi)\) of map-weight \(\psi\), then there exists a surjective homomorphism from \(W^{\text{loc}}_{A}(\psi)\) to \(V(\psi)\) (up to \(\Pi\))._ Proof.: Let \(V(\psi)\in I(\mathfrak{q}\otimes A,\mathfrak{q}_{\bar{0}})\) be a finite dimensional highest map-weight \(\mathfrak{q}\otimes A\)-module with highest map-weight \(\psi\), generated by an irreducible module \(H(\psi)\). Then by Definition 4.1, \[(\mathfrak{n}^{+}\otimes A)v=0,\quad xv=\psi(x)v,\quad\text{for all}\ \ x\in\mathfrak{h}_{\bar{0}}\otimes A,\ \ v\in H(\psi).\] Since the \(\mathfrak{q}_{\bar{0}}\)-module generated by \(H(\psi)\) is finite dimensional, \(f_{i}^{\lambda(h_{i})+1}v=0\) for all \(v\in H(\psi)\) and \(i\in I\). Thus, we have a surjective homomorphism \(W^{\text{loc}}_{A}(\psi)\to V(\psi)\) induced by \(W^{\text{loc}}_{A}(\psi)_{\lambda}\to H(\psi)\) (up to \(\Pi\)). Suppose \(W^{\prime}_{A}(\psi)\) is another object in \(I(\mathfrak{q}\otimes A,\mathfrak{q}_{\bar{0}})\) that is generated by an irreducible module \(\mathbf{m}(\psi)\) with highest map-weight \(\psi\) and admits a surjective homomorphism to any object of \(I(\mathfrak{q}\otimes A,\mathfrak{q}_{\bar{0}})\) which is also generated by highest map-weight vectors of map-weight \(\psi\). Then \(W^{\prime}_{A}(\psi)\) is a quotient of \(W^{\text{loc}}_{A}(\psi)\) (up to \(\Pi\)) and vice versa. Since both modules are finite dimensional, \(W^{\text{loc}}_{A}(\psi)\cong W^{\prime}_{A}(\psi)\) or \(\Pi(W^{\text{loc}}_{A}(\psi))\cong W^{\prime}_{A}(\psi)\). **Corollary 4.12**.: _Let \(\psi\in\mathcal{L}(\mathfrak{h}\otimes A)\) such that \(\lambda=\psi\mid_{\mathfrak{h}_{\bar{0}}}\in\Lambda^{+}\). Then the local Weyl module \(W^{\text{loc}}_{A}(\psi)\) is the maximal finite dimensional quotient of the global Weyl module \(W_{A}(\lambda)\) (up to \(\Pi\)) that is a highest map-weight module of highest map-weight \(\psi\)._ **Remark 4.13**.: By [16, Theorem 5.6], any finite dimensional irreducible \(\mathfrak{q}\otimes A\)-module is a highest map-weight module for some \(\psi\in\mathcal{L}(\mathfrak{h}\otimes A)\) such that \(\lambda=\psi\mid_{\mathfrak{h}_{\bar{0}}}\in\Lambda^{+}\). Then by Theorem 4.11, there exists a surjective homomorphism from the local Weyl module \(W^{\text{loc}}_{A}(\psi)\) to such a module (up to \(\Pi\)). Equivalently, all finite dimensional irreducible \(\mathfrak{q}\otimes A\)-modules are quotients of local Weyl modules (up to \(\Pi\)). ## 5. Tensor product of local Weyl modules If \(A\) and \(B\) are associative unitary algebras, then all irreducible representations of \(A\otimes B\) are of the form \(V_{A}\otimes V_{B}\), and all modules of this form are irreducible. However, when \(A\) and \(B\) are allowed to be superalgebras, \(V_{A}\otimes V_{B}\) is not necessarily irreducible. If \(\mathfrak{g}_{i}\), \(i=1,2\), are two finite dimensional Lie superalgebras, and \(V_{i}\) is an irreducible finite-dimensional \(\mathfrak{g}_{i}\)-module for \(i=1,2\), then the \(\mathfrak{g}_{1}\oplus\mathfrak{g}_{2}\)-module \(V_{1}\otimes V_{2}\) is irreducible only if \(\text{End}_{\mathfrak{g}_{i}}(V_{i})_{\bar{1}}=0\) for some \(i=1,2\). 
When \(\text{End}_{\mathfrak{g}_{i}}(V_{i})_{\bar{1}}=\mathbb{C}\phi_{i}\) with \(\phi_{i}^{2}=-1\) for both \(i=1\) and \(i=2\), we have that \[\widehat{V}=\{v\in V_{1}\otimes V_{2}\mid(\tilde{\phi_{1}}\otimes\phi_{2})(v) =v\},\quad\text{where }\;\tilde{\phi_{1}}=\sqrt{-1}\phi_{1},\] is an irreducible \(\mathfrak{g}_{1}\oplus\mathfrak{g}_{2}\)-submodule of \(V_{1}\otimes V_{2}\) such that \(V_{1}\otimes V_{2}\cong\widehat{V}\oplus\widehat{V}\) (see [11, p. 27]). Now we set \[V_{1}\widehat{\otimes}V_{2}=\begin{cases}V_{1}\otimes V_{2}&\text{if }V_{1} \otimes V_{2}\text{ is irreducible},\\ \widehat{V}\subsetneq V_{1}\otimes V_{2}&\text{if }V_{1}\otimes V_{2}\text{ is not irreducible}.\end{cases}\] If \(V_{i}\) is an irreducible finite-dimensional \(\mathfrak{g}_{i}\)-module for \(i=1,2\), then it is proved that every irreducible finite dimensional \(\mathfrak{g}_{1}\oplus\mathfrak{g}_{2}\)-module is isomorphic to a module of the form \(V_{1}\widehat{\otimes}V_{2}\) (see [11, Prop. 8.4]). Given an ideal \(I\) of \(A\), we define its support to be the set \[\text{Supp}(I)=\{\mathfrak{m}\in\text{MaxSpec}(A)\mid I\subseteq\mathfrak{m}\}.\] **Theorem 5.1**.: _Assume that \(A\) is finitely generated. Let \(\psi_{1},\psi_{2}\in\mathcal{L}(\mathfrak{h}\otimes A)\) with \(\psi_{1}\mid_{\mathfrak{h}_{\bar{0}}}=\lambda_{1}\), \(\psi_{2}\mid_{\mathfrak{h}_{\bar{0}}}=\lambda_{2}\), and suppose that \(\lambda_{1},\lambda_{2}\in\Lambda^{+}\) are such that \(\lambda_{1}+\lambda_{2}\in\Lambda^{+}\). If \(\text{Supp}(I_{\psi_{1}})\cap\text{Supp}(I_{\psi_{2}})=\emptyset\), then we have_ \[W^{\text{loc}}_{A}(\psi_{1})\otimes W^{\text{loc}}_{A}(\psi_{2})\cong \begin{cases}W^{\text{loc}}_{A}(\psi_{1}+\psi_{2}),\text{ or}\\ W^{\text{loc}}_{A}(\psi_{1}+\psi_{2})\oplus W^{\text{loc}}_{A}(\psi_{1}+\psi_{2}) \end{cases}\] _as \(\mathfrak{q}\otimes A\)-modules._ Proof.: Let \(\psi_{1},\psi_{2}\in\mathcal{L}(\mathfrak{h}\otimes A)\) with \(\psi_{1}\mid_{\mathfrak{h}_{\bar{0}}}=\lambda_{1}\), \(\psi_{2}\mid_{\mathfrak{h}_{\bar{0}}}=\lambda_{2}\), and suppose that \(\lambda_{1},\lambda_{2}\in\Lambda^{+}\) are such that \(\lambda_{1}+\lambda_{2}\in\Lambda^{+}\). Let the local Weyl module \(W^{\text{loc}}_{A}(\psi_{i})\) associated to \(\psi_{i}\) (up to \(\Pi\)) be the \(\mathfrak{q}\otimes A\)-module generated by \(H(\psi_{i})\) for \(i=1,2\). Let \(\rho_{i}\) be the representation corresponding to \(W^{\text{loc}}_{A}(\psi_{i})\) for \(i=1,2\). By Corollary 4.9, there exist \(n_{1},n_{2}\in\mathbb{N}\) such that \((\mathfrak{q}\otimes I^{n_{i}}_{\psi_{i}})w_{\psi_{i}}=0\) for all \(w_{\psi_{i}}\in H(\psi_{i})\), \(i=1,2\). Notice that \(W^{\text{loc}}_{A}(\psi_{1})\otimes W^{\text{loc}}_{A}(\psi_{2})\), as a \((\mathfrak{q}\otimes A/I^{n_{1}}_{\psi_{1}})\oplus(\mathfrak{q}\otimes A/I^{n_ {2}}_{\psi_{2}})\)-module generated by \(H(\psi_{1})\otimes H(\psi_{2})\), is either irreducible or isomorphic to \(\widehat{V}\oplus\widehat{V}\), where \(\widehat{V}\subsetneq W^{\text{loc}}_{A}(\psi_{1})\otimes W^{\text{loc}}_{A}( \psi_{2})\) is an irreducible \((\mathfrak{q}\otimes A/I^{n_{1}}_{\psi_{1}})\oplus(\mathfrak{q}\otimes A/I^{n_ {2}}_{\psi_{2}})\)-module. Now the representation \(\rho_{1}\otimes\rho_{2}\) factors through the composition \[\mathfrak{q}\otimes A\xrightarrow{\;\Delta\;}(\mathfrak{q}\otimes A)\oplus( \mathfrak{q}\otimes A)\twoheadrightarrow(\mathfrak{q}\otimes A/I^{n_{1}}_{ \psi_{1}})\oplus(\mathfrak{q}\otimes A/I^{n_{2}}_{\psi_{2}}), \tag{5.1}\] where the first map is the diagonal map and the second is the projection onto each summand. 
By [12, Lemma 2.1], we have that \(A=I_{\psi_{1}}^{n_{1}}+I_{\psi_{2}}^{n_{2}}\) and \(I_{\psi_{1}}^{n_{1}}\cap I_{\psi_{2}}^{n_{2}}=I_{\psi_{1}}^{n_{1}}I_{\psi_{2}}^{ n_{2}}\), since \(\operatorname{Supp}(I_{\psi_{1}})\cap\operatorname{Supp}(I_{\psi_{2}})=\emptyset\). Therefore \(A/(I_{\psi_{1}}^{n_{1}}\cap I_{\psi_{2}}^{n_{2}})\cong A/I_{\psi_{1}}^{n_{1}}\oplus A/I_{\psi_{2}}^{n_{2}}\) by the Chinese remainder theorem, and it follows that the composition (5.1) is surjective. By the surjectivity of (5.1), it follows that \(W_{A}^{\text{loc}}(\psi_{1})\otimes W_{A}^{\text{loc}}(\psi_{2})\), as a \((\mathfrak{q}\otimes A)\)-module generated by \(H(\psi_{1})\otimes H(\psi_{2})\), is either irreducible or isomorphic to \(\widehat{V}\oplus\widehat{V}\), where \(\widehat{V}\subsetneq W_{A}^{\text{loc}}(\psi_{1})\otimes W_{A}^{\text{loc}}( \psi_{2})\). Moreover, \(\mathfrak{h}_{\bar{0}}\otimes A\) acts on \(w_{\psi_{1}}\otimes w_{\psi_{2}}\) as follows: \[x(w_{\psi_{1}}\otimes w_{\psi_{2}}) =xw_{\psi_{1}}\otimes w_{\psi_{2}}+w_{\psi_{1}}\otimes xw_ {\psi_{2}}=\psi_{1}(x)w_{\psi_{1}}\otimes w_{\psi_{2}}+w_{\psi_{1}} \otimes\psi_{2}(x)w_{\psi_{2}}\] \[=\psi_{1}(x)(w_{\psi_{1}}\otimes w_{\psi_{2}})+\psi_{2}(x)(w _{\psi_{1}}\otimes w_{\psi_{2}})\] \[=(\psi_{1}+\psi_{2})(x)(w_{\psi_{1}}\otimes w_{\psi_{2}}),\quad \text{for all}\quad w_{\psi_{i}}\in H(\psi_{i}),x\in\mathfrak{h}_{\bar{0}} \otimes A,i=1,2,\] and \[(\mathfrak{n}^{+}\otimes A)(w_{\psi_{1}}\otimes w_{\psi_{2}})=(\mathfrak{n}^{ +}\otimes A)w_{\psi_{1}}\otimes w_{\psi_{2}}+w_{\psi_{1}}\otimes( \mathfrak{n}^{+}\otimes A)w_{\psi_{2}}=0.\] Thus, \(W_{A}^{\text{loc}}(\psi_{1})\otimes W_{A}^{\text{loc}}(\psi_{2})\) is a finite dimensional highest map-weight module generated by \(H(\psi_{1})\otimes H(\psi_{2})\) of highest map-weight \(\psi_{1}+\psi_{2}\). Therefore, by Theorem 4.11, \(W_{A}^{\text{loc}}(\psi_{1})\otimes W_{A}^{\text{loc}}(\psi_{2})\) is a quotient of \(W_{A}^{\text{loc}}(\psi_{1}+\psi_{2})\). By [12, Theorem 4.3], there exists a unique (up to \(\Pi\)) irreducible finite dimensional \(\mathfrak{h}\otimes A\)-module \(H(\psi_{1}+\psi_{2})\) such that \(xv=(\psi_{1}+\psi_{2})(x)v\) for all \(x\in\mathfrak{h}_{\bar{0}}\otimes A\), \(v\in H(\psi_{1}+\psi_{2})\). Let \(I=I_{\psi_{1}}\cap I_{\psi_{2}}=I_{\psi_{1}}I_{\psi_{2}}\) and \(n=n_{\psi_{1}+\psi_{2}}\). Then \(I\subseteq I_{\psi_{1}+\psi_{2}}\) and hence \((\mathfrak{h}_{\bar{0}}\otimes I_{\psi_{1}+\psi_{2}}^{n})H(\psi_{1}+\psi_{2})=0\). So, the action of \(\mathfrak{b}\otimes A\) on \(H(\psi_{1}+\psi_{2})\) descends to an action of \(\mathfrak{b}\otimes A/I^{n}\) on \(H(\psi_{1}+\psi_{2})\). Now consider the induced module \[M(\psi_{1}+\psi_{2}):=\mathbf{U}(\mathfrak{q}\otimes A/I_{\psi_{1}+\psi_{2}}^{ n})\otimes_{\mathbf{U}(\mathfrak{b}\otimes A/I^{n})}H(\psi_{1}+\psi_{2}).\] It follows that \(W_{A}^{\text{loc}}(\psi_{1}+\psi_{2})\) is a quotient of \(M(\psi_{1}+\psi_{2})\). On the other hand, since the \(\mathfrak{b}\otimes A\)-module \(H(\psi_{1}+\psi_{2})\) is irreducible, by [12, Prop. 6.3] we have that \(H(\psi_{1}+\psi_{2})\) and \(H(\psi_{1})\otimes H(\psi_{2})\) are isomorphic. 
Hence, \[M(\psi_{1}+\psi_{2}) =\mathbf{U}(\mathfrak{q}\otimes A/I_{\psi_{1}+\psi_{2}}^{n}) \otimes_{\mathbf{U}(\mathfrak{b}\otimes A/I_{\psi_{1}+\psi_{2}}^{n})}H(\psi_{1} +\psi_{2})\] \[\cong\mathbf{U}(\mathfrak{q}\otimes(A/I_{\psi_{1}}^{n}\oplus A/I_{ \psi_{2}}^{n}))\otimes_{\mathbf{U}(\mathfrak{b}\otimes(A/I_{\psi_{1}}^{n} \oplus A/I_{\psi_{2}}^{n}))}\left(H(\psi_{1})\otimes H(\psi_{2})\right)\] \[\cong\left(\mathbf{U}(\mathfrak{q}\otimes(A/I_{\psi_{1}}^{n})) \otimes_{\mathbf{U}(\mathfrak{b}\otimes(A/I_{\psi_{1}}^{n}))}H(\psi_{1}) \right)\otimes\left(\mathbf{U}(\mathfrak{q}\otimes(A/I_{\psi_{2}}^{n}))\otimes_ {\mathbf{U}(\mathfrak{b}\otimes(A/I_{\psi_{2}}^{n}))}H(\psi_{2})\right)\] \[=M(\psi_{1})\otimes M(\psi_{2}).\] Since \(W_{A}^{\text{loc}}(\psi_{1}+\psi_{2})\) is a quotient of \(M(\psi_{1}+\psi_{2})\), it is a quotient of \(M(\psi_{1})\otimes M(\psi_{2})\), and hence we can fix a surjection \[\eta:M(\psi_{1})\otimes M(\psi_{2})\twoheadrightarrow W_{A}^{\text{loc}}(\psi_{ 1}+\psi_{2}).\] One can show that the image of \(M(\psi_{1})_{\mu_{1}}\otimes M(\psi_{2})_{\mu_{2}}\) under the map \(\eta\) is zero except for a finite number of weights \(\mu_{1}\) and \(\mu_{2}\); let \(D_{i}\) denote this finite set of weights for \(i=1,2\). Now, for \(i=1,2\), let \(M(\psi_{i})^{\prime}\) be the submodule of \(M(\psi_{i})\) generated by the weight subspaces \(M(\psi_{i})_{\mu}\) with \(\mu\not\in D_{i}\), and let \(\bar{M}(\psi_{i})=M(\psi_{i})/M(\psi_{i})^{\prime}\). Then \(W^{\text{loc}}_{A}(\psi_{1}+\psi_{2})\) is a quotient of \(\bar{M}(\psi_{1})\otimes\bar{M}(\psi_{2})\). Since \(I_{\psi_{i}}\) has finite co-dimension and only a finite number of weights occur in the quotient \(\bar{M}(\psi_{i})\), this module is a finite dimensional highest map-weight module of highest map-weight \(\psi_{i}\). Hence, by Theorem 4.11, it is a quotient of \(W^{\text{loc}}_{A}(\psi_{i})\). Thus, \(\bar{M}(\psi_{1})\otimes\bar{M}(\psi_{2})\) is a quotient of \(W^{\text{loc}}_{A}(\psi_{1})\otimes W^{\text{loc}}_{A}(\psi_{2})\), and this implies that \(W^{\text{loc}}_{A}(\psi_{1}+\psi_{2})\) is a quotient of \(W^{\text{loc}}_{A}(\psi_{1})\otimes W^{\text{loc}}_{A}(\psi_{2})\). Since the modules \(W^{\text{loc}}_{A}(\psi_{1}+\psi_{2})\) and \(W^{\text{loc}}_{A}(\psi_{1})\otimes W^{\text{loc}}_{A}(\psi_{2})\) are both finite dimensional and each is a quotient of the other, we conclude that \(W^{\text{loc}}_{A}(\psi_{1}+\psi_{2})\cong W^{\text{loc}}_{A}(\psi_{1})\otimes W ^{\text{loc}}_{A}(\psi_{2})\). Note that if \(\psi_{1}\) and \(\psi_{2}\) satisfy the hypotheses of Theorem 5.1, then \[W^{\text{loc}}_{A}(\psi_{1})\widehat{\otimes}W^{\text{loc}}_{A}(\psi_{2}) \cong W^{\text{loc}}_{A}(\psi_{1}+\psi_{2}).\]
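To see what the support hypothesis of Theorem 5.1 amounts to in the simplest case, consider the following hedged illustration of ours, assuming \(A=\mathbb{C}[t]\) (so that every maximal ideal has the form \((t-b)\)):

```latex
% Illustration of the condition Supp(I_{psi_1}) \cap Supp(I_{psi_2}) = \emptyset.
% Suppose (t-b_1)^N \in I_{psi_1} and (t-b_2)^N \in I_{psi_2} for some N and
% distinct points b_1 \neq b_2. Since (t-b)^N lies in the maximal ideal (t-b')
% only when b = b' (evaluate at b'), we obtain
\[
  \mathrm{Supp}(I_{\psi_1}) \subseteq \{(t-b_1)\}, \qquad
  \mathrm{Supp}(I_{\psi_2}) \subseteq \{(t-b_2)\},
\]
% so the supports are disjoint, and Theorem 5.1 factors the local Weyl module
% at psi_1 + psi_2 (up to the two-copy ambiguity) as the tensor product of the
% local pieces attached to the distinct points b_1 and b_2.
```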
2309.08962
Dynamic Separation Logic
This paper introduces a dynamic logic extension of separation logic. The assertion language of separation logic is extended with modalities for the five types of the basic instructions of separation logic: simple assignment, look-up, mutation, allocation, and de-allocation. The main novelty of the resulting dynamic logic is that it allows one to combine different approaches to resolving these modalities. One such approach is based on the standard weakest precondition calculus of separation logic. The other approach, introduced in this paper, provides a novel alternative formalization in the proposed dynamic logic extension of separation logic. The soundness and completeness of this axiomatization have been formalized in the Coq theorem prover.
Frank S. de Boer, Hans-Dieter A. Hiep, Stijn de Gouw
2023-09-16T11:31:05
http://arxiv.org/abs/2309.08962v2
# Dynamic Separation Logic ###### Abstract This paper introduces a dynamic logic extension of separation logic. The assertion language of separation logic is extended with modalities for the five types of the basic instructions of separation logic: simple assignment, look-up, mutation, allocation, and de-allocation. The main novelty of the resulting dynamic logic is that it allows one to combine different approaches to resolving these modalities. One such approach is based on the standard weakest precondition calculus of separation logic. The other approach, introduced in this paper, provides a novel alternative formalization in the proposed dynamic logic extension of separation logic. The soundness and completeness of this axiomatization have been formalized in the Coq theorem prover. Emails: frb@cwi.nl, hdh@cwi.nl, sdg@ou.nl ## 1 Introduction This paper describes a study into the expressive power of separation logic (SL, for short) with regard to the formalization of _weakest preconditions_ [7]. To this end, we introduce a novel dynamic logic extension of SL, which we abbreviate by DSL (for Dynamic Separation Logic). SL [19] extends Hoare logic for the specification and verification of heap manipulating programs in terms of pre- and postconditions. The assertion language of SL features the basic heap assertion (\(x\mapsto e\)), '\(x\) points to \(e\)', which expresses that the variable \(x\) denotes the single allocated memory location, which stores the value of the expression \(e\). The so-called separating conjunction (\(p*q\)) allows one to split the heap, that is, the set of allocated memory locations and their contents, into two disjoint parts, one of which satisfies the conjunct \(p\) while the other satisfies \(q\). The separating implication (\(p\twoheadrightarrow q\)), roughly, holds in a heap if every extension of that heap with a disjoint part satisfying \(p\) satisfies \(q\). For an introduction to SL and an extensive survey of the literature, intended for a broad audience, see the paper by A. Chargueraud [5]. Dynamic logic [9] generalizes Hoare logics by introducing for each statement of the underlying programming language a corresponding modality, so that the formula \([S]p\) expresses the weakest precondition of the statement \(S\) with respect to the postcondition \(p\). Informally, \([S]p\) is valid if \(S\) cannot fail and every terminating computation of \(S\) establishes \(p\). In this paper we extend the assertion language of SL with _modalities_ for the five types of the basic instructions of SL: simple assignment, look-up, mutation, allocation, and de-allocation. For any such basic instruction \(S\), we then can introduce in the Hoare logic the axiom \[\{[S]p\}\ S\ \{p\}\] which is trivially sound and complete by definition of \([S]p\). In case \(S\) is a simple assignment \(x:=e\) and \(p\) is an assertion in standard SL, we can resolve the weakest precondition \([S]p\), as in first-order dynamic logic, simply by _substituting_ every free occurrence of \(x\) in \(p\) by the expression \(e\) (after suitable renaming of the bound variables in \(p\), such that no variable of \(e\) gets bound). In SL we can resolve \([S]p\), for any other basic instruction \(S\), by a formula with a hole \(C_{S}(\cdot)\) in SL itself, such that \(C_{S}(p)\) is equivalent to \([S]p\). For example, the assertion 
\[(\exists y(x\mapsto y))\ast((x\mapsto e)\twoheadrightarrow p)\] states that the heap can be split into a sub-heap consisting of a single memory cell denoted by \(x\), such that \(p\) holds for every extension of the other part with a single memory cell that is denoted by \(x\) and contains the value of \(e\). It follows that this assertion is equivalent to \([[x]:=e]p\), where the _mutation_ instruction \([x]:=e\) assigns the value of the expression \(e\) to the heap location denoted by the variable \(x\). The main contribution of this paper is a complementary approach to resolving \([S]p\), for any basic instruction. In this approach we obtain an alternative characterization of the weakest precondition \([S]p\) by a novel axiomatization of the modalities in DSL. This axiomatization allows for a characterization of \([S]p\) _compositionally_ in terms of the syntactical structure of \(p\). O'Hearn, Reynolds, and Yang introduced local axioms [15] and showed how to derive from these local axioms a weakest precondition axiomatization of the basic instructions in SL, using the frame rule and the separating implication for expressing the weakest precondition. However, the separating implication is actually not needed to prove completeness of the local axioms for simple assignments, look-up, allocation, and de-allocation. We illustrate the expressiveness of DSL by extending this result to the local mutation axiom. We further illustrate the expressiveness of DSL by a novel _strongest postcondition_ axiomatization. Using the proof assistant Coq, we have formally verified the soundness and completeness proofs of the axiomatization of the DSL modalities. All our results can be readily extended to a programming language involving (sequential) control structures such as loops. ## Acknowledgments The authors are grateful for the constructive feedback provided by the anonymous referees. ## 2 Syntax and semantics We follow the presentation of SL in [19]. A heap\({}^{5}\) \(h\) is represented by a (finitely-based) _partial_ function \(\mathbb{Z}\rightharpoonup\mathbb{Z}\) and the domain of \(h\) is denoted by \(\textit{dom}(h)\). We write \(h(n)=\bot\) if \(n\not\in\textit{dom}(h)\). The heaps \(h,h^{\prime}\) are disjoint iff \(\textit{dom}(h)\cap\textit{dom}(h^{\prime})=\emptyset\). A heap \(h\) is partitioned into \(h_{1}\) and \(h_{2}\), denoted by \(h=h_{1}\uplus h_{2}\), iff \(h_{1}\) and \(h_{2}\) are disjoint, \(\textit{dom}(h)=\textit{dom}(h_{1})\cup\textit{dom}(h_{2})\), and \(h(n)=h_{i}(n)\) if \(n\in\textit{dom}(h_{i})\) for \(i\in\{1,2\}\). Footnote 5: All italicized variables are typical meta-variables, and we use primes and subscripts for other meta-variables of the same type, e.g. \(h\), \(h^{\prime}\), \(h^{\prime\prime}\), \(h_{1}\), \(h_{2}\) are all heaps. \(V\) denotes a countably infinite set of integer variables, with typical element \(x\). A store \(s\) is a total function \(V\to\mathbb{Z}\). We abstract from the syntax of arithmetic expressions \(e\) and Boolean expressions \(b\). By \(\textit{var}(e)\) (resp. \(\textit{var}(b)\)) we denote the finite set of variables that occur in \(e\) (resp. \(b\)). We have the Boolean constants **true** and **false**, and (\(e_{1}=e_{2}\)) is a Boolean expression given arithmetic expressions \(e_{1}\) and \(e_{2}\). 
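The data domains just introduced translate directly into executable form. The following minimal Python sketch (our illustration, not part of the paper or its Coq development; all names are our own) models heaps as finite dictionaries and implements the partition relation \(h=h_{1}\uplus h_{2}\):

```python
# Illustrative model of heaps and stores (not part of the paper's formalization).
from typing import Dict, Optional

Heap = Dict[int, int]    # finitely-based partial function Z -> Z; dom(h) = h.keys()
Store = Dict[str, int]   # total function V -> Z (variables not listed read as 0)

def heap_at(h: Heap, n: int) -> Optional[int]:
    """h(n); None plays the role of 'undefined' for n outside dom(h)."""
    return h.get(n)

def disjoint(h1: Heap, h2: Heap) -> bool:
    """dom(h1) and dom(h2) have empty intersection."""
    return not (h1.keys() & h2.keys())

def partition(h1: Heap, h2: Heap) -> Optional[Heap]:
    """Return h with h = h1 (+) h2 if the parts are disjoint, else None."""
    if not disjoint(h1, h2):
        return None
    h = dict(h1)
    h.update(h2)
    return h
```

For instance, `partition({1: 5}, {2: 7})` yields `{1: 5, 2: 7}`, while `partition({1: 5}, {1: 7})` yields `None`: overlapping heaps admit no partition.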
Figure 1: Semantics of basic instructions of heap manipulating programs. \(\langle x:=e,h,s\rangle\Rightarrow(h,s[x:=s(e)])\), \(\langle x:=[e],h,s\rangle\Rightarrow(h,s[x:=h(s(e))])\) if \(s(e)\in\mathit{dom}(h)\), \(\langle x:=[e],h,s\rangle\Rightarrow\textbf{fail}\) if \(s(e)\not\in\mathit{dom}(h)\), \(\langle[x]:=e,h,s\rangle\Rightarrow(h[s(x):=s(e)],s)\) if \(s(x)\in\mathit{dom}(h)\), \(\langle[x]:=e,h,s\rangle\Rightarrow\textbf{fail}\) if \(s(x)\not\in\mathit{dom}(h)\), \(\langle x:=\textbf{cons}(e),h,s\rangle\Rightarrow(h[n:=s(e)],s[x:=n])\) where \(n\not\in\mathit{dom}(h)\), \(\langle\textbf{dispose}(x),h,s\rangle\Rightarrow(h[s(x):=\bot],s)\) if \(s(x)\in\mathit{dom}(h)\), \(\langle\textbf{dispose}(x),h,s\rangle\Rightarrow\textbf{fail}\) if \(s(x)\not\in\mathit{dom}(h)\). By \(s(e)\) we denote the integer value of \(e\) in \(s\), and by \(s(b)\) we denote the Boolean value of \(b\) in \(s\). Following [19], expressions thus do not refer to the heap. By \(s[x:=v]\) and \(h[n:=v]\) we denote the result of updating the value of the variable \(x\) and the location \(n\), respectively. The definition of \(h[n:=v]\) does not require that \(n\in\mathit{dom}(h)\). More specifically, we have \[h[n:=v](m)=\left\{\begin{aligned} & v&\text{if }n=m\\ & h(m)&\text{otherwise}\end{aligned}\right.\] Thus, \(\mathit{dom}(h[n:=v])=\mathit{dom}(h)\cup\{n\}\). For heaps we also define the clearing of a location, denoted by \(h[n:=\bot]\). We have \(h[n:=\bot](m)=\bot\) if \(n=m\), and \(h[n:=\bot](m)=h(m)\) otherwise. Similarly, we have \(\mathit{dom}(h[n:=\bot])=\mathit{dom}(h)\setminus\{n\}\). Following [19], we have the following basic instructions: \(x:=e\) (simple assignment), \(x:=[e]\) (look-up), \([x]:=e\) (mutation), \(x:=\textbf{cons}(e)\) (allocation), and \(\textbf{dispose}(x)\) (de-allocation). Just like [10], _We will not give a full syntax of [statements], as the treatment of conditionals and looping statements is standard. Instead, we will concentrate on assignment statements, which is where the main novelty of the approach lies._ The successful execution of any basic instruction \(S\) is denoted by \(\langle S,h,s\rangle\Rightarrow(h^{\prime},s^{\prime})\), whereas \(\langle S,h,s\rangle\Rightarrow\textbf{fail}\) denotes a failing execution (e.g. due to access of a 'dangling pointer'). See Figure 1 for their semantics (and see Appendix, Figure A.1, for the full syntax and semantics). We follow [10] in the definition of the syntax and semantics of the assertion language of SL, but we use a different atomic 'weak points to' formula (as in [18] and [6]). In DSL we additionally have a modality \([S]\) for each statement \(S\), which has the highest binding priority. \[p,q:=b\mid(e\hookrightarrow e^{\prime})\mid(p\to q)\mid(\forall xp)\mid(p \ast q)\mid(p\twoheadrightarrow q)\mid[S]p\] By \(h,s\models p\) we denote the truth relation of classical SL, see Figure 2. Validity of \(p\) is denoted by \(\models p\). The semantics of DSL extends the semantics of SL by giving semantics to the modality, expressing the weakest precondition. We further have the usual abbreviations: \(\neg p\) denotes \((p\rightarrow\textbf{false})\), \((p\lor q)\) denotes \((\neg p\to q)\) (negation has binding priority over implication), \(p\equiv q\) denotes \((p\to q)\wedge(q\to p)\), and \((\exists xp)\) denotes \(\neg(\forall x(\neg p))\); note that \(x\) is bound in \(p\). By logical connective we mean the connectives \(\neg,\wedge,\vee,\rightarrow,\forall,\exists\), and by separating connective we mean \(\ast\) and \(\twoheadrightarrow\). 
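The transitions of Figure 1 can likewise be read as an interpreter. Below is a hedged sketch of ours, under the assumptions that expressions are modeled as Python functions from stores to integers and that the nondeterministic choice of a fresh cell in allocation is resolved by one concrete \(n\not\in\mathit{dom}(h)\):

```python
# Illustrative interpreter for the five basic instructions of Figure 1.
FAIL = "fail"

def exec_assign(x, e, h, s):                  # x := e
    s2 = dict(s); s2[x] = e(s)
    return h, s2

def exec_lookup(x, e, h, s):                  # x := [e]
    n = e(s)
    if n not in h:
        return FAIL                           # access of a dangling pointer
    s2 = dict(s); s2[x] = h[n]
    return h, s2

def exec_mutate(x, e, h, s):                  # [x] := e
    if s[x] not in h:
        return FAIL
    h2 = dict(h); h2[s[x]] = e(s)
    return h2, s

def exec_cons(x, e, h, s):                    # x := cons(e)
    n = max(h, default=0) + 1                 # one admissible choice of n not in dom(h)
    h2 = dict(h); h2[n] = e(s)
    s2 = dict(s); s2[x] = n
    return h2, s2

def exec_dispose(x, h, s):                    # dispose(x)
    if s[x] not in h:
        return FAIL
    h2 = dict(h); del h2[s[x]]
    return h2, s
```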
Further, \((e\hookrightarrow-)\) denotes \(\exists x(e\hookrightarrow x)\) for a fresh \(x\), \(\textbf{emp}\) denotes \(\forall x(x\not\hookrightarrow-)\), and \((e\mapsto e^{\prime})\) denotes \((e\hookrightarrow e^{\prime})\wedge(\forall x((x\hookrightarrow-)\to x=e))\) for a fresh \(x\). We use \(\not\hookrightarrow\) and \(\neq\) as negations of the predicates as usual, and in particular \((e\not\hookrightarrow-)\) is \(\neg\exists x(e\hookrightarrow x)\). We may drop matching parentheses if doing so would not give rise to ambiguity. Note that \(h,s\models\textbf{emp}\) iff \(\mathit{dom}(h)=\emptyset\), and \(h,s\models(e\mapsto e^{\prime})\) iff \(\mathit{dom}(h)=\{s(e)\}\) and \(h(s(e))=s(e^{\prime})\). An assertion is _first-order_ if its construction does not involve separating connectives or modalities. The assertion \((e\hookrightarrow e^{\prime})\) is implied by \((e\mapsto e^{\prime})\), and to express the former using the latter requires the use of separating connectives (i.e. \((e\hookrightarrow e^{\prime})\) is equivalent to \(\textbf{true}\ast(e\mapsto e^{\prime})\)), whereas our definition of \((e\mapsto e^{\prime})\) requires only logical connectives, and thus we use \((e\hookrightarrow e^{\prime})\) as atomic formula. A specification \(\{p\}\ S\ \{q\}\) is a triple that consists of a precondition \(p\), a program \(S\), and a postcondition \(q\). Specifications are interpreted in the sense of strong partial correctness, which ensures the absence of explicit failures. Figure 2: Semantics of Dynamic Separation Logic. \(h,s\models b\) iff \(s(b)=\mathbf{true}\), \(h,s\models(e\hookrightarrow e^{\prime})\) iff \(s(e)\in\mathit{dom}(h)\) and \(h(s(e))=s(e^{\prime})\), \(h,s\models(p\wedge q)\) iff \(h,s\models p\) and \(h,s\models q\), \(h,s\models(p\to q)\) iff \(h,s\models p\) implies \(h,s\models q\), \(h,s\models(\forall xp)\) iff \(h,s[x:=n]\models p\) for all \(n\), \(h,s\models(p*q)\) iff \(h_{1},s\models p\) and \(h_{2},s\models q\) for some \(h_{1},h_{2}\) such that \(h=h_{1}\uplus h_{2}\), \(h,s\models(p\twoheadrightarrow q)\) iff \(h^{\prime},s\models p\) implies \(h^{\prime\prime},s\models q\) for all \(h^{\prime},h^{\prime\prime}\) such that \(h^{\prime\prime}=h\uplus h^{\prime}\), \(h,s\models[S]p\) iff \(\langle S,h,s\rangle\not\Rightarrow\mathbf{fail}\) and \(h^{\prime},s^{\prime}\models p\) for all \(h^{\prime},s^{\prime}\) such that \(\langle S,h,s\rangle\Rightarrow(h^{\prime},s^{\prime})\). ## 3 A sound and complete axiomatization of DSL In dynamic logic, axioms are introduced to simplify formulas in which modalities occur. For example, we have the following basic equivalences **E1**–**E3** for simple assignments. **Lemma 3.1** (Basic equivalences): _Let \(S\) denote a simple assignment \(x:=e\) and \(\circ\) denote a (binary) logical or separating connective._ \[[S]\mathbf{false} \equiv\mathbf{false}\] ( **E1** ) \[[S](p\circ q) \equiv[S]p\circ[S]q\] ( **E2** ) \[[S](\forall yp) \equiv\forall y([S]p)\] ( **E3** ) _In_ **E3** _we assume that_ \(y\) _does not appear in_ \(S\)_, neither in the left-hand side of the assignment_ \(S\) _nor in its right-hand side._ The proofs of these equivalences proceed by a straightforward induction on the structure of \(p\), where the base cases of Boolean expressions and the weak points-to predicate are handled by a straightforward extension of the _substitution lemma_ for standard first-order logic. 
By \(b[e/x]\) we denote the result of replacing every occurrence of \(x\) in the Boolean expression \(b\) by the expression \(e\) (and similarly for arithmetic expressions). **Lemma 3.2** (Substitution lemma): \[[x:=e]b\equiv b[e/x]\qquad[x:=e](e^{\prime}\hookrightarrow e^{\prime\prime}) \equiv(e^{\prime}[e/x]\hookrightarrow e^{\prime\prime}[e/x])\] ( **E4** ) **Proof.** This lemma follows from the semantics of the simple assignment modality and the substitution lemma for first-order expressions: \(s(e^{\prime}[e/x])=s[x:=s(e)](e^{\prime})\). Note that expressions do not refer to the heap. \(\Box\) The above equivalences **E1**–**E3** do not hold in general for the other basic instructions. For example, we have \(\left[x:=[e]\right]\mathbf{false}\equiv\neg(e\hookrightarrow-)\). On the other hand, \(\left[x:=\mathbf{cons}(0)\right]\mathbf{false}\equiv\mathbf{false}\), but \(\left[x:=\mathbf{cons}(0)\right](x\neq 0)\) is not equivalent to \(\neg([x:=\mathbf{cons}(0)](x=0))\), because \(\left[x:=\mathbf{cons}(0)\right](x\neq 0)\) is equivalent to \((0\hookrightarrow-)\) ('zero is allocated'), whereas \(\neg([x:=\mathbf{cons}(0)](x=0))\) expresses that \((n\not\hookrightarrow-)\) for some \(n\neq 0\) (which holds for any finite heap). The above equivalences **E1**–**E3**, with **E2** restricted to the (standard) logical connectives, _do_ hold for the _pseudo_ instructions \(\langle x\rangle:=e\), a so-called _heap update_, and \(\langle x\rangle:=\bot\), a so-called _heap clear_. These pseudo-instructions are defined by the transitions \[\langle\langle x\rangle:=e,h,s\rangle\Rightarrow(h[s(x):=s(e)],s)\text{ and } \langle\langle x\rangle:=\bot,h,s\rangle\Rightarrow(h[s(x):=\bot],s).\] In contrast to the mutation and de-allocation instructions, these pseudo-instructions do not require that \(s(x)\in\mathit{dom}(h)\); e.g., if \(s(x)\not\in\mathit{dom}(h)\) then the heap update \(\langle x\rangle:=e\) extends the domain of the heap, whereas \([x]:=e\) leads to failure in that case. From a practical viewpoint, the heap update and heap clear pseudo-instructions are 'lower level' instructions, e.g. in processors that implement virtual memory (where an operating system allocates memory on the fly whenever a program performs a write to a virtual address that is not allocated), and on top of these instructions efficient memory allocation algorithms are implemented, e.g. malloc and free in C. In the following lemma we give an axiomatization in DSL of the basic SL instructions in terms of simple assignments and these two pseudo-instructions. For comparison we also give the standard SL axiomatization [19, 8, 3]. **Lemma 3.3** (Axioms basic instructions): \[[x:=[e]]p\equiv\exists y((e\hookrightarrow y)\wedge[x:=y]p),\] ( **E5** ) \[[[x]:=e]p\equiv\left\{\begin{array}{l}(x\hookrightarrow-)\wedge[ \langle x\rangle:=e]p\\ (x\mapsto-)\ast((x\mapsto e)\twoheadrightarrow p)\end{array}\right.\] ( **E6** ) \[[x:=\mathbf{cons}(e)]p\equiv\left\{\begin{array}{l}\forall x((x \not\hookrightarrow-)\rightarrow[\langle x\rangle:=e]p)\\ \forall x((x\mapsto e)\twoheadrightarrow p)\end{array}\right.\] ( **E7** ) \[[\mathbf{dispose}(x)]p\equiv\left\{\begin{array}{l}(x \hookrightarrow-)\wedge[\langle x\rangle:=\bot]p\\ (x\mapsto-)\ast p\end{array}\right.\] ( **E8** ) Note that \([x:=y]p\) in **E5** reduces to \(p[y/x]\) by **E1**–**E4**. 
For technical convenience only, we require in the axioms for \(x:=\mathbf{cons}(e)\) that \(x\) does not appear in \(e\) (see Section 5 for how to lift this restriction). In the sequel **E5**–**E8** refer to the corresponding DSL equivalences. The proofs of these equivalences are straightforward (they consist simply of expanding the semantics of the involved modalities) and are therefore omitted. We have the following SL axiomatization of the heap update and heap clear pseudo-instructions. \[[\langle x\rangle:=e]p\equiv((x\mapsto-)\ast((x\mapsto e) \twoheadrightarrow p))\vee((x\not\hookrightarrow-)\wedge((x\mapsto e)\twoheadrightarrow p))\] \[[\langle x\rangle:=\bot]p\equiv((x\mapsto-)\ast p)\vee((x \not\hookrightarrow-)\wedge p)\] This axiomatization thus requires a case distinction between whether or not \(x\) is allocated. For the complementary approach, we want to resolve the modalities for the heap update and heap clear instructions compositionally in terms of \(p\). What thus remains for a complete axiomatization is a characterization of \([S]b\), \([S](e\hookrightarrow e^{\prime})\), \([S](p\ast q)\), and \([S](p\twoheadrightarrow q)\), where \(S\) denotes one of the two pseudo-instructions. Lemma 3.4 provides an axiomatization in DSL of a heap update. **Lemma 3.4** (Heap update): _We have the following equivalences for the heap update modality._ \[[\langle x\rangle:=e]b\equiv b,\] ( **E9** ) \[[\langle x\rangle:=e](e^{\prime}\hookrightarrow e^{\prime\prime}) \equiv(x=e^{\prime}\wedge e^{\prime\prime}=e)\vee(x\neq e^{ \prime}\wedge e^{\prime}\hookrightarrow e^{\prime\prime}),\] ( **E10** ) \[[\langle x\rangle:=e](p\ast q)\equiv([\langle x\rangle:=e]p\ast q^{ \prime})\vee(p^{\prime}\ast[\langle x\rangle:=e]q),\] ( **E11** ) \[[\langle x\rangle:=e](p\twoheadrightarrow q)\equiv p^{\prime} \twoheadrightarrow[\langle x\rangle:=e]q,\] ( **E12** ) _where \(p^{\prime}\) abbreviates \(p\wedge(x\not\hookrightarrow-)\) and, similarly, \(q^{\prime}\) abbreviates \(q\wedge(x\not\hookrightarrow-)\)._ These equivalences can informally be explained as follows. Since the heap update \(\langle x\rangle:=e\) does not affect the store, and the evaluation of a Boolean condition \(b\) only depends on the store, we have that \([\langle x\rangle:=e]b\equiv b\). To predict whether \((e^{\prime}\hookrightarrow e^{\prime\prime})\) holds after \(\langle x\rangle:=e\), we only need to distinguish between whether \(x\) and \(e^{\prime}\) are aliases, that is, whether they denote the same location, which is simply expressed by \(x=e^{\prime}\). If \(x=e^{\prime}\) then \(e^{\prime\prime}=e\) should hold, otherwise \((e^{\prime}\hookrightarrow e^{\prime\prime})\) (note again that \(\langle x\rangle:=e\) does not affect the values of the expressions \(e,e^{\prime}\) and \(e^{\prime\prime}\)). 
As a basic example, we compute \[[\langle x\rangle:=e](y\hookrightarrow-) \equiv\text{(definition $y\hookrightarrow-$)}\] \[[\langle x\rangle:=e]\exists z(y\hookrightarrow z) \equiv\text{(\bf E3)}\] \[\exists z[\langle x\rangle:=e](y\hookrightarrow z) \equiv\text{(\bf E10)}\] \[\exists z((y=x\wedge e=z)\vee(y\neq x\wedge(y\hookrightarrow z))) \equiv\text{(semantics SL)}\] \[y\neq x\rightarrow(y\hookrightarrow-)\] We use this derived equivalence in the following example: \[[\langle x\rangle:=e](y\mapsto-) \equiv\text{(definition $y\mapsto-$)}\] \[[\langle x\rangle:=e]((y\hookrightarrow-)\wedge\forall z((z \hookrightarrow-)\to z=y)) \equiv\text{(\bf E2, E3, E9)}\] \[[\langle x\rangle:=e](y\hookrightarrow-)\wedge\forall z([\langle x \rangle:=e](z\hookrightarrow-)\to z=y) \equiv\text{(see above)}\] \[(y\neq x\rightarrow(y\hookrightarrow-))\wedge\forall z((z\neq x \rightarrow(z\hookrightarrow-))\to z=y) \equiv\text{(semantics SL)}\] \[y=x\wedge(\mathbf{emp}\vee(x\mapsto-))\] To predict whether \((p*q)\) holds after the heap update \(\langle x\rangle:=e\), we need to distinguish between whether \(p\) or \(q\) holds for the sub-heap that contains the (updated) location \(x\). Since we do not assume that \(x\) is already allocated, we instead distinguish between whether \(p\) or \(q\) holds initially for the sub-heap that does _not_ contain the updated location \(x\). As a simple example, we compute \[[\langle x\rangle:=e](\mathbf{true}*(x\mapsto-)) \equiv\text{(\bf E9, E11)}\] \[(\mathbf{true}*((x\mapsto-)\wedge(x\not\hookrightarrow-)))\vee((x \not\hookrightarrow-)*[\langle x\rangle:=e](x\mapsto-)) \equiv\text{(see above)}\] \[(\mathbf{true}*((x\mapsto-)\wedge(x\not\hookrightarrow-)))\vee((x \not\hookrightarrow-)*(\mathbf{emp}\vee(x\mapsto-))) \equiv\text{(semantics SL)}\] \[(\mathbf{true}*\mathbf{false})\vee((x\not\hookrightarrow-)*( \mathbf{emp}\vee(x\mapsto-))) \equiv\text{(semantics SL)}\] \[\mathbf{true}\] Note that this coincides with the above calculation of \([\langle x\rangle:=e](y\hookrightarrow-)\), which also reduces to \(\mathbf{true}\) when instantiating \(y\) by \(x\). The semantics of \((p\twoheadrightarrow q)\) after the heap update \(\langle x\rangle:=e\) involves universal quantification over all disjoint heaps that do not contain \(x\) (because after the heap update \(x\) is allocated). Therefore we simply add the condition that \(x\) is not allocated to \(p\), and apply the heap update to \(q\). As a very basic example, we compute \[[\langle x\rangle:=0]((y\hookrightarrow 1)\twoheadrightarrow(y \hookrightarrow 1)) \equiv\text{(\bf E12)}\] \[((y\hookrightarrow 1)\wedge(x\not\hookrightarrow-))\twoheadrightarrow[ \langle x\rangle:=0](y\hookrightarrow 1) \equiv\text{(\bf E10)}\] \[((y\hookrightarrow 1)\wedge(x\not\hookrightarrow-))\twoheadrightarrow((y =x\wedge 0=1)\vee(y\neq x\wedge y\hookrightarrow 1)) \equiv\text{(semantics SL)}\] \[\mathbf{true}\] Note that \((y\hookrightarrow 1)\twoheadrightarrow(y\hookrightarrow 1)\equiv\mathbf{true}\) and \([\langle x\rangle:=0]\mathbf{true}\equiv\mathbf{true}\). 
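Quantifier-free equivalences such as **E10** can be sanity-checked by brute force over a small state space. The following self-contained Python sketch (ours, not the paper's Coq proof; it inlines the heap update and samples one-cell heaps, which already exercise the aliasing case) asserts that both sides agree:

```python
# Brute-force sanity check of E10 on small states (illustrative, not a proof).
from itertools import product

def pointsto(h, s, e1, e2):
    """h,s |= (e1 'weakly points to' e2): s(e1) in dom(h) and h(s(e1)) = s(e2)."""
    return h.get(e1(s)) == e2(s)

def check_E10(vals=range(3)):
    e  = lambda s: s["e"]
    e1 = lambda s: s["e1"]
    e2 = lambda s: s["e2"]
    for vx, ve, ve1, ve2, loc, val in product(vals, repeat=6):
        s = {"x": vx, "e": ve, "e1": ve1, "e2": ve2}
        h = {loc: val}
        h2 = dict(h); h2[s["x"]] = e(s)          # the heap update <x> := e
        lhs = pointsto(h2, s, e1, e2)            # [<x>:=e](e1 -> e2)
        rhs = (vx == ve1 and ve2 == ve) or (vx != ve1 and pointsto(h, s, e1, e2))
        assert lhs == rhs
    print("E10 agrees on all sampled states")

check_E10()
```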
**Proof of Lemma 3.4.** **E9**: \(h,s\models[\langle x\rangle:=e]b\) iff (semantics heap update modality) \(h[s(x):=s(e)],s\models b\) iff (\(b\) does not depend on the heap) \(h,s\models b\). **E10**: \(h,s\models[\langle x\rangle:=e](e^{\prime}\hookrightarrow e^{\prime\prime})\) iff (semantics heap update modality) \(h[s(x):=s(e)],s\models e^{\prime}\hookrightarrow e^{\prime\prime}\) iff (semantics points-to) \(h[s(x):=s(e)](s(e^{\prime}))=s(e^{\prime\prime})\) iff (definition \(h[s(x):=s(e)]\)) if \(s(x)=s(e^{\prime})\) then \(s(e)=s(e^{\prime\prime})\) else \(h(s(e^{\prime}))=s(e^{\prime\prime})\) iff (semantics assertions) \(h,s\models(x=e^{\prime}\wedge e^{\prime\prime}=e)\vee(x\neq e^{\prime}\wedge e ^{\prime}\hookrightarrow e^{\prime\prime})\). **E11**: \(h,s\models[\langle x\rangle:=e](p*q)\) iff (semantics heap update modality) \(h[s(x):=s(e)],s\models p*q\). From here we proceed as follows. By the semantics of separating conjunction, there exist \(h_{1}\) and \(h_{2}\) such that \(h[s(x):=s(e)]=h_{1}\uplus h_{2}\), \(h_{1},s\models p\), and \(h_{2},s\models q\). Let \(s(x)\in\mathit{dom}(h_{1})\) (the other case runs similarly). So \(h[s(x):=s(e)]=h_{1}\uplus h_{2}\) implies \(h_{1}(s(x))=s(e)\) and \(h=h_{1}[s(x):=h(s(x))]\uplus h_{2}\). By the semantics of the heap update modality, \(h_{1}(s(x))=s(e)\) and \(h_{1},s\models p\) imply \(h_{1}[s(x):=h(s(x))],s\models[\langle x\rangle:=e]p\). Since \(s(x)\not\in\mathit{dom}(h_{2})\), we have \(h_{2},s\models q\wedge x\not\hookrightarrow-\). By the semantics of separating conjunction we conclude that \(h,s\models[\langle x\rangle:=e]p*q^{\prime}\) (where \(q^{\prime}\) denotes \(q\wedge x\not\hookrightarrow-\)). In the other direction, from \(h,s\models[\langle x\rangle:=e]p*q^{\prime}\) (the other case runs similarly) we derive that there exist \(h_{1}\) and \(h_{2}\) such that \(h=h_{1}\uplus h_{2}\), \(h_{1},s\models[\langle x\rangle:=e]p\) and \(h_{2},s\models q^{\prime}\). By the semantics of the heap update modality it follows that \(h_{1}[s(x):=s(e)],s\models p\). Since \(s(x)\not\in\mathit{dom}(h_{2})\), we have that \(h[s(x):=s(e)]=h_{1}[s(x):=s(e)]\uplus h_{2}\), and so \(h[s(x):=s(e)],s\models p*q\), that is, \(h,s\models[\langle x\rangle:=e](p*q)\). **E12**: \(h,s\models[\langle x\rangle:=e](p\twoheadrightarrow q)\) iff (semantics of heap update modality) \(h[s(x):=s(e)],s\models p\twoheadrightarrow q\) iff (semantics separating implication) for every \(h^{\prime}\) disjoint from \(h[s(x):=s(e)]\): if \(h^{\prime},s\models p\) then \(h[s(x):=s(e)]\uplus h^{\prime},s\models q\) iff (since \(s(x)\not\in\mathit{dom}(h^{\prime})\)) for every \(h^{\prime}\) disjoint from \(h\): if \(h^{\prime},s\models p\wedge x\not\hookrightarrow-\) then \((h\uplus h^{\prime})[s(x):=s(e)],s\models q\) iff (semantics of heap update modality) for every \(h^{\prime}\) disjoint from \(h\): if \(h^{\prime},s\models p\wedge x\not\hookrightarrow-\) then \(h\uplus h^{\prime},s\models[\langle x\rangle:=e]q\) iff (semantics separating implication) \(h,s\models(p\wedge x\not\hookrightarrow-)\twoheadrightarrow[\langle x\rangle:=e]q\). The equivalences for the heap clear modality in the following lemma can be informally explained as follows: Since \(\langle x\rangle:=\bot\) does not affect the store, and the evaluation of a Boolean condition \(b\) only depends on the store, we have that \([\langle x\rangle:=\bot]b\equiv b\). For \(e\hookrightarrow e^{\prime}\) to hold after executing \(\langle x\rangle:=\bot\), we must initially have that \(x\neq e\) and \(e\hookrightarrow e^{\prime}\). 
As a simple example, we have that \(\forall y,z(y\not\hookrightarrow z)\) characterizes the empty heap. It follows that \([\langle x\rangle:=\bot](\forall y,z(y\not\hookrightarrow z))\) is equivalent to \(\forall y,z(\neg(y\neq x\wedge y\hookrightarrow z))\). The latter first-order formula is equivalent to \(\forall y,z(y=x\lor y\not\hookrightarrow z)\). This assertion thus states that the domain consists of at most the location \(x\), which indeed ensures that after \(\langle x\rangle:=\bot\) the heap is empty. To ensure that \(p*q\) holds after clearing \(x\), it suffices to show that the initial heap can be split such that both \(p\) and \(q\) hold in their respective sub-heaps with \(x\) cleared. The semantics of \(p\twoheadrightarrow q\) after clearing \(x\) involves universal quantification over all disjoint heaps that may contain \(x\), whereas before executing \(\langle x\rangle:=\bot\) it involves universal quantification over all disjoint heaps that do _not_ contain \(x\), in case \(x\) is allocated initially. To formalize in the initial configuration universal quantification over all disjoint heaps, we distinguish between all disjoint heaps that do not contain \(x\), and _simulate_ all disjoint heaps that contain \(x\) by interpreting both \(p\) and \(q\) in \(p\twoheadrightarrow q\) in the context of heap updates \(\langle x\rangle:=y\) with _arbitrary_ values \(y\) for the location \(x\). As a very basic example, consider \([\langle x\rangle:=\bot]((x\hookrightarrow 0)\twoheadrightarrow(x\hookrightarrow 0))\), which should be equivalent to **true**. The left conjunct \(((x\hookrightarrow 0)\wedge(x\not\hookrightarrow-))\twoheadrightarrow[\langle x\rangle:=\bot](x\hookrightarrow 0)\) of the resulting formula after applying **E16** is equivalent to **true** (because \((x\hookrightarrow 0)\wedge(x\not\hookrightarrow-)\) is equivalent to **false**). 
We compute the second conjunct (in the application of **E10** we omitted some trivial reasoning steps): \[\forall y([\langle x\rangle:=y](x\hookrightarrow 0)\twoheadrightarrow[\langle x\rangle:=y](x\hookrightarrow 0)) \equiv(\hbox{\bf E10})\] \[\forall y(y=0\twoheadrightarrow y=0) \equiv(\hbox{\rm semantics SL})\] **true** **Lemma 3.5** (Heap clear): _We have the following equivalences for the heap clear modality._ \[[\langle x\rangle:=\bot]b \equiv b,\] (**E13**) \[[\langle x\rangle:=\bot](e\hookrightarrow e^{\prime}) \equiv(x\neq e)\wedge(e\hookrightarrow e^{\prime}),\] (**E14**) \[[\langle x\rangle:=\bot](p\ast q) \equiv[\langle x\rangle:=\bot]p\ast[\langle x\rangle:=\bot]q,\] (**E15**) \[[\langle x\rangle:=\bot](p\twoheadrightarrow q) \equiv((p\wedge x\not\hookrightarrow-)\twoheadrightarrow[\langle x\rangle:= \bot]q)\wedge\forall y([\langle x\rangle:=y]p\twoheadrightarrow[\langle x \rangle:=y]q),\] (**E16**) _where \(y\) is fresh._ **Proof.** **E13**: \([\langle x\rangle:=\bot]b\equiv b\). As above, it suffices to observe that the evaluation of \(b\) does not depend on the heap. **E14**: \(h,s\models[\langle x\rangle:=\bot](e\hookrightarrow e^{\prime})\) iff (semantics heap clear modality) \(h[s(x):=\bot],s\models e\hookrightarrow e^{\prime}\) iff (semantics points-to) \(s(e)\in\mathit{dom}(h[s(x):=\bot])\) and \(h[s(x):=\bot](s(e))=h(s(e))=s(e^{\prime})\) iff (semantics assertions) \(h,s\models x\neq e\wedge e\hookrightarrow e^{\prime}\). **E15**: \(h,s\models[\langle x\rangle:=\bot](p\ast q)\) iff (semantics heap clear modality) \(h[s(x):=\bot],s\models p\ast q\) iff (semantics separating conjunction) \(h_{1},s\models p\) and \(h_{2},s\models q\), for some \(h_{1},h_{2}\) such that \(h[s(x):=\bot]=h_{1}\uplus h_{2}\) iff (semantics heap clear modality) \(h_{1},s\models[\langle x\rangle:=\bot]p\) and \(h_{2},s\models[\langle x\rangle:=\bot]q\), for some \(h_{1},h_{2}\) such that \(h=h_{1}\uplus h_{2}\). Note: \(h=h_{1}\uplus h_{2}\) implies \(h[s(x):=\bot]=h_{1}[s(x):=\bot]\uplus h_{2}[s(x):=\bot]\), and, conversely, \(h[s(x):=\bot]=h_{1}\uplus h_{2}\) implies there exist \(h_{1}^{\prime},h_{2}^{\prime}\) such that \(h=h_{1}^{\prime}\uplus h_{2}^{\prime}\), \(h_{1}=h_{1}^{\prime}[s(x):=\bot]\) and \(h_{2}=h_{2}^{\prime}[s(x):=\bot]\). **E16**: \(h,s\models[\langle x\rangle:=\bot](p\twoheadrightarrow q)\) iff (semantics heap clear modality) \(h[s(x):=\bot],s\models p\twoheadrightarrow q\). From here we proceed as follows. 
First we show that \(h,s\models((p\wedge x\not\hookrightarrow-)\twoheadrightarrow[\langle x\rangle:=\bot]q)\) and \(h,s\models\forall y([\langle x\rangle:=y]p\twoheadrightarrow[\langle x\rangle:=y]q)\) imply \(h[s(x):=\bot],s\models p\twoheadrightarrow q\). Let \(h^{\prime}\) be disjoint from \(h[s(x):=\bot]\) and \(h^{\prime},s\models p\). We have to show that \(h[s(x):=\bot]\uplus h^{\prime},s\models q\). We distinguish the following two cases. * First, let \(s(x)\in\mathit{dom}(h^{\prime})\). We then introduce \(s^{\prime}=s[y:=h^{\prime}(s(x))]\). We have \(h^{\prime},s^{\prime}\models p\) (since \(y\) does not occur in \(p\)), so it follows by the semantics of the heap update modality that \(h^{\prime}[s(x):=\bot],s^{\prime}\models[\langle x\rangle:=y]p\). Since \(h^{\prime}[s(x):=\bot]\) and \(h\) are disjoint (which clearly follows from the fact that \(h^{\prime}\) and \(h[s(x):=\bot]\) are disjoint), and since \(h,s^{\prime}\models[\langle x\rangle:=y]p\twoheadrightarrow[\langle x\rangle:=y]q\), we have that \(h\uplus(h^{\prime}[s(x):=\bot]),s^{\prime}\models[\langle x\rangle:=y]q\). Applying again the semantics of the heap update modality, we obtain \((h\uplus(h^{\prime}[s(x):=\bot]))[s(x):=s^{\prime}(y)],s^{\prime}\models q\). We then can conclude this case by observing that \(y\) does not occur in \(q\) and that \(h[s(x):=\bot]\uplus h^{\prime}=(h\uplus(h^{\prime}[s(x):=\bot]))[s(x):=s^{\prime}(y)]\). * Next, let \(s(x)\not\in\mathit{dom}(h^{\prime})\). So \(h^{\prime}\) and \(h\) are disjoint, and thus (since \(h,s\models(p\wedge x\not\hookrightarrow-)\twoheadrightarrow[\langle x\rangle:=\bot]q\)) we have \(h\uplus h^{\prime},s\models[\langle x\rangle:=\bot]q\), from which we derive \((h\uplus h^{\prime})[s(x):=\bot],s\models q\) by the semantics of the heap clear modality. We then can conclude this case by the observation that \(h[s(x):=\bot]\uplus h^{\prime}=(h\uplus h^{\prime})[s(x):=\bot]\). Conversely, assuming \(h[s(x):=\bot],s\models p\twoheadrightarrow q\), we first show that \(h,s\models(p\wedge x\not\hookrightarrow-)\twoheadrightarrow[\langle x\rangle:=\bot]q\) and then \(h,s\models\forall y([\langle x\rangle:=y]p\twoheadrightarrow[\langle x\rangle:=y]q)\). * Let \(h^{\prime}\) be disjoint from \(h\) and \(h^{\prime},s\models p\wedge x\not\hookrightarrow-\). We have to show that \(h\uplus h^{\prime},s\models[\langle x\rangle:=\bot]q\), that is, \((h\uplus h^{\prime})[s(x):=\bot],s\models q\) (by the semantics of the heap clear modality). Clearly, \(h[s(x):=\bot]\) and \(h^{\prime}\) are disjoint, and so \(h[s(x):=\bot]\uplus h^{\prime},s\models q\) follows from our assumption. We then can conclude this case by the observation that \((h\uplus h^{\prime})[s(x):=\bot]=h[s(x):=\bot]\uplus h^{\prime}\), because \(s(x)\not\in\mathit{dom}(h^{\prime})\). * Let \(h^{\prime}\) be disjoint from \(h\) and \(s^{\prime}=s[y:=n]\), for some \(n\) such that \(h^{\prime},s^{\prime}\models[\langle x\rangle:=y]p\). We have to show that \(h\uplus h^{\prime},s^{\prime}\models[\langle x\rangle:=y]q\). By the semantics of the heap update modality it follows that \(h^{\prime}[s(x):=n],s^{\prime}\models p\), that is, \(h^{\prime}[s(x):=n],s\models p\) (since \(y\) does not occur in \(p\)). Since \(h^{\prime}[s(x):=n]\) and \(h[s(x):=\bot]\) are disjoint, we derive from the assumption \(h[s(x):=\bot],s\models p\twoheadrightarrow q\) that \(h[s(x):=\bot]\uplus h^{\prime}[s(x):=n],s\models q\).
Again by the semantics of the heap update modality we have that \(h\uplus h^{\prime},s^{\prime}\models[\langle x\rangle:=y]q\) iff \((h\uplus h^{\prime})[s(x):=n],s^{\prime}\models q\) (that is, \((h\uplus h^{\prime})[s(x):=n],s\models q\), because \(y\) does not occur in \(q\)). We then can conclude this case by the observation that \((h\uplus h^{\prime})[s(x):=n]=h[s(x):=\bot]\uplus h^{\prime}[s(x):=n]\). \(\Box\)

We denote by \(\mathbf{E}\) the _rewrite system_ obtained from the equivalences **E1**-**E16** by orienting these equivalences from left to right; for example, equivalence **E1** is turned into the rewrite rule \([S]\mathbf{false}\Rightarrow\mathbf{false}\). The following theorem states that the rewrite system \(\mathbf{E}\) is complete, that is, confluent and strongly normalizing. Its proof is straightforward (using standard techniques) and therefore omitted.

**Theorem 3.6** (Completeness of \(\mathbf{E}\)):
* **Normal form.** _Every standard formula \(p\) of SL is in normal form (which means that it cannot be reduced by the rewrite system \(\mathbf{E}\))._
* **Local confluence.** _For any two reductions \(p\Rightarrow q_{1}\) and \(p\Rightarrow q_{2}\) (\(p\) a formula of DSL) there exists a DSL formula \(q\) such that \(q_{1}\Rightarrow q\) and \(q_{2}\Rightarrow q\)._
* **Termination.** _There does not exist an infinite chain of reductions \(p_{1}\Rightarrow p_{2}\Rightarrow p_{3}\cdots\)._

We now show an example of the interplay between the modalities for heap update and heap clear. We want to derive \[\{\forall x((x\not\hookrightarrow-)\to p)\}\ x:=\mathbf{cons}(0);\mathbf{dispose}(x)\ \{p\}\] where the statement \(x:=\mathbf{cons}(0);\mathbf{dispose}(x)\) simulates the so-called random assignment [9]: the program terminates with a value of \(x\) that is chosen non-deterministically. First we apply the axiom **E8** for de-allocation to obtain \[\{(x\hookrightarrow-)\wedge[\langle x\rangle:=\bot]p\}\ \mathbf{dispose}(x)\ \{p\}.\] Next, we apply the axiom **E7** for allocation to obtain \[\{\forall x((x\not\hookrightarrow-)\rightarrow[\langle x\rangle:=0]((x\hookrightarrow-)\wedge[\langle x\rangle:=\bot]p))\}\] \[x:=\mathbf{cons}(0)\] \[\{(x\hookrightarrow-)\wedge[\langle x\rangle:=\bot]p\}.\] Applying **E10** (after pushing the heap update modality inside), followed by some basic first-order reasoning, we can reduce \([\langle x\rangle:=0](\exists y(x\hookrightarrow y))\) to **true**. So we obtain \[\{\forall x((x\not\hookrightarrow-)\rightarrow[\langle x\rangle:=0][\langle x\rangle:=\bot]p)\}\] \[x:=\mathbf{cons}(0)\] \[\{(x\hookrightarrow-)\wedge[\langle x\rangle:=\bot]p\}.\] In order to proceed, we formalize the interplay between the modalities for heap update and heap clear by the following general equivalence: \[[\langle x\rangle:=e][\langle x\rangle:=\bot]p\equiv[\langle x\rangle:=\bot]p\] We then complete the proof by applying the sequential composition rule and the consequence rule, using the above equivalence and the following axiomatization of the heap clear modality: \[(x\not\hookrightarrow-)\wedge[\langle x\rangle:=\bot]p\equiv(x\not\hookrightarrow-)\wedge p\] The above axiomatization can be extended in the standard manner to a program logic for sequential while programs, see [9], which requires neither the frame rule nor any other adaptation rule besides the consequence rule.
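The semantic arguments above lend themselves to quick machine checking. The following minimal Python model (our own illustration, not part of the paper's Coq artifact) represents heaps as finite dicts and brute-forces **E14** as well as the heap update/heap clear interplay equivalence just introduced over a tiny state space:

```python
from itertools import product

def upd(h, loc, val):      # h[loc := val]
    h2 = dict(h); h2[loc] = val; return h2

def clr(h, loc):           # h[loc := ⊥]
    h2 = dict(h); h2.pop(loc, None); return h2

def points_to(e, e2):      # semantics of e ↪ e'
    return lambda h, s: h.get(s[e]) == s[e2]

def heap_clear(x, p):      # semantics of [<x> := ⊥]p
    return lambda h, s: p(clr(h, s[x]), s)

def heap_update(x, e, p):  # semantics of [<x> := e]p (e a store variable here)
    return lambda h, s: p(upd(h, s[x], s[e]), s)

# E14: [<x> := ⊥](e ↪ e')  ≡  x ≠ e ∧ e ↪ e'
lhs_e14 = heap_clear('x', points_to('e', 'e2'))
rhs_e14 = lambda h, s: s['x'] != s['e'] and points_to('e', 'e2')(h, s)

# Interplay: [<x> := e][<x> := ⊥]p  ≡  [<x> := ⊥]p, here with p = (y ↪ z)
p = points_to('y', 'z')
lhs_ic = heap_update('x', 'e', heap_clear('x', p))
rhs_ic = heap_clear('x', p)

locs, vals = [0, 1], [0, 1]
for dom in product([None, 0, 1], repeat=len(locs)):
    h = {l: v for l, v in zip(locs, dom) if v is not None}
    for sx, se, se2 in product(locs, locs, vals):
        s = {'x': sx, 'e': se, 'e2': se2, 'y': se, 'z': se2}
        assert lhs_e14(h, s) == rhs_e14(h, s)
        assert lhs_ic(h, s) == rhs_ic(h, s)
print("E14 and the update/clear interplay hold on all sampled states")
```

Such finite enumeration is, of course, only a sanity check on small heaps, not a substitute for the proofs above.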
For recursive programs, however, one does need more adaptation rules; a further discussion of the use of the frame rule in a completeness proof for recursive programs is outside the scope of this paper.

## 4 Expressiveness DSL

In this section, we illustrate the expressiveness of DSL in a completeness proof of the local mutation axiom and a novel strongest postcondition axiomatization.

### Completeness local axioms

We consider the completeness of the following local mutation axiom (completeness of the local axioms for the other standard basic instructions has already been established, as observed in the Introduction) \[\{x\mapsto-\}\ [x]:=e\ \{x\mapsto e\}\] The proof itself does not make use of the separating implication.

**Theorem 4.1** (Completeness local mutation axiom): _If \(\models\{p\}\ [x]:=e\ \{q\}\) then \(\{p\}\ [x]:=e\ \{q\}\) is derivable using the local mutation axiom, frame rule, and consequence rule._

**Proof.** The problem here is how to compute a 'frame' \(r\) for a given valid specification \(\{p\}\ [x]:=e\ \{q\}\) such that \(p\) implies \((x\mapsto-)*r\) and \((x\mapsto e)*r\) implies \(q\). We show here how the heap update modality can be used to describe such a frame. Let \(\models\{p\}\ [x]:=e\ \{q\}\) and let \(r\) denote \(\exists y([\langle x\rangle:=y]p)\) for some fresh \(y\). By the local axiom and the frame rule, we first derive \[\{(x\mapsto-)*r\}\ [x]:=e\ \{(x\mapsto e)*r\}.\] Let \(h,s\models p\). To prove that \(h,s\models(x\mapsto-)*r\), it suffices to show that there exists a split \(h=h_{1}\uplus h_{2}\) such that \(h_{1},s\models(x\mapsto-)\) and \(h_{2},s[y:=n]\models[\langle x\rangle:=y]p\), for some \(n\). Since \(\models\{p\}\ [x]:=e\ \{q\}\) we have that \(s(x)\in\mathit{dom}(h)\). So we can introduce the split \(h=h_{1}\uplus h_{2}\) such that \(h_{1},s\models(x\mapsto-)\) and \(h_{2}=h[s(x):=\bot]\). By the semantics of the heap update modality it then suffices to observe that \(h_{2},s[y:=h(s(x))]\models[\langle x\rangle:=y]p\) if and only if \(h_{2}[s(x):=h(s(x))],s\models p\) (\(y\) does not appear in \(p\)), that is, \(h,s\models p\). On the other hand, we have that \((x\mapsto e)\ast r\) implies \(q\): Let \(h,s\models(x\mapsto e)\ast r\). So there exists a split \(h=h_{1}\uplus h_{2}\) such that \(h_{1},s\models x\mapsto e\) and \(h_{2},s\models r\). Let \(n\) be such that \(h_{2},s[y:=n]\models[\langle x\rangle:=y]p\). By the semantics of the heap update modality again we have that \(h_{2},s[y:=n]\models[\langle x\rangle:=y]p\) if and only if \(h_{2}[s(x):=n],s\models p\) (here \(y\) does not appear in \(p\)). Since \(\models\{p\}\ [x]:=e\ \{q\}\) it then follows that \(h_{2}[s(x):=s(e)],s\models q\), that is, \(h,s\models q\) (note that \(h=h_{2}[s(x):=s(e)]\) because \(h(s(x))=s(e)\) and \(h_{2}=h[s(x):=\bot]\)). \(\Box\)

### Strongest postcondition axiomatization

Before we discuss a novel strongest postcondition axiomatization using the modalities of DSL, it should be noted that in general the semantics of program logics which require the absence of certain failures gives rise to an asymmetry between weakest preconditions and strongest postconditions: for any statement \(S\) and postcondition \(q\) we have that \(\models\{\mbox{\bf false}\}\ S\ \{q\}\). However, for any precondition \(p\) which does not exclude failures, there does not exist _any_ postcondition \(q\) such that \(\models\{p\}\ S\ \{q\}\). We solve this by simply requiring that the given precondition does not give rise to failures (see below).
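Before moving on to strongest postconditions, the frame construction \(r=\exists y([\langle x\rangle:=y]p)\) from the proof of Theorem 4.1 can also be made executable. The sketch below is our own illustration (not from the paper's artifact) and assumes a small finite value domain `VALUES` so that the existential quantifier becomes a finite search:

```python
# Executable picture of the frame r = ∃y([<x> := y]p): r holds in (h, s) iff
# p holds after the cell at s(x) has been overwritten with *some* value.
VALUES = range(4)   # assumed finite value domain standing in for the integers

def upd(h, loc, val):
    h2 = dict(h); h2[loc] = val; return h2

def frame(p, x):
    return lambda h, s: any(p(upd(h, s[x], v), s) for v in VALUES)

# Example: p = (x ↪ 1) ∧ (y ↪ 2); the frame keeps y ↪ 2 while leaving the
# content of the cell at s(x) arbitrary.
p = lambda h, s: h.get(s['x']) == 1 and h.get(s['y']) == 2
r = frame(p, 'x')
s = {'x': 1, 'y': 2}
h = {1: 1, 2: 2}                                   # h, s ⊨ p
h2 = {l: v for l, v in h.items() if l != s['x']}   # split off the cell at s(x)
assert p(h, s) and r(h2, s)   # p implies (x ↦ -) * r, witnessed by this split
```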
Figure 3 contains our novel strongest postcondition axiomatization SP-DSL, where the main novelty is the use of the heap update and heap clear modalities in the axiomatization of the mutation, allocation, and de-allocation instructions. It is worthwhile to contrast, for example, the use of the heap clear modality to express freshness in the strongest postcondition axiomatization of the allocation instruction with the following traditional axiom (assuming that \(x\) does not occur free in \(p\)): \[\{p\}\ x:=\mbox{\bf cons}(e)\ \{p\ast(x\mapsto e)\}\] where freshness is enforced by the introduction of the separating conjunction (which as such increases the complexity of the postcondition). More specifically, we have the following instance of the allocation axiom in Figure 3 (also making use of the fact that \(x\) does not appear in the precondition) \[\{y\hookrightarrow 0\}\ x:=\mbox{\bf cons}(1)\ \{[\langle x\rangle:=\bot](y\hookrightarrow 0)\wedge(x\hookrightarrow 1)\}\] Applying **E14** we obtain \[\{y\hookrightarrow 0\}\ x:=\mbox{\bf cons}(1)\ \{y\neq x\wedge(y\hookrightarrow 0)\wedge(x\hookrightarrow 1)\}\] On the other hand, instantiating the above traditional axiom we obtain \[\{y\hookrightarrow 0\}\ x:=\mbox{\bf cons}(1)\ \{(y\hookrightarrow 0)\ast(x\mapsto 1)\}\] which is implicit and requires unraveling the semantics of the separating conjunction. Using the heap clear modality we thus obtain a basic assertion in predicate logic which provides an explicit but simple account of aliasing.

Figure 3: Strongest postcondition axioms of separation logic (SP-DSL), where \(y\) is fresh everywhere and \(x\) does not occur in \(e\) in case of \(x:=\mbox{\bf cons}(e)\).

**Theorem 4.2** (Soundness and completeness SP-DSL): _For any basic instruction \(S\), we have \(\models\{p\}\ S\ \{q\}\) if and only if \(\{p\}\ S\ \{q\}\) is derivable from the axioms in SP-DSL (Figure 3) and (a single application of) the rule of consequence._

**Proof.** We showcase the soundness and completeness of the strongest postcondition axiomatization of allocation (soundness and completeness of the strongest postconditions for the mutation and de-allocation instructions follow in a straightforward manner from the semantics of the heap update modality). * \(\models\{p\}\ x:=\mathbf{cons}(e)\ \{[\langle x\rangle:=\bot](\exists y([x:=y]p))\wedge x\hookrightarrow e\}\): Let \(h,s\models p\). We have to show that \(h[n:=s(e)],s[x:=n]\models[\langle x\rangle:=\bot](\exists y([x:=y]p))\wedge x\hookrightarrow e\), for \(n\not\in\mathit{dom}(h)\). By definition \(h[n:=s(e)],s[x:=n]\models x\hookrightarrow e\). By the semantics of the heap clear modality and existential quantification, it then suffices to show that \(h[n:=\bot],s[x:=n][y:=s(x)]\models[x:=y]p\), which by the semantics of the simple assignment modality boils down to \(h,s[y:=s(x)]\models p\) (note that \(n\not\in\mathit{dom}(h)\)), that is, \(h,s\models p\) (\(y\) does not appear in \(p\)), which holds by assumption. * \(\models\{p\}\ x:=\mathbf{cons}(e)\ \{q\}\) implies \(\models([\langle x\rangle:=\bot](\exists y([x:=y]p))\wedge(x\hookrightarrow e))\to q\): Let \(h,s\models[\langle x\rangle:=\bot](\exists y([x:=y]p))\wedge x\hookrightarrow e\). We have to show that \(h,s\models q\). By the semantics of the heap clear modality we derive from the above assumption that \(h[s(x):=\bot],s\models\exists y([x:=y]p)\). Let \(h[s(x):=\bot],s[y:=n]\models[x:=y]p\), for some \(n\).
It follows from the semantics of the simple assignment modality that \(h[s(x):=\bot],s[x:=n]\models p\) (\(y\) does not appear in \(p\)). Since \(s(x)\not\in\mathit{dom}(h[s(x):=\bot])\), we have that \(\langle x:=\mathbf{cons}(e),h[s(x):=\bot],s[x:=n]\rangle\Rightarrow(h[s(x):=s[x:=n](e)],s)\). Since we can assume without loss of generality that \(x\) does not occur in \(e\), we have that \(s[x:=n](e)=s(e)\), and so from the assumption that \(h,s\models x\hookrightarrow e\) we derive that \(h[s(x):=s[x:=n](e)]=h\). From \(\models\{p\}\ x:=\mathbf{cons}(e)\ \{q\}\) we then conclude that \(h,s\models q\). \(\Box\)

## 5 Extensions

A straightforward extension concerns the general mutation instruction \([e]:=e^{\prime}\), which allows the use of an arbitrary arithmetic expression \(e\) to denote the updated location. We can simulate this by the statement \(x:=e;\ [x]:=e^{\prime}\), where \(x\) is a fresh variable. Applying the modalities we derive the following axiom \[\{(e\hookrightarrow-)\wedge[x:=e][\langle x\rangle:=e^{\prime}]p\}\ [e]:=e^{\prime}\ \{p\}\] where \(x\) is a fresh variable. Another straightforward extension concerns the allocation \(x:=\mathbf{cons}(e)\) in the case where \(x\) does occur in \(e\). The instruction \(x:=\mathbf{cons}(e)\) can be simulated by \(y:=x;\ x:=\mathbf{cons}(e[y/x])\), where \(y\) is a fresh variable. Applying the sequential composition rule and the axiom for basic assignments, it is straightforward to derive the following generalized backwards allocation axiom: \[\{\forall y((y\not\hookrightarrow-)\rightarrow[y:=x][\langle y\rangle:=e[y/x]]p)\}\ x:=\mathbf{cons}(e)\ \{p\}\] where \(y\) is fresh. Reynolds introduced in [19] the allocation instruction \(x:=\mathbf{cons}(\bar{e})\), which allocates a consecutive part of the memory for storing the values of \(\bar{e}\); its semantics is described by \[\langle x:=\mathbf{cons}(\bar{e}),h,s\rangle\Rightarrow(h[\bar{m}:=s(\bar{e})],s[x:=m_{1}])\] where \(\bar{e}=e_{1},\ldots,e_{n}\), \(\bar{m}=m_{1},\ldots,m_{n}\), \(m_{i+1}=m_{i}+1\), for \(i=1,\ldots,n-1\), \(\{m_{1},\ldots,m_{n}\}\cap\mathit{dom}(h)=\emptyset\), and, finally, \[h[\bar{m}:=s(\bar{e})](k)=\left\{\begin{array}{ll}h(k)&\mbox{if $k\not\in\{m_{1},\ldots,m_{n}\}$}\\ s(e_{i})&\mbox{if $k=m_{i}$ for some $i=1,\ldots,n$.}\end{array}\right.\] Let \(\bar{e}^{\prime}\) denote the sequence of expressions \(e^{\prime}_{1},\ldots,e^{\prime}_{n}\) such that \(e^{\prime}_{1}\) denotes the variable \(x\) and \(e^{\prime}_{i}\) denotes the expression \(x+(i-1)\), for \(i=2,\ldots,n\). The storage of the values of \(e_{1},\ldots,e_{n}\) then can be modeled by a sequence of heap update modalities \([\langle e^{\prime}_{i}\rangle:=e_{i}]\), for \(i=1,\ldots,n\). We abbreviate such a sequence by \([\langle\bar{e}^{\prime}\rangle:=\bar{e}]\). Assuming that \(x\) does not occur in one of the expressions \(\bar{e}\) (this restriction can be lifted as described above), we have the following generalization of the above backwards allocation axiom: \[\{\forall x\Big(\Big(\bigwedge_{i=1}^{n}(e^{\prime}_{i}\not\hookrightarrow-)\Big)\to[\langle\bar{e}^{\prime}\rangle:=\bar{e}]p\Big)\}\ x:=\mathbf{cons}(\bar{e})\ \{p\}\]
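The following small executable model (ours, not from the paper) mirrors this multi-cell allocation semantics; where the semantics leaves the choice of \(\bar{m}\) nondeterministic, the model resolves it by picking the least free consecutive block:

```python
# Model of ⟨x := cons(ē), h, s⟩ ⇒ (h[m̄ := s(ē)], s[x := m1]): find n
# consecutive locations outside dom(h), store the values there, and assign
# the first location to x. Locations are modeled as natural numbers.
def cons(h, s, x, values):
    n = len(values)
    m1 = 0
    while any(m1 + i in h for i in range(n)):  # least free consecutive block
        m1 += 1
    h2 = dict(h)
    for i, v in enumerate(values):             # h[m_i := s(e_i)]
        h2[m1 + i] = v
    s2 = dict(s); s2[x] = m1                   # s[x := m1]
    return h2, s2

h, s = {0: 9, 1: 9}, {'x': 0}
h2, s2 = cons(h, s, 'x', [10, 20, 30])         # x := cons(10, 20, 30)
assert s2['x'] == 2 and [h2[2], h2[3], h2[4]] == [10, 20, 30]
```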
#### Recursive predicates

Next we illustrate the extension of our approach to recursive predicates for reasoning about linked lists. Assuming a set of user-defined predicates \(r(x_{1},\ldots,x_{n})\) of arity \(n\), we introduce corresponding basic assertions \(r(e_{1},\ldots,e_{n})\), which are interpreted by (the least fixed point of) a system of recursive predicate definitions \(r(x_{1},\ldots,x_{n}):=p\), where the user-defined predicates only occur positively in \(p\). If for any recursive definition \(r(x_{1},\ldots,x_{n}):=p\) only the formal parameters \(x_{1},\ldots,x_{n}\) occur free in \(p\), we can simply define \([x:=e]r(e_{1},\ldots,e_{n})\) by \(r(e_{1}[e/x],\ldots,e_{n}[e/x])\). However, allowing global variables in recursive predicate definitions does affect the interpretation of these definitions. As a very simple example, given \(r(y):=x=1\), clearly \(\{r(y)\}\ x:=0\ \{r(y)\}\) is invalid (and so we cannot simply define \([x:=0]r(y)\) by \(r(y[0/x])\)). Furthermore, substituting the parameters of \(r\) clearly does not make sense for modalities with heap modifications (such as mutation, allocation, etc.): as subformulas may depend on the heap, these may require alias analysis _in the definition of_ \(r\). We illustrate how our dynamic logic works with recursively defined predicates on a characteristic linked-list example. In particular, let \(r\) be the recursively defined _reachability_ predicate \[r(x,y):=x=y\vee\exists z((x\mapsto z)*r(z,y)).\] We shall prove \(\{r(\mathit{first},y)\}\ \mathit{first}:=\mathbf{cons}(\mathit{first})\ \{r(\mathit{first},y)\}\). To do so, we model \(\mathit{first}:=\mathbf{cons}(\mathit{first})\) by \(u:=\mathit{first};\ \mathit{first}:=\mathbf{cons}(u)\), for some fresh variable \(u\). Thus it is sufficient to show \[\{r(\mathit{first},y)\}\ u:=\mathit{first};\ \mathit{first}:=\mathbf{cons}(u)\ \{r(\mathit{first},y)\}.\] We first calculate the weakest precondition of the last assignment: \([\mathit{first}:=\mathbf{cons}(u)]r(\mathit{first},y)\). Using equivalence (**E7**) of Lemma 3.3 we obtain \(\forall\mathit{first}((\mathit{first}\not\hookrightarrow-)\to[\langle\mathit{first}\rangle:=u]r(\mathit{first},y))\). To simplify the modal subformula \([\langle\mathit{first}\rangle:=u]r(\mathit{first},y)\), we first unfold the definition of \(r\), obtaining \(\mathit{first}=y\vee\exists z((\mathit{first}\mapsto z)*r(z,y))\). By Lemma 3.4 (**E11**), \([\langle\mathit{first}\rangle:=u]((\mathit{first}\mapsto z)*r(z,y))\) reduces to the disjunction of \(((\mathit{first}\mapsto z\wedge\mathit{first}\not\hookrightarrow-)*[\langle\mathit{first}\rangle:=u]r(z,y))\) and \(([\langle\mathit{first}\rangle:=u](\mathit{first}\mapsto z)*(r(z,y)\wedge\mathit{first}\not\hookrightarrow-))\). In the first disjunct, the left-hand side of the separating conjunction asserts that \(\mathit{first}\) is allocated (and points to \(z\)) and that simultaneously \(\mathit{first}\) is not allocated. This clearly is false in every heap, so that whole disjunct reduces to \(\mathbf{false}\).
Simplifying the second disjunct (reducing the modality with equivalence (**E10**) of Lemma 3.4) and applying standard logical equivalences yields that the whole subformula is equivalent to \[\mathit{first}=y\vee(r(u,y)\wedge(\mathit{first}\not\hookrightarrow-)).\] Applying the allocation axiom and the consequence rule, we obtain \[\{\forall\mathit{first}((\mathit{first}\not\hookrightarrow-)\to(\mathit{first}=y\lor r(u,y)))\}\] \[\mathit{first}:=\mathbf{cons}(u)\] \[\{r(\mathit{first},y)\}.\] Renaming \(\mathit{first}\) to the fresh variable \(f\) does not affect \(r\), so \[\{\forall f((f\not\hookrightarrow-)\rightarrow(f=y\lor r(u,y)))\}\] \[\mathit{first}:=\mathbf{cons}(u)\] \[\{r(\mathit{first},y)\}\] can be derived. Also, substituting \(u\) for \(\mathit{first}\) does not affect the definition of \(r\). It then suffices to observe that \(r(\mathit{first},y)\) (trivially) implies \(\forall f((f\not\hookrightarrow-)\rightarrow(f=y\lor r(\mathit{first},y)))\).

## 6 Formalization in Coq

The main motivation behind formalizing results in a proof assistant is to rigorously check hand-written proofs. For our formalization we used the dependently typed calculus of inductive constructions as implemented by the Coq proof assistant. We have used no axioms other than the axiom of function extensionality (for every two functions \(f,g\) we have that \(f=g\) if \(f(x)=g(x)\) for all \(x\)). This means that we work with an underlying intuitionistic logic: we have not used the axiom of excluded middle for reasoning classically about propositions. However, decidable propositions (propositions \(P\) for which the excluded middle \(P\vee\neg P\) can be proven) allow for a limited form of classical reasoning. We formalize the basic instructions of our programming language (assignment, look-up, mutation, allocation, and de-allocation) and the semantics of these basic instructions. For Boolean and arithmetic expressions we use a shallow embedding, so that those expressions can be given directly as Coq terms of the appropriate type (with a coincidence condition assumed, i.e., that the values of expressions depend only on finitely many variables of the store). There are two approaches to formalizing the semantics of assertions: shallow and deep embedding. We have taken both approaches. In the first approach, the shallow embedding of assertions, we define assertions of DSL by their extension of satisfiability (i.e., the set of heap and store pairs in which the assertion is satisfied), which must satisfy a coincidence condition (assertions depend only on finitely many variables of the store) and a stability condition (see below). The definition of the modality operator follows from the semantics of programs, which includes basic control structures such as the **while**-loop. In the second approach, the deep embedding of assertions, assertions are modeled using an inductive type, and we explicitly introduce two meta-operations on assertions that capture the heap update and heap clear modalities. We have omitted the clauses for **emp** and \((e\mapsto e^{\prime})\), since these can be defined as abbreviations, and we restrict ourselves to the basic instructions. In the deep embedding we have no constructor corresponding to the program modality \([S]p\). Instead, two meta-operations denoted \(p[\langle x\rangle:=e]\) and \(p[\langle x\rangle:=\bot]\) are defined recursively on the structure of \(p\).
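To convey the flavor of these meta-operations, here is a Python analogue (our own sketch; the paper's artifact defines the corresponding operations in Coq). It deep-embeds a small fragment of the assertion language, closed only under conjunction and disjunction, and defines the two meta-operations by recursion on the structure of \(p\), mirroring **E10** and **E14** on the points-to atom:

```python
from dataclasses import dataclass

class Assn: pass

@dataclass
class Eq(Assn):       a: str; b: str   # a = b   (heap-independent)
@dataclass
class Neq(Assn):      a: str; b: str   # a ≠ b
@dataclass
class PointsTo(Assn): a: str; b: str   # a ↪ b
@dataclass
class And(Assn):      p: Assn; q: Assn
@dataclass
class Or(Assn):       p: Assn; q: Assn

def upd_meta(p, x, e):
    """The meta-operation p[<x> := e]; cf. E10 on points-to, no effect on pure atoms."""
    if isinstance(p, (Eq, Neq)): return p
    if isinstance(p, PointsTo):
        return Or(And(Eq(p.a, x), Eq(p.b, e)),
                  And(Neq(p.a, x), PointsTo(p.a, p.b)))
    if isinstance(p, And): return And(upd_meta(p.p, x, e), upd_meta(p.q, x, e))
    if isinstance(p, Or):  return Or(upd_meta(p.p, x, e), upd_meta(p.q, x, e))

def clear_meta(p, x):
    """The meta-operation p[<x> := ⊥]; cf. E13/E14."""
    if isinstance(p, (Eq, Neq)): return p
    if isinstance(p, PointsTo):  return And(Neq(p.a, x), PointsTo(p.a, p.b))
    if isinstance(p, And): return And(clear_meta(p.p, x), clear_meta(p.q, x))
    if isinstance(p, Or):  return Or(clear_meta(p.p, x), clear_meta(p.q, x))

def sat(p, h, s):
    """h, s ⊨ p for the embedded fragment (heaps and stores are dicts)."""
    if isinstance(p, Eq):       return s[p.a] == s[p.b]
    if isinstance(p, Neq):      return s[p.a] != s[p.b]
    if isinstance(p, PointsTo): return h.get(s[p.a]) == s[p.b]
    if isinstance(p, And):      return sat(p.p, h, s) and sat(p.q, h, s)
    if isinstance(p, Or):       return sat(p.p, h, s) or sat(p.q, h, s)

# Spot-check of the substitution behavior on one state:
# h, s ⊨ p[<x> := e]  iff  h[s(x) := s(e)], s ⊨ p   (and similarly for clear).
p = And(PointsTo('y', 'z'), Neq('y', 'x'))
h, s = {1: 7}, {'x': 2, 'e': 9, 'y': 1, 'z': 7}
assert sat(upd_meta(p, 'x', 'e'), h, s) == sat(p, {**h, s['x']: s['e']}, s)
assert sat(clear_meta(p, 'x'), h, s) == \
       sat(p, {k: v for k, v in h.items() if k != s['x']}, s)
```

Note that pushing the meta-operations through plain \(\wedge\) and \(\vee\) is unconditionally sound; the separating connectives require the case distinctions of **E11**, **E15**, and **E16** and are deliberately left out of this fragment.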
Crucially, we formalized and proved the following lemmas (the details are almost the same as for showing that the equivalences hold in the shallow embedding, Lemmas 3.4 and 3.5):

**Lemma 6.1** (Heap update substitution lemma): \(h,s\models p[\langle x\rangle:=e]\) _iff_ \(h[s(x):=s(e)],s\models p\).

**Lemma 6.2** (Heap clear substitution lemma): \(h,s\models p[\langle x\rangle:=\bot]\) _iff_ \(h[s(x):=\bot],s\models p\).

By also formalizing a deep embedding, we show that the modality operator can be defined entirely on the meta-level by introducing meta-operations on formulas that are recursively defined on the structure of assertions: this captures Theorem 3.6. The shallow embedding, on the other hand, makes it easier to show that our approach can be readily extended to complex programs including **while**-loops. In both approaches, the semantics of assertions is classical, although we work in an intuitionistic meta-logic. We achieve this by employing a double negation translation, following the set-up by R. O'Connor [14]. In particular, our satisfaction relation \(h,s\models p\) is stable, i.e., \(\neg\neg(h,s\models p)\) implies \(h,s\models p\). This allows us to reason classically on the image of the higher-order semantics of our assertions. The source code of our formalization accompanies this paper as a digital artifact, which consists of the following files:
* shallow/Language.v: Provides a shallow embedding of Boolean expressions and arithmetic expressions, and a shallow embedding of our assertion language, as presented in the prequel.
* shallow/Proof.v: Provides proofs of the equivalences (**E1**-**E16**), and additionally standard equivalences for modalities involving complex programs.
* deep/Heap.v: Provides an axiomatization of heaps as partial functions.
* deep/Language.v: Provides a shallow embedding of Boolean expressions and arithmetic expressions, and a deep embedding of our assertion language, on which we inductively define the meta-operations of heap update and heap clear. Here we also formalize Hoare triples and proof systems using weakest precondition and strongest postcondition axioms for the basic instructions.
* deep/Classical.v: Provides the classical semantics of assertions, and the strong partial correctness semantics of Hoare triples. Further, it provides proofs of the substitution lemmas corresponding to our meta-operations. Finally, it provides proofs of the soundness and completeness of the aforementioned proof systems.

## 7 Conclusion and related work

To the best of our knowledge no other works exist that study dynamic logic extensions of SL. We have shown how we can combine the standard programming logics in SL with a new DSL axiomatization of both weakest preconditions and strongest postconditions. These new axiomatizations in DSL have the so-called property of _gracefulness_: any first-order postcondition gives rise to a first-order weakest precondition (for any basic instruction), a property that existing axiomatizations of SL, such as those given by C. Bannister, P. Höfner and G. Klein [4], and by M. Faisal Al Ameen and M. Tatsuta [9], lack. (See also [22].) (The term 'graceful', coined by J. C. Blanchette [23], comes from higher-order automated theorem proving, where it means that a higher-order prover does not perform significantly worse on first-order problems than existing first-order provers that lack the ability to reason about higher-order problems.)
As a simple example, in our approach \([[x]:=0](y\hookrightarrow z)\) can be resolved to the first-order formula \[(x\hookrightarrow-)\wedge((y=x\wedge z=0)\vee(y\neq x\wedge y\hookrightarrow z))\] by applying the above equivalences **E6** and **E10**. The standard rule for backwards reasoning in [20], however, gives the weakest precondition \[(x\mapsto-)\ast((x\mapsto 0)\twoheadrightarrow(y\hookrightarrow z))\] Despite their different formulations, both formulas characterize \([[x]:=0](y\hookrightarrow z)\), and thus must be equivalent. In fact, the equivalence has been proven in our Coq formalization (Section 6). Surprisingly, establishing this equivalence exceeds the capability of all the automated SL provers in the benchmark competition for SL [21]. In particular, only the CVC4-SL tool [18] supports the fragment of SL that includes the separating implication connective. However, from our own experiments with that tool, we found that it produces an incorrect counter-example, and we reported this as a bug to one of the maintainers of the project [17]. In fact, the latest version, CVC5-SL, reports the same input as 'unknown', indicating that the tool is incomplete. Furthermore, we have investigated whether the equivalence of these formulas can be proven in an interactive tool for reasoning about SL: the Iris project [12]. However, also in that system it is not possible to show the equivalence of these assertions, at least not without adding additional axioms to its underlying model [13]. On the other hand, the equivalence between the above two formulas can be expressed in quantifier-free separation logic, for which a complete axiomatization of all valid formulas has been given in [7]. In general, the calculation of \([S]p\) by means of a compositional analysis of \(p\), in contrast with the standard approach, does not generate additional _nesting_ of the separating connectives. On the other hand, the compositional analysis generates a case distinction in the definitions of \([\langle x\rangle:=e](p\ast q)\) and \([\langle x\rangle:=\bot](p\twoheadrightarrow q)\). How the combined application of the two approaches works in practice needs to be further investigated. Such an investigation will also involve the use of the modalities for the basic instructions in the generation of the verification conditions of a program (as is done, for example, in the KeY tool [1] for the verification of Java programs), which allows one to _postpone_ and _optimize_ their actual application. For example, the equivalence \[[x:=e][\langle y\rangle:=e^{\prime}]p\equiv[\langle y\rangle:=e^{\prime}[e/x]][x:=e]p\] allows one to resolve the simple assignment modality by 'pushing it inside'. Other works that investigate weakest preconditions in SL are briefly discussed below. For example, [3] investigates both weakest preconditions and strongest postconditions in SL, also obtained through a transformational approach. However, the transformation uses other separating connectives (like _septraction_), and thus is not graceful. On the other hand, in [13] an alternative logic is introduced which, instead of the separating connectives, extends standard first-order logic with an operator \(\mathit{Sp}(p)\) that captures the parts of the heap the (first-order) formula \(p\) depends on. Thus [13] also goes beyond first-order logic, and is not graceful.
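As a finite sanity check (ours, of course not a proof), the equivalence of the two weakest preconditions above can also be tested by brute force over all heaps on a tiny location and value domain:

```python
from itertools import product

LOCS, VALS = [1, 2], [0, 1]

def formula_A(h, s):
    """(x ↪ -) ∧ ((y = x ∧ z = 0) ∨ (y ≠ x ∧ y ↪ z))"""
    return s['x'] in h and ((s['y'] == s['x'] and s['z'] == 0) or
                            (s['y'] != s['x'] and h.get(s['y']) == s['z']))

def formula_B(h, s):
    """(x ↦ -) * ((x ↦ 0) -* (y ↪ z))"""
    if s['x'] not in h:          # the only split has h1 = {s(x): h(s(x))}
        return False
    h2 = {l: v for l, v in h.items() if l != s['x']}
    # The only heap satisfying x ↦ 0 is {s(x): 0}; it is disjoint from h2,
    # so the magic wand amounts to a single check on the combined heap.
    h3 = dict(h2); h3[s['x']] = 0
    return h3.get(s['y']) == s['z']

for dom in product([None] + VALS, repeat=len(LOCS)):
    h = {l: v for l, v in zip(LOCS, dom) if v is not None}
    for sx, sy, sz in product(LOCS, LOCS, VALS):
        s = {'x': sx, 'y': sy, 'z': sz}
        assert formula_A(h, s) == formula_B(h, s)
print("both weakest preconditions agree on all sampled states")
```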
The main motivation of [13], however, coincides with ours: avoiding unnecessary reasoning about the separating connectives. Our artifact formalizes the syntax and semantics of programs and assertions of SL. We plan to further extend our formalization to support practical program verification, and to investigate how to integrate our approach in Iris [11]: we will consider how DSL can also work for a shallow embedding of SL. The generated verification conditions then require a proof of the validity of the corresponding assertions in SL, which can be discharged by providing a proof directly in Coq. Further, we will investigate the application of DSL to concurrent SL [4] and permission-based SL [2].
This paper introduces a dynamic logic extension of separation logic. The assertion language of separation logic is extended with modalities corresponding to the five types of basic instructions of separation logic (simple assignment, look-up, mutation, allocation, and de-allocation). The main novelty of this dynamic logic is that it allows different approaches to resolving these modalities to be combined. One such approach is based on the standard weakest-precondition calculus of separation logic. The other approach, introduced in this paper, provides a novel axiomatization in the proposed dynamic logic extension of separation logic. The soundness and completeness of this axiomatization have been formalized in the Coq theorem prover.
2309.16584
Collaborative Distributed Machine Learning
Various collaborative distributed machine learning (CDML) systems, including federated learning systems and swarm learning systems, with different key traits were developed to leverage resources for development and use of machine learning (ML) models in a confidentiality-preserving way. To meet use case requirements, suitable CDML systems need to be selected. However, comparison between CDML systems regarding their suitability for use cases is often difficult. This work presents a CDML system conceptualization and CDML archetypes to support comparison of CDML systems and introduce scientific and practical audiences to the principal functioning and key traits of CDML systems.
David Jin, Niclas Kannengießer, Sascha Rank, Ali Sunyaev
2023-09-28T16:44:18
http://arxiv.org/abs/2309.16584v3
# A Design Toolbox for the Development of Collaborative Distributed Machine Learning Systems

###### Abstract

To leverage data for the sufficient training of machine learning (ML) models from multiple parties in a confidentiality-preserving way, various collaborative distributed ML (CDML) system designs have been developed, for example, to perform assisted learning, federated learning, and split learning. CDML system designs show different traits, including high agent autonomy, ML model confidentiality, and fault tolerance. Facing a wide variety of CDML system designs with different traits, it is difficult for developers to design CDML systems with traits that match use case requirements in a targeted way. However, inappropriate CDML system designs may result in CDML systems failing their envisioned purposes. We developed a CDML design toolbox that can guide the development of CDML systems. Based on the CDML design toolbox, we present CDML system archetypes with distinct key traits that can support the design of CDML systems to meet use case requirements.

collaborative distributed machine learning (CDML), privacy-enhancing technologies (PETs), assisted learning, federated learning (FL), split learning, swarm learning, multi-agent systems (MAS).

## I Introduction

The training of machine learning (ML) models requires sufficient training data in terms of quantity and quality to make meaningful predictions with little generalization error. Sufficient training data is, however, seldom available from a single party (e.g., a bank or a hospital), which can prevent the adequate training of ML models [1]. Inadequate training of ML models can result in large generalization errors, rendering ML models ineffective [2]. To reduce generalization errors of ML models, developers request training data from multiple third parties. Training data retrievals from third parties are often subject to compliance, social, and technical challenges [3, 4, 5] that hinder the acquisition of sufficient training data. For example, strict data protection laws and regulations prohibit the disclosure of specific kinds of data, such as personal data by the General Data Protection Regulation of the European Union [6] and organizational data by the Healthcare Insurance Portability and Accountability Act of the USA [7]. From a social perspective, privacy behaviors of individuals restrict information flows to third parties based on personal preferences [8], preventing access to their training data. Insufficient computing resources inhibit the transfer of large data sets from data centers to developers in an acceptable time [3, 4]. To reduce generalization errors of ML models by using training data from multiple parties, an ML paradigm is required that solves those challenges. Collaborative distributed ML (CDML) is an ML paradigm that can be implemented to overcome, in particular, compliance and technical challenges in using data from multiple parties to train ML models [9, 10, 11, 12, 13, 14]. In CDML systems, such as federated learning systems [10], split learning systems [11], and swarm learning systems [14], each party operates at least one quasi-autonomous agent (referred to as agent in the following). Agents in CDML systems train (parts of) ML models on their local training data and self-controlled compute resources in a distributed manner. Agents only share their locally computed training results (interim results) with other agents, for example, gradients [15], activations [11], and (pseudo-)residuals [12].
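As a concrete picture of such gradient sharing, the following is a deliberately minimal sketch (our own toy illustration, not a reference implementation of any cited system) of federated-averaging-style training of a linear model: trainer agents compute gradients on private data, and only these interim results are aggregated into the shared model.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_gradient(w, X, y):
    """Least-squares gradient on an agent's private data (the interim result)."""
    return 2 * X.T @ (X @ w - y) / len(y)

# Three trainer agents, each holding private data that never leaves the agent.
agents = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]

w = np.zeros(3)                # shared model maintained by an updater agent
for _round in range(50):       # training rounds
    grads = [local_gradient(w, X, y) for X, y in agents]  # local training
    w -= 0.05 * np.mean(grads, axis=0)                    # aggregation step
```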
Reconstructing training data from interim results is commonly difficult [9]. Using interim results received from other agents, agents improve their local (parts of) ML models. Following the CDML paradigm, parties can keep control over their training data, which can help solve compliance challenges. Moreover, CDML can help to solve technical challenges because large packages of training data are not transferred to single parties to train ML models, saving bandwidth. In addition, computational resources for training ML models are distributed across multiple agents, which decreases the amount of computational resources a single party must possess to train ML models. The potential of CDML to leverage large training data quantities in a confidentiality-preserving and resource-efficient way has sparked enormous interest in practice and research for various use cases with different requirements for CDML systems. For instance, effective next-word prediction in virtual smartphone keyboards requires language models to be trained on a large quantity of heterogeneous training data representative of future user inputs. To meet this requirement, CDML systems must be scalable to involve millions [16] or even billions of agents [10]. Another CDML use case is the prediction of financial risks in portfolio management [17, 18]. Financial institutions rely on ML models to predict investment risks in portfolio management. As customers pay for portfolio management, such ML models are core assets to financial institutions. To protect such core assets, CDML systems must enable collaborative training of ML models without disclosing ML models to competitors. To meet different use case requirements, practice and research have developed specialized CDML system designs. For instance, federated learning systems are scalable to engage billions of agents to train ML models for next-word prediction [19]. Assisted learning systems are unsuitable for this purpose due to the sequential processing of interim results [17]. Conversely, assisted learning seems to be suitable for training ML models for portfolio management because ML model confidentiality is protected in the learning process. Federated learning requires agents to disclose ML models and, thus, is unsuitable for use cases requiring ML model confidentiality. Developers need to understand how envisioned traits of CDML systems (e.g., high scalability, ML model confidentiality) can be achieved by designing CDML systems in a targeted manner. The proliferation of a wide variety of specialized CDML system designs introduced a large number of design options (e.g., regarding the structure of interim results and the parts of ML models disclosed to other agents) that constitute the CDML system design space. Developers must select and combine design options from the CDML system design space to design CDML systems with traits that meet use case requirements (e.g., high scalability, ML model confidentiality, and high robustness of the training process). The targeted selection and combination of design options requires developers to thoroughly understand the CDML system design space and the traits arising from the implementation of design options in CDML systems. An insufficient understanding of the CDML system design space can lead developers to select design options that can cause CDML systems to fail their purposes, for example, when ML models for portfolio management are inadvertently leaked in unsuitable training processes.
Literature on CDML systems, however, is scattered, which is why the CDML system design space remains unclear, and with it how envisioned key traits can be achieved through targeted CDML system designs. To support the targeted design of CDML systems suitable for use cases, we ask the following research questions: _RQ1: What does the CDML system design space look like?_ _RQ2: What are the key traits of principal CDML system designs?_ To answer our research questions, we applied a three-step research approach. First, we developed the CDML design toolbox, which is a conceptualization of the CDML system design space. The CDML design toolbox specifies the fundamentals of CDML systems (e.g., agent roles and their interactions) and design options for the customization of CDML systems (e.g., combinations of agent roles in single agents, communication paths between agents, and types of interim results). For the conceptualization, we analyzed literature on CDML and developed agent-based models in the schemes presented in the Gaia methodology [20]. These schemes are commonly used to develop agent-based models that can serve as blueprints for implementing distributed software systems, such as CDML systems. Then, we tested the validity of the CDML design toolbox by modeling CDML system designs using the CDML design toolbox. Second, we developed CDML archetypes based on commonalities and differences between the modeled CDML systems. Third, we reviewed publications on CDML system designs to extract key traits of the CDML archetypes. This work makes three principal contributions to practice and research. First, by presenting the CDML design toolbox, we offer a consolidated design knowledge base of CDML systems that introduces the main design commonalities of CDML systems and offers design options for the customization of CDML system designs to meet use case requirements. This consolidation of previously scattered design knowledge in agent-based models (e.g., the roles model, the interactions model) facilitates the application of the Gaia methodology for systematically designing custom CDML systems. Moreover, by presenting design options implemented in CDML system designs, the CDML design toolbox helps to compare CDML system designs systematically. Second, by showcasing CDML archetypes, we inform about combinations of design options commonly used in practice and research. The CDML archetypes can be refined to develop blueprints of CDML systems tailored to use cases using the CDML design toolbox, which facilitates designing CDML systems. Third, by presenting key traits of CDML archetypes, we support developers in understanding how design options can be leveraged to achieve specific key traits. By using the CDML archetypes and their key traits, developers are enabled to evaluate CDML system designs in their suitability for use cases before implementing the designs. Thereby, we support the targeted design of CDML systems for use cases. The remainder of this work is structured into six sections. First, we explain the foundations of CDML and related research on CDML systems, and introduce basic concepts of multi-agent systems (MAS). Second, we describe how we developed the CDML design toolbox, including a brief introduction to the Gaia methodology [20]. Moreover, we describe how we developed CDML archetypes using the CDML design toolbox and how we identified their key traits. Third, we present the CDML design toolbox.
Fourth, we describe CDML archetypes and explain how different combinations of design options can lead to key traits of CDML systems. Fifth, we discuss our principal findings and describe the contributions and limitations of this work. Moreover, we give an outlook on future research directions. We conclude with a brief summary of this work and our personal takeaways.

## II Background and Related Research

### _Collaborative Distributed Machine Learning_

CDML combines the ML approaches of collaborative ML (CML) and distributed ML (DML). Leveraging training data from various parties is the focus of CML [21, 22, 23]. In CML systems, training data from multiple parties is used in a centralized or siloed way. In centralized CML, agents send their local data to a central data server that various agents can access to train ML models using the shared training data. To preserve training data confidentiality, data may only be provided to the central data server in encrypted form. The used cryptographic techniques (e.g., homomorphic encryption [23, 24]) allow agents to train ML models on the encrypted data while the plain training data remains confidential. However, the cryptographic techniques will likely lead the centrally controlled computing system to consume more resources for the ML model training [25]. Overall, agents in centralized CML depend on central data servers. Crashes of central data servers can lead such CML systems to failure. Distributed ML was developed to accelerate the training of large ML models, such as deep learning models, by distributing training tasks to multiple agents that train (parts of) ML models in parallel. Distributed ML systems can train ML models in two ways [26, 27, 28]: data parallel and model parallel. In data-parallel training, partitions of the entire training data set are passed to agents. Each agent trains the same ML model on individual subsets of the whole training data set. In model-parallel training, each agent uses identical data but only trains a part of the ML model. In preparation for DML, training data is usually gathered by a central party that sets up the DML system (e.g., in computing clusters). The central party then identically distributes the gathered training data across agents to achieve a uniform workload for each agent. The uniform workload distribution aims at low agent idle times so that the ML model training is performed with high computational efficiency [26]. The training process in DML is often coordinated by a central server, called a parameter server [29, 30, 31]. After the local training of the ML model, agents transmit their ML model updates to the parameter server. The parameter server stores ML model updates and offers the latest parameters to agents. Agents fetch the parameters to proceed with the local training of the ML model. An alternative to using parameter servers in DML is all-reduce [28, 32, 33]. In all-reduce, all agents have similar roles, thus executing identical tasks. The identical execution of tasks by all agents makes central parameter servers obsolete. Each agent aggregates training results and distributes them to other agents in the DML system. Every agent is notified about ML model updates and proceeds with the local training of the latest version of the ML model. In summary, CML centers on the sharing and collaborative use of training data, while DML centers on performance improvements in training ML models.
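The all-reduce pattern can likewise be pictured in a few lines (again our own simplification; message passing is simulated by a plain in-process list): every agent aggregates the interim results of all peers itself, so no central parameter server is involved.

```python
import numpy as np

local_results = [np.array([1.0, 2.0]),   # agent 0's interim result
                 np.array([3.0, 4.0]),   # agent 1's interim result
                 np.array([5.0, 6.0])]   # agent 2's interim result

# Every agent "receives" all peers' results and performs the identical
# aggregation locally, making a central parameter server obsolete.
aggregated = [np.mean(local_results, axis=0) for _agent in local_results]

# All agents end up holding the identical aggregate.
assert all(np.allclose(a, aggregated[0]) for a in aggregated)
```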
However, DML hardly contributes to overcoming the legal and social challenges related to leveraging training data from multiple parties in a confidentiality-preserving way. The combination of principles of CML (e.g., leveraging training data from various parties) and DML (e.g., the distributed execution of ML tasks across multiple agents) forms the foundation for CDML. In CDML systems, trainer agents receive ML tasks from other agents and use local training data to accomplish ML tasks. ML tasks specify the objectives pursued with ML models (e.g., next-word prediction) and include information about the approach (e.g., what ML model architecture should be used). This approach can implement DML techniques, which can eventually speed up the training process by parallelization. However, because the training data is usually unknown to participants in the ML system, an identical distribution of training data, as in pure DML, is hard to achieve. Thus, the performance benefits targeted in DML systems may not be fully leveraged [34].

### _Related Research on CDML_

As one of the first CDML concepts, federated learning has established training data confidentiality and distributed computing as fundamental goals pursued when applying the CDML paradigm [16, 35, 10]. Soon after its introduction, various shortcomings of federated learning became apparent. For example, federated learning systems have been shown to be inefficient due to high communication costs [9] and prone to performance bottlenecks caused by the use of a central parameter server [36]. From a security perspective, federated learning systems are prone to failures due to an adversarial central parameter server [9]. To tackle the shortcomings of federated learning, practice and research brought forth other CDML concepts, including swarm learning, split learning, and assisted learning. Like federated learning, swarm learning aims at the collaborative and distributed training of global ML models known to all parties involved in the training process. Unlike federated learning systems, swarm learning systems rely on redundant agents orchestrating the training process in peer-to-peer networks [14]. The redundant execution of tasks in swarm learning systems can make swarm learning systems more robust than federated learning systems [14]. However, the strong redundancies usually render swarm learning systems less resource-efficient and more complex compared to federated learning systems. In split learning systems [11], agents only train parts of ML models defined by a so-called cut layer. Cut layers indicate the layers of neural networks where the complete neural network is split. Agents only receive the specifications of the cut layer as a kind of interface to input parameters for the training of the rest of the ML model. By only disclosing parts of ML models specified by cut layers, split learning helps to keep (at least parts of) ML models confidential. However, the gain in ML model confidentiality in split learning systems comes at the cost of the training performance of ML models compared to federated learning [37]. In assisted learning [12], the focus on preserving the confidentiality of training data is extended to ML models and even the purposes of ML models. In assisted learning, a user agent requests feedback on statistics relevant to training an ML model from service agents. Such feedback can include residuals of its own ML model. The user agent incorporates feedback received from service agents into its local ML model. This process can be executed repeatedly until the ML model reaches sufficient prediction performance.
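This feedback loop can be pictured with a toy residual-exchange scheme. The following is a strongly simplified sketch in the spirit of assisted learning (ours, not the actual protocol of [12]): two agents hold disjoint feature columns of the same samples and exchange only residuals and fitted values, never raw data or model parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
X_user = rng.normal(size=(n, 2))       # user agent's private features
X_service = rng.normal(size=(n, 2))    # service agent's private features
y = (X_user @ np.array([1.0, -2.0]) + X_service @ np.array([0.5, 3.0])
     + 0.1 * rng.normal(size=n))

def fit_predict(X, target):
    """Least-squares fit on private features; only predictions leave the agent."""
    w, *_ = np.linalg.lstsq(X, target, rcond=None)
    return X @ w

pred = np.zeros(n)
for _round in range(5):
    residual = y - pred                              # user agent's residual
    pred = pred + fit_predict(X_user, residual)      # user agent's update
    residual = y - pred                              # residual sent to service
    pred = pred + fit_predict(X_service, residual)   # service agent's feedback

print("final mean-squared residual:", np.mean((y - pred) ** 2))
```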
By enabling agents to decide which agents they want to assist, assisted learning can improve the autonomy of agents. However, the increased autonomy comes with coordination challenges, for example, how to assess the potential of agents to assist in a learning task and in which order agents should interact [38]. Various design options for customizing CDML systems to meet use case requirements have been developed, such as federated learning systems with multiple hierarchical levels. In each hierarchical level, a preprocessing of previous training results is executed by aggregating a subset of training results. The global ML model is then computed from multiple aggregated training results [10]. Another design option for federated learning systems is to form subnetworks to deal with heterogeneous computational resources of trainer agents [39]. Agents with more computing resources (e.g., servers) execute training tasks that consume more computational resources than those executed by agents with fewer resources (e.g., smartphones). Extant research has started to compare CDML systems to understand their commonalities and differences. Such comparisons are often based on benchmarks, for example, between systems of federated learning, split learning, and SplitFed learning [13] and between systems of federated learning, swarm learning, and decentralized federated learning [40]. CDML system benchmarks commonly offer valuable help in understanding likely CDML system behaviors, especially in terms of performance (e.g., convergence speed of ML models [13], communication cost [13], and prediction performance [40]). Such benchmarks can support practitioners in meeting performance requirements for CDML systems. However, benchmark results are of limited help in understanding possible CDML system designs and their key traits, as they seldom explain how CDML system designs lead to different system behaviors. Moreover, benchmark studies only shed light on a few CDML system designs, leaving the entirety of the CDML system design space unknown. Other works compare CDML system designs. Several design options for federated learning systems were revealed, describing different network topologies for communication (e.g., via central servers and peer-to-peer) and computational schedules [3], such as sequential training of ML models and parallel training synchronized by a central server. Key traits that originate from the different design options are discussed with a focus on confidentiality. Design differences between other CDML systems (e.g., assisted learning systems and split learning systems) remain unknown. In a comparison between federated learning systems, split learning systems, and SplitFed learning systems [13], key traits of those CDML systems are pointed out, with a focus on learning performance, resource consumption, ML model confidentiality, and training data confidentiality. Despite these valuable insights, several design options (e.g., regarding the network topology and computational schedules) and their influences on key traits of CDML systems remain unclear. Since extant comparisons focus only on selected systems of a few CDML concepts, it is still hard to understand the entirety of the CDML system design space.
To help developers design CDML systems that meet use case requirements, the CDML system design space must be understood, including the various CDML concepts, design options, and key traits of CDML system designs. This knowledge of the CDML system design space needs to become available in actionable form.

### _Multi-Agent Systems_

The multi-agent system (MAS) concept [41] offers a theoretical lens for modeling systems based on agents (e.g., computing nodes) and their interactions in a specified environment [42, 20]. The MAS concept is widely used in computer science to model hardware systems and software systems, especially in the field of artificial intelligence (AI) systems [43, 44]. Since the MAS concept is established for developing blueprints of systems for their implementation [45, 20, 46], it appears adequate for representing the CDML system design space in a CDML design toolbox that helps to design, analyze, and advance CDML systems. In the following, we introduce the basic properties of the MAS concept relevant to this work. Important MAS properties are summarized in Table I. MASs are systems composed of a population of agents. By design, MASs can limit the population to a finite number of agents or allow an infinite number of agents. Within MASs, agents can form groups, so-called coalitions. Coalitions can comprise entire MAS populations or population partitions. Agents can be part of multiple coalitions at the same time [42, 20]. We consider each CDML system as a coalition within a superordinate MAS. As agents can be part of multiple coalitions, agents can simultaneously participate in multiple CDML systems. Coalitions can be controlled in a centralized or decentralized way. In centralized coalition control, a single agent or a few agents coordinate interactions between agents in the coalition, for example, in federated learning systems [16]. In decentralized coalition control, multiple or even all agents have equitable influence on the coordination of the coalition. In coalitions, there are two common goal structures: agents can pursue individual goals or common goals. Since agents can be part of multiple coalitions, agents can pursue multiple goals at the same time. For example, an agent may pursue an individual goal in one coalition (e.g., training its own ML model in an assisted learning system) and a common goal in another coalition (e.g., training a shared ML model in a swarm learning system). Agents can engage in different kinds of interactions to reach their goals in coalitions: they can act in a competitive, cooperative, or independent manner. When agents compete with each other, they contend for scarce resources to accomplish their tasks. Cooperative agents support each other in the accomplishment of common goals, where individual agents (or subgroups of agents) work on different tasks. In federated learning systems, for example, some agents only train ML models, while other agents aggregate interim training results [16, 47]. When agents collaborate, each agent is involved in each task to accomplish shared goals. Swarm learning systems are mostly collaborative, as most agents perform similar tasks in the ML model training [14]. MASs and coalitions can differ in their openness regarding agents joining and leaving arbitrarily. Closed MASs only allow specified agents to join. In some federated learning systems, only selected agents are permitted to join the coalitions [10].
Open MASs allow agents to join and leave arbitrarily, for example, in many peer-to-peer learning systems [48, 49]. Population diversity refers to the heterogeneity of agent types in a population. Agent types are sets of roles that are assigned to agents to specify their tasks in a coalition [20]. If many agents in a population have largely different agent types, the population is heterogeneous. For example, hierarchical federated learning systems comprise up to four different agent types that collaborate and execute different tasks in the training of ML models. If most agents have identical agent types, the population is homogeneous. Swarm learning systems, for example, can be considered homogeneous because all agents execute identical tasks in the training of ML models [14].

## III Methods

We applied a three-step research approach to conceptualize the CDML design space (RQ1) and extract key traits of CDML systems originating from different designs (RQ2). First, we conceptualized CDML systems described in literature (Section III-A). Based on the conceptualization, we developed the CDML design toolbox. We modeled CDML systems using the CDML design toolbox to test its applicability. Second, we used the models of the CDML systems to develop CDML archetypes (Section III-B). Third, we extracted traits of CDML system designs from literature. We assigned the CDML system designs, including their traits, to the CDML archetypes and aggregated the traits into key traits (see Section III-C). In the following, we describe our methods in detail.

### _CDML Design Toolbox Development_

To develop the CDML design toolbox, we adopted the Gaia methodology for agent-oriented modeling [20]. Using the structures of the five agent-based models presented in the Gaia methodology (see Section III-A1), we conceptualized CDML systems presented in the literature by applying open coding, axial coding, and selective coding [50] as described in Section III-A2. The literature analysis revealed design options for CDML systems (e.g., agent role distributions, optional communication paths, and structures of training processes). We tested and refined our coding in three iterations by classifying CDML systems into our coding (see Section III-A3).

#### III-A1 The Gaia Methodology

One main purpose of the Gaia methodology is to support the development of agent-based models that can serve as blueprints for the implementation of software systems [20]. The Gaia methodology consists of an analysis stage and a design stage. In the analysis stage, a roles model and an interactions model are developed, enabling an abstract view of a system. This abstract view constitutes the concept level of the system description that enables an analysis of system structures. The roles model describes the tasks and basic processes, including the resources that agents can use. Roles essentially describe the functions that an agent performs within the system. Each role consists of four main aspects: responsibilities, permissions, activities, and protocols. Responsibilities define the functions an agent of a particular role needs to perform. An exemplary responsibility of an agent in the role of an _updater_ in CDML systems could be the aggregation of ML models trained by other agents into a global ML model. Permissions describe which resources are available to agents with specific roles to fulfill their responsibilities. Exemplary resources for agents in the role of _updater_ are information about the ML model to be trained and local training data.
Activities are computations that agents perform locally without interaction with other agents. In the case of agents in the _trainer_ role, local training of an ML model is an exemplary activity. Protocols as part of the roles model reference protocol definitions in the interactions model that describe how interactions between agents of specific roles are designed. For example, _updater_ agents must interact with agents with the _trainer_ role to retrieve interim training results and complete the training process.

The interactions model specifies how agents with specific roles interact with each other in a purposeful way. Frequently recurring interactions of agents with other agents, objects, or the environment of the MAS are recorded as interaction patterns. Each interaction pattern is described in a protocol definition. Protocol definitions include six attributes: purpose, initiator, responder, input, output, and processing. The purpose includes a textual description of the meaning of an interaction, for example, "passing an ML model for its training". Interactions originate from an agent (i.e., an initiator) and are directed to an interaction partner (i.e., a responder). For an interaction, the initiator prepares an input and issues the input into the interaction process. The output comprises the information received by the responder at the end of the interaction.

Based on the roles model and the interactions model, envisioned CDML systems can be detailed in the design stage of the Gaia methodology. The design stage centers on the development of an agent model, a service model, and an acquaintance model. These models form the design level of the system representation. In combination, the concept level and the design level form blueprints for the implementation of concrete software systems [20]. The agent model describes the agent types utilized by CDML systems. Agent types are combinations of roles. Moreover, the agent model describes instances of these agent types that will populate the CDML system. The service model describes the main services that are necessary to implement an agent role. The services that an agent can execute depend on its roles and corresponding activities and protocols. The acquaintance model describes communication paths between different agent types in the CDML system. The acquaintance model helps to identify communication bottlenecks that may arise during run-time.

Similar to the structure of the Gaia methodology, the CDML design toolbox comprises an abstract concept level and a more detailed design level. The concept level describes the general design of CDML systems, focusing on their commonalities (e.g., roles and interactions). On the design level, the CDML design toolbox describes design options to customize and differentiate CDML systems.

#### III-A2 Conceptualization of CDML Systems

To develop the CDML design toolbox, we conceptualized CDML systems in three steps: _start set compilation_, _development of an initial version of the CDML design toolbox_, and _test and iterative refinement_. We describe the three steps in more detail in the following.

_Start Set Compilation._ For the development of the CDML design toolbox, we first compiled a start set constituted of publications on CDML systems. To systematize the search for potentially relevant publications, we specified the following inclusion criteria (see Table II): _English language_, _level of detail_, _topic fit_, and _uniqueness_.
We excluded publications from the start set that did not meet all inclusion criteria. After specifying the inclusion criteria, each author independently generated their own set of publications potentially relevant to developing the CDML design toolbox. We searched for publications that cover a large variety of CDML systems and offer detailed descriptions of CDML system designs. Then, we consolidated the independently generated sets of publications into a preliminary start set. The preliminary start set included peer-reviewed scientific publications and grey literature. Next, we applied the inclusion criteria to the publications in the preliminary start set (see Table II). We removed one publication from the preliminary set of relevant literature because it was a duplicate. Based on the full texts of the remaining 29 publications, we independently rated the relevance of each publication for the conceptualization as "relevant", "maybe relevant", or "irrelevant" based on the inclusion criteria (see Table II). Whenever we disagreed about the relevance of publications (e.g., when one author felt the level of detail of a publication was sufficient and another author disagreed), we discussed the relevance of the publication in more detail until we reached unanimous decisions to include or exclude the publication from the preliminary start set. This relevance assessment led us to exclude 18 further publications from the preliminary start set. The final start set included eleven publications to be analyzed for the development of the initial version of the conceptualization.

_Development of an Initial Version of the CDML Design Toolbox._ We analyzed the publications in the start set by applying open, axial, and selective coding [50]. In open coding, we extracted aspects of CDML systems relevant to explain their designs and functioning. After coding the literature in the set of relevant publications, we iteratively refined our coding to achieve mutual exclusiveness between our codes and the exhaustiveness of our coding. For example, we merged the codes "client" and "device" into the code "trainer" and the codes "sendParameters" and "sendGradients" into the code "transmitInterimResult". In axial coding, we extracted relationships between the codes developed in open coding. For example, we identified that the code "transmitInterimResult" can be implemented differently. We coded each implementation (e.g., "activations" and "gradients") and noted the relationship between "transmitInterimResult" and "gradients". In selective coding, we classified the extracted codes into coding schemes. The coding schemes correspond to five agent-oriented models (i.e., the roles model, the interactions model, the agent model, the preliminary service model, and the acquaintance model) introduced in the Gaia methodology [20]. For example, we classified the code "trainer" as a role in the roles model and the code "transmitInterimResult" as a protocol in the interactions model. After the analysis, we refined the coding to improve the mutual exclusiveness between codes and the exhaustiveness of our coding. For example, we abstracted the code "aggregator" to "updater" to include CDML systems in which the ML model is updated with and without aggregating interim results.
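To illustrate the structure that this coding produced, the following minimal Python sketch shows how codes from open and axial coding might be organized into the five agent-oriented models; all data structures and assignments beyond the examples named above are our own illustrative assumptions, not part of the published coding.

```python
# Minimal sketch of the coding scheme resulting from selective coding
# (illustrative only; codes beyond the examples named in the text are omitted).

# Selective coding: each code is assigned to exactly one agent-oriented model.
coding_scheme = {
    "roles model": {"trainer", "updater"},
    "interactions model": {"transmitInterimResult", "provideMLTask"},
    "agent model": set(),
    "preliminary service model": set(),
    "acquaintance model": set(),
}

# Axial coding: relationships between codes, e.g., design options of a protocol.
design_options = {
    "transmitInterimResult": ["activations", "gradients"],
}

def classify(code: str) -> str:
    """Return the agent-oriented model a code belongs to, or flag the code
    as a trigger for refining the conceptualization."""
    for model, codes in coding_scheme.items():
        if code in codes:
            return model
    return "unclassified (refine the conceptualization)"

print(classify("trainer"))                # -> roles model
print(classify("transmitInterimResult"))  # -> interactions model
print(classify("sendGradients"))          # -> unclassified (it was merged away)
```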
#### III-A3 Test and Iterative Refinement

We gathered evidence for the external validity of our CDML design toolbox by testing whether CDML systems, which we did not use to develop our conceptualization, can be successfully modeled with our CDML design toolbox. To find CDML systems for testing the external validity of our conceptualization, we applied a backward search and a forward search to the set of relevant publications. We decided on the relevance of each publication collated in the backward and forward searches based on the previously used inclusion criteria (see Table II). If a publication met our inclusion criteria, we added the publication to our set of relevant literature. We again applied open, axial, and selective coding to analyze the new relevant publications. Based on the coding, we classified the CDML systems into the preliminary CDML design toolbox comprised of the agent-based models of the Gaia methodology and the assigned codes. When we recognized that a CDML system could not be classified into our conceptualization, we refined our conceptualization accordingly and continued with the test and iterative refinement until we had analyzed all relevant CDML publications identified in the last round of backward and forward searches. Whenever our conceptualization needed to be refined, we repeated this third step of our methods.

We executed this step three times (see Table III). During the first iteration, we used four publications from the backward search and five publications from the forward search, presenting eleven CDML systems. When classifying the eleven CDML systems into our conceptualization, we recognized the need for refinements of the CDML design toolbox. For example, we added the role _coordinator_ to map the sampling service from the newly added gossip learning system [49]. During the second iteration, we included one publication from the backward search and eight publications from the forward search. When classifying the nine CDML systems presented in those publications into the conceptualization, we recognized the need to refine our CDML design toolbox. For example, we needed to add activities and protocols while also requiring a revision of existing definitions of activities and protocols. For instance, we added the protocol "assignInterimResultRecipient" and redefined the protocol "signalReadiness" so that agents with the roles _trainer_ or _updater_ can execute the protocol. In the third iteration, we tested the conceptualization based on nine CDML systems presented in nine publications. We did not identify any further need to refine our conceptualization and deemed it final. Overall, the conceptualization was successfully tested on 43 CDML systems; 15 of these CDML systems required refinements of our conceptualization.

### _CDML Archetype Development_

Since the concept level of the CDML design toolbox points out commonalities between CDML systems, we focused on the design level to identify CDML archetypes. The design level allows for the differentiation between CDML system designs. We developed an agent model, a preliminary service model, and an acquaintance model for each CDML system. Using these models, we analyzed the corresponding CDML system designs to identify similarities. Based on the identified similarities, we developed CDML archetypes.

_Agent Model._ We started our analysis by examining role distributions in CDML systems to extract common agent types.
To identify agent types and their distribution in CDML systems, we analyzed the agent models of the 43 CDML systems, which we previously used for testing the validity of the CDML design toolbox (see Section III-A2). We developed one agent model for each of the analyzed CDML systems. Next, we compared the individual models with each other to identify similarities and differences between the used agent types and their distribution in the corresponding CDML systems. Based on similarities between the agent models, we classified the 43 CDML systems into 18 groups of CDML systems. Each CDML system was assigned to exactly one group.

_Preliminary Service Model._ We analyzed the grouped CDML systems to reveal similarities in the design options implemented for activities and protocols. For example, CDML systems in a group all use the design option "only interim result definition" for the protocol provideMLTask. If CDML systems associated with different groups showed similar uses of design options, we merged these groups into candidate CDML archetypes. For example, we merged assisted learning systems with split learning systems because both systems use the design option "activations" for the protocol transmitInterimResult. Overall, we merged the 18 groups of CDML systems into six candidate CDML archetypes.

_Acquaintance Model and Main Processes._ We analyzed the communication paths of the individual CDML systems using their acquaintance models. Whenever we observed similarities in acquaintance models of CDML systems associated with different groups, we merged the groups. After analyzing the acquaintance models, we merged our six candidate CDML archetypes into four final CDML archetypes (i.e., the confidentiality archetype, the control archetype, the flexibility archetype, and the robustness archetype). Overall, we assigned each of the 43 CDML systems to one of the four CDML archetypes.

### _Identification of Key Traits of CDML Archetypes_

Using the set of relevant publications on CDML systems that we used to develop the CDML design toolbox (see Section III-A2), we performed open coding [50] to extract preliminary traits of CDML systems (e.g., robustness against the participation of malicious agents) that authors point out to highlight strengths and weaknesses of CDML system designs. We noted the referenced CDML systems for all preliminary traits and noted explanations of how each trait originates from the CDML design in axial coding [50]. For example, the key trait "communication bottleneck" is referenced in several publications about federated learning systems. This trait originates from the reliance of federated learning systems on a central agent [51, 40, 52]. We added a description of whether the referenced CDML system has a strength or weakness in the respective trait. Our analysis revealed 132 codes representing preliminary traits of 43 CDML systems. Subsequently, we harmonized the preliminary traits in three iterations to ensure mutual exclusiveness and exhaustiveness of our coding [50]. For example, we aggregated the preliminary traits "does not rely on an orchestrator" and "no need to rely on a third party" to the trait "fault-tolerant". Our analysis revealed 38 traits of CDML systems. Next, we mapped the 38 traits of the CDML systems to their corresponding CDML archetypes. We evaluated which traits of individual CDML systems apply to all CDML systems assigned to corresponding CDML archetypes.
We assigned the set of traits shared by all CDML systems associated with a CDML archetype to the corresponding CDML archetype as key traits. For example, we extracted the trait "not reliant on single agents" from literature on blockchain-based federated learning systems. To evaluate whether this trait also applies to all CDML systems of the robustness archetype, we analyzed the CDML systems of the robustness archetype (e.g., swarm learning) regarding their redundancy of agent types. Since all CDML system designs of the robustness archetype show a high redundancy of agent types, "not reliant on single agents" became a key trait of the robustness archetype. We repeated this process for all traits extracted from the literature analysis at the beginning of this step.

## IV The CDML Design Toolbox

Our CDML design toolbox comprises a concept level and a design level. The concept level (see Section IV-A) describes how CDML systems are designed in principle, including agent roles and agent interactions. Roles are assigned to agents in order to specify the activities and protocols to be executed by corresponding agents. After the role assignment, agents keep their roles until the coalition dissolves. Agents do not have to act in all their assigned roles simultaneously but in at least one role. The design level (see Section IV-B) includes design options that developers can use to design CDML systems. Exemplary design options encompass the assignment of agent types (i.e., combinations of roles) to agents in the CDML system and the definition of types of interim results to be transmitted between agents. The design options are presented in an agent model, a preliminary service model, and an acquaintance model. The agent model shows common combinations of agent types used in CDML systems. In the preliminary service model, we describe design options for implementing activities and protocols described in the roles model. The acquaintance model illustrates communication paths between these agent types in existing CDML systems.

To make the models incorporated in our CDML design toolbox tangible, we describe them along the principal CDML life cycle. The CDML life cycle incorporates three sequential phases each CDML system passes through: the initialization phase, the operation phase, and the dissolution phase. In the initialization phase, agents form and initialize a coalition that can become a CDML system. The initialization phase described in this paper focuses on the autonomous formation of CDML systems by agents in MASs. Alternatively, developers can manually initialize CDML systems. However, the manual setup of CDML systems is out of the scope of this work. In the operation phase, agents interact in order to train or execute ML models. In the dissolution phase, the agents end their collaboration and dissolve the CDML system. Because multiple CDML systems may be formed in a single MAS (e.g., in open MASs), these phases can be passed through in parallel. For simplicity, we describe these three phases using the example of the formation of a single coalition that becomes a CDML system and dissolves. We describe variants of the CDML system design (e.g., in terms of numbers of agents with specific roles) in Section IV-B.

### _Concept Level of the CDML Design Toolbox_

The concept level of our CDML design toolbox incorporates a roles model and an interactions model. The roles model comprises role descriptions, activities of agents, and responsibilities.
The interactions model includes protocols that specify interactions between agents.

_Initialization Phase._ In the initialization phase, agents form a coalition of at least two agents that aim to collaborate to accomplish an ML task. The formation of coalitions, which can become CDML systems, is triggered by a _configurator_ agent. The _configurator_ agent stores the CDML system specifications about the purpose of the envisioned CDML system (i.e., the general prediction problem that ought to be addressed) and requirements for agents that are searched to join the coalition (e.g., in terms of the needed training data structure). The _configurator_ agent defines (parts of) the initial ML model (activity: defineInitialMLModel) to be trained. Definitions of the (parts of) initial ML models are, for instance, the (first) layers of neural networks, a (sub-)set of parameters of linear regressions, activation functions, and the ML model architecture. Moreover, the _configurator_ agent defines the structure and type of interim results (activity: defineInterimResult) to be transmitted between agents in the envisioned CDML system. Interim results are updates that are computed by agents based on local training data and the locally available (part of an) ML model. Then, the _configurator_ agent registers the coalition (activity: registerCoalition) with a repository and starts an application process.

Agents fetch the CDML system specifications from the repository. Based on the CDML system specifications, agents decide whether to participate in the CDML system. Agents that decide to participate submit an application, including the roles they apply for, to the _configurator_ agent (protocol: applyForCoalition). Commonly, agents can apply for the roles _coordinator_, _selector_, _trainer_, and _updater_. The _configurator_ agent iteratively checks for applications from agents (activity: awaitApplications). Upon application receipt, the _configurator_ agent decides whether to accept or reject the agent for the CDML system (activity: decideOnApplication). Then, the _configurator_ agent responds to the applying agent with an acceptance message or a rejection message (protocol: informApplicant). When _trainer_ and _updater_ agents join the coalition, the _coordinator_ agent assigns _trainer_ agents to the _updater_ agents they will interact with in the operation phase and informs the respective agents about the assignment (protocol: assignInterimResultRecipient). The _trainer_ agent sends its interim result to its assigned _updater_ agent. The _updater_ agent can return interim results to its assigned _trainer_ agent(s) after updating (parts of) the ML model. The _configurator_ agent sends the ML task (protocol: provideMLTask) to agents in the coalition. ML tasks are a collection of information required to train and update ML models and can include the initial ML model definition and the interim result definition.

At the end of the initialization phase, at least two agents of the coalition must have been assigned the following roles to form a CDML system: _configurator_, _coordinator_, _selector_, _trainer_, and _updater_. Agents may have multiple roles. We describe common combinations of roles on the design level of the CDML design toolbox (see Section IV-B).
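To make the initialization-phase interactions just described more tangible, here is a minimal Python sketch of a _configurator_ agent handling the application process; the class layout, the toy acceptance rule, and all identifiers are our own illustrative assumptions rather than a normative implementation.

```python
from dataclasses import dataclass, field

@dataclass
class MLTask:
    """ML task assembled by the configurator agent (activities:
    defineInitialMLModel, defineInterimResult)."""
    initial_model_definition: str   # e.g., "first two layers of a neural network"
    interim_result_definition: str  # e.g., "gradients"

@dataclass
class Configurator:
    """Configurator agent running the application process."""
    task: MLTask
    accepted_agents: list = field(default_factory=list)

    def decide_on_application(self, roles: set) -> bool:
        # Activity decideOnApplication: toy rule -- accept any applicant
        # that applies for at least one of the commonly applied-for roles.
        return bool(roles & {"coordinator", "selector", "trainer", "updater"})

    def handle_application(self, agent_id: str, roles: set) -> str:
        # Protocols applyForCoalition / informApplicant; accepted agents
        # subsequently receive the ML task (protocol: provideMLTask).
        if self.decide_on_application(roles):
            self.accepted_agents.append((agent_id, roles))
            return "accepted"
        return "rejected"

configurator = Configurator(MLTask("first two layers of a neural network", "gradients"))
print(configurator.handle_application("agent-1", {"trainer"}))             # accepted
print(configurator.handle_application("agent-2", {"trainer", "updater"}))  # accepted
print(configurator.handle_application("agent-3", set()))                   # rejected
```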
After the initialization phase, the _coordinator_ agent handles applications of agents on behalf of the _configurator_ agent, executing the activities awaitApplications and decideOnApplication and the protocols applyForCoalition and informApplicant. The _coordinator_ agent sends the ML task to the accepted agents (protocol: provideMLTask). After the initialization of the CDML system, ML models can be trained and executed in the operation phase.

_Operation Phase._ In the operation phase, agents participate in the training and execution of ML models according to their assigned roles. At the beginning, the _trainer_ agent and the _updater_ agent signal their readiness to the _selector_ agent (protocol: signalReadiness). Agents that have signaled their readiness iteratively check for triggers from the _selector_ agent to execute activities and protocols required to collaboratively train and update ML models (activity: awaitSelectionSignal). The _selector_ agent selects _trainer_ agents and _updater_ agents (activity: selectAgent) to act in at least one of these roles. Then, the _selector_ agent requests the selected agents to act in the corresponding roles (protocol: announceAgentSelection). Agents that are selected for the role _trainer_ use their locally available (parts of the) ML model and local training data to compute interim results (activity: trainMLModel). The _trainer_ agent sends its interim result to the _updater_ agent (protocol: transmitInterimResult). The _updater_ agent waits until it receives interim results (activity: awaitInterimResults) and then uses the interim results received from _trainer_ agents to compute a new version of the locally available (part of the) ML model (activity: updateMLModel). The execution order of training, updating, and transmitting interim results can vary between CDML systems (see Section IV-B). The procedure outlined in the operation phase is typically executed repeatedly; a minimal code sketch of one such round is given below. Protocols and activities may be executed in parallel or sequentially.

_Dissolution Phase._ In the dissolution phase, agents stop executing the processes described in the operation phase. This can be the case if agents decide that (parts of) the ML model(s) have been sufficiently trained or, in case other agents are required to execute ML models, that they do not need to execute the ML model anymore. When agents end their collaboration, the CDML system dissolves.

### _Design Level of the CDML Design Toolbox_

While the concept level of the CDML design toolbox offers an abstract description of CDML system designs, the design level can guide detailed specifications of concrete CDML system designs as follows. The first step in designing CDML systems entails the specification of an agent model (see Section IV-B1) that presents the assignment of agent types to agents. Agent types incorporate all roles that are simultaneously assigned to single agents. The CDML design toolbox offers a set of agent types commonly used in CDML systems in the agent model (see Section IV-B1). Second, developers need to tailor the activities and protocols associated with agent types to the requirements of the envisioned CDML system. In Section IV-B2, the CDML design toolbox offers a range of design options on how activities and protocols can be implemented to develop service models for CDML systems. Finally, the acquaintance model needs to specify communication paths between agents (see Section IV-B3).
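Returning to the operation phase sketched above, the following minimal example walks through one concept-level training round; the random selection rule, the averaging update, and all names are illustrative assumptions (Python with numpy), not prescriptions of the toolbox.

```python
import random
import numpy as np

def train_ml_model(params: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Activity trainMLModel: toy stand-in for a local training step."""
    return params - 0.1 * rng.normal(size=params.shape)

def one_training_round(params: np.ndarray, ready_trainers: list,
                       rng: np.random.Generator) -> np.ndarray:
    # selectAgent / announceAgentSelection: select a subset of ready trainers.
    selected = random.sample(ready_trainers, k=max(1, len(ready_trainers) // 2))
    # trainMLModel + transmitInterimResult: selected trainers compute and
    # send interim results (here: updated parameter values).
    interim_results = [train_ml_model(params, rng) for _ in selected]
    # awaitInterimResults + updateMLModel: the updater combines the interim
    # results; plain averaging is one common, but not the only, choice.
    return np.mean(interim_results, axis=0)

rng = np.random.default_rng(0)
params = np.zeros(4)
for _ in range(3):  # the operation-phase procedure is executed repeatedly
    params = one_training_round(params, ["t1", "t2", "t3", "t4"], rng)
print(params)
```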
While some communication paths are integral to all CDML systems (e.g., _trainer_ agents sending interim results to _updater_ agents, see Section IV-B1), others are contingent on the characteristics of CDML systems (e.g., _updater_ agents returning interim results to _trainer_ agents). The CDML design toolbox introduces communication paths necessary to operate CDML systems successfully. This list comprises necessary and optional communication paths and helps developers consider communication efficiency and communication bottlenecks when designing CDML systems.

In the following, we describe the three models (i.e., the agent model, the preliminary service model, and the acquaintance model) that can be utilized to develop CDML systems.

#### IV-B1 Agent Model

Agent types are combinations of roles identified in the roles model that can serve as a blueprint to implement agents in CDML systems. Following the concept level of the CDML design toolbox (see Section IV-A), CDML systems require at least two agents with agent types that in combination comprise the following roles: _configurator_, _coordinator_, _selector_, _trainer_, and _updater_. These roles can be assigned to agents in seven combinations (see Table V), each combination forming an individual agent type. Identical agent types can be assigned to multiple agents, for example, to increase redundancy [14] or to distribute workload [10] in the processing of ML tasks.

First, the _Tra_ agent type only comprises the role _trainer_. Agents of the _Tra_ agent type only train the ML model without updating it with interim results from other agents. The _Tra_ agent type is utilized in CDML systems with only one training round [53]. Second, the _CooSel_ agent type comprises the roles _coordinator_ and _selector_. This agent type is utilized in CDML systems with a peer-to-peer structure. If agent selection and the assignment of _trainer_ agents to _updater_ agents follow a sophisticated rule (e.g., an unbiased peer-to-peer sampling service [54]), _CooSel_ agents can be implemented that only focus on the selection and assignment of agents [49, 55]. Third, the _TraUpd_ agent type combines the roles _trainer_ and _updater_. The _TraUpd_ agent type is implemented in many CDML systems since it combines the two main roles accounting for training ML models. _TraUpd_ agents can train ML models and incorporate interim results from other agents into their local ML models [35, 47, 56]. Fourth, the _ConTraUpd_ agent type combines the roles _configurator_, _trainer_, and _updater_. The _ConTraUpd_ agent type is mainly used in split learning systems and assisted learning systems. The _configurator_ role is required since agents in these CDML systems define their own ML model [11, 12]. Fifth, the _ConCooSelUpd_ agent type combines the roles _configurator_, _coordinator_, _selector_, and _updater_. _ConCooSelUpd_ agents primarily operate central servers in federated learning systems [35, 47]. Sixth, the _CooSelTraUpd_ agent type combines the roles _coordinator_, _selector_, _trainer_, and _updater_. This agent type has a high degree of autonomy as it can execute all activities and protocols except those of the _configurator_ role. The _CooSelTraUpd_ agent type is used in CDML systems to create a high level of redundancy [57, 14, 58]. Seventh, the _ConCooSelTraUpd_ agent type combines the roles _configurator_, _coordinator_, _selector_, _trainer_, and _updater_.
This agent type is assigned to central agents in federated learning systems that also train ML models (e.g., [59]) or to a single agent that initiates the ML model to be trained in peer-to-peer-based CDML systems (e.g., the BrainTorrent system [48] and gossip learning systems [49]).

#### IV-B2 Preliminary Service Model

The key activities and protocols introduced at the concept level of the CDML design toolbox (see Table IV) can be implemented based on various design options. It is important to note that the following descriptions do not represent a complete service model [20]. Complete service models are usually highly context-dependent and, thus, out of scope for this work. The following descriptions of design options for the key activities and protocols are intended as a foundation for developing detailed service models.

_Activities._ We identified 12 design options for five key activities. The activity awaitApplications has two design options. First, the agent population awaits agent applications to join the coalition "only during the initialization phase". Applications are ignored when the CDML system is already initialized. For example, in most variants of split learning systems [11], the ML model layers to be trained need to be assigned to agents during the initialization phase, which prevents agents from joining after the initialization phase. Second, the agent population accepts applications "always" [14]. This allows agents to join the CDML system arbitrarily.

The activity selectAgent has three design options. First, agents can be selected for a role "based on votes from other agents" in the CDML system. The _selector_ agent collects the votes of other agents and decides which agents should execute which activities and protocols; for example, all agents in the CDML system can vote on which agent activates the _updater_ role and executes the updating of the ML model (activity: updateMLModel) [14]. Second, agents can be selected "based on agent attributes", for example, based on the size of agents' datasets [53]. Third, agents can be selected "randomly" to activate a role and execute corresponding activities and protocols [48, 60].

The activity awaitInterimResults has two design options. To maintain liveness in CDML systems, the waiting time of agents for interim results can be "response-bound" or "time-bound". If the waiting time of the agents is "response-bound" [61], the _updater_ agent waits for a specified number of interim results before updating the ML model with the interim results received. "Response-bound" waiting for interim results can decrease the liveness of CDML systems if the threshold is set too high; for example, when an agent with the role _updater_ awaits interim results from all _trainer_ agents but one _trainer_ agent has crashed, the _updater_ agent may theoretically wait infinitely. "Time-bound" waiting tackles this issue [10]. If the waiting time exceeds a specified time bound, the _updater_ agent updates the ML model with all interim results received during the waiting period. However, "time-bound" waiting may lead the _updater_ agent to ignore interim results received too late.

The activity updateMLModel has two design options. First, _updater_ agents can perform "batched updates" [52, 53, 57, 62]. In "batched updates", _updater_ agents use a set of interim results received from _trainer_ agents to update their ML model at one time.
Second, _updater_ agents can perform "individual updates" to separately update the ML model for each interim result received from a _trainer_ agent or an _updater_ agent [11, 61].

The activity trainMLModel has three design options. First, _trainer_ agents can "train two complete ML models". In this case, _trainer_ agents compute two separate ML models: a local ML model that learns representations of the training data and a global ML model that is trained on the outputs of the local ML model instead of the raw training data. An advantage of this approach is that the local ML model can protect confidential attributes from the global ML model, thus improving training data confidentiality. Moreover, the communication efficiency can be improved because the global ML model requires fewer parameters, building on the representations learned by the local ML model [63, 64]. Second, _trainer_ agents can "train one complete ML model". A complete ML model refers to the entire set of parameters comprising the ML model. In most CDML systems, _trainer_ agents store and train one complete ML model [16, 47]. Third, _trainer_ agents can "train a part of an ML model". A part of an ML model refers to a subset of ML model parameters. Exemplary parts of ML models are layers of a neural network or a subset of coefficients of a linear regression. Training only a part of an ML model has two main advantages. First, _trainer_ agents require less storage and computing resources. Second, due to _trainer_ agents only having access to a part of the ML model, the complete ML model can remain confidential [11, 12].

_Protocols._ We identified nine design options for three key protocols. We identified two design options for the protocol provideMLTask. First, the agent with the role _configurator_ can provide "only interim result definitions" to other agents in the CDML system. In this case, the _configurator_ agent only provides the interface between agents (e.g., whether to exchange parameters or gradients). The exact ML model to be used remains unknown to other agents (e.g., in terms of the ML model architecture and its hyperparameters) [12]. Second, the _configurator_ agent provides both the interim result definition and the initial ML model definition (e.g., [10, 35]).

The protocol announceAgentSelection has two design options. First, the _selector_ agent can announce which agent should activate which role [10, 49]. Second, the _selector_ agent can announce which agents should activate which role and additionally announce the IDs of the training samples to be used [12].

There are five design options for the protocol transmitInterimResult. First, agents can transmit "parameter values" [19, 65]. Parameter values refer to a set of variables or weights that the ML model learns from the training data and that determine how the ML model makes predictions based on the input data. Second, agents can transmit "gradients" [35, 61]. Gradients refer to the directional slopes or change rates of a mathematical function. Third, agents can transmit "activations with labels" [11, 66]. We refer to activations as intermediate outputs of an ML model for a given input. When the ML model is presented with input data, it propagates the data through its layers, applies the learned parameters (weights and biases), and produces an output. We refer to the output as "activations" if it is not the final output of the ML model. If the output stems from the final layer of the ML model, we call it a prediction.
Fourth, agents can transmit "activations without labels" [11, 66]. Fifth, agents can transmit "(pseudo-)residuals" [12]. Residuals refer to the differences between the actual target values and the predicted values generated by an ML model. Pseudo-residuals can be considered intermediate residuals and are often used in boosting algorithms.

#### IV-B3 Acquaintance Model

Several communication paths between agents are required for the functioning of CDML systems. Some of those communication paths are indispensable in every CDML system; other communication paths only appear in some CDML systems. Based on our concept level of CDML systems (see Section IV-A), we describe indispensable communication paths and optional communication paths (design options) in the following. Since communication paths differ between the life cycle phases of CDML systems, we describe the communication paths for each phase separately.

_Initialization Phase._ The _configurator_ agent must have a bidirectional communication path to all other agents for two purposes: first, to participate in the coalition application process (protocols: applyForCoalition, informApplicant); second, to provide them with the ML task definition (protocol: provideMLTask). The _coordinator_ agent must have a unidirectional communication path to the _trainer_ agent to inform the agent to which _updater_ agent it should send its interim results (protocol: assignInterimResultRecipient). This communication path allows for more flexibility by enabling sub-coalitions that form around _updater_ agents [10, 19, 67]. The _coordinator_ agent may have a unidirectional communication path to the _updater_ agents. Via such a communication path, the _coordinator_ agent can inform the _updater_ agents to which _updater_ agents they should send intermediate results (protocol: assignInterimResultRecipient). This communication path can be used for a hierarchically organized CDML system, in which _updater_ agents communicate with each other to improve their local ML model without using local training data [10, 19, 67].

_Operation Phase._ The _selector_ agent must have a bidirectional communication path to the _trainer_ agent and the _updater_ agent. This communication path enables the _selector_ agent to receive signals that these agents are ready to participate in the training (protocol: signalReadiness) and to inform these agents that they are selected for the training (protocol: announceAgentSelection). The _trainer_ agent must have a unidirectional communication path to the _updater_ agent to send it interim results (protocol: transmitInterimResult). The _coordinator_ agent can have a bidirectional communication path to all other agent roles if applications can be received and processed after the initialization phase. In this case, the _coordinator_ agent takes over handling the applications from the _configurator_ agent (protocols: applyForCoalition, informApplicant). Because agents can apply and be admitted to a CDML system after the initialization phase, this communication path enables the CDML system to address issues in the agent population during the operation phase. For example, if it becomes clear during the operation phase that the training data is insufficient, more _trainer_ agents can be admitted to the CDML system. The _updater_ agent can have unidirectional or bidirectional communication paths with another _updater_ agent to exchange information about their ML model updates (e.g., [19, 10]).
This communication path allows for hierarchical structures with more than one _updater_ agent. The _trainer_ agent can have bidirectional communication paths to the _updater_ agent, for example, to send and receive interim results (protocol: transmitInterimResult). Such bidirectional communication paths are common in CDML systems. In some CDML systems (e.g., one-shot federated learning [53]), the _trainer_ agent sends interim training results to the _updater_ agent without receiving interim results in return.

_Dissolution Phase._ During the dissolution phase, the communication paths between agents are dissolved. Agents that have stored a local ML model can keep it and use it to make predictions on their own.

## V CDML Archetypes

We developed four CDML archetypes that reflect CDML system designs common in practice and research: the confidentiality archetype, the control archetype, the flexibility archetype, and the robustness archetype. The CDML archetypes are distinguished by their agent models, acquaintance models, and principal functioning, including preliminary service models. Table VI gives an overview of the four CDML archetypes we describe in detail in the following. The coalition-forming phase is outside the scope of the archetype descriptions because developers can set up CDML systems that correspond to the CDML archetypes. For each CDML archetype, we highlight common design variants.

### _Confidentiality Archetype_

The confidentiality archetype is suitable for use cases in which agents want to preserve the confidentiality of ML models, ML tasks, and training data. Agents only store parts of ML models. The full architectures of ML models trained in the confidentiality archetype are not disclosed. Thus, no agent has access to the global ML model. Instead, the global ML model is distributed across several agents, which only store parts of it. ML models are not synchronized coalition-wide during ML model training and for ML model inference. Exemplary CDML systems of the confidentiality archetype are split learning [11, 66, 70], assisted learning [12, 68], gradient assisted learning [17], SplitFed learning [37], FDML [71], hierarchical SplitFed learning [19], and FedLite [72].

#### V-A1 Agent Model

The confidentiality archetype comprises the agent types _ConCooSelUpd_ and _ConTraUpd_. In its basic configuration, the confidentiality archetype comprises one _ConCooSelUpd_ agent and at least one _ConTraUpd_ agent.

#### V-A2 Acquaintance Model

In the confidentiality archetype, the _ConCooSelUpd_ agent can communicate with all _ConTraUpd_ agents on bidirectional communication paths (see Figure 1). _ConTraUpd_ agents do not communicate with each other directly.

#### V-A3 Principal Functioning

In the initialization phase, the _ConCooSelUpd_ agent configures its local part of the ML model and defines the interim results to be transmitted (activities: defineInitialMLModel, defineInterimResult). Local parts of the ML model can be specific layers of a neural network in split learning [11] or just parts of a layer of a neural network in vertical split learning [11] and assisted learning [12]. Examples of interim results include activations of a particular layer of a neural network (e.g., referred to as the cut layer in split learning) [11] or (pseudo-)residuals [17].

Fig. 1: Exemplary acquaintance model of the confidentiality archetype
The _ConCooSelUpd_ agent then provides _ConTraUpd_ agents with the interim result definition (protocol: provideMLTask; design option: provide only interim result definition). After receiving the interim result definition, _ConTraUpd_ agents individually set up their local parts of the ML model following the interim result definition. For example, the _ConTraUpd_ agents in split learning systems set up the layers of a neural network from the input layer to the cut layer. The number of outputs of the cut layer is set depending on the interim result definition.

The operation phase starts with the _ConTraUpd_ agents signaling their readiness to the _ConCooSelUpd_ agent (protocol: signalReadiness) to participate in the subsequent training round. Then, _ConTraUpd_ agents wait for a response (activity: awaitSelectionSignal). The _ConCooSelUpd_ agent decides which _ConTraUpd_ agents to select for the next training round (activity: selectAgent). For example, this selection can be made based on agent attributes or randomly. After the selection, the _ConCooSelUpd_ agent announces its decision to the _ConTraUpd_ agents (protocol: announceAgentSelection). Selected _ConTraUpd_ agents train their parts of the ML model (activity: trainMLModel; design option: train a part of the ML model) and transmit their interim results to the _ConCooSelUpd_ agent (protocol: transmitInterimResult; design option: activations with labels, (pseudo-)residuals). The _ConCooSelUpd_ agent waits for incoming interim results (activity: awaitInterimResults). The _ConCooSelUpd_ agent uses the interim results to update (and train) its local (part of the) ML model (activities: trainMLModel, updateMLModel). Depending on the implementation, the _ConCooSelUpd_ agent then transmits another interim result back to the _ConTraUpd_ agents (protocol: transmitInterimResult; design option: gradients). _ConTraUpd_ agents use it to update their local part of the ML model. The _ConCooSelUpd_ agent decides how often this process is repeated.

#### V-A4 Key Traits

The confidentiality archetype relies on a strongly hierarchical agent organization and does not have a coalition-wide synchronization of ML models. The missing synchronization of ML models among agents means that ML models can be kept confidential. The main trait of the confidentiality archetype is that its confidentiality encompasses both training data confidentiality and ML model confidentiality because agents only have access to parts of the ML model. Besides enabling ML model confidentiality, the confidentiality archetype can be very computationally efficient since agents only have to store and compute a part of the ML model, which can potentially be very large [11, 72]. The confidentiality archetype requires fewer training rounds than the control archetype and converges quickly [11, 66]. The confidentiality archetype has high communication costs due to the ML model partitioning and the communication of both activations and gradients [72]. Some CDML systems that correspond to the confidentiality archetype, such as split learning systems (e.g., [11]), can have high idle times of _trainer_ agents since the _trainer_ agents only interact with the _updater_ agents sequentially [37]. Other CDML systems, such as SplitFed learning systems, address this issue by combining elements of split learning and federated learning and, thus, can reduce the idle times [37].
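To make the exchange of cut-layer activations and gradients described above concrete, the following minimal numpy sketch runs a split-learning-style training loop over a single cut layer; the network shapes, the mean-squared-error loss, and the learning rate are our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(8, 3)), rng.normal(size=(8, 1))  # ConTraUpd's local data
W1 = 0.1 * rng.normal(size=(3, 4))  # ConTraUpd's part (up to the cut layer)
W2 = 0.1 * rng.normal(size=(4, 1))  # ConCooSelUpd's part (after the cut layer)
lr = 0.05

for _ in range(5):
    # ConTraUpd: trainMLModel on its part; the cut-layer activations are the
    # interim result (protocol: transmitInterimResult; activations with labels).
    Z = X @ W1
    A = np.maximum(Z, 0.0)
    # ConCooSelUpd: forward pass on its part, loss, and update of its part.
    P = A @ W2
    dP = 2.0 * (P - y) / len(y)          # gradient of the MSE loss
    dW2, dA = A.T @ dP, dP @ W2.T        # compute dA before W2 is changed
    W2 -= lr * dW2
    # Gradients at the cut layer are sent back (design option: gradients);
    # ConTraUpd updates its own part. Raw data and the full model never move.
    W1 -= lr * (X.T @ (dA * (Z > 0)))
    print(round(float(np.mean((P - y) ** 2)), 4))  # loss typically decreases
```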
As no agent has access to the entire ML model, the coalition (or a subset of it) is required to make ML model inferences. Therefore, the coalition can only be dissolved when the ML model is not used anymore.

#### V-A5 Variants of the Confidentiality Archetype

_U-Shaped Split Learning [11]._ U-shaped split learning systems can be used to train neural networks. A selected _ConTraUpd_ agent executes the forward propagation up to a specific layer (i.e., the first cut layer) and only transmits activations to the _ConCooSelUpd_ agent (protocol: transmitInterimResult; design option: activations without labels). The _ConCooSelUpd_ agent continues the forward propagation up to the second cut layer and transmits activations back to the _ConTraUpd_ agent. The _ConTraUpd_ agent completes the forward propagation, starts the backpropagation, and transmits the gradients of the second cut layer to the _ConCooSelUpd_ agent (protocol: transmitInterimResult; design option: gradients). Using these gradients, the _ConCooSelUpd_ agent continues the backpropagation to the first cut layer and transmits the gradients of the first cut layer to the _ConTraUpd_ agent. The _ConTraUpd_ agent executes the backpropagation for the remaining layers and, thus, completes a training round.

### _Control Archetype_

The control archetype is suitable for use cases in which one agent should have control over the CDML system. The control archetype incorporates a hierarchical communication structure with an agent on the top level that controls the training process. The agent on top receives all interim results and synchronizes the training process by deciding on the global ML model to be trained in each training round. Exemplary CDML systems of the control archetype implement variants of federated learning [10, 35, 61, 63], including one-shot federated learning [53], semiFL [59], heteroFL [39], and hierarchical federated learning [51, 52].

#### V-B1 Agent Model

CDML systems belonging to the control archetype comprise the agent types _ConCooSelUpd_ and _TraUpd_. The control archetype comprises one _ConCooSelUpd_ agent and at least one _TraUpd_ agent.

#### V-B2 Acquaintance Model

The acquaintance model of the control archetype has the structure of a tree (see Figure 2). Agents can bidirectionally communicate in a strictly hierarchical manner along the edges of the tree. In its basic form, there are two hierarchical levels (e.g., [10]): a root _ConCooSelUpd_ agent forms the top level of the hierarchy. At least one _TraUpd_ agent resides on the bottom level of the hierarchy. There can be additional levels between the top level and the bottom level (e.g., [51, 52]). The inner nodes of the tree are _ConCooSelUpd_ agents, whereas _TraUpd_ agents represent the leaves.

#### V-B3 Principal Functioning

In the initialization phase, the _ConCooSelUpd_ agent on the top level of the hierarchy defines the initial ML model and interim results (activities: defineInitialMLModel, defineInterimResult). Suppose there are additional _ConCooSelUpd_ agents on lower levels of the acquaintance model. In that case, the initial ML model and interim result definition are propagated to these agents by executing the protocol provideMLTask (design option: ML model definition and interim result definition). _ConCooSelUpd_ agents on lower levels of the acquaintance model can forward only parts of the ML model (i.e., sub-models) to their child nodes.
Thus, each _ConCooSelUpd_ agent can individually define the initial ML model and interim results for its descendants (activities: defineInitialMLModel, defineInterimResult).

In the operation phase, _TraUpd_ agents execute the signalReadiness protocol to signal their availability to participate in a training round to their respective parent _ConCooSelUpd_ agent. Then, _TraUpd_ agents wait for a selection signal (activity: awaitSelectionSignal). _ConCooSelUpd_ agents decide which of their child _ConCooSelUpd_ and _TraUpd_ agents to include in a training round.

Fig. 2: Exemplary acquaintance model of the control archetype

Once a sufficient number of child agents have signaled their readiness to a _ConCooSelUpd_ agent, it signals its readiness to its parent agent and waits for a selection signal (activity: awaitSelectionSignal). This process is repeated recursively throughout the hierarchy until it reaches the root _ConCooSelUpd_ agent. Then, the root _ConCooSelUpd_ agent selects (a subset of) its subordinate agents to participate in the upcoming training round (activity: selectAgent; design option: based on agent attributes or randomly) and announces its selection to its child agents (protocol: announceAgentSelection). Afterward, it transmits the current version of the ML model, or a part thereof, to selected child agents (protocol: transmitInterimResult; design option: gradients or parameter values) and waits for interim results (activity: awaitInterimResults; design option: waiting for a time-threshold or waiting for a response-threshold). This selection process is repeated recursively by descendant _ConCooSelUpd_ agents until it reaches the leaf _TraUpd_ agents. The _TraUpd_ agents update their local ML model based on the interim result received (activity: updateMLModel; design option: batched update) and train it using local training data and self-controlled compute (activity: trainMLModel; design option: train one complete ML model or train a part of the ML model). After training is completed, _TraUpd_ agents initiate the transmitInterimResult protocol (design option: gradients or parameter values) with their respective parent _ConCooSelUpd_ agent as the responder. The parent _ConCooSelUpd_ agent waits until a defined threshold is reached (activity: awaitInterimResults; design option: waiting for a time-threshold or waiting for a response-threshold) and updates its (part of the) ML model based on the interim results received (activity: updateMLModel; design option: batched update). Each _ConCooSelUpd_ agent can decide how often to repeat this training procedure with its descendants. When the desired number of training rounds is completed, _ConCooSelUpd_ agents send the updated (part of the) ML model to their parent nodes (protocol: transmitInterimResult; design option: gradients or parameter values). Once the threshold of the root _ConCooSelUpd_ agent is reached, a coalition-wide training round is completed. The procedure described for the operation phase is repeatedly executed until the dissolution phase is initiated by the root _ConCooSelUpd_ agent.

#### V-B4 Key Traits

The control archetype implements a strongly hierarchical organizational structure of agents and requires the coalition-wide synchronization of ML models. The combination of these traits leads to organizational structures in which a small fraction of all agents wields the predominant control over the CDML system.
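To illustrate the batched update that _ConCooSelUpd_ agents execute in the principal functioning above, here is a minimal sketch of combining interim results by weighted parameter averaging (in the style of federated averaging); the weighting by local dataset sizes and all names are our own illustrative assumptions.

```python
import numpy as np

def batched_update(interim_results: list, n_samples: list) -> np.ndarray:
    """Activity updateMLModel (design option: batched update): combine the
    parameter values received from TraUpd agents into one new version of the
    ML model, weighted here by local dataset sizes (a common choice)."""
    weights = np.asarray(n_samples, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, interim_results))

results = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
print(batched_update(results, n_samples=[10, 10, 20]))  # -> [3.5 4.5]
```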
The control archetype is suitable for use cases with strict hierarchies where one or a few agents should keep control over the CDML system. The control archetype relies on only one root _ConCooSelUpd_ agent. If this single _updater_ agent crashes, the whole CDML system fails [48, 49, 52]. Thus, the control archetype is not crash-fault tolerant. The use of multiple _updater_ agents assigned to multiple layers of the hierarchy of the control archetype can make the system tolerant to crashes of single _updater_ agents [19, 52]. If one _updater_ agent crashes, the remaining _updater_ agents can take over the aggregation of interim results from the crashed agent. However, this redistribution of load to fewer _updater_ agents can drastically reduce the overall performance of the control archetype. The control archetype can be prone to performance bottlenecks due to a few central agents having to execute numerous computationally intensive activities and protocols [52, 53]. Such performance bottlenecks include computation [40] (i.e., during updating) and communication [51, 40] (i.e., sending and receiving interim results). Regarding the predictive performance of the collaboratively trained ML model, the control archetype usually performs better than the confidentiality archetype (e.g., [37]). The ML model usually converges faster than in CDML systems of the flexibility archetype (e.g., [9]). The coalition can be dissolved after training because the coalition is not required to make ML model inferences.

#### V-B5 Variants of the Control Archetype

_TraUpd Agents as Tra Agents [53]._ _TraUpd_ agents lose their _updater_ role and become _Tra_ agents. In this variant, the interim results are only transmitted from _Tra_ agents to _ConCooSelUpd_ agents. No interim results are transmitted back to _Tra_ agents. _Tra_ agents do not update their local ML models.

_ConCooSelUpd Agents as ConCooSelTraUpd Agents [59]._ _ConCooSelUpd_ agents gain the _trainer_ role and become _ConCooSelTraUpd_ agents. In these systems, the agents on higher levels of the hierarchy possess training data of their own and use it to train (parts of) the ML model themselves (e.g., [59]). _ConCooSelTraUpd_ agents train the ML model (activity: trainMLModel; design option: train one complete ML model or train a part of the ML model) while waiting for interim results of subordinate agents in the hierarchy.

_TraUpd Agents Train Two Complete ML Models [63]._ _TraUpd_ agents train two complete ML models locally (activity: trainMLModel; design option: train two complete ML models). _TraUpd_ agents train one ML model on local data. The second ML model is trained on the first ML model. Only the gradients or parameter values resulting from the training of the second ML model are transmitted to the superordinate agent.

### _Flexibility Archetype_

The flexibility archetype is suitable for use cases with communication topologies that can change at run-time [40]. The flexibility archetype offers a high degree of agent autonomy. Agents can arbitrarily join and leave the flexibility archetype without impeding the functioning of the CDML system [40]. In its basic variant, agents can select the agents they want to collaborate with. Moreover, agents can decide if and when they execute activities (e.g., trainMLModel or updateMLModel) and protocols (e.g., signalReadiness or transmitInterimResult). The flexibility archetype is weakly hierarchically organized. ML models are not synchronized coalition-wide during ML model training.
Exemplary CDML systems of the flexibility archetype implement gossip learning [49], BrainTorrent [48], and decentralized federated learning [62, 64, 40, 69].

#### V-C1 Agent Model

The flexibility archetype comprises the agent types _ConCooSelTraUpd_ and _CooSelTraUpd_. In its basic configuration, the flexibility archetype comprises one _ConCooSelTraUpd_ agent and at least one _CooSelTraUpd_ agent.

#### V-C2 Acquaintance Model

To participate in the training, agents must establish a bidirectional communication path to at least one other agent (see Figure 3). Other agents include _ConCooSelTraUpd_ agents and _CooSelTraUpd_ agents. Agents decide with which agents they interact on an equitable basis.

#### V-C3 Principal Functioning

In the initialization phase, the _ConCooSelTraUpd_ agent first defines the ML model (activity: defineInitialMLModel) and interim results (activity: defineInterimResult). The _ConCooSelTraUpd_ agent distributes the ML model and the interim result definition to other agents in the CDML system (protocol: provideMLTask; design option: provide initial ML model definition and interim result definition). Agents can join at any time (protocol: applyForCoalition; design option: always).

In the operation phase, _ConCooSelTraUpd_ and _CooSelTraUpd_ agents train the ML model locally using local training data and self-controlled computing resources. Afterward, each agent signals its readiness to activate its _updater_ role for the upcoming training round (protocol: signalReadiness) and waits for other agents to signal their readiness (activity: awaitAgentReadiness). Then, at least one agent that signals its readiness is selected (activity: selectAgent) to receive the interim results. Agents are usually selected randomly (design option: randomly), but can also be selected in a targeted manner (design option: based on agent attributes). The selection is announced to the selected agent (protocol: announceAgentSelection). Agents that are selected to activate the role _updater_ wait (activity: awaitInterimResults) until they receive the interim results from other agents using the protocol transmitInterimResult (design option: gradients or parameter values). Lastly, the selected agents use the interim results of other agents to update their local ML model (activity: updateMLModel). The update can entail several interim results (design option: batched update) or only one interim result from another agent (design option: individual update). This process is repeated until the dissolution phase is initiated. The flexibility archetype dissolves when no agents engage in collaborative training anymore.

#### V-C4 Key Traits

The flexibility archetype is weakly hierarchical, and agents store different states of ML models. ML models are not synchronized coalition-wide. Agents have a high degree of autonomy and can individually decide when to train collaboratively and with whom. Moreover, agents can individually decide to activate roles and execute activities and protocols, which leads to agents having little idle time [48]. The flexibility archetype can handle agent crashes better than the control archetype [49]. An agent dropping out of the system may temporarily reduce the performance of the flexibility archetype, but because a new agent can be easily integrated into the training process due to the lack of rigid structures, the flexibility archetype can recover from the agent drop-out [9].
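A minimal sketch of how the flexibility archetype's individual updates might look in a gossip-style round: an agent trains locally, a randomly selected peer acts as _updater_, and the peer merges the single transmitted interim result into its local ML model. The pairwise-averaging merge rule and all names are our own illustrative assumptions.

```python
import random
import numpy as np

class Agent:
    """CooSelTraUpd-style agent holding its own state of the ML model."""
    def __init__(self, params):
        self.params = np.asarray(params, dtype=float)

    def train(self, rng):
        self.params -= 0.1 * rng.normal(size=self.params.shape)  # toy local step

    def individual_update(self, interim_result):
        # updateMLModel (design option: individual update): merge one received
        # interim result into the local ML model by pairwise averaging.
        self.params = 0.5 * (self.params + interim_result)

rng = np.random.default_rng(1)
agents = [Agent([0.0, 0.0]) for _ in range(4)]
for _ in range(5):                # no coalition-wide synchronization of models
    sender = random.choice(agents)             # agents act when they choose to
    sender.train(rng)
    receiver = random.choice([a for a in agents if a is not sender])  # selectAgent: randomly
    receiver.individual_update(sender.params)  # transmitInterimResult -> updateMLModel
print([a.params.round(3) for a in agents])     # agents hold different model states
```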
Because agents can largely operate independently of each other, no single agent is vital for the proper functioning of the CDML system. Where agents are redundant, they can in principle replace each other. However, this may not always be possible because the flexibility archetype does not require redundant agents. The flexibility archetype is not robust against malicious agents. Malicious agents are agents that tamper with training processes and manipulate collaboratively trained ML models [9]. Malicious agents can obfuscate their identities by arbitrarily joining and dropping out of the CDML system and arbitrarily switching their collaboration partners. Such obfuscation can facilitate the engagement of agents in performing malicious activities without detection (e.g., because reputation systems may not be applicable [42]). Moreover, even when malicious agents are identified, it is hard to punish them because rules (e.g., agents that act maliciously are forced to leave the system) are hardly enforceable in the flexibility archetype. The coalition can be dissolved after ML model training because the CDML system is not required to make ML model inferences.

#### V-C5 Variants of the Flexibility Archetype

_Additional CooSel Agent [49]:_ There can be a dedicated _CooSel_ agent (e.g., [49]). The remaining agents lose the _selector_ role and become _ConCooTraUpd_ and _CooTraUpd_ agents. In each training round, the _CooSel_ agent selects a subset of the _ConCooTraUpd_ and _CooTraUpd_ agents to function as updaters (activity: selectAgent; design option: randomly) and assigns each of the remaining agents to one of the agents selected as an _updater_. Each agent then sends its interim result to the agent it was assigned to (protocol: transmitInterimResult; design option: gradients or parameter values).

Fig. 3: Acquaintance model of the flexibility archetype for an exemplary training round

### _Robustness Archetype_

The robustness archetype is suitable for use cases in which agents may inadvertently drop out of the coalition during ML model training (e.g., due to crashes or network failures) because a large fraction of agents is redundant and, thus, agents can replace each other. The robustness archetype is weakly hierarchically organized and performs coalition-wide synchronization of the ML model. Exemplary CDML systems of the robustness archetype are swarm learning systems [14] and other blockchain-based CDML systems [57, 65].

#### V-D1 Agent Model

The robustness archetype comprises the agent types _ConCooSelTraUpd_ and _CooSelTraUpd_. In its basic configuration, the robustness archetype comprises one _ConCooSelTraUpd_ agent and at least one _CooSelTraUpd_ agent.

#### V-D2 Acquaintance Model

As illustrated in Figure 4, there can be bidirectional communication paths between all agents in the system. This includes agents of both the _ConCooSelTraUpd_ and _CooSelTraUpd_ types.

#### V-D3 Principal Functioning

In the initialization phase of the robustness archetype, the _ConCooSelTraUpd_ agent defines the ML model and interim results and distributes the corresponding definitions to other agents in the coalition (protocol: provideMLTask; design option: provide ML model definition and interim result definition). There must always be at least one _CooSelTraUpd_ agent and one _ConCooSelTraUpd_ agent to redundantly execute the roles _coordinator_, _selector_, _trainer_, and _updater_. Additional _CooSelTraUpd_ agents can join at any time (protocol: applyForCoalition; design option: always).
In the operation phase, _ConCooSelTraUpd_ and _CooSelTraUpd_ agents broadcast their readiness to activate their roles _updater_ and _trainer_ for the training in the robustness archetype (protocol: signalReadiness). All agents that received the broadcast individually decide whether the _ConCooSelTraUpd_ or _CooSelTraUpd_ agent should activate the _trainer_ and _updater_ roles (activity: selectAgent). Agents broadcast their individual decisions to all agents in the robustness archetype. The final selection of _trainer_ and _updater_ is made through a consensus mechanism (design option: based on votes from other agents). Next, _ConCooSelTraUpd_ and _CooSelTraUpd_ agents start training the ML model using their locally available training data and computing resources (activity: trainMLModel; design option: train a complete ML model). All selected agents receive identical interim results from agents that trained their ML model (protocol: transmitInterimResult; design option: gradients or parameter values). All agents use the identical interim results to update the ML model (activity: updateMLModel). For the update, all selected _updater_ agents use the interim results from all other agents (design option: batched update). All agents that computed ML model updates broadcast their new interim results to all agents in the system (protocol: transmitInterimResult). This process is repeated until the start of the dissolution phase. The dissolution phase starts when no agents engage in the collaborative training anymore.

#### V-D4 Key Traits

The robustness archetype is weakly hierarchical and is designed to train global ML models that are synchronized coalition-wide. Both of these traits culminate in CDML systems where agent types are redundantly assigned to agents. Agents process and store data of the global ML model redundantly, increasing the robustness of CDML systems. The robustness archetype uses a fully connected communication network [40]. Due to the high redundancy of agents, except for the agent with the role _configurator_, the robustness archetype does not rely on single agents. This design prevents the robustness archetype from failing if some agents drop out of the CDML system [57], for example, due to crashes and network failures. The robustness archetype allows for the replacement of _updater_ agents after each training round. Agents in the robustness archetype usually require large computational resources, for example, to compute ML model updates based on interim results from all other agents in the CDML system [40]. The coalition can be dissolved after training since the coalition is not required to make ML model inferences.

#### V-D5 Variants of the Robustness Archetype

_A Subset of Agents Activates the Updater Role per Training Round [14, 57]:_ Interim results are transmitted to and stored by all agents, but only a subset of agents activates their _updater_ role. From all _ConCooSelTraUpd_ and _CooSelTraUpd_ agents that signal their readiness (protocol: signalReadiness), not all agents are selected (activity: selectAgent; design options: based on agent attributes, based on votes from other agents, or randomly) to activate their _updater_ role in every training round. In some cases, only one agent is selected [14].

## VI Discussion

### _Principal Findings_

In this study, we present a CDML design toolbox, including a concept level and a design level.
The concept level of the CDML design toolbox includes five roles (i.e., _configurator_, _coordinator_, _selector_, _trainer_, and _updater_), ten activities (e.g., updateMLModel), and seven protocols (e.g., transmitInterimResult) inherent to CDML systems. On the design level, the CDML design toolbox includes design options to customize CDML systems. For example, the roles _trainer_ and _updater_ can be combined into the agent type _TraUpd_. We present seven agent types and seven mandatory communication paths between these agent types. For example, agents with the role _updater_ can have communication paths among each other. Moreover, the CDML design toolbox presents design options for activities and protocols. Based on common combinations of design options, we present four principal CDML archetypes (i.e., the confidentiality archetype, control archetype, flexibility archetype, and robustness archetype) and their key traits.

The design level of the CDML design toolbox shows different implementations of roles, activities, and protocols in CDML systems that we describe as design options. Different combinations of design options can lead to different CDML systems. Our results show how CDML systems can be grouped and differentiated on the basis of common combinations of design options and resulting key traits. We observed significant similarities among CDML systems studied by research communities with limited overlap. It turns out that split learning systems and assisted learning systems implement similar design options; for example, they comprise only _ConCooSelUpd_ and _ConTraUpd_ agents. Moreover, swarm learning systems and blockchain-based decentralized federated learning systems have similar design options. For example, both implement the agent types _ConCooSelTraUpd_ and _CooSelTraUpd_ but differ regarding the number of agents with an active _updater_ role in each training round.

Fig. 4: Exemplary acquaintance model of the robustness archetype

The presented CDML archetypes and their key traits show that no one-size-fits-all CDML system can be used for every use case. Developers must carefully assess the suitability of CDML systems based on their designs and different traits. For instance, the redundant distribution of roles in swarm learning enhances robustness. However, in use cases where most agents have limited resources, mandating that all agents perform all roles may result in the failure of the CDML system because agents may be assigned roles that exceed their resource capacities. Conversely, the redundant distribution of agent roles can be better suited for use cases characterized by frequent agent drop-outs. Therefore, a careful assessment of the suitability of CDML systems for use case requirements is mandatory to operate CDML systems successfully.

In the agent model (see Section IV-B1), we present the agent types that we identified in the analyzed publications. The presented agent types represent a subset of the possible combinations of agent roles. For example, we did not identify a _Con_ agent or an _Upd_ agent, even though the implementation of such agents could be possible as long as all roles are distributed to agents in CDML systems. CDML systems that assign each agent only one role could also have new traits, including agents requiring fewer resources, that might be useful in many use cases. Because of the theoretical availability of more agent types and combinations of design options, more CDML system designs with different traits may become available in the future.
### _Contributions to Practice and Research_

With this study, we contribute to practice and research in three principal ways. First, by presenting the CDML design toolbox, we offer a consolidated knowledge base of previously scattered design knowledge of CDML systems. Since comparisons of CDML system designs have so far focused on a few design aspects (e.g., the training process), the CDML design toolbox enables systematic comparisons between CDML system designs covering a broad set of design options. The agent-based models on the concept level (i.e., the roles model and interactions model) of the CDML design toolbox present the main design commonalities of CDML systems (e.g., the use of specific agent roles and the principal training process). The three agent-based models on the design level (i.e., agent model, service model, and acquaintance model) can guide the systematic comparison of CDML system designs and the customization of CDML system designs to meet use case requirements. Moreover, the developed agent-based models can facilitate the application of the Gaia methodology for developing custom CDML system designs.

Second, by showcasing CDML archetypes, we offer starting points for combining design options to develop CDML system designs. The archetypes inform about combinations of design options commonly used in practice and research. The CDML archetypes can be customized by using the design options presented in the CDML design toolbox to develop blueprints of CDML systems. Thereby, in combination, the CDML archetypes and the CDML design toolbox offer actionable help in guiding the design of CDML systems.

Third, by presenting key traits of CDML archetypes, we support developers in deciding on combinations of design options to meet use case requirements. The key traits of CDML archetypes enable developers to choose the most fitting CDML archetype for their use cases. Using the selected CDML archetype as a starting point, developers can use the CDML design toolbox to customize the archetype to exhibit additional required traits. By executing this process, developers can evaluate the suitability of CDML system designs for use cases prior to implementing the designs.

### _Limitations_

For the development of the CDML design toolbox, the CDML archetypes, and the identification of key traits, we analyzed publications and CDML systems that we deemed representative of the CDML field. With our selection of publications and CDML systems for analysis, we aimed to cover the large spectrum of different CDML system designs. However, the number of publications and CDML systems has increased significantly in the past years, making it impossible to incorporate all publications into our study; we could only analyze a representative set of publications. The CDML design toolbox may therefore not cover all CDML system designs. To conceptualize CDML systems, we strove to extract and understand their key design aspects (e.g., activities, processes, and roles), which required resolving ambiguities, and to set the extracted key aspects in relation to each other (e.g., roles and responsibilities). Although well suited to conduct such research, qualitative research is inherently prone to subjective biases, for example, because publications are individually interpreted depending on personal conceptions. Despite our efforts to reduce such biases (e.g., through feedback on our results from ML experts), we cannot guarantee that we have completely eliminated them.
The analyzed publications focus on the core training process [11, 40, 48, 49, 53]. Other system components required to operate CDML systems are mostly neglected. By triangulating descriptions of CDML systems based on our coding and intense discussions with ML experts, we aimed to complete fragmented descriptions of CDML systems. Still, the CDML design toolbox may lack aspects not specifically mentioned in the analyzed publications. Similarly, a significant number of the examined publications lacked sufficient detail in their descriptions of the permissions of roles, activities, and protocols. This hindered us from describing the permissions associated with agent roles at the concept level and impeded the development of a complete service model. Instead, we developed a preliminary service model that describes how activities and protocols can be implemented.

### _Future Research_

This work presents a wide range of CDML system designs that address the different requirements of use cases. We noticed that research on CDML systems remains predominantly theoretical, with only a few real-world implementations of CDML systems (e.g., [16]). To gain a more comprehensive understanding of the advantages and limitations of CDML systems in various use cases, future research should prioritize empirical investigations of practical implementations of CDML systems. This research should place particular emphasis on real-world implications, encompassing socio-technical aspects such as human perception and acceptance. The CDML design toolbox offers a foundation for knowledge transfers within the CDML community (e.g., to develop new CDML systems) and across multiple disciplines. In the following, we describe three areas of knowledge transfer that may be particularly interesting for improving CDML systems in future research.

_Hyperparameter Optimization:_ Automated hyperparameter optimization (HPO) has become very important in the development of ML models for manifold purposes [73], such as improving ML model performance and decreasing the computations necessary to train ML models. Most automated HPO methods, such as Bayesian optimization [74, 75, 76], assume the availability of complete training data sets. This assumption is at odds with the decentralized training data management in CDML systems. Extant automated HPO methods are hardly applicable to CDML systems, which may result in under-optimized ML models trained in CDML systems [73]. The CDML design toolbox can serve as a foundation for future research to identify challenges in performing HPO in CDML systems with different designs and to develop corresponding solutions.

_Data Confidentiality:_ The exchange of interim results instead of training data does not guarantee training data confidentiality per se [77]. To protect training data confidentiality, combining CDML with other privacy-enhancing technologies (PETs), such as differential privacy and homomorphic encryption, has become promising [56, 78]. Future research should develop guidelines for how to reasonably combine the CDML paradigm with other PETs.

_Robustness:_ Agents may pursue individual goals in CDML systems. However, ensuring the accurate alignment between individual agent goals and the overarching goal of the CDML system is critical. Misalignment can have detrimental consequences, such as introducing the free-rider problem [79] and incentivizing agents to poison training data or ML models [80, 81, 82].
The free-rider problem is characterized by agents that provide subpar data while still being able to improve their ML models using interim results received from other agents. Integrating robustness measures from diverse fields into CDML systems, such as financial incentives from economics and normative principles from sociology for coordinating agent behavior [42, 82, 83, 84], could enhance the robustness of CDML systems against such challenges, for example, by anticipating malicious actions of agents. Future research should extend the CDML design toolbox to include design options that improve the robustness of CDML systems and protect ML model training from malicious agent activity.

## VII Conclusion

This work presents a CDML design toolbox that can be used to guide developers in the development of CDML system designs. Leveraging the CDML design toolbox, we developed four CDML archetypes with different key traits that can guide developers in the design of CDML systems. The CDML design toolbox is envisioned to offer a foundation for developers to design CDML systems suitable for their use cases. With our presentation of design options, we aim to accelerate the design process and the development of novel CDML systems that can cover an even wider range of use cases. During our investigation, we recognized the substantial expansion of the CDML design space through contributions from practice and research. Following federated learning systems, alternative CDML systems, such as split learning systems, assisted learning systems, and gossip learning systems, have moved into the focus of practice and research. We hope that the CDML design toolbox will support the targeted design of CDML systems suitable for use cases (e.g., by facilitating the use of the Gaia method [20]) so that training ML models on sufficient training data becomes easier for developers. Owing to the considerable attention that CDML systems have garnered in practice and research and the emergence of novel CDML concepts beyond federated learning, we encourage the advancement of the CDML design toolbox in the future.

## Acknowledgement

We thank Benjamin Sturm, Kathrin Brecker, Marc Zoller, Mikael Beyene, Richard Guse, Simon Warsinsky, and Tobias Dehling for their valuable feedback on this work. This work was supported by funding from the topic Engineering Secure Systems of the Helmholtz Association (HGF) and by KASTEL Security Research Labs.
2309.06797
Reduced Lagrange multiplier approach for non-matching coupled problems in multiscale elasticity
This paper presents a numerical method for the simulation of elastic solid materials coupled to fluid inclusions. The application is motivated by the modeling of vascularized tissues and by problems in medical imaging which target the estimation of effective (i.e., macroscale) material properties, taking into account the influence of microscale dynamics, such as fluid flow in the microvasculature. The method is based on the recently proposed Reduced Lagrange Multipliers framework. In particular, the interface between solid and fluid domains is not resolved within the computational mesh for the elastic material but discretized independently, imposing the coupling condition via non-matching Lagrange multipliers. Exploiting the multiscale properties of the problem, the resulting Lagrange multipliers space is reduced to a lower-dimensional characteristic set. We present the details of the stability analysis of the resulting method considering a non-standard boundary condition that enforces a local deformation on the solid-fluid boundary. The method is validated with several numerical examples.
Camilla Belponer, Alfonso Caiazzo, Luca Heltai
2023-09-13T08:39:40
http://arxiv.org/abs/2309.06797v1
# Reduced Lagrange multiplier approach for non-matching coupled problems in multiscale elasticity

###### Abstract

This paper presents a numerical method for the simulation of elastic solid materials coupled to fluid inclusions. The application is motivated by the modeling of vascularized tissues and by problems in medical imaging which target the estimation of effective (i.e., macroscale) material properties, taking into account the influence of microscale dynamics, such as fluid flow in the microvasculature. The method is based on the recently proposed Reduced Lagrange Multipliers framework. In particular, the interface between solid and fluid domains is not resolved within the computational mesh for the elastic material but discretized independently, imposing the coupling condition via non-matching Lagrange multipliers. Exploiting the multiscale properties of the problem, the resulting Lagrange multipliers space is reduced to a lower-dimensional characteristic set. We present the details of the stability analysis of the resulting method considering a non-standard boundary condition that enforces a local deformation on the solid-fluid boundary. The method is validated with several numerical examples.

_Keywords:_ Finite element method; linear elasticity; multiscale methods; model reduction; immersed interfaces; Lagrange multipliers

## 1 Introduction

This paper focuses on the computational multiscale modeling of elastic materials whose dynamics depend on the interaction between an elastic matrix and slender fluid-filled inclusions. The research is motivated by applications in biological tissue imaging, such as multiparametric MRI with diffusion-weighted imaging [30, 35] or magnetic resonance elastography (MRE) [38, 41, 44, 45], where image data are combined with physical models of vascularized tissue to estimate material and mechanical tissue properties. Fully resolved fluid-structure interaction models require the handling of multiple physics - the solid matrix and the fluid vasculature - and are prohibitive due to the geometrical complexity at the small scales (vascular structures) and the need to handle the fluid-solid coupling. At the same time, in the context of medical imaging, data are typically available only at the macroscale (effective tissue, with a resolution of the order of millimeters), requiring the usage of suitably upscaled tissue models.

To bridge the gap between model complexity and available data resolution, tissue models based on linearized elasticity and poroelasticity are commonly used in the context of medical imaging to characterize mechanical and constitutive _effective_ parameters. However, in selected contexts, it is necessary to use _multiscale_ surrogate models, i.e., models capable of retaining the details of the microscale vasculature, even if these are related to smaller spatial scales that are not always resolved in the available data (e.g., below the image resolution). An example is the possibility of characterizing the effect of variations of the fluid pressure along the vascular network on macroscopic biophysical parameters. A concrete example of the sensitivity of liver tissue parameters to intrinsic poroelastic properties and vascular architecture has recently been presented and discussed in the experimental study by Safraou et al. [46], investigating the influence of static portal pressure on liver stiffness (see also [39, 43]).
Multiscale methods based on homogenization and local orthogonal decomposition (see, e.g., [4, 29]) can provide suitable approaches to tackle this challenge, both for forward problems (see, e.g., [6, 14, 21] for recent applications in the context of elasticity and poroelasticity) and for inverse problems (see, e.g., [16, 22]). These frameworks can efficiently describe the dynamics across multiple scales, taking into account more general microstructure descriptions, without requiring excessive assumptions on the microstructures nor their resolution at full scale. Instead, surrogate (effective) models are obtained by solving selected realizations of microscale problems (also called _cell problems_).

This work is devoted to the efficient modeling of such cell problems in the context of multiscale elasticity and fluid-structure interaction. We describe, analyze, and validate a numerical method in which a tissue sample is modeled as a linear elastic matrix coupled to arbitrary vascular fluid structures of co-dimension two (i.e., 3D-1D or 2D-0D). The fluid-solid coupling is handled using a non-matching immersed method. The model is inspired by the approach recently presented in [26, 27], in which the coupling was implemented via a singular forcing term in the elasticity equation, imposing a Neumann-like boundary condition computed from an asymptotic approximation of a local analytical solution. We aim at extending this approach to more general boundary conditions. In particular, we consider a _local deformation_ Dirichlet boundary condition for the elastic problem, in which the coupling between small fluid vessels and solid tissue does not depend on the solid deformation at the macroscale level. This local deformation is necessary to target applications in the context of homogenization, in order to have a cell problem that can simulate the microscale dynamics and that, to a first approximation, does not depend on the larger scales. To robustly enforce this non-standard boundary condition, we extend the model proposed in [26, 27] using the _reduced_ Lagrange multipliers framework recently described by Heltai & Zunino [28].

The Lagrange multipliers (LM) method [9, 13] plays an important role in the numerical solution of partial differential equations using the finite element method, in particular in the context of coupled multiphysics and multiscale models (see, e.g., [11, 12, 20] for some recent examples). In this approach, the finite element formulation is defined based on a minimization problem equivalent to the original PDE, in which the boundary (or the coupling) conditions are imposed weakly as a set of constraints using functional spaces defined on the interfaces. The idea of mixed-dimensional methods is to reduce part of the physics to a lower-dimensional manifold on which the dynamics can be sufficiently well approximated. Such models were first analyzed in [19] in the context of diffusion, and later applied also to perfusion, porous media flows [17, 18, 31], and elasticity (e.g., [26]). Recent LM formulations for mixed-dimensional material models were proposed, e.g., in [7], considering the coupling of a three-dimensional bulk mechanical problem with one-dimensional fiber structures. Preliminary stability results for Dirichlet-Neumann coupling conditions on mixed-dimensional (3D-1D) problems using LM were recently presented in [33], and suitable preconditioning strategies are discussed in [15, 32].
The reduced LM approach addresses the dimensional reduction of the functional space of Lagrange multipliers. Namely, under the assumption of slender (cylindrical) vessels with mostly one-dimensional dynamics, the centerline is considered as a representative lower-dimensional manifold, and the dimension reduction is achieved by approximating the (infinite-dimensional) space of Lagrange multipliers on the two-dimensional vessel boundary with finitely many Fourier modes whose coefficients are functions defined on the one-dimensional centerline of the vessel. Other alternatives for applying model-order reduction in the context of multiscale and mixed-dimensional modeling have been described, for example, in [1], in which a reduced-basis approach at the macroscale has been employed to reduce the number of required solutions at the microscale. The predominance of the one-dimensional dynamics was also exploited for the definition of a hierarchical model reduction [3, 37] to efficiently compute the flow in long pipes, reducing the model along the transverse directions. A possible application of Localized Orthogonal Decomposition (LOD) methods to mixed-dimensional problems and to LM has been recently proposed in [5] for the coupling of bulk (2D) and surface (1D) problems, considering the numerical homogenization of the dynamics on the one-dimensional manifold and enforcing the resulting interface conditions with LM.

The contribution of this work is twofold. First, we extend the approach of [28] to the case of multiscale elasticity. In particular, we show that the reduced Lagrange multipliers framework provides a natural way to handle the local deformation boundary condition, and we extend the stability analysis of [28] to this case. Focusing on the case of axis-symmetric boundary conditions (deformation along the normal direction), we discuss the reduced-order formulation from the theoretical and practical points of view and present a detailed numerical validation in different examples. Next, we perform computational studies to discuss and investigate the implications of the proposed method for in silico tissue modeling, for applications in the context of numerical upscaling techniques, and for the solution of multiscale inverse problems for the estimation of tissue properties.

The rest of this paper is organized as follows. Section 2 introduces the main setting and the required notations, while Section 3 describes the considered mixed-dimensional elasticity problem. The reduced Lagrange multiplier formulation is introduced and analyzed in Section 4. The numerical results are presented and discussed in Section 5, while Section 6 draws the concluding remarks.

## 2 Preliminaries

This section introduces the setting of the mixed-dimensional model, following the general framework recently introduced in [28] for dimensional reduction in coupled problems.

### Multiscale setting

Let us consider a Lipschitz domain \(\Omega\subseteq\mathbb{R}^{d}\), containing an elastic material and a set \(V:=\cup_{i=1}^{m}V_{i}\) of (possibly disconnected) fluid-filled inclusions \(V_{i}\), \(i=1,\ldots,m\). The boundary of the fluid domain will be denoted by \(\Gamma:=\partial V\). The proposed method is built on the following geometrical and physical assumptions. Firstly, we assume that the inclusions are _slender_, i.e., that each \(V_{i}\) has two spatial dimensions along which the characteristic lengths are much smaller than along the remaining dimension.
The set \(V\) can hence be approximated geometrically by subsets of co-dimension \(2\) (a one-dimensional manifold in a three-dimensional problem, or a union of points in the two-dimensional case; see, e.g., the sketch in Figure 1). Secondly, we assume that the fluid dynamics inside \(V\) can be described, with sufficient accuracy, by a model on a lower-dimensional set \(\gamma\) with intrinsic dimension \(d-2\). We will refer to \(\gamma\) as the lower-dimensional representative domain. This setting is particularly relevant for the modeling of vascular tissues. In this case, \(V\) represents the physical space occupied by the fluid vessels, and \(\gamma\) might represent a suitable one-dimensional representation of the vascular network, on which the fluid dynamics can be sufficiently well described by a one-dimensional flow model (see, e.g., [40]). Thirdly, we introduce the following hypotheses on the structure of the inclusions (see also [28]).

**Assumption 1** (Cylindrical vessels).: _Each \(V_{i}\), \(i=1,\ldots,m\), can be written as the image of an isomorphism_

\[\hat{\Phi}_{i}:\hat{V}\to V_{i},\]

_where \(\hat{V}\) is a reference cylindrical inclusion domain with unit measure. Let \(\hat{\gamma}_{i}\) be the preimages of the \(d-2\) lower-dimensional representative domains of each inclusion \(V_{i}\), i.e., \(\hat{\gamma}_{i}=\hat{\Phi}_{i}^{-1}(\gamma_{i})\). We assume that each \(\hat{\gamma}_{i}\) is a straight line directed along the last coordinate axis in the three-dimensional case (\(d=3\)), and that it coincides with the origin in the two-dimensional case (\(d=2\))._

**Assumption 2** (Isomorphism).: _The isomorphisms \(\hat{\Phi}_{i}\), \(i=1,\ldots,m\), satisfy the following hypotheses:_

* \(\hat{\Phi}_{i}\in C^{1}\left(\,\overline{\hat{V}}\,\right)\), \(\hat{\Phi}_{i}^{-1}\in C^{1}(\overline{V_{i}})\);
* _there exist two positive constants_ \(J_{\min},J_{\max}\) _such that_ \[0<J_{\min}\leq\text{det}\left(\nabla\hat{\Phi}_{i}(\hat{\mathbf{x}})\right)\leq J_{\max},\;\forall\hat{\mathbf{x}}\in\hat{V}\,.\]

Under the above Assumption 1, the boundary of the reference inclusion domain \(\partial\hat{V}\) can be written as the tensor product of a circle times the \(d-2\) dimensional set \(\hat{\gamma}\). The tensor product structure and the isomorphism can be used to define a geometrical projection operator

\[\Pi:\;\;\Gamma\to\;\;\gamma \tag{1}\]

that maps uniquely the inclusion boundary \(\Gamma\) onto the lower-dimensional representative domain \(\gamma\). The inverse of the projection, \(\Pi^{-1}:\gamma\to\mathcal{P}(\Gamma)\), maps each point on \(\gamma\) to a suitable cross-section of the vessel boundary \(\Gamma\). As in [28], let us denote by

\[D(s):=\Pi^{-1}(s) \tag{2}\]

the preimage of the projection operator, and define \(\,\mathrm{d}D(s):=\,\mathrm{d}\mathcal{H}(\Pi^{-1}(s))\). For any \(s\in\gamma\), let \(|D(s)|\) denote the intrinsic Hausdorff measure of the set \(D(s)\). A small numerical illustration of these geometric operators is sketched below.
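To fix ideas, the following minimal sketch (our own construction, anticipating the average operator defined in (4) below) realizes \(\Pi\), the preimages \(D(s)\), and a discrete cross-sectional average for a single straight vessel aligned with the \(z\)-axis; the radius value, the grid size, and the midpoint quadrature are illustrative assumptions.

```python
import numpy as np

R = 0.05  # vessel radius (illustrative value)

def project(x):
    """Pi: maps a point x on Gamma to the coordinate s of its cross-section on gamma."""
    return x[2]  # for a straight, z-aligned vessel, s is simply the z-coordinate

def cross_section(s, n=64):
    """Discretization of the preimage D(s) = Pi^{-1}(s): a circle of radius R at height s."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.stack([R * np.cos(theta), R * np.sin(theta), np.full(n, s)], axis=1)

def average_on_section(f, s, n=64):
    """Midpoint-rule approximation of the average of f over D(s)."""
    return np.mean([f(p) for p in cross_section(s, n)], axis=0)

# Averaging the identity map over D(s) recovers the centerline point (0, 0, s):
print(average_on_section(lambda p: p, s=0.3))  # approximately [0, 0, 0.3]
```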
**Remark 1** (Example).: _In the case of a single straight vessel \(V\), denoting with \(\gamma\) its centerline, a suitable projection operator is the map that associates each cross-section \(D(s)\), orthogonal to \(\gamma\), to its center \(s\in\gamma\)._

We conclude this section by introducing the following notations for standard Sobolev spaces:

\[\boldsymbol{\mathcal{V}}_{\Omega}\equiv H^{1}_{0}(\Omega)^{d},\qquad\boldsymbol{\mathcal{V}}_{\Omega}{}^{\prime}\equiv H^{-1}(\Omega)^{d},\]
\[\boldsymbol{\mathcal{Q}}_{\Gamma}\equiv H^{-\frac{1}{2}}(\Gamma)^{d},\qquad\boldsymbol{\mathcal{Q}}_{\Gamma}{}^{\prime}\equiv H^{\frac{1}{2}}(\Gamma)^{d},\]
\[\boldsymbol{\mathcal{W}}_{\gamma}\equiv H^{\frac{1}{2}}(\gamma)^{d},\qquad\boldsymbol{\mathcal{W}}_{\gamma}{}^{\prime}\equiv H^{-\frac{1}{2}}(\gamma)^{d}\,,\]

as well as the classical trace operator:

\[\mathcal{T}:\boldsymbol{\mathcal{V}}_{\Omega}\mapsto\boldsymbol{\mathcal{Q}}_{\Gamma}{}^{\prime}\,. \tag{3}\]

### Averaged trace operator

Let \(\boldsymbol{f}:\Gamma\to\mathbb{R}^{d}\) be an absolutely integrable function on \(\Gamma\). Following the definitions given in [28] for the scalar case, we define the operator

\[(\mathcal{A}^{0}\boldsymbol{f})(s):=\frac{1}{|D(s)|}\int_{D(s)}\boldsymbol{f}\,\mathrm{d}D(s)=:\left(\fint_{D}\boldsymbol{f}\,\mathrm{d}D\right)(s),\quad s\in\gamma, \tag{4}\]

which, for each \(s\in\gamma\), computes the average of \(\boldsymbol{f}\) over the preimage \(D(s)\). Moreover, for any function \(\boldsymbol{w}:\gamma\to\mathbb{R}^{d}\), we define the extension operator

\[(\mathcal{E}^{0}\boldsymbol{w})(x):=(\boldsymbol{w}\circ\Pi)(x),\quad x\in\Gamma\,. \tag{5}\]

Figure 1: Sketch of the 3D-1D dimensional reduction: a thin vessel is approximated by its centerline, and the fluid dynamics is modeled using one-dimensional Navier-Stokes equations.

The function \(\mathcal{E}^{0}\boldsymbol{w}:\Gamma\to\mathbb{R}^{d}\) associates, to each point \(x\in\Gamma\), the value of \(\boldsymbol{w}\) at the point \(s=\Pi(x)\) on the centerline \(\gamma\).

**Lemma 1** (Boundedness of extension and average operators).: _Under the above Assumptions 1 and 2, the average and extension operators_

\[\begin{array}{rcl}\mathcal{A}^{0}:&\boldsymbol{\mathcal{Q}}_{\Gamma}{}^{\prime}&\mapsto&\boldsymbol{\mathcal{W}}_{\gamma}\\ \mathcal{E}^{0}:&\boldsymbol{\mathcal{W}}_{\gamma}&\mapsto&\boldsymbol{\mathcal{Q}}_{\Gamma}{}^{\prime}\end{array} \tag{6}\]

_are linear and bounded._

Proof.: The linearity of the operators is a direct consequence of their definitions. The boundedness of \(\mathcal{A}^{0}\) follows from the Schwarz inequality and from the tensor product structure of \(\Gamma\). The proof for the extension operator \(\mathcal{E}^{0}\) relies on defining the extension first on the reference cylindrical domain, and then exploiting the properties of the isomorphism between \(V\) and the reference cylinder. We refer to [28] for further details.

Notice that the function \(\mathcal{E}^{0}\boldsymbol{w}\) associates to each set \(D(s)=\Pi^{-1}(s)\) the constant value \(\boldsymbol{w}(s)\). The operator \(\mathcal{E}^{0}\) is thus, by construction, the right inverse of the average \(\mathcal{A}^{0}\):
\[\mathcal{A}^{0}\mathcal{E}^{0}\boldsymbol{w}=\boldsymbol{w}. \tag{7}\]

The application of the extension operator after the average operator will be called the _averaged trace operator_:

\[C\boldsymbol{u}:=\mathcal{E}^{0}\mathcal{A}^{0}\mathcal{T}\boldsymbol{u}\qquad\forall\boldsymbol{u}\in\boldsymbol{\mathcal{V}}_{\Omega}\,. \tag{8}\]

Namely, for each \(\boldsymbol{u}\), the function \(C\boldsymbol{u}\) is constant on the preimages \(D(s)\) (for each \(s\in\gamma\)) and it is equal to the average of the trace of \(\boldsymbol{u}\) over \(D(s)\).

**Remark 2**.: _In virtue of (7), the operator \(\mathcal{E}^{0}\mathcal{A}^{0}\) is a projection, i.e., it holds_

\[(\mathcal{E}^{0}\mathcal{A}^{0})^{2}\boldsymbol{q}=\mathcal{E}^{0}\mathcal{A}^{0}\boldsymbol{q},\qquad\forall\boldsymbol{q}\in\boldsymbol{\mathcal{Q}}_{\Gamma}{}^{\prime}\,.\]

_The application of the operator \(\mathcal{E}^{0}\mathcal{A}^{0}\) hence coincides with the projection on the subspace_

\[\boldsymbol{\mathcal{Q}}_{\Gamma}{}^{\prime}_{0}:=\left\{\boldsymbol{q}\in\boldsymbol{\mathcal{Q}}_{\Gamma}{}^{\prime}\ |\ \mathcal{E}^{0}\mathcal{A}^{0}\boldsymbol{q}=\boldsymbol{q}\right\}\subset H^{\frac{1}{2}}(\Gamma)^{d}\]

_of functions that are constant on each section \(D(s)\) (\(s\in\gamma\))._

## 3 The mixed-dimensional elasticity problem

In the setting introduced in Section 2, we consider the following linear elasticity problem:

\[-\operatorname{div}(\sigma(\boldsymbol{u}))=\boldsymbol{f}\qquad\text{in }\Omega\setminus V \tag{9a}\]
\[\boldsymbol{u}=0\qquad\text{on }\partial\Omega \tag{9b}\]
\[\boldsymbol{u}-C\boldsymbol{u}=\boldsymbol{g}\qquad\text{on }\Gamma=\partial V\,, \tag{9c}\]

where the Cauchy stress tensor \(\sigma\) is the usual linear elasticity tensor, defined as

\[\sigma(\boldsymbol{u}):=\mu\left(\nabla\boldsymbol{u}+\nabla\boldsymbol{u}^{T}\right)+\lambda\operatorname{div}\boldsymbol{u}\operatorname{I}, \tag{10}\]

and \(C\) is the averaged trace operator introduced in (8). The purpose of the boundary condition (9c), imposed via the operator \(C\), is to rigorously introduce the concept of _localized coupling conditions_. Namely, it models a local inflation on the vessel boundary \(\Gamma\), of Dirichlet nature, that follows the average deformation of the material without "pinning" the boundary of the inclusion to a specific position, and is thus not sensitive to variations of the solution on larger scales.

**Remark 3**.: _This type of boundary condition is used, for example, in the case of mixed-material problems in which the inclusion is free to move inside the medium (e.g., a water bubble in an elastic matrix), and should not be confused with the type of boundary condition needed for mixed-material problems with a fixed structure (e.g., reinforced concrete)._

We now extend (continuously) the solution \(\boldsymbol{u}\) to the entire domain \(\Omega\) by considering the auxiliary, fictitious, problem inside \(V\):

\[-\operatorname{div}(\sigma(\boldsymbol{u}))=\tilde{\boldsymbol{f}},\text{ in }V, \tag{11}\]

where \(\tilde{\boldsymbol{f}}\in L^{2}(\Omega)^{d}\) is an arbitrary extension of \(\boldsymbol{f}\) to the entire \(\Omega\), with boundary conditions that impose continuity of \(\boldsymbol{u}\) across \(\Gamma\).
Testing equations (9a) and (11) with an arbitrary smooth function \(\boldsymbol{v}\in C^{\infty}_{c}(\Omega)\) and integrating by parts, we obtain a weak form of the extended problem as

\[(\sigma(\boldsymbol{u}),\nabla(\boldsymbol{v}))_{\Omega}+\left\langle\llbracket\sigma(\boldsymbol{u})\rrbracket\cdot\boldsymbol{n},\boldsymbol{v}\right\rangle_{\Gamma}=(\boldsymbol{f},\boldsymbol{v})_{\Omega},\ \forall\boldsymbol{v}\in H^{1}_{0}(\Omega)^{d}, \tag{12}\]

where

\[\llbracket\sigma(\boldsymbol{u})\rrbracket:=\sigma(\boldsymbol{u})^{+}-\sigma(\boldsymbol{u})^{-}\]

indicates the jump of \(\sigma(\boldsymbol{u})\) along the outgoing normal direction to \(\Gamma=\partial V\). Such a procedure is standard in the literature on fictitious domain methods (see, e.g., [23, 24]), and allows one to efficiently solve Dirichlet problems on complex domains, possibly evolving in time, by embedding them in simpler, fixed, domains. With a slight abuse of notation, in what follows we will not distinguish between \(\boldsymbol{f}\) and its extension \(\tilde{\boldsymbol{f}}\).

Next, we rewrite (12) imposing the condition (9c) through a Lagrange multiplier. Namely, we seek \(\boldsymbol{u}\in\boldsymbol{\mathcal{V}}_{\Omega}\), \(\boldsymbol{\lambda}\in\boldsymbol{\mathcal{Q}}_{\Gamma}\), such that

\[(\sigma(\boldsymbol{u}),\nabla(\boldsymbol{v}))_{\Omega}+\left\langle(\mathcal{T}^{T}-C^{T})\boldsymbol{\lambda},\boldsymbol{v}\right\rangle_{\Gamma}=(\boldsymbol{f},\boldsymbol{v})_{\Omega}, \tag{13a}\]
\[\left\langle\mathcal{T}\,\boldsymbol{u}-C\boldsymbol{u},\boldsymbol{q}\right\rangle_{\Gamma}=\left\langle\boldsymbol{g},\boldsymbol{q}\right\rangle_{\Gamma}\,, \tag{13b}\]

for all \(\boldsymbol{v}\in\boldsymbol{\mathcal{V}}_{\Omega}\) and \(\boldsymbol{q}\in\boldsymbol{\mathcal{Q}}_{\Gamma}\). Let us define the following operators:

\[A:\boldsymbol{\mathcal{V}}_{\Omega}\to\boldsymbol{\mathcal{V}}_{\Omega}{}^{\prime},\qquad\left\langle A\boldsymbol{u},\boldsymbol{v}\right\rangle:=(\sigma(\boldsymbol{u}),\nabla(\boldsymbol{v}))_{\Omega}\qquad\forall\boldsymbol{u},\boldsymbol{v}\in\boldsymbol{\mathcal{V}}_{\Omega}, \tag{14}\]
\[B:\boldsymbol{\mathcal{V}}_{\Omega}\to\boldsymbol{\mathcal{Q}}_{\Gamma}{}^{\prime},\qquad\left\langle B\boldsymbol{u},\,\boldsymbol{q}\right\rangle:=\left\langle(\mathcal{T}-C)\boldsymbol{u},\,\boldsymbol{q}\right\rangle_{\Gamma}\qquad\forall\boldsymbol{u}\in\boldsymbol{\mathcal{V}}_{\Omega},\ \forall\boldsymbol{q}\in\boldsymbol{\mathcal{Q}}_{\Gamma}, \tag{15}\]
\[B^{T}:\boldsymbol{\mathcal{Q}}_{\Gamma}\to\boldsymbol{\mathcal{V}}_{\Omega}{}^{\prime},\qquad\left\langle B^{T}\boldsymbol{q},\,\boldsymbol{v}\right\rangle:=\left\langle\boldsymbol{q}-(\mathcal{E}^{0}\mathcal{A}^{0})^{T}\boldsymbol{q},\,\mathcal{T}\,\boldsymbol{v}\right\rangle_{\Gamma}\qquad\forall\boldsymbol{v}\in\boldsymbol{\mathcal{V}}_{\Omega},\ \forall\boldsymbol{q}\in\boldsymbol{\mathcal{Q}}_{\Gamma}. \tag{16}\]

Let us now denote the kernel of \(B^{T}\) by

\[\boldsymbol{\mathcal{Q}}_{\Gamma}^{0}:=\ker(B^{T})=\left\{\boldsymbol{q}\in\boldsymbol{\mathcal{Q}}_{\Gamma}\mid\left\langle\boldsymbol{q}-(\mathcal{E}^{0}\mathcal{A}^{0})^{T}\boldsymbol{q},\mathcal{T}\boldsymbol{v}\right\rangle_{\Gamma}=0,\ \forall\boldsymbol{v}\in\boldsymbol{\mathcal{V}}_{\Omega}\right\}=\left\{\boldsymbol{q}\in\boldsymbol{\mathcal{Q}}_{\Gamma}\mid\left\langle\boldsymbol{q},\mathcal{T}\boldsymbol{v}\right\rangle_{\Gamma}=\left\langle\boldsymbol{q},C\boldsymbol{v}\right\rangle_{\Gamma},\ \forall\boldsymbol{v}\in\boldsymbol{\mathcal{V}}_{\Omega}\right\}\,. \tag{17}\]

In order for the weak formulation (13) to be well-posed, the space of the Lagrange multipliers shall be restricted to \(\boldsymbol{\mathcal{Q}}_{\Gamma}\setminus\boldsymbol{\mathcal{Q}}_{\Gamma}^{0}\).
Using the notations (14), (15), and (16), we thus consider the following problem: given \(\boldsymbol{f}\in\boldsymbol{\mathcal{V}}_{\Omega}{}^{\prime}\) and \(\boldsymbol{g}\in\boldsymbol{\mathcal{Q}}_{\Gamma}{}^{\prime}\), find \(\boldsymbol{u}\in\boldsymbol{\mathcal{V}}_{\Omega}\), \(\boldsymbol{\lambda}\in\boldsymbol{\mathcal{Q}}_{\Gamma}\setminus\boldsymbol{\mathcal{Q}}_{\Gamma}^{0}\) such that

\[\left\langle A\boldsymbol{u},\boldsymbol{v}\right\rangle+\left\langle B^{T}\boldsymbol{\lambda},\boldsymbol{v}\right\rangle=\left\langle\boldsymbol{f},\boldsymbol{v}\right\rangle\qquad\forall\boldsymbol{v}\in\boldsymbol{\mathcal{V}}_{\Omega} \tag{18a}\]
\[\left\langle B\boldsymbol{u},\boldsymbol{q}\right\rangle=\left\langle\boldsymbol{g},\boldsymbol{q}\right\rangle\qquad\forall\boldsymbol{q}\in\boldsymbol{\mathcal{Q}}_{\Gamma}\setminus\boldsymbol{\mathcal{Q}}_{\Gamma}^{0}\,. \tag{18b}\]

**Theorem 1** (Well-posedness).: _Assume that Assumptions 1 and 2 are satisfied. Then, problem (18) admits a unique solution._

Proof.: The operator \(A:\boldsymbol{\mathcal{V}}_{\Omega}\mapsto\boldsymbol{\mathcal{V}}_{\Omega}{}^{\prime}\) is symmetric. From the Poincaré inequality, it follows that it satisfies the inf-sup condition, i.e., there exists a positive real number \(\alpha>0\) such that

\[\inf_{\boldsymbol{u}\in\boldsymbol{\mathcal{V}}_{\Omega}}\sup_{\boldsymbol{v}\in\boldsymbol{\mathcal{V}}_{\Omega}}\frac{\left\langle A\boldsymbol{u},\boldsymbol{v}\right\rangle}{\left\|\boldsymbol{u}\right\|_{\boldsymbol{\mathcal{V}}_{\Omega}}\left\|\boldsymbol{v}\right\|_{\boldsymbol{\mathcal{V}}_{\Omega}}}\geq\alpha>0. \tag{19}\]

The operator \(B\) is bounded, since it is the sum of the trace operator, which is linear and bounded [42], and of the operator \(C\), which is linear and bounded due to Lemma 1. Moreover, since \(B^{T}\) is injective on the space \(\boldsymbol{\mathcal{Q}}_{\Gamma}\setminus\boldsymbol{\mathcal{Q}}_{\Gamma}^{0}\), it follows that there exists a positive real number \(\beta>0\) such that

\[\inf_{\boldsymbol{q}\in\boldsymbol{\mathcal{Q}}_{\Gamma}\setminus\boldsymbol{\mathcal{Q}}_{\Gamma}^{0}}\sup_{\boldsymbol{v}\in\boldsymbol{\mathcal{V}}_{\Omega}}\frac{\left\langle B\boldsymbol{v},\boldsymbol{q}\right\rangle}{\left\|\boldsymbol{v}\right\|_{\boldsymbol{\mathcal{V}}_{\Omega}}\left\|\boldsymbol{q}\right\|_{\boldsymbol{\mathcal{Q}}_{\Gamma}}}\geq\beta>0, \tag{20}\]

therefore \(B\) admits a continuous right inverse, and the well-posedness of the continuous problem follows from the standard theory of saddle point problems [10].

**Corollary 1** (Stress evaluation).: _From the extended problem (12) and from (13), it follows that the Lagrange multiplier satisfies_

\[\left\langle(\mathcal{T}^{T}-C^{T})\boldsymbol{\lambda},\boldsymbol{v}\right\rangle=\left\langle\llbracket\sigma\rrbracket\cdot\boldsymbol{n},\boldsymbol{v}\right\rangle_{\Gamma},\;\forall\boldsymbol{v}\in\boldsymbol{\mathcal{V}}_{\Omega}\,. \tag{21}\]

The relation (21) can be used to recover the solid stresses on the fluid boundary from the Lagrange multiplier.

## 4 Reduced Lagrange multiplier formulation

In this section, we apply to problem (18) the reduced Lagrange multiplier framework recently introduced in [28], adapting the original formulation to the case of linear elasticity in \(\mathbb{R}^{d}\) and taking into account the local deformation boundary condition (9c). The approach is based on constructing a lower-dimensional approximation of the space of Lagrange multipliers, adapted to the geometrical and physical settings of the problem and exploiting the characteristic structure of the inclusions. The reduction only acts on the boundary condition, i.e., on the operator \(B\), leaving the elastic part of the problem unchanged.
### Reduced basis functions

Let us consider a set \(\Phi^{N}:=\{\varphi_{i}:\Gamma\to\mathbb{R}\}_{i=0}^{N}\), with \(\varphi_{i}\in H^{1}(\Gamma)\cap C^{1}(\overline{\Gamma})\) for \(i=1,\ldots,N\), such that, for any \(s\in\gamma\),

\[\int_{D(s)}\varphi_{i}\varphi_{j}\,\mathrm{d}D(s)=0,\;\text{for}\;i\neq j \tag{22}\]

(\(\varphi_{i}\) and \(\varphi_{j}\) are orthogonal with respect to the standard \(L^{2}\) product in \(D(s)\) for \(i\neq j\)), and

\[\left\|\varphi_{i}\right\|_{L^{2}(D(s))}=\sqrt{|D(s)|}\,. \tag{23}\]

Here, \(D(s)\) is always to be understood as the preimage of the projection onto \(\gamma\) according to (2), and the coordinate \(s\) may be omitted when clear from the context. An example of a set satisfying these conditions is given in [28]. Let \(\varphi_{0}\) be the constant function equal to \(1\), which trivially satisfies (23). Following [28], for each \(i\geq 0\), we define the weighted average and extension operators

\[\mathcal{A}^{i}:\ \boldsymbol{\mathcal{Q}}_{\Gamma}{}^{\prime}\rightarrow\boldsymbol{\mathcal{W}}_{\gamma},\qquad\boldsymbol{q}\mapsto\fint_{D}\varphi_{i}\,\boldsymbol{q}\,\mathrm{d}D, \tag{24a}\]
\[\mathcal{E}^{i}:\ \boldsymbol{\mathcal{W}}_{\gamma}\rightarrow\boldsymbol{\mathcal{Q}}_{\Gamma}{}^{\prime},\qquad\boldsymbol{w}\mapsto\varphi_{i}\,\boldsymbol{w}\circ\Pi. \tag{24b}\]

Notice that, for \(i=0\), the definitions (24) are consistent with the definitions (4) and (5) of \(\mathcal{A}^{0}\) and \(\mathcal{E}^{0}\), respectively, given in Section 2. Moreover, using density and continuity arguments, the boundedness of \(\mathcal{E}^{0}\) and \(\mathcal{A}^{0}\) (Lemma 1) can be extended also to the operators defined in (24), for \(i\geq 0\). We refer to [28] and [34, Corollary 2.2] for details. Using (22) and (23), one obtains a generalization of property (7), i.e., \(\forall\boldsymbol{w}\in\boldsymbol{\mathcal{W}}_{\gamma}\) it holds

\[\mathcal{A}^{i}\mathcal{E}^{j}\boldsymbol{w}=\fint_{D}\varphi_{i}\varphi_{j}\,\boldsymbol{w}\circ\Pi\,\mathrm{d}D=\boldsymbol{w}\fint_{D}\varphi_{i}\varphi_{j}\,\mathrm{d}D=\delta^{ij}\boldsymbol{w}, \tag{25}\]

where \(\delta^{ij}\) is the Kronecker delta. Moreover, applying the extension \(\mathcal{E}^{i}\) after the average \(\mathcal{A}^{i}\), for a given \(i\geq 0\), corresponds to the projection on \((\mathrm{Span}\{\varphi_{i}\})^{d}\):

\[\mathcal{E}^{i}\mathcal{A}^{i}\boldsymbol{q}\in(\mathrm{Span}\{\varphi_{i}\})^{d}\subset\boldsymbol{\mathcal{Q}}_{\Gamma}{}^{\prime}\qquad\forall\boldsymbol{q}\in\boldsymbol{\mathcal{Q}}_{\Gamma}{}^{\prime}.\]

In particular, the space \((\mathrm{Span}\{\varphi_{0}\})^{d}\) consists of the functions that are constant on each \(D(s)\) (for each \(s\in\gamma\)), i.e.,

\[(\mathrm{Span}\{\varphi_{0}\})^{d}=\boldsymbol{\mathcal{Q}}_{\Gamma}^{0\,\prime}=\left\{\mathbf{p}\in\boldsymbol{\mathcal{Q}}_{\Gamma}{}^{\prime}\mid\,\mathcal{E}^{0}\mathcal{A}^{0}\mathbf{p}=\mathbf{p}\right\}\]

**Remark 4**.: _From (25) it follows that, for any \(i\geq 0\), the operator \(\mathcal{A}^{i}\) is surjective:_
\[\forall\boldsymbol{w}\in\boldsymbol{\mathcal{W}}_{\gamma}\,,\ \exists\,\boldsymbol{q}_{\boldsymbol{w}}:=\mathcal{E}^{i}\boldsymbol{w}\in\boldsymbol{\mathcal{Q}}_{\Gamma}{}^{\prime}\ \text{s.t.}\ \mathcal{A}^{i}\boldsymbol{q}_{\boldsymbol{w}}=\boldsymbol{w}\,.\]

Using the reduced basis, the extension, and the average operators introduced in Section 4.1, we aim at defining a reduced formulation of problem (18) via a reduction operator from the _full_ space \(\boldsymbol{\mathcal{Q}}_{\Gamma}\) of functions defined on the inclusion boundary onto a _reduced_ space defined on the lower-dimensional representative domain \(\gamma\). To this purpose, we introduce the transposed operator

\[R^{T}:(\boldsymbol{\mathcal{W}}_{\gamma})^{N}\rightarrow\mathrm{Span}\{\varphi_{1},\ldots,\varphi_{N}\}\subset\boldsymbol{\mathcal{Q}}_{\Gamma}{}^{\prime},\qquad(\boldsymbol{w}_{1},\ldots,\boldsymbol{w}_{N})\mapsto\sum_{i=1}^{N}\mathcal{E}^{i}\boldsymbol{w}_{i}\,. \tag{26}\]

For any \(\overline{\boldsymbol{w}}:=(\boldsymbol{w}_{1},\ldots,\boldsymbol{w}_{N})\in(\boldsymbol{\mathcal{W}}_{\gamma})^{N}\) and \(\boldsymbol{g}\in\boldsymbol{\mathcal{Q}}_{\Gamma}{}^{\prime}\), the duality on \(\Gamma\) reads:

\[\left\langle R^{T}\overline{\boldsymbol{w}},\boldsymbol{g}\right\rangle_{\Gamma}=\int_{\Gamma}\boldsymbol{g}\sum_{i=1}^{N}\mathcal{E}^{i}\boldsymbol{w}_{i}\,\mathrm{d}\Gamma=\int_{\Gamma}\boldsymbol{g}\sum_{i=1}^{N}\varphi_{i}\left(\boldsymbol{w}_{i}\circ\Pi\right)\,\mathrm{d}\Gamma=\sum_{i=1}^{N}\int_{\gamma}\int_{D}\boldsymbol{g}\,\varphi_{i}\left(\boldsymbol{w}_{i}\circ\Pi\right)\,\mathrm{d}D\,\mathrm{d}s=\sum_{i=1}^{N}\int_{\gamma}\boldsymbol{w}_{i}\int_{D}\boldsymbol{g}\,\varphi_{i}\,\mathrm{d}D\,\mathrm{d}s=\sum_{i=1}^{N}\left\langle\boldsymbol{w}_{i},\int_{D}\boldsymbol{g}\,\varphi_{i}\,\mathrm{d}D\right\rangle_{\gamma}. \tag{27}\]

The reduced formulation of problem (18) can now be written as: given \(\boldsymbol{f}\in\boldsymbol{\mathcal{V}}_{\Omega}{}^{\prime}\) and \(\boldsymbol{g}\in\boldsymbol{\mathcal{Q}}_{\Gamma}{}^{\prime}\), find \(\boldsymbol{u}\in\boldsymbol{\mathcal{V}}_{\Omega}\), \(\boldsymbol{\Lambda}\in(\boldsymbol{\mathcal{W}}_{\gamma})^{N}\) such that

\[\left\langle A\boldsymbol{u},\boldsymbol{v}\right\rangle+\left\langle B^{T}R^{T}\boldsymbol{\Lambda},\boldsymbol{v}\right\rangle=\left\langle\boldsymbol{f},\boldsymbol{v}\right\rangle\qquad\forall\boldsymbol{v}\in\boldsymbol{\mathcal{V}}_{\Omega}\,, \tag{28a}\]
\[\left\langle RB\boldsymbol{u},\overline{\boldsymbol{w}}\right\rangle=\left\langle R\boldsymbol{g},\overline{\boldsymbol{w}}\right\rangle\qquad\forall\overline{\boldsymbol{w}}\in(\boldsymbol{\mathcal{W}}_{\gamma})^{N}\,. \tag{28b}\]

The terms in (28b) can be defined using (27). In particular, we obtain

\[\left\langle RB\boldsymbol{u},\overline{\boldsymbol{w}}\right\rangle=\sum_{i=1}^{N}\left\langle\boldsymbol{w}_{i},\int_{D}\varphi_{i}\left(\mathcal{T}\boldsymbol{u}-C\boldsymbol{u}\right)\,\mathrm{d}D\right\rangle_{\gamma}. \tag{29}\]

### Stability analysis

Theorem 1 shows that the full-dimensional formulation (18) is well-posed only if the space of Lagrange multipliers is properly chosen. As will be shown in this section, the advantage of the reduction operator is not only the dimensional reduction, but also the fact that the resulting space can be naturally defined so as to ensure the well-posedness of the resulting formulation.

**Lemma 2**.: _The operator \(R^{T}:(\boldsymbol{\mathcal{W}}_{\gamma})^{N}\to\mathrm{Span}\{\varphi_{1},\ldots,\varphi_{N}\}\) is injective._

Proof.: The thesis follows from the orthonormality of the functions \(\varphi_{1},\ldots,\varphi_{N}\) and from the definition of \(R^{T}\).
**Lemma 3**.:

\[\left\langle R^{T}\overline{\boldsymbol{w}},C\boldsymbol{v}\right\rangle_{\Gamma}=0\quad\forall\ \overline{\boldsymbol{w}}\in(\boldsymbol{\mathcal{W}}_{\gamma})^{N},\ \forall\ \boldsymbol{v}\in\boldsymbol{\mathcal{V}}_{\Omega}. \tag{30}\]

Proof.: The lemma follows by observing that \(R^{T}\overline{\boldsymbol{w}}\in\mathrm{Span}\{\varphi_{1},\ldots,\varphi_{N}\}\) and that \(\mathcal{E}^{0}\mathcal{A}^{0}\mathcal{T}\boldsymbol{v}\in\mathrm{Span}\{\varphi_{0}\}\).

**Lemma 4**.: _The operator \((RB)^{T}\) is injective._

Proof.: Let \(\overline{\boldsymbol{w}}\in(\boldsymbol{\mathcal{W}}_{\gamma})^{N}\) be such that \(\overline{\boldsymbol{w}}\in\ker(B^{T}R^{T})\). It follows that \(R^{T}\overline{\boldsymbol{w}}\in\ker(B^{T})\), i.e. (see Equation (17)),

\[\left\langle R^{T}\overline{\boldsymbol{w}},\boldsymbol{v}\right\rangle_{\Gamma}=\left\langle R^{T}\overline{\boldsymbol{w}},\mathcal{E}^{0}\mathcal{A}^{0}\mathcal{T}\boldsymbol{v}\right\rangle_{\Gamma}\quad\forall\boldsymbol{v}\in\boldsymbol{\mathcal{V}}_{\Omega}\,. \tag{31}\]

Since the right-hand side in (31) vanishes (Lemma 3), one obtains

\[\left\langle R^{T}\overline{\boldsymbol{w}},\boldsymbol{v}\right\rangle_{\Gamma}=0,\,\forall\boldsymbol{v}\in\boldsymbol{\mathcal{V}}_{\Omega},\]

and hence \(R^{T}\overline{\boldsymbol{w}}=0\). From the injectivity of \(R^{T}\) it follows that \(\overline{\boldsymbol{w}}=0\).

Using the previous results, the next theorem ensures the well-posedness of the reduced formulation.

**Theorem 2** (Well-posedness of the reduced problem).: _Assume that Assumptions 1 and 2 are satisfied. Then, the operator \(RB:\boldsymbol{\mathcal{V}}_{\Omega}\mapsto(\boldsymbol{\mathcal{W}}_{\gamma}^{\prime})^{N}\) satisfies the inf-sup condition, i.e., there exists a positive real number \(\beta_{R}>0\) such that_

\[\inf_{\overline{\boldsymbol{w}}\in(\boldsymbol{\mathcal{W}}_{\gamma})^{N}}\sup_{\boldsymbol{v}\in\boldsymbol{\mathcal{V}}_{\Omega}}\frac{\left\langle RB\boldsymbol{v},\overline{\boldsymbol{w}}\right\rangle}{\left\|\boldsymbol{v}\right\|_{\boldsymbol{\mathcal{V}}_{\Omega}}\left\|\overline{\boldsymbol{w}}\right\|_{(\boldsymbol{\mathcal{W}}_{\gamma})^{N}}}\geq\beta_{R}>0. \tag{32}\]

Proof.: Firstly, let us observe that the operator \(RB\), defined in (29), is bounded, as it is a composition of the trace operator and of bounded linear operators. From Assumptions 1 and 2, it follows also that the operator \((RB)^{T}\) is linear and bounded. Since \((RB)^{T}\) is also injective (Lemma 4), it follows that \(RB\) satisfies an inf-sup condition, and the result follows from the standard saddle-point theory [10].

### Axis-symmetric deformation

In this section, we discuss in more detail the case of coupling conditions modeling an axis-symmetric deformation of the vessel wall, i.e., resulting in a source term in (9) of the form

\[\boldsymbol{g}(x)=g_{\gamma}\boldsymbol{n}(x), \tag{33}\]

directed along the normal \(\boldsymbol{n}\) to the interface \(\Gamma\), where \(g_{\gamma}:\gamma\to\mathbb{R}\) denotes the inflation or deflation of the vessel (as mentioned in Section 3). Let us consider a local reference system of cylindrical coordinates \((\rho(s),\theta(s),s)\) along \(\gamma\), for each point \(s\in\gamma\), and let us choose the reduced basis \(\hat{\varphi}_{i}\in\boldsymbol{\mathcal{Q}}_{\Gamma}\) given by

\[\hat{\varphi}_{0}(s,\rho(s),\theta(s))=\sqrt{2}, \tag{34}\]
\[\hat{\varphi}_{2k+1}(s,\rho(s),\theta(s))=\sqrt{2(k+1)\pi^{k}}\,\rho^{k}\cos((k+1)\theta),\ \ k=0,1,\ldots, \tag{35}\]
\[\hat{\varphi}_{2k+2}(s,\rho(s),\theta(s))=\sqrt{2(k+1)\pi^{k}}\,\rho^{k}\sin((k+1)\theta),\ \ k=0,1,\ldots. \tag{36}\]

These functions fulfil assumptions (22) and (23); a numerical sanity check of these conditions for the first angular modes is sketched below.
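As a quick numerical sanity check (our own, not part of the original analysis), the snippet below verifies the orthogonality (22) and the normalization (23) for the angular parts of the first modes on a single cross-section; the uniform angular grid, the restriction to the angular dependence, and the unit normalization of the constant mode are simplifying assumptions.

```python
import numpy as np

n = 2048
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)

# Angular parts of the first reduced basis functions on one cross-section D(s)
phi = [
    np.ones(n),                    # constant mode, normalized so that its mean square is 1
    np.sqrt(2.0) * np.cos(theta),  # first non-constant mode
    np.sqrt(2.0) * np.sin(theta),  # second non-constant mode
]

# (22)-(23): the cross-sectional averages of phi_i * phi_j form the identity matrix
gram = np.array([[np.mean(p_i * p_j) for p_j in phi] for p_i in phi])
print(np.allclose(gram, np.eye(3), atol=1e-12))  # True
```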
Moreover, the first two non-constant modes (\(\hat{\varphi}_{1}\) and \(\hat{\varphi}_{2}\)) correspond to the local coordinates of the normal vector on \(\Gamma\), i.e., \[\mathbf{n}(\mathbf{x})=\left[\begin{array}{c}\varphi_{1}\left(s,\rho(s),\theta(s)\right)\\ \varphi_{2}\left(s,\rho(s),\theta(s)\right)\end{array}\right], \tag{37}\] for all \(\mathbf{x}\in\Gamma\), with \(s=\Pi(\mathbf{x})\). Since the source term (33) is directed along the normal to the interface and depends only on the coordinate on \(\gamma\), \(\mathbf{g}\) can be written as an element of \(\mathrm{Span}\{\varphi_{1},\varphi_{2}\}\). In other words, for \(\overline{\mathbf{w}}=(\mathbf{w}_{1},\mathbf{w}_{2},\ldots,\mathbf{w}_{N})\in(\mathbf{\mathcal{W}}_{\gamma})^{N}\), the right-hand side \[\left\langle R\mathbf{g},\overline{\mathbf{w}}\right\rangle=\left\langle\mathbf{g},R^{T}\overline{\mathbf{w}}\right\rangle\] depends only on \(\mathbf{w}_{1}\) and \(\mathbf{w}_{2}\). From the practical point of view, this observation allows us to write the reduced source term in (28b) in the form \[R\mathbf{g}=\left[\left[\begin{array}{c}g_{\gamma}\\ 0\end{array}\right],\;\left[\begin{array}{c}0\\ g_{\gamma}\end{array}\right],\;0,\;...,\;0\right]\,. \tag{38}\]

**Remark 5**.: _Let \(\mathbf{q}\in\mathcal{E}^{0}(\mathbf{\mathcal{W}}_{\gamma})\subset\mathbf{\mathcal{Q}}_{\Gamma}{}^{\prime}\) be an axis-symmetric test function whose values depend only on the position on the lower dimensional manifold \(\gamma\), and let \(\mathbf{w}=(w_{x},w_{y})\) be such that \(\mathbf{q}=\mathcal{E}^{0}(\mathbf{w})\). From the definition (28b) one obtains that_ \[\left\langle\mathbf{g},\mathbf{q}\right\rangle_{\Gamma}=\left\langle R\mathbf{g},\overline{\mathbf{w}}\right\rangle\] _where \(\overline{\mathbf{w}}=((w_{x},0),(0,w_{y}))\in\mathbf{\mathcal{W}}_{\gamma}{}^{2}\)._

_Hence, a two-dimensional reduced-order Lagrange multiplier space (i.e., \(N=2\)) is optimal for axis-symmetric problems (as in the case of a normal source), in the sense that axis-symmetric test functions in the original space can be mapped exactly onto reduced ones._

Based upon these observations, in the context of mixed-dimensional modeling, the numerical results in Section 5 will mostly focus on the case \(N=2\). However, it is important to mention that although the normal source is axis-symmetric, this does not hold in general for the displacement solution, for example, in the presence of multiple inclusions or general geometries. The numerical test presented in Section 5.3 will focus on the error committed when omitting the extra modes.

## 5 Numerical results

This section is dedicated to the numerical validation of the reduced Lagrange multiplier approach: three test cases are analysed to assess the stability of the method and to investigate the role and functioning of the Lagrange multipliers. A fourth example simulates the effective material behaviour. First, we monitor the convergence rate of the method on a simplified case where the analytical solution is known; second, we compare our immersed version of the boundary condition to the same condition on a domain with physical holes; third, we investigate the role of the modes in relation to the accuracy of the method. The numerical simulations presented in this section have been obtained using the finite element library deal.II [8]. Visualizations have been created with ParaView [2] (version 5.9).
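Before turning to the test cases, the following minimal sketch complements Section 4.3: it verifies numerically that a purely normal source activates only the first two non-constant modes, mirroring the structure of (38). It assumes a circular cross-section and plain (unnormalized) trigonometric modes as an illustrative stand-in for the basis (34)-(36).

```python
import numpy as np

# Circular cross-section D parametrized by theta; normal n = (cos t, sin t).
n_quad = 400
theta = np.linspace(0.0, 2.0 * np.pi, n_quad, endpoint=False)
w = 2.0 * np.pi / n_quad
g_gamma = 0.1                                   # constant normal datum
g = g_gamma * np.stack([np.cos(theta), np.sin(theta)])   # g = g_gamma * n

modes = [np.ones(n_quad),                       # constant mode
         np.cos(theta), np.sin(theta),          # modes matching n's components
         np.cos(2 * theta), np.sin(2 * theta)]  # higher modes

# (R g)_i = integral over D of g * phi_i, computed componentwise.
Rg = np.array([w * (g * phi).sum(axis=1) for phi in modes])
np.set_printoptions(precision=3, suppress=True)
print(Rg)
# Only rows 1 and 2 are nonzero: (pi*g_gamma, 0) and (0, pi*g_gamma), i.e.,
# R g = [(g, 0), (0, g), 0, ..., 0] up to the normalization of the modes.
```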
### Two-dimensional axis-symmetric problem

To validate the numerical method, the first example consists of the special case of a 2D circular domain of radius \(R\), containing a single circular inclusion of radius \(r_{i}\) (with \(r_{i}\ll R\)) located at its center. Imposing a homogeneous Dirichlet boundary condition on the outer radius and the normal Dirichlet datum \(\bar{u}\) at the inclusion boundary, problem (9) admits an analytical solution of the form [26] \[u_{r}=c_{2}r+\frac{c_{1}}{r}\hskip 28.452756pt\mbox{with}\hskip 28.452756pt\left\{\begin{array}{c}c_{2}=\frac{-r_{i}\bar{u}}{R^{2}-r_{i}^{2}}\\ c_{1}=\frac{r_{i}\bar{u}R^{2}}{R^{2}-r_{i}^{2}}\,,\end{array}\right. \tag{39}\] where \(\bar{u}\) is the normal Dirichlet data on the inclusion boundary. The formulation (28) has been solved with piecewise linear finite elements for the displacement and with different values of the dimension \(N\) of the reduced Lagrange multiplier space. An example of a numerical solution is depicted in Figure 2, while the convergence study is summarized in Figure 3. Using global mesh refinement, we obtain order \(0.5\) convergence in \(H^{1}\) and \(1.5\) in \(L^{2}\), as is to be expected for the immersed method (see [28]). Adaptive local refinement allows us to recover optimal convergence rates.

Figure 2: Example 1: Axis-symmetric problem with \(R=1\), \(r_{i}=0.2\), and \(\bar{u}=0.1\). (a) Numerical solution with \(N=2\). (b) \(L^{2}\) errors on a cross-section through the lines \(y=x\) (dashed) and \(y=-x\) (dotted).

Figure 3: Example 1: Axis-symmetric problem with \(R=1\), \(r_{i}=0.2\), and \(\bar{u}=0.1\). Convergence rates for the case \(N=2\) with local and global refinements.

### Two-dimensional domain with multiple inclusions

The purpose of the second test is to analyze the results of the model in the presence of multiple, closely spaced, interacting inclusions. In this case, an analytical solution is not available, although the solution found in (39) could be used as an approximation in the proximity of the immersed boundary. Hence, for the validation of the results on the whole domain, we use the numerical solution obtained by discretizing the inclusion interface within the computational mesh and imposing a boundary condition of the form \[\mathbf{u}\cdot\mathbf{n}=\overline{\mathbf{u}}\cdot\mathbf{n}\hskip 28.452756pt\mbox{on $\Gamma$}\,. \tag{40}\] The condition has been imposed using the non-zero flux condition on the boundary \(\Gamma\) provided in [8]. Figure 4 shows the numerical solution obtained with the reduced formulation (\(N=2\)) in the case of four inclusions placed symmetrically around the center of a square domain. The agreement with the numerical solution in the fully discretized case is confirmed in Figure 5.

### Effect of the higher-order modes

As observed in Section 4.3, imposing a coupling as a normal deformation on the inclusion boundary, whose magnitude only depends on the position along the representative manifold \(\gamma\), yields a local forcing term that can be naturally written in the 2-dimensional reduced-order space (i.e., \(N=2\)). However, in general, the role of the modes depends on the case analyzed. While \(N=2\) is optimal at the _local_ level and allows us, in the single-inclusion setting, to obtain the desired rate of convergence, these conclusions cannot be generalized in the presence of multiple inclusions, where the overall domain is no longer axis-symmetric, as underlined for the equivalent scalar problem in [28].
Notice that, in this case, the forcing term can no longer be exactly represented in the reduced Lagrange multiplier space of dimension \(N=2\). The purpose of this example is thus to investigate the impact of limiting the dimension of the space to \(N=2\). For this purpose, we consider \(m\) inclusions located close to each other, and compute the resulting stresses via Equation (21) (i.e., as a function of the reduced Lagrange multipliers) for different geometrical parameters (radii of the inclusions) and dimension \(N\). Let \(N\) be the number of considered modes, which we take to be the same for each inclusion, and let us denote with \((\boldsymbol{\mathcal{W}}_{\gamma i}{}^{\prime})^{N}\) the reduced space for the \(i\)-th inclusion spanned by \(N\) modes. We will then denote with \[\boldsymbol{\mathcal{W}}_{\gamma}{}^{\prime}:=\prod_{i=1}^{m}(\boldsymbol{\mathcal{W}}_{\gamma i}{}^{\prime})^{N}\] the space of Lagrange multipliers for all inclusions. We focus on a two-dimensional spatial setting (\(d=2\)). In this case, \(\boldsymbol{\mathcal{W}}_{\gamma i}{}^{\prime}=\mathbb{R}^{d}\) and \(\boldsymbol{\mathcal{W}}_{\gamma}{}^{\prime}\) has dimension \(m\times(d\times N)\). In the following discussion, let us denote with \[\boldsymbol{\Lambda}^{(i)}=\begin{bmatrix}\boldsymbol{\Lambda}_{1}^{(i)}&\cdots&\boldsymbol{\Lambda}_{N}^{(i)}\end{bmatrix}\in(\mathbb{R}^{d})^{N}\] the degrees of freedom corresponding to the Lagrange multipliers of the \(i\)-th inclusion, and with \(\widehat{\boldsymbol{\Lambda}}=\begin{bmatrix}\ \boldsymbol{\Lambda}^{(1)},\ \ldots,\ \boldsymbol{\Lambda}^{(m)}\end{bmatrix}\) an element of \(\boldsymbol{\mathcal{W}}_{\gamma}{}^{\prime}\). The purpose of the numerical example presented in this section is to assess quantitatively the error committed when the Lagrange multipliers are truncated to the form \[\boldsymbol{\Lambda}^{(2)}=\left[\left[\begin{array}{c}\Lambda_{1,x}^{(2)}\\ 0\end{array}\right],\ \left[\begin{array}{c}0\\ \Lambda_{2,y}^{(2)}\end{array}\right],0,\ldots,0\right],\] resulting from using only two modes (\(N=2\)) on each inclusion, even if the overall problem is no longer axis-symmetric.

Figure 4: Example 2: Numerical solution (displacement magnitude) for the case of four inclusions centered at \((0.3,0.3),(-0.3,0.3),(0.3,-0.3),(-0.3,-0.3)\), with \(r_{i}=0.1\), \(\bar{u}=0.1\), in the domain \([-1,1]^{2}\). The dashed line shows the cross section of the domain used to measure the error in Figure 5.

Figure 5: Example 2: Comparison of the displacement magnitudes between the numerical solution with the reduced Lagrange multipliers (red dashed curve) and the corresponding solution obtained discretizing the inclusion boundaries and imposing a non-zero flux boundary condition (blue).

The considered numerical example, with \(m=3\), is depicted together with the numerical solution in a particular configuration in Figure 6. Table 1 shows the norm of the Lagrange multiplier solution for the second inclusion (\(i=2\)), as a function of the inclusion radius and of the number of modes (up to \(N=8\)), comparing the leading-order modes (\(\Lambda_{1}\) and \(\Lambda_{2}\)) with the remaining ones. The results for the other inclusions are very similar and will be omitted from the discussion. The first two components are largely predominant, and the relative truncation error is of the order of \(10^{-2}\) (or below). This indicates that \(N=2\) can be used as a suitable approximation also in the non-symmetric case, provided the radii of the inclusions are small enough. Moreover, the truncation error decreases when decreasing the size of the inclusion.
This observation is in line with what was observed in the context of 3D-1D models coupled via Neumann boundary conditions in [26], in which a hypersingular, axis-symmetric approximation of the immersed interface was used. More detail on the predominance of the leading modes for smaller radii is provided in Figure 7, which depicts, for the second inclusion (\(i=2\)), the relative decrease of the magnitudes of the higher modes (\(N=4\) to \(N=8\)), normalized with respect to the Euclidean \(l^{2}\) norm of \(\mathbf{\Lambda}^{(2)}\).

Figure 6: Arrangement and numbering of the inclusions for the analysis of modes, centered at \((0.3,0.3),(-0.4,0.3),(0.1,-0.3)\), with \(r_{i}=0.2\), \(\bar{u}=0.1\), in the domain \([-1,1]^{2}\).

\begin{table} \begin{tabular}{|c c c c c c|} \hline \(r_{i}\) & \# Modes & \(\|\mathbf{\Lambda}^{(2)}\|_{l2}\) & \(\frac{|\Lambda_{1,x}^{(2)}|^{2}}{\|\mathbf{\Lambda}^{(2)}\|_{l2}^{2}}\) & \(\frac{|\Lambda_{2,y}^{(2)}|^{2}}{\|\mathbf{\Lambda}^{(2)}\|_{l2}^{2}}\) & Truncation error (\%) \\ \hline \hline 0.2 & 2 & 23.91763 & 54.83\% & 45.09\% & \(8.5\cdot 10^{-2}\%\) \\ 0.2 & 4 & 23.91763 & 54.83\% & 45.09\% & \(8.5\cdot 10^{-2}\%\) \\ 0.2 & 6 & 23.91763 & 54.83\% & 45.09\% & \(8.5\cdot 10^{-2}\%\) \\ 0.2 & 8 & 23.91763 & 54.83\% & 45.09\% & \(8.5\cdot 10^{-2}\%\) \\ \hline 0.1 & 2 & 88.96162 & 51.13\% & 48.87\% & \(4.01\cdot 10^{-3}\%\) \\ 0.1 & 4 & 88.96162 & 51.13\% & 48.87\% & \(4.01\cdot 10^{-3}\%\) \\ 0.1 & 6 & 88.96162 & 51.13\% & 48.87\% & \(4.01\cdot 10^{-3}\%\) \\ 0.1 & 8 & 88.96162 & 51.13\% & 48.87\% & \(4.01\cdot 10^{-3}\%\) \\ \hline 0.05 & 2 & 356.4525 & 49.98\% & 50.02\% & 0\% \\ 0.05 & 4 & 356.4525 & 49.98\% & 50.02\% & 0\% \\ 0.05 & 6 & 356.4525 & 49.98\% & 50.02\% & 0\% \\ 0.05 & 8 & 356.4525 & 49.98\% & 50.02\% & 0\% \\ \hline \end{tabular} \end{table}

Table 1: Euclidean \(l^{2}\)-norm of the Lagrange multiplier for the second inclusion (number 2 in Figure 6), relative norms of the first two modes, and corresponding relative truncation error for different sizes of inclusions.

### In-silico modeling of effective material behavior

The proposed multiscale model is motivated by applications in the context of tissue imaging, where the data acquired by techniques such as elastography or diffusion-weighted imaging depend on the underlying physics - e.g., on the interaction of solid and fluid phases - but the limited image resolution allows only for effective (macroscale) tissue representations. Often, these effective descriptions are based on linear elasticity with homogeneous mechanical parameters. However, certain applications require a better understanding of how the fluid phase, or the structure of the vasculature, is reflected in the behavior of the tissue at the macroscale. This is the case, for instance, when medical imaging is used to characterize the presence of pathological conditions in which fluid conditions play a relevant role, such as hypertension (increase in pressure) or tumor growth. To this purpose, it is necessary to develop mathematical models able to close the gap between the microscale (of the underlying physics) and the macroscale (data resolution), and to address related inverse problems for the estimation of effective parameters depending on microscale quantities. The numerical tests presented in this section are devoted to the use of the reduced Lagrange multipliers method for the computational modeling and simulation of tissues, investigating the influence of fluid microstructures on tissue effective dynamics.
On the one hand, these tests address, from the perspective of mathematical modeling, results recently presented in the context of tissue elastography concerning the importance of understanding the interplay between solid and fluid phases for medical imaging applications in non-invasive diagnostics, see, e.g., [25, 36, 43, 46]. On the other hand, the in-silico study aims at providing a first proof of concept for using the reduced Lagrange multipliers in the context of inverse problems for the estimation of effective mechanical parameters.

#### 5.4.1 Effective material parameters for varying microstructure

Setup. Firstly, we consider a two-dimensional tissue sample with _fixed fluid volume ratio_, but with different distributions of the fluid inclusions. Namely, we consider a fixed number of inclusions with the same radius and three different geometrical setups (see Figure 8):

* (i) inclusions placed in a structured array (denoted, in what follows, as _structured_);
* (ii) inclusions placed randomly, but with fixed _microscale_ fluid volume ratio, i.e., dividing the domain into boxes and placing, within each box, an inclusion in a random position (_semi-structured_);
* (iii) inclusions placed fully randomly, but with fixed volume ratio at the _macroscale_ (_random_); these configurations have been realized by removing overlapping inclusions and by iteratively adding new inclusions until the fixed total number has been reached.

In these settings, we study how the behavior of the _effective_ material depends on the microstructure, simulating stress and compression tests to compute equivalent mechanical parameters of the effective tissues as functions of the Lamé constants \(\lambda\), \(\mu\) of the solid matrix, of the boundary condition imposed at the inclusion boundaries (the normal deformation), of the total fluid volume ratio (i.e., of the number of inclusions), and of the vessel distribution.

Compression test. The first test is a pure compression (Figure 9, left). The physical domain is the square \([-1,1]\times[-1,1]\), compressed by imposing Dirichlet boundary conditions on all sides, with a total area reduction of 19%. The material parameters are \(\mu=1\), \(\lambda=1\). The inclusion radius is \(r=0.05\), and, on each inclusion, a normal deformation \(\bar{u}=0.1\) is imposed. The effective bulk modulus is computed as the ratio between the average normal traction on the boundary and the total area difference, i.e., \[\kappa^{\text{eff}}=\frac{1}{|\Delta\text{area}|}\frac{1}{|\partial\Omega|}\int_{\partial\Omega}\left(\sigma(\boldsymbol{u})\,\boldsymbol{n}\right)\cdot\boldsymbol{n}\,. \tag{41}\]

Figure 8: Example of the different setups used for the modeling and simulation of the effective material (with \(m=25\) inclusions, \(r_{i}=0.05\), \(v_{f}\sim 0.05\)). Left: inclusions in a structured array. Center: inclusions placed randomly within structured boxes (i.e., fixing the porosity in each box). Right: inclusions placed randomly, removing overlapping ones and fixing the total fluid volume ratio.

Figure 9: Left: Displacement solution for the compression test (inclusions are not shown). Right: Displacement solution for the shear test. The original configuration is shown in grey in the background.

The results, varying the fluid volume ratio and for different inclusion distributions, are presented in Figure 10. As expected, the presence of the inclusions reduces the compressibility of the effective material, and this effect increases when increasing the fluid volume ratio.
In particular, the effective bulk modulus increases by \(100\%\) for a fluid volume ratio of about \(4\%\), and increases by \(300\%\) (four times the value of the pure solid matrix) when the fluid volume ratio reaches \(10\%\). We also observe that this effect seems to be independent of the geometrical distribution of the inclusions, i.e., mostly related to the macroscopic volume ratio.

Figure 10: Effective bulk modulus (41) for the compression test, as a function of the fluid volume. The dashed line shows the results for the pure solid case (no inclusions). In the case of random distribution, the picture displays the average and the standard deviation based on \(N=10\) simulations.

Shear test. The second setup (Figure 9, right) considers a material sample with a given horizontal shear rate enforced by the Dirichlet boundary condition \(\boldsymbol{u}=(y,0)\) on the whole boundary. As in the previous example, the physical domain is the square \([-1,1]\times[-1,1]\), and the material parameters are \(\mu=1\), \(\lambda=1\). Inclusions have a radius of \(r_{i}=0.05\) and an imposed normal expansion of \(\bar{u}=0.1\). In this example, we monitor the effective shear modulus defined as \[\mu^{\text{eff}}=\frac{1}{2\;l}\int_{\text{top}}\left(\sigma(\boldsymbol{u})\,\boldsymbol{n}\right)\cdot\left(1,0\right), \tag{42}\] where \(l=2\) is the edge length. The results (Figure 11) show that \(\mu^{\text{eff}}\) increases for increasing fluid volume ratio, with an increase of up to \(50\%\) when the fluid volume ratio reaches \(10\%\). The influence is hence less pronounced than in the case of compression. The results are also
Figure 11: Effective shear modulus (42) for the shear test, varying the fluid inclusion ratio and for different inclusion distributions. The dashed line depicts the results for the pure solid case (without inclusions). The results for the random configuration show the averages and the standard deviations (grey bars) over \(N=10\) simulations. Figure 12: Different setups used for the modeling and simulation of effective material (with \(m=21,\ 37,\ 61,\ 93\) and \(133\) inclusions, with radius \(r_{i}=0.05\), corresponding to fluid volume ratios \(v_{f}\sim 0.04,\ 0.07,\ 0.12,\ 0.18,\ 0.26\)). Figure 13: Effective pressure (average on the boundaries) for the compression test in different configurations, as a function of the share of area reduction (starting from an initial area of 4). Notice that also without an external compression, there is a nonzero boundary pressure due to the expansion of the inclusions. The results in Figure 13 show the presence of a nonlinear mechanical response increasing the compression, highlighting a non trivial interplay between the responses of the inner and outer subdomains. This phenomenon can be observed in all considered samples. Moreover, its effect is more visible when the density contrast between the subdomains increases. This example thus confirms that in presence of complex tissues it is necessary to consider mathematical and computational models that can account for microscale inhomogeneities, in order to correctly represent the effective tissue. As previously observed, it is expected that this effect will be more relevant in three dimensional cases. ## 6 Conclusions In this work, we have proposed and investigated an efficient numerical method to simulate multiscale coupled problems involving a linear elastic solid and slender fluid inclusions. The method handles the inclusions as immersed boundaries within the tissue finite element mesh using the reduced Lagrange multiplier approach recently proposed in [28]. In particular, we extended the method of [28] to the case of a _local deformation_ boundary condition, in which the fluid and the solid are coupled imposing a local displacement field which does not depend on the macroscale deformation. We showed that this condition can be naturally imposed within the reduced Lagrange multipliers framework by properly selecting the Lagrange multipliers space. In particular, we showed that, with the correct choice of the reduced-order space, the resulting continuous formulation is well-posed. The immersed method, combined with the reduced Lagrange multiplier approach, allows to reduce the overall complexity of the problem, since the explicit discretization of the inclusion interface is not required. We assessed the performance of the proposed scheme by validating the expected convergence rates in the case of a single inclusion and considering different cases with multiple inclusions. The results show that the proposed multiscale model can be effectively used for the numerical investigation and for the numerical upscaling of multiscale materials. Our tests indicated as well that as the scale separation increases (thinner inclusions), a reduced-order space of dimension \(N=2\) is sufficient for a valid approximation. Additionally, we performed a detailed study of the influence of microscale quantities (inclusion distribution) on the effective mechanical parameters. 
The results, although limited to two-dimensional setups, demonstrate that the tissue response is sensitive to variations in the fluid microstructure and in the vascular architecture, potentially inducing non-linear mechanical responses in the presence of inhomogeneities. These results align with recent findings that have emphasized the intricate interconnections between effective macroscale parameters and microscale features [46]. One natural outlook of this work is the coupling of three-dimensional solid matrices with an active one-dimensional fluid model, generalizing the approach recently proposed in [27]. This extension is currently the subject of ongoing research.

## Acknowledgements

The research of C. Belponer has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), grants DFG CA 1159/1-4 and PE 2143/1-6. L. Heltai acknowledges the partial support of the grant MUR PRIN 2022 No. 2022WKWZA8 "Immersed methods for multiscale and multiphysics problems (IMMEDIATE)"
This paper proposes a numerical method for the simulation of elastic solid materials with fluid inclusions. The work is motivated by the modeling of vascularized tissues and by the problem of estimating effective (i.e., macroscale) material properties in medical imaging, taking into account the influence of microscale dynamics (e.g., the flow inside microvessels). The method is based on the recently proposed reduced Lagrange multiplier framework. In particular, the interface between the solid and fluid regions of the elastic material is not explicitly resolved within the computational mesh; instead, it is discretized independently, and the coupling conditions are enforced via non-matching Lagrange multipliers. Exploiting the multiscale character of the problem, the resulting Lagrange multiplier space is reduced to a low-dimensional set of features.
2308.16875
Holistic Processing of Colour Images Using Novel Quaternion-Valued Wavelets on the Plane
Recently, novel quaternion-valued wavelets on the plane were constructed using an optimisation approach. These wavelets are compactly supported, smooth, orthonormal, non-separable and truly quaternionic. However, they have not been tested in application. In this paper, we introduce a methodology for decomposing and reconstructing colour images using quaternionic wavelet filters associated to recently developed quaternion-valued wavelets on the plane. We investigate its applicability in compression, enhancement, segmentation, and denoising of colour images. Our results demonstrate these wavelets as promising tools for an end-to-end quaternion processing of colour images.
Neil D. Dizon, Jeffrey A. Hogan
2023-08-31T17:22:18
http://arxiv.org/abs/2308.16875v2
# Holistic Processing of Colour Images Using Novel Quaternion-Valued Wavelets on the Plane

###### Abstract

We investigate the applicability of quaternion-valued wavelets on the plane to holistic colour image processing. We present a methodology for decomposing and reconstructing colour images using quaternionic wavelet filters associated to recently developed quaternion-valued wavelets on the plane. We consider compression, enhancement, segmentation, and denoising techniques to demonstrate quaternion-valued wavelets as a promising tool for holistic colour image processing.

**Keywords:** quaternions, wavelets, colour image processing, compression, denoising

## 1 Introduction

Wavelets have long been known as a powerful tool for analysing and processing greyscale images. With their ability to decompose an image into different scales, wavelets allow one to extract important information from an image that can then be used in a variety of applications including compression, denoising, enhancement, feature extraction, registration, and segmentation. By treating each channel of a multi-channel image as greyscale, wavelet-based image processing schemes have also been extended to multi-channel signals like colour images. The most basic model of a colour image is a three-channel image consisting of red, green, and blue (RGB) components of the pixels. Other commonly used models include the luminance-chrominance (YUV) and cyan-magenta-yellow-key (CMYK) that use three and four channels, respectively. Other four-channel signal models like RGB-A and RGB-NIR are becoming more prevalent. In these models, the fourth band corresponds to an _alpha_ and a _near-infrared_ (NIR) component, respectively. Most of the present-day handling of colour images relies on the analysis of each channel separately. With this kind of approach, the possible correlations between the channels are totally ignored, if not undervalued. It is preferable to encode the pixel components into higher-dimensional algebras which are anticipated to exploit correlations between channels. For the case of colour images with three or four channels, the algebra of _quaternions_ is sufficient [1, 2]. But once a higher-dimensional signal is embedded into this algebra, more sophisticated wavelet transforms become imperative. In the literature, a number of published articles considered different extensions of wavelet transforms to the quaternionic setting. Fletcher and Sangwine [3] noted in their survey on the development of quaternionic wavelet transforms (QWT) that most of these extensions were just derived from real filter coefficients, and are just separate discrete or complex wavelet transforms in disguise. They further noted that the extensions due to Hogan and Morris [4], and Ginzberg and Walden [5] are among the few that attempted to develop a true QWT. Recently, Fletcher [6] extended Ginzberg's work to construct examples of quaternion-valued scaling filters on the line. The quaternionic wavelet theory developed by Hogan and Morris [4, 7] provided direct analogues of classical wavelet theory for the construction of quaternion-valued wavelets on the plane. The quaternionic quadrature mirror filter conditions (QQMF) and the scaling equation for quaternionic wavelets were rephrased through the notion of spinor-vector matrices. They also derived quaternionic counterparts of compact support, orthonormality, and regularity conditions. However, no examples of quaternionic wavelets satisfying these properties were constructed.
In a different pursuit, Franklin, Hogan, and Tam [8, 9, 10] developed techniques that have been successful in reproducing Daubechies' wavelets using an optimisation approach. In particular, wavelet architecture was formulated as a _feasibility problem_ of finding a point in the intersection of constraint sets arising from the design criteria and the conditions of multiresolution analysis (MRA). This feasibility approach to wavelet construction has successfully produced new examples of nonseparable, complex-valued, smooth, compactly supported, orthonormal wavelets on the plane. Inspired by the extendability of the feasibility approach to higher-dimensional constructions, Dizon and Hogan [11, 12] revisited the quaternionic wavelet theory developed by Hogan and Morris. They formulated and solved the construction of quaternionic wavelets as feasibility problems. Solutions to these feasibility problems admit novel examples of quaternion-valued wavelets on the plane (refer to Figure 2 for an example). The successful architecture of compactly supported, smooth and orthonormal quaternion-valued wavelets on the plane leaves open many important avenues of research. With these wavelets, the pixel components of a colour image may now be encoded into the scalar and imaginary parts of quaternions for holistic processing of signals using wavelet transforms. We use the term _holistic_ to mean that the components of a pixel from different channels are treated as a whole rather than separately [3, 13]. With such an approach, the potentially useful correlations between the pixel components are not lost. The development of a suitable quaternion-valued wavelet decomposition and reconstruction of colour images poses an interesting research direction. It also raises significant questions about how these quaternion-valued wavelets would perform in compression, enhancement, segmentation, and denoising when applied to colour images. In this paper, we take on the task of looking into the applicability of quaternion-valued wavelets on the plane to colour image processing. Our primary objective is to elucidate the potential of employing a holistic image processing methodology using quaternion-valued wavelets. It is important to note that our intention is to emphasise the inherent promise of this approach, rather than to ascertain any superiority in performance. In Section 2, we revisit the feasibility approach for the construction of quaternion-valued wavelets on the plane with the goal of highlighting their important properties. Since image processing using wavelets relies on a suitable wavelet decomposition and reconstruction, we provide in Section 3 a scheme that decomposes and reconstructs colour images using quaternionic scaling and wavelet filters. We illustrate the energy compaction property in the decomposition, and demonstrate perfect reconstruction when no alterations are made to the wavelet coefficients. In Section 4, we exemplify some image processing steps that can be done in between wavelet decomposition and reconstruction to allow for compression, enhancement, segmentation, and denoising of colour images.

## 2 Quaternion-valued wavelets on the plane

Recently, Dizon and Hogan constructed quaternion-valued wavelets on the plane through the feasibility approach (for a detailed discussion, see [11, 12]). The construction entails formulating wavelet architecture as feasibility problems.
A _feasibility problem_ is a special type of optimisation problem that seeks to find a point in the intersection of a finite family of sets. Formally, given sets \(K_{1},K_{2},\ldots,K_{r}\) contained in a Hilbert space \(\mathcal{H}\), the corresponding feasibility problem is defined by: \[\text{find }x^{*}\in K:=\bigcap_{j=1}^{r}K_{j}.\] In the literature, the method of alternating projections (MAP) [14] and the Douglas-Rachford (DR) algorithm [15] are well-known examples of _projection algorithms_ that are able to solve two-set feasibility problems. Both algorithms are amenable to solving many-set feasibility problems through Pierra's product space reformulation [16]. The Douglas-Rachford method has been observed to exhibit empirical potency even in non-convex settings [17, 18, 19]. Like most projection algorithms, the DR exploits the concept of projectors and reflectors. If \(C\) is a nonempty subset of \(\mathcal{H}\), the _projector_ onto \(C\) is the set-valued operator \(P_{C}\colon\mathcal{H}\rightrightarrows C\) defined by \[P_{C}(x)=\{c\in C:\|x-c\|=\inf_{z\in C}\|x-z\|\};\] and the _reflector_ with respect to \(C\) is the set-valued operator \(R_{C}\colon\mathcal{H}\rightrightarrows\mathcal{H}\) defined by \[R_{C}:=2P_{C}-\text{Id},\] where Id denotes the identity map. An element of \(P_{C}(x)\) is called a _projection_ of \(x\) onto \(C\). Similarly, an element of \(R_{C}(x)\) is called a _reflection_ of \(x\) with respect to \(C\). Note that the use of "\(\rightrightarrows\)" is to emphasise that an operator is (possibly) set-valued. Formally, given two nonempty subsets \(K_{1}\) and \(K_{2}\) of \(\mathcal{H}\), the _DR operator_ \(T_{K_{1},K_{2}}\) is defined as \[T_{K_{1},K_{2}}:=\frac{\text{Id}+R_{K_{2}}R_{K_{1}}}{2}.\] If \(K_{1}\) and \(K_{2}\) are closed convex subsets of \(\mathcal{H}\) with \(K_{1}\cap K_{2}\neq\varnothing\), then for any \(x_{0}\in\mathcal{H}\), the sequence \((x_{n})_{n\in\mathbb{N}}\) generated by \(x_{n+1}=T_{K_{1},K_{2}}(x_{n})\) converges weakly to a point \(x^{*}\in\operatorname{Fix}T_{K_{1},K_{2}}\), and the _shadow sequence_ \((P_{K_{1}}(x_{n}))_{n\in\mathbb{N}}\) converges weakly to \(P_{K_{1}}(x^{*})\in K_{1}\cap K_{2}\) [20, 21]. Refer to Figure 1 for a simple illustration of the Douglas-Rachford scheme on two sets. In wavelet feasibility problems, the constraint sets encode the basic _compact support_, _orthonormality_, and _regularity_ conditions. The feasibility approach to wavelet construction treats these design criteria as constraints that must be simultaneously satisfied. Such a technique has also been successful in reproducing Daubechies' wavelets, and in deriving nonseparable examples of complex-valued, compactly supported, smooth and orthonormal wavelets on the plane [8, 9, 10]. The feasibility problem formulation becomes even more challenging and intricate for quaternion-valued wavelets on the plane, primarily because of the increased dimensionality, with the absence of commutativity as an additional complicating factor. For a comprehensive discussion on the quaternionic wavelet feasibility problem, refer to [11, 12]. We note here that the compact support of the scaling and wavelet functions facilitates speedy and accurate computation of transform coefficients in the wavelet decomposition of a given image signal. In applications, it is also preferred that wavelets have continuous and bounded derivatives, as this property allows for more parsimonious expansions.
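To make the reflect-reflect-average scheme concrete, here is a minimal Python/NumPy sketch of the DR iteration for two simple convex sets in \(\mathbb{R}^{2}\): the closed unit disc and a line, chosen only because their projectors have closed forms (these are not the wavelet constraint sets of [11, 12]).

```python
import numpy as np

def proj_disc(x):
    """Projector onto the closed unit disc."""
    r = np.linalg.norm(x)
    return x if r <= 1.0 else x / r

def proj_line(x, a=np.array([1.0, 1.0]), b=1.0):
    """Projector onto the line a . x = b."""
    return x - (a @ x - b) / (a @ a) * a

def reflect(proj, x):
    return 2.0 * proj(x) - x              # R_C = 2 P_C - Id

x = np.array([3.0, -1.0])                 # arbitrary starting point x_0
for _ in range(200):
    # x_{n+1} = T(x_n), with T = (Id + R_{K2} R_{K1}) / 2
    x = 0.5 * (x + reflect(proj_line, reflect(proj_disc, x)))

shadow = proj_disc(x)                     # the shadow sequence P_{K1}(x_n)
print(shadow, shadow.sum())               # a point of the disc with x + y ~ 1
```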
Additional constraints can be imposed to promote _symmetry_ which helps alleviate distortion around edges in images [22, 23].

Figure 1: One step of a Douglas–Rachford fixed-point iteration which follows a simple _reflect-reflect-average_ scheme. Starting with a point \(x_{0}\), the algorithm performs a reflection with respect to \(K_{1}\) to obtain the point \(R_{K_{1}}(x_{0})\), followed by another reflection with respect to \(K_{2}\) to obtain the point \(R_{K_{2}}(R_{K_{1}}(x_{0}))\). Averaging \(x_{0}\) and \(R_{K_{2}}(R_{K_{1}}(x_{0}))\) yields the point corresponding to the next iterate \(x_{1}\).

Figure 2 shows an example of a quaternion-valued wavelet ensemble on the plane derived as a solution to the quaternionic wavelet feasibility problem. A _wavelet ensemble_ consists of a scaling function and three associated wavelets. Notice that these functions are compactly supported, smooth, and (verifiably) orthonormal. Additionally, the scaling function is pointwise symmetric about its centre of support. These functions are further associated to their respective filters, i.e., the _scaling and wavelet filters_. Throughout this paper, we only use the scaling and wavelet filters associated to the wavelet ensemble in Figure 2. Other wavelet ensembles are presented in [11, Chapter 8], derived as solutions to quaternionic wavelet feasibility problems.

Figure 2: An example of a wavelet ensemble generated from a solution of the quaternionic wavelet feasibility problem. For each plot, the height of a point on the graph corresponds to the modulus of the quaternion, and the intensities of RGB colour of the point represent the imaginary parts in the polar form of the quaternion.

To understand how quaternion-valued wavelets on the plane are plotted, we first define the set \(\mathbb{R}_{2}\) of quaternions by \[\mathbb{R}_{2}:=\big{\{}a+be_{1}+ce_{2}+de_{12}\,:\,a,b,c,d\in\mathbb{R},\,e_{1}^{2}=e_{2}^{2}=e_{12}^{2}=e_{1}e_{2}e_{12}=-1\big{\}}\] where we use \(e_{1},e_{2}\) and \(e_{12}\) to denote the imaginary units. In plotting quaternion-valued wavelets (or any quaternion-valued functions) on the plane, we use the following idea. For any set \(X\subseteq\mathbb{R}^{2}\), let \(f:X\rightarrow\mathbb{R}_{2}\) be a quaternion-valued function, i.e., \(f(x)=f_{0}(x)+f_{1}(x)e_{1}+f_{2}(x)e_{2}+f_{12}(x)e_{12}\) where \(f_{0},f_{1},f_{2},f_{12}:X\rightarrow\mathbb{R}\). For a fixed \(x=(x_{1},x_{2})\in X\), we write \(f(x)=|f(x)|e^{\mu_{f(x)}\phi_{f(x)}}\) in polar form. Since \(\mu_{f(x)}\phi_{f(x)}\) is a pure quaternion (i.e., its real part is zero), we can write it as \[\mu_{f(x)}\phi_{f(x)}=R_{f(x)}e_{1}+G_{f(x)}e_{2}+B_{f(x)}e_{12}\] with \(R_{f(x)},G_{f(x)},B_{f(x)}\) the corresponding imaginary parts of \(\mu_{f(x)}\phi_{f(x)}\). Thus, we may associate \((x,f(x))\) with a point in \(\mathbb{R}^{3}\) with coordinates \((x_{1},x_{2},|f(x)|)\) coloured by \((R_{f(x)},G_{f(x)},B_{f(x)})\) injected into the RGB colour space.
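A minimal sketch of this plotting recipe, assuming quaternions are stored as 4-channel NumPy arrays \((f_{0},f_{1},f_{2},f_{12})\) (an illustrative layout, not prescribed by the paper):

```python
import numpy as np

def polar_parts(f):
    """Return |f| and the (R, G, B) components of mu*phi for f of shape (..., 4)."""
    modulus = np.linalg.norm(f, axis=-1)
    imag = f[..., 1:]                               # (f1, f2, f12)
    imag_norm = np.linalg.norm(imag, axis=-1)
    phi = np.arctan2(imag_norm, f[..., 0])          # polar angle in [0, pi]
    safe = np.where(imag_norm > 0.0, imag_norm, 1.0)
    rgb = imag / safe[..., None] * phi[..., None]   # mu * phi, a pure quaternion
    return modulus, rgb

# Example: the quaternion 1 + e1 = sqrt(2) * exp(e1 * pi/4)
m, rgb = polar_parts(np.array([1.0, 1.0, 0.0, 0.0]))
print(m, rgb)                                       # sqrt(2) and (pi/4, 0, 0)
```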
## 3 Decomposition and reconstruction using quaternionic filters

Colour image processing with quaternion-valued wavelets relies on a suitable wavelet decomposition and reconstruction using scaling and wavelet filters. In between the decomposition and reconstruction steps, several image processing tasks may be implemented, including (but not limited to) compression, enhancement, segmentation, and denoising. In this section, we formalise how colour images can be embedded into the algebra of quaternions. We start with the RGB colour image model but eventually add a near-infrared (NIR) channel for consideration of RGB-NIR images. After this, we describe a suitable decomposition and reconstruction scheme using quaternionic scaling and wavelet filters.

### Colour images and quaternion algebra

Typically, a _colour image_ is viewed as a function \(F:\mathbb{R}^{2}\rightarrow\mathbb{R}^{3}\) given by \(F(x)=(R(x),G(x),B(x))\) where \(R(x)\), \(G(x)\) and \(B(x)\) are the red, green and blue components of the pixel \(x\), respectively. Using the algebra of quaternions \(\mathbb{R}_{2}=\big{\{}a+be_{1}+ce_{2}+de_{12}\,:\,a,b,c,d\in\mathbb{R},\,e_{1}^{2}=e_{2}^{2}=e_{12}^{2}=e_{1}e_{2}e_{12}=-1\big{\}}\), we may view a colour image as a quaternion-valued function \(F:\mathbb{R}^{2}\rightarrow\mathbb{R}_{2}\) given by \[F(x)=R(x)e_{1}+G(x)e_{2}+B(x)e_{12}\] where the red, green and blue components are embedded into the imaginary parts of a quaternion. Alternatively, to make use of the full power of quaternions, we consider RGB-NIR images. Embedding the near-infrared component into the real part of a quaternion, an RGB-NIR image can be viewed as a full-quaternion-valued function \(F:\mathbb{R}^{2}\rightarrow\mathbb{R}_{2}\) given by \[F(x)=I(x)+R(x)e_{1}+G(x)e_{2}+B(x)e_{12},\] where \(I(x)\) represents the near-infrared component of the pixel \(x\).

### Quaternionic wavelet decomposition and reconstruction

In processing RGB-NIR images using quaternion-valued wavelets on the plane, we start by embedding the four channels into the algebra of quaternions, followed by a wavelet decomposition. By altering the resulting wavelet coefficients in different ways, a variety of image processing tasks can be performed. The final step is to perform an inverse wavelet transform to reconstruct the processed image. We summarised this procedure in Figure 3. Given the scaling and wavelet filters of a quaternion-valued wavelet ensemble, we follow Mallat's algorithm to carry out the decomposition and reconstruction [25]. Wavelet decomposition is carried out by convolving the image with one low-pass (scaling) and three high-pass (wavelet) filters, followed by downsampling. The resulting coefficients from the low-pass filtering contain the low-frequency content or approximation of the original colour image, while the other three coefficients capture the high-frequency details. We repeat the filtering and downsampling process on the approximation coefficients until we achieve the desired depth of decomposition. Similarly, reconstruction is done by applying inverse filtering and upsampling to each set of coefficients at each level, and combining the results to obtain the reconstructed image. For a schematic diagram of a one-level decomposition and reconstruction, refer to Figure 3.

Figure 3: Colour image processing steps using quaternion-valued wavelets on the plane. The basic layout of a one-level discrete wavelet transform (with scaling filter \(H\), wavelet filters \(G_{1},G_{2},G_{3}\), and their respective inverse filters \(\tilde{H},\tilde{G}_{1},\tilde{G}_{2},\tilde{G}_{3}\)) also includes the downsampling (\(\downarrow 2\)) and upsampling (\(\uparrow 2\)) steps.

An example of a level 8 wavelet decomposition is presented in Figure 4, and the reconstructed image is given in Figure 5. Since no image processing is done in between the decomposition and reconstruction, the resulting image perfectly coincides with the originally given colour image.

Figure 4: Example of a level 8 wavelet decomposition of an RGB-NIR image (with intensity values inverted and accentuated for illustrative purposes). The colour image on the left displays the imaginary parts of the quaternionic wavelet decomposition treated as RGB, while the greyscale image on the right is the scalar part of the decomposition.

Similar to the case of classical wavelets, the transform coefficients in the quaternionic wavelet
decomposition achieve _energy compaction_. This means that (aside from the fact that the total energy in the original colour image would be equal to that of the decomposition) most of the energy in the quaternionic wavelet decomposition is concentrated in a few transform coefficients. More succinctly, let \(F\in\mathbb{R}_{2}^{N\times N}\) be a colour image with \(N\times N\) pixels whose RGB-NIR channels are embedded in the quaternion algebra. The _energy \(\xi_{F}\) of \(F\)_ is given by \[\xi_{F}=\sum_{i,j=1}^{N}|F_{ij}|^{2}.\] Furthermore, let \(L_{1}^{F}\geq L_{2}^{F}\geq\cdots\geq L_{N^{2}}^{F}\) be the absolute values of the image pixels (treated as quaternions) of \(F\) arranged in decreasing order. The _cumulative energy profile of \(F\)_ is given by \[\left(\frac{(L_{1}^{F})^{2}}{\xi_{F}},\frac{(L_{1}^{F})^{2}+(L_{2}^{F})^{2}}{\xi_{F}},\ldots,\frac{(L_{1}^{F})^{2}+(L_{2}^{F})^{2}+\cdots+(L_{N^{2}-1}^{F})^{2}}{\xi_{F}},1\right).\] The cumulative energy profile of the decomposition can be computed in a similar fashion. As an illustration, the cumulative energy profile of the sample RGB-NIR image and the cumulative energy profile of its wavelet decomposition are plotted and superimposed in Figure 6. Notice how the energies are compacted in only a very few transform coefficients in the decomposition.

Figure 5: The reconstructed RGB (left) and NIR (right) parts of an RGB-NIR image reconstruction from the level 8 wavelet decomposition in Figure 4. This pair perfectly coincides with the original RGB-NIR image.

Figure 6: Cumulative energy profiles of the original and wavelet decomposition of the sample RGB-NIR image. Notice that the energy in the wavelet decomposition is concentrated in a very few coefficients.

It is important to note that the level of energy compaction achieved through wavelet decomposition can vary based on the specific wavelet basis used, the nature of the signal, and the decomposition level. Some wavelet bases might provide better energy compaction for certain types of signals, while others might be more suitable for different applications. Overall, energy compaction is a key reason why wavelet decomposition has found extensive use in signal and image processing tasks, offering efficient and effective ways to represent and manipulate data, as we will see in the next section.
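For concreteness, the cumulative energy profile reduces to a few lines of NumPy once a quaternion-valued image (or its decomposition) is stored as a 4-channel array, an illustrative layout assumed here:

```python
import numpy as np

def cumulative_energy(F):
    """F has shape (N, N, 4): one quaternion (NIR, R, G, B) per pixel."""
    mags2 = np.sum(F.astype(float) ** 2, axis=-1).ravel()   # |F_ij|^2
    mags2[::-1].sort()                                      # decreasing order
    return np.cumsum(mags2) / mags2.sum()                   # profile, ends at 1

rng = np.random.default_rng(0)
image = rng.uniform(size=(64, 64, 4))        # stand-in for an RGB-NIR image
profile = cumulative_energy(image)
print(profile[:5], profile[-1])
# For the wavelet decomposition of a natural image, the profile climbs to ~1
# after only a small fraction of the coefficients (energy compaction).
```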
## 4 Colour image processing with quaternionic wavelets

The applicability of a wavelet transform in image processing is primarily rooted in its capacity to analyse images across varying scales while adeptly capturing both high- and low-frequency components. By accommodating the multiresolution nature of images, wavelet transforms play a pivotal role in unveiling intricate patterns, detecting edges, mitigating noise, and preserving salient features. In this section, we delve into the basic yet profound capabilities of the quaternionic wavelet transform to elucidate its applicability in diverse image processing contexts and to highlight its role in extracting nuanced information from colour images. Our primary aim is to exhibit the potential of a holistic image processing methodology using quaternion-valued wavelets, with a specific focus on delineating their inherent promise rather than undertaking an assessment of their relative performance with respect to the conventional channel-by-channel approach.

### Compression

The energy compaction in the quaternionic wavelet decomposition is a powerful property, as it enables the extraction of the most informative aspects of a colour image while discarding less relevant information. This property aligns with the objectives of _compression_: reducing data size, conserving storage space, and possibly optimising transmission bandwidth, all while maintaining the integrity and perceptual quality of the original content. By performing _percentile thresholding_ (see Figure 7), i.e., killing off wavelet coefficients whose magnitudes are below a certain percentile, we obtain a _sparse_ representation of the image in the wavelet domain with the most important details preserved. The remaining wavelet coefficients are then reconstructed to obtain a compressed version of the original image.

Figure 7: Location of the top 5% wavelet coefficients that are kept after percentile thresholding in the quaternionic wavelet decomposition.

Recall that tensored Daubechies' wavelets may also be used to perform wavelet decomposition of the RGB-NIR channels separately. This process produces four wavelet decompositions -- one for each channel. We may also perform a suitable amount of percentile thresholding on these four sets of wavelet coefficients separately. After this, a reconstruction on each channel is performed, resulting in a compressed version of the original image. While energy compaction also happens in the wavelet decomposition of each channel, the retained coefficients after thresholding are at different locations for each channel (see Figure 8). This makes keeping track of the location of nonzero wavelet coefficients more expensive in the channel-by-channel decomposition than when the quaternionic wavelet decomposition and thresholding are used. Consequently, enhanced compression becomes attainable through the utilisation of quaternion-valued wavelets, as the spatial distribution of thresholded coefficients is rendered consistent across all channels. This alignment is anticipated to yield memory conservation within the position encoding (of wavelet coefficients) step of a conventional wavelet-based compression framework.

Figure 8: Location of the top 5% wavelet coefficients (in the red, green, blue and near-infrared channels) that are kept after percentile thresholding in the channel-by-channel wavelet decomposition. The points that correspond to the remaining nonzero wavelet coefficients in each channel are evidently present at different locations.

The combination of multiresolution analysis, energy compaction, sparse representation, and adaptability makes wavelets effective tools for data compression. This effectiveness has led to their widespread use in various image and signal compression applications, ranging from image files to medical imaging and multimedia communication.
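The shared-mask advantage described above can be seen directly in code. A minimal sketch, assuming wavelet coefficients stored as an \((n,n,4)\) array (a hypothetical layout; the quaternionic filter bank itself is not shown), compares quaternionic percentile thresholding with its channel-by-channel counterpart:

```python
import numpy as np

def threshold_quaternion(C, keep_percent=5.0):
    """Zero out coefficients whose quaternion modulus is below a percentile."""
    mag = np.linalg.norm(C, axis=-1)                 # quaternion magnitudes
    cut = np.percentile(mag, 100.0 - keep_percent)
    mask = mag >= cut                                # ONE mask for all channels
    return C * mask[..., None], mask

def channelwise_masks(C, keep_percent=5.0):
    """Channel-by-channel thresholding yields four different masks."""
    return np.stack([np.abs(C[..., k]) >=
                     np.percentile(np.abs(C[..., k]), 100.0 - keep_percent)
                     for k in range(C.shape[-1])], axis=-1)

rng = np.random.default_rng(1)
coeffs = rng.laplace(size=(128, 128, 4))             # stand-in coefficients
_, shared = threshold_quaternion(coeffs)
per_channel = channelwise_masks(coeffs)
print(shared.mean())                                        # ~0.05 kept positions
print((per_channel[..., 0] != per_channel[..., 1]).mean())  # > 0: masks disagree,
# so the channel-by-channel scheme must encode four sets of positions.
```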
### Image enhancement

Wavelet transforms can be used to enhance certain features of an image. By modifying the wavelet coefficients, we can amplify or suppress specific frequency components to improve visual quality or emphasise certain image characteristics. For instance, after the wavelet decomposition, we can highlight the details of an image by multiplying the detail coefficients by a constant greater than one while leaving the approximation coefficients unchanged. Reconstructing from these updated coefficients produces an image with enhanced edges. More pronounced edges can be obtained by accentuating the detail coefficients using larger multipliers. We refer to Figure 9 for an example. We see in this simple illustration that wavelet-based enhancement techniques can help in adjusting the contrast of an image. By enhancing the high-frequency components, edges and boundaries become more distinct, also leading to an overall improvement in image contrast.

Figure 9: RGB components of the original RGB-NIR image (left) and the enhanced image (right) using quaternionic wavelets.

Wavelet-based image enhancement finds applications in diverse domains. In medical imaging, it can aid in diagnosing diseases by making subtle details in scans more evident. In satellite imagery, it can unveil hidden geographical features. In art restoration, it can enhance aged or deteriorated images.

### Edge detection

The wavelet transform, with its multi-scale decomposition, can provide a robust approach to edge detection. During wavelet decomposition, the high-frequency information associated with edges is captured within the detail coefficients. As the decomposition progresses to higher scales, these coefficients represent increasingly fine variations in the image. Thus, the detail coefficients highlight the high-frequency components, effectively pinpointing edges within the image. This idea suggests a simple edge detection scheme. After decomposing a given colour image, we discard the approximation but retain the detail coefficients. Reconstructing from the remaining wavelet coefficients yields the edges in the originally given colour image. Refer to Figure 10 for an example.

Figure 10: Edges detected by the quaternionic wavelets in the RGB (left) and NIR (right) components of the sample RGB-NIR image.

When applying wavelet-based edge detection, it is important to consider the choice of wavelets, the decomposition level, and the thresholding or enhancement techniques. These choices influence the accuracy of edge detection and the quality of the image results. Furthermore, integrating wavelet-based edge detection with other image processing methods can result in a more thorough and proficient extraction of edges within intricate images. Wavelet-based edge detection finds applications in numerous fields,
To denoise a noisy colour image using wavelets, we first decompose it into its wavelet coefficients in different scales and resolution. We then apply a threshold to these coefficients to remove noise while keeping important image details. Thresholding in wavelet denoising is a crucial step to separate noise from the true image features. There are different methods for thresholding, and each of these has their own approach to determining which coefficients to keep and to discard. After the thresholding step, we perform a reconstruction from the remaining wavelet coefficients to yield a denoised version of the noisy colour image. The quality of denoising using wavelet decomposition depends on various factors including the choice of wavelets, thresholding method, threshold values, and the characteristics of the noise in the image. The thresholding step is often carried out by using either a _universal_ threshold or an _adaptive_ threshold. Universal thresholding involves applying the same threshold value to all the coefficients in a particular wavelet subband while adaptive thresholding Figure 11: The left image (PSNR \(\approx\) 14.6317) is the RGB part of an RGB-NIR colour image corrupted by Gaussian noise with standard deviation of 0.2. The middle image (PSNR \(\approx\) 25.4371) and the right image (PSNR \(\approx\) 26.7843) are RGB components of RGB-NIR colour images denoised using soft and hard thresholding, respectively. values for different subbands based on their characteristics. The latter takes into account that different subbands might contain varying amounts of noise and signal information. For a simple illustration on how quaternion-valued wavelets can be applied to denoising, we only exemplify with universal thresholding. For instance, we consider _VisuShrink_ which follows a universal threshold \(t=\sigma\sqrt{2\log n}\) where \(\sigma^{2}\) is the noise variance and \(n\) is the number of pixels [26]. This universal threshold may be used to perform either a _soft_ or _hard_ thresholding which we describe through the _soft-thresholding_\(S_{t}:\mathbb{R}_{2}\rightarrow\mathbb{R}_{2}\) and _hard-thresholding_\(H_{t}:\mathbb{R}_{2}\rightarrow\mathbb{R}_{2}\) operators defined by \[S_{t}(x)=\begin{cases}\dfrac{x}{|x|}\max(|x|-t,0)&|x|\neq 0\\ 0&|x|=0\end{cases}\quad\text{ and }\quad\quad H_{t}(x)=\begin{cases}x&|x|>t\\ 0&\text{ otherwise }\end{cases},\] respectively. Note that the definition of \(S_{t}\) and \(H_{t}\) are modified from their classical definitions to be able to handle quaternion values. For examples, refer to Figure 11. Our preliminary investigations revealed that better denoised images are obtainable with adaptive thresholding schemes. However, it is not yet clear how to choose these threshold values to declare optimal results. Such superiority of adaptive thresholding is somehow expected as these methods take into account the inherent variability of the signal and noise and adjust the threshold accordingly. ## 5 Conclusion The successful construction of compactly supported, smooth, and orthonormal quaternion-valued wavelets on the plane has paved the way for numerous critical avenues of exploration. With these wavelets, the opportunity arises to encode the constituent elements of a colour image within the scalar and imaginary parts of quaternions, enabling comprehensive signal processing through wavelet transforms. 
Quaternion-valued wavelets on the plane hold great promise as a transformative tool in colour image processing, offering a holistic wavelet-based approach that harnesses inter-channel relationships for more accurate and efficient image analysis and enhancement. The proposed scheme for colour image decomposition and reconstruction using quaternionic scaling and wavelet filters demonstrated perfect reconstruction and efficient energy compaction. Additionally, the exemplified image processing steps for compression, denoising, segmentation, and enhancement underscored the versatility of quaternion-valued wavelets in addressing a spectrum of colour image processing applications. In particular, better compression is viable with quaternion-valued wavelets since the locations of thresholded coefficients no longer differ across channels. This is expected to save memory in the position-encoding (of wavelet coefficients) step of a wavelet-based compression scheme. Determining whether this holistic approach to colour image enhancement, edge detection, and denoising performs better than a channel-by-channel approach remains a promising direction for investigation.

## Acknowledgements

NDD and JAH were supported by Australian Research Council Grant DP160101537. NDD was supported in part by an AustMS Lift-Off Fellowship.
Recently, quaternion-valued wavelets on the plane were constructed using an optimisation approach. These wavelets are compactly supported, smooth, orthonormal, non-separable, and truly quaternion-valued; however, they have not yet been put to practical tests. This paper introduces a methodology for the decomposition and reconstruction of colour images using quaternion-valued wavelets on the plane. By applying these wavelets to colour image compression, enhancement, segmentation, and denoising, we show that the entire image processing pipeline can be carried out as quaternion-valued processing. Our results demonstrate that these wavelets are an excellent tool for end-to-end quaternionic processing of colour images.
2305.19562
Replicability in Reinforcement Learning
We initiate the mathematical study of replicability as an algorithmic property in the context of reinforcement learning (RL). We focus on the fundamental setting of discounted tabular MDPs with access to a generative model. Inspired by Impagliazzo et al. [2022], we say that an RL algorithm is replicable if, with high probability, it outputs the exact same policy after two executions on i.i.d. samples drawn from the generator when its internal randomness is the same. We first provide an efficient $\rho$-replicable algorithm for $(\varepsilon, \delta)$-optimal policy estimation with sample and time complexity $\widetilde O\left(\frac{N^3\cdot\log(1/\delta)}{(1-\gamma)^5\cdot\varepsilon^2\cdot\rho^2}\right)$, where $N$ is the number of state-action pairs. Next, for the subclass of deterministic algorithms, we provide a lower bound of order $\Omega\left(\frac{N^3}{(1-\gamma)^3\cdot\varepsilon^2\cdot\rho^2}\right)$. Then, we study a relaxed version of replicability proposed by Kalavasis et al. [2023] called TV indistinguishability. We design a computationally efficient TV indistinguishable algorithm for policy estimation whose sample complexity is $\widetilde O\left(\frac{N^2\cdot\log(1/\delta)}{(1-\gamma)^5\cdot\varepsilon^2\cdot\rho^2}\right)$. At the cost of $\exp(N)$ running time, we transform these TV indistinguishable algorithms to $\rho$-replicable ones without increasing their sample complexity. Finally, we introduce the notion of approximate-replicability where we only require that two outputted policies are close under an appropriate statistical divergence (e.g., Renyi) and show an improved sample complexity of $\widetilde O\left(\frac{N\cdot\log(1/\delta)}{(1-\gamma)^5\cdot\varepsilon^2\cdot\rho^2}\right)$.
Amin Karbasi, Grigoris Velegkas, Lin F. Yang, Felix Zhou
2023-05-31T05:16:23
http://arxiv.org/abs/2305.19562v2
# Replicability in Reinforcement Learning

###### Abstract

We initiate the mathematical study of replicability as an algorithmic property in the context of reinforcement learning (RL). We focus on the fundamental setting of discounted tabular MDPs with access to a _generative model_. Inspired by Impagliazzo et al. (2022), we say that an RL algorithm is replicable if, with high probability, it outputs the _exact_ same policy after two executions on i.i.d. samples drawn from the generator when its _internal_ randomness is the same. We first provide an efficient \(\rho\)-replicable algorithm for \((\varepsilon,\delta)\)-optimal policy estimation with sample and time complexity \(\widetilde{O}\left(\frac{N^{3}\cdot\log(1/\delta)}{(1-\gamma)^{5}\cdot\varepsilon^{2}\cdot\rho^{2}}\right)\), where \(N\) is the number of state-action pairs. Next, for the subclass of deterministic algorithms, we provide a lower bound of order \(\Omega\left(\frac{N^{3}}{(1-\gamma)^{3}\cdot\varepsilon^{2}\cdot\rho^{2}}\right)\). Then, we study a relaxed version of replicability proposed by Kalavasis et al. (2023) called TV _indistinguishability_. We design a computationally efficient TV indistinguishable algorithm for policy estimation whose sample complexity is \(\widetilde{O}\left(\frac{N^{2}\cdot\log(1/\delta)}{(1-\gamma)^{5}\cdot\varepsilon^{2}\cdot\rho^{2}}\right)\). At the cost of \(\exp(N)\) running time, we transform these TV indistinguishable algorithms to \(\rho\)-replicable ones without increasing their sample complexity. Finally, we introduce the notion of _approximate_-replicability where we only require that two outputted policies are close under an appropriate statistical divergence (e.g., Renyi) and show an improved sample complexity of \(\widetilde{O}\left(\frac{N\cdot\log(1/\delta)}{(1-\gamma)^{5}\cdot\varepsilon^{2}\cdot\rho^{2}}\right)\).

## 1 Introduction

When designing a reinforcement learning (RL) algorithm, how can one ensure that when it is executed twice in the same environment its outcome will be the same? In this work, our goal is to design RL algorithms with _provable_ replicability guarantees.

The lack of replicability in scientific research, which the community also refers to as the _reproducibility crisis_, has been a major recent concern. This can be witnessed by an article that appeared in Nature (Baker, 2016): among the 1,500 scientists who participated in a survey, 70% of them could not replicate other researchers' findings and, more shockingly, 50% of them could not even reproduce their own results. Unfortunately, due to the exponential increase in the volume of Machine Learning (ML) papers that are being published each year, the ML community has also observed an alarming increase in the lack of reproducibility. As a result, major ML conferences such as NeurIPS and ICLR have established "reproducibility challenges" in which researchers are encouraged to replicate the findings of their colleagues (Pineau et al., 2019, 2021). Recently, RL algorithms have been a crucial component of many ML systems that are being deployed in various application domains. These include, but are not limited to, competing with humans in games (Mnih et al., 2013; Silver et al., 2017; Vinyals et al., 2019; FAIR), creating self-driving cars (Kiran et al., 2021), designing recommendation systems (Afsar et al., 2022), providing e-healthcare services (Yu et al., 2021), and training Large Language Models (LLMs) (Ouyang et al., 2022).
In order to ensure replicability across these systems, an important first step is to develop replicable RL algorithms. To the best of our knowledge, replicability in the context of RL has not received a formal mathematical treatment. We initiate this effort by focusing on _infinite horizon, tabular_ RL with a _generative model_. The generative model was first studied by Kearns and Singh (1998) in order to understand the statistical complexity of long-term planning without the complication of exploration. The crucial difference between this setting and Dynamic Programming (DP) (Bertsekas, 1976) is that the agent needs to first obtain information about the world before computing a _policy_ through some optimization process. Thus, the main question is to understand the number of samples required to estimate a near-optimal policy. This problem is similar to understanding the number of labeled examples required in PAC learning (Valiant, 1984).

In this work, we study three different formal notions of replicability and design algorithms that satisfy them. First, we study the definition of Impagliazzo et al. (2022), which, adapted to the context of RL, says that a learning algorithm is replicable if it outputs the exact same policy when executed twice on the same MDP, using _shared_ internal randomness across the two executions (cf. Definition 2.10). We show that there exists a replicable algorithm that outputs a near-optimal policy using \(\widetilde{O}(N^{3})\) samples1, where \(N\) is the cardinality of the state-action space. This algorithm satisfies an additional property we call _locally random_, which roughly requires that every random decision the algorithm makes based on internal randomness draws its internal randomness independently from the other decisions. Next, we provide a lower bound for deterministic algorithms that matches this upper bound.

Footnote 1: For simplicity, we hide the dependence on the remaining parameters of the problem in this section.

Subsequently, we study a less stringent notion of replicability called TV indistinguishability, which was introduced by Kalavasis et al. (2023). This definition states that, in expectation over the random draws of the input, the TV distance of the two distributions over the outputs of the algorithm should be small (cf. Definition 4.1). We design a computationally efficient TV indistinguishable algorithm for answering \(d\) statistical queries whose sample complexity scales as \(\widetilde{O}(d^{2})\). We remark that this improves the sample complexity of its replicable counterpart based on the rounding trick from Impagliazzo et al. (2022) by a factor of \(d\), and it has applications outside the scope of our work (Impagliazzo et al., 2022; Esfandiari et al., 2023; Bun et al., 2023; Kalavasis et al., 2023). This algorithm is inspired by the Gaussian mechanism from the Differential Privacy (DP) literature (Dwork et al., 2014). Building upon this statistical query estimation oracle, we design computationally efficient TV-indistinguishable algorithms for \(Q\)-function estimation and policy estimation whose sample complexity scales as \(\widetilde{O}(N^{2})\). Interestingly, we show that by violating the locally random property and allowing for internal randomness that creates correlations across decisions, we can transform these TV indistinguishable algorithms to replicable ones without hurting their sample complexity, albeit at a cost of \(\widetilde{O}(\exp(N))\) running time.
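To make the role of internal randomness concrete, the following toy sketch contrasts the two mechanisms discussed above on one-dimensional mean estimation: randomized rounding to a grid whose offset comes from the shared internal randomness (the route to exact replicability, in the spirit of Impagliazzo et al. (2022)), and additive Gaussian noise (the route to TV indistinguishability). The grid width, seeds, and sample sizes are illustrative assumptions, not tuned constants from our analysis.

```python
import numpy as np

def replicable_mean(samples, grid_width, shared_seed):
    # Randomized rounding: snap the empirical mean to a grid whose offset
    # is drawn from the *shared* internal randomness. Two executions on
    # i.i.d. data then output the exact same value with high probability.
    offset = np.random.default_rng(shared_seed).uniform(0.0, grid_width)
    m = float(np.mean(samples))
    return offset + grid_width * round((m - offset) / grid_width)

def tv_style_mean(samples, noise_std):
    # Gaussian-mechanism flavour: perturb the estimate with fresh noise.
    # Outputs differ across executions but remain close in TV distance.
    return float(np.mean(samples)) + np.random.default_rng().normal(0.0, noise_std)

rng = np.random.default_rng(1)
seed = 7  # shared random string, e.g. a common seed across executions
run1 = replicable_mean(rng.binomial(1, 0.62, 50_000), 0.05, seed)
run2 = replicable_mean(rng.binomial(1, 0.62, 50_000), 0.05, seed)
print(run1 == run2)  # True unless the mean falls near a random cell boundary
```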
Our transformation is inspired by the main result of Kalavasis et al. (2023). We also conjecture that the true sample complexity of \(\rho\)-replicable policy estimation is indeed \(\widetilde{\Theta}(N^{2})\). Finally, we propose a novel relaxation of the previous notions of replicability. Roughly speaking, we say that an algorithm is _approximately replicable_ if, with high probability, when executed twice on the same MDP, it outputs policies that are close under a dissimilarity measure that is based on the _Renyi divergence_. We remark that this definition does not require sharing the internal randomness across the executions. Finally, we design an RL algorithm that is approximately replicable and outputs a near-optimal policy with \(\widetilde{O}(N)\) sample and time complexity. Table 1.1 and Table 1.2 summarize the sample and time complexity of \(Q\)-estimation and policy estimation, respectively, under different notions of replicability. We assume the algorithms in question have a constant probability of success. In Section 6, we further discuss the benefits and downsides of each of these notions.

### Related Works

**Replicability.** Pioneered by Impagliazzo et al. (2022), there has been a growing interest from the learning theory community in studying replicability as an algorithmic property. Esfandiari et al. (2023a,b) studied replicable algorithms in the context of multi-armed bandits and clustering. Recently, Bun et al. (2023) established equivalences between replicability and other notions of algorithmic stability such as differential privacy when the domain of the learning problem is finite, and provided some computational and statistical hardness results to obtain these equivalences, under cryptographic assumptions. Subsequently, Kalavasis et al. (2023) proposed a relaxation of the replicability definition of Impagliazzo et al. (2022), showed its statistical equivalence to the notion of replicability for countable domains2, and extended some of the equivalences from Bun et al. (2023) to countable domains. Chase et al. (2023) and Dixon et al. (2023) proposed a notion of _list-replicability_, where the output of the learner is not necessarily identical across two executions but is limited to a small list of choices.

Footnote 2: We remark that this equivalence for finite domains can also be obtained, implicitly, from the results of Bun et al. (2023).

**Reproducibility in RL.** Reproducing, interpreting, and evaluating empirical results in RL can be challenging since there are many sources of randomness in standard benchmark environments. Khetarpal et al. (2018) proposed a framework for evaluating RL to improve reproducibility. Another barrier to reproducibility is the unavailability of code and training details within technical reports. Indeed, Henderson et al. (2018) observed that both intrinsic (e.g. random seeds, environments) and extrinsic (e.g. hyperparameters, codebases) factors can contribute to difficulties in reproducibility. Tian et al. (2019) provided an open-source implementation of AlphaZero (Silver et al., 2017), a popular RL-based Go engine. We are not aware of any theoretical works that formally study reproducibility in RL.

**RL with a Generative Model.** The study of RL with a generative model was initiated by Kearns and Singh (1998), who provided algorithms with suboptimal sample complexity in the discount factor \(\gamma\). A long line of work (see, e.g. Gheshlaghi Azar et al. (2013), Wang (2017), Sidford et al. (2018a,b), Feng et al. (2019), Agarwal et al.
(2020), Li et al. (2020), and references therein) has led to (non-replicable) algorithms with minimax optimal sample complexity. Another relevant line of work, which culminated with the results of Even-Dar et al. (2002) and Mannor and Tsitsiklis (2004), studied the sample complexity of finding an \(\varepsilon\)-optimal arm in the multi-armed bandit setting with access to a generative model.

\begin{table}
\begin{tabular}{l c c} \hline \hline Property & Sample Complexity & Time Complexity \\ \hline Locally Random, Replicable & \(\tilde{\Theta}\left(\frac{N^{3}}{(1-\gamma)^{3}\varepsilon^{2}\rho^{2}}\right)\) & \(\tilde{\Theta}\left(\frac{N^{3}}{(1-\gamma)^{3}\varepsilon^{2}\rho^{2}}\right)\) \\ TV Indistinguishable & \(\tilde{O}\left(\frac{N^{2}}{(1-\gamma)^{3}\varepsilon^{2}\rho^{2}}\right)\) & \(\tilde{O}\left(\frac{\mathrm{poly}(N)}{(1-\gamma)^{3}\varepsilon^{2}\rho^{2}}\right)\) \\ Replicable (Through TV Indistinguishability) & \(\tilde{O}\left(\frac{N^{2}}{(1-\gamma)^{3}\varepsilon^{2}\rho^{2}}\right)\) & \(\tilde{O}\left(\frac{\mathrm{exp}(N)}{(1-\gamma)^{3}\varepsilon^{2}\rho^{2}}\right)\) \\ \hline \hline \end{tabular}
\end{table}
Table 1.1: Complexity Overview for \(Q\)-Estimation with Constant Probability of Success.

\begin{table}
\begin{tabular}{l c c} \hline \hline Property & Sample Complexity & Time Complexity \\ \hline Locally Random, Replicable & \(\tilde{O}\left(\frac{N^{3}}{(1-\gamma)^{5}\varepsilon^{2}\rho^{2}}\right)\) & \(\tilde{O}\left(\frac{N^{3}}{(1-\gamma)^{5}\varepsilon^{2}\rho^{2}}\right)\) \\ TV Indistinguishable & \(\tilde{O}\left(\frac{N^{2}}{(1-\gamma)^{5}\varepsilon^{2}\rho^{2}}\right)\) & \(\tilde{O}\left(\frac{\mathrm{poly}(N)}{(1-\gamma)^{5}\varepsilon^{2}\rho^{2}}\right)\) \\ Replicable (Through TV Indistinguishability) & \(\tilde{O}\left(\frac{N^{2}}{(1-\gamma)^{5}\varepsilon^{2}\rho^{2}}\right)\) & \(\tilde{O}\left(\frac{\mathrm{exp}(N)}{(1-\gamma)^{5}\varepsilon^{2}\rho^{2}}\right)\) \\ Approximately Replicable & \(\tilde{O}\left(\frac{N}{(1-\gamma)^{5}\varepsilon^{2}\rho^{2}}\right)\) & \(\tilde{O}\left(\frac{N}{(1-\gamma)^{5}\varepsilon^{2}\rho^{2}}\right)\) \\ \hline \hline \end{tabular}
\end{table}
Table 1.2: Complexity Overview for Policy Estimation with Constant Probability of Success.

## 2 Setting

In this section, we formally define the setting we are working in.

### Reinforcement Learning Setting

**(Discounted) Markov Decision Process.** We start by providing the definitions related to the _Markov Decision Process_ (MDP) that we study in this work.

**Definition 2.1** (Discounted Markov Decision Process).: A _(discounted) Markov decision process (MDP)_ is a 6-tuple \[M=\left(\mathcal{S},s_{0},\mathcal{A}=\bigcup_{s\in\mathcal{S}}\mathcal{A}^{s},P_{M},r_{M},\gamma\right).\] Here \(\mathcal{S}\) is a finite set of states, \(s_{0}\in\mathcal{S}\) is the initial state, \(\mathcal{A}^{s}\) is the finite set of available actions for state \(s\in\mathcal{S}\), and \(P_{M}(s^{\prime}\mid s,a)\) is the transition kernel, i.e., \(\forall(s,s^{\prime})\in\mathcal{S}^{2},\forall a\in\mathcal{A}^{s},P_{M}(s^{\prime}\mid s,a)\geq 0\) and \(\forall s\in\mathcal{S},\forall a\in\mathcal{A}^{s},\sum_{s^{\prime}\in\mathcal{S}}P_{M}(s^{\prime}\mid s,a)=1\). We denote the reward function3 by \(r_{M}:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\) and the discount factor by \(\gamma\in(0,1)\). The interaction between the agent and the environment works as follows.
At every step, the agent observes a state \(s\) and selects an action \(a\in\mathcal{A}^{s}\), yielding an instant reward \(r_{M}(s,a)\). The environment then transitions to a random new state \(s^{\prime}\in\mathcal{S}\) drawn according to the distribution \(P_{M}(\cdot\mid s,a)\). Footnote 3: We assume that the reward is deterministic and known to the learner. Our results hold for stochastic and unknown rewards with an extra (replicable) estimation step, which does not increase the overall sample complexity. **Definition 2.2** (Policy).: We say that a map \(\pi:\mathcal{S}\rightarrow\mathcal{A}\) is a _(deterministic) stationary policy_. When we consider randomized policies we overload the notation and denote \(\pi(s,a)\) the probability mass that policy \(\pi\) puts on action \(a\in\mathcal{A}^{s}\) in state \(s\in\mathcal{S}\). **Definition 2.3** (Value (\(V\)) Function).: The _value_\((V)\)_function_\(V^{\pi}_{M}:\mathcal{S}\rightarrow[0,1/(1-\gamma)]\) of a policy \(\pi\) with respect to the MDP \(M\) is given by \[V^{\pi}_{M}(s):=\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}r_{M}(s_{t},a_{t} )\mid s_{0}=s\right].\] Here \(a_{t}\sim\pi(s_{t})\) and \(s_{t+1}\sim P_{M}(\cdot\mid s_{t},a_{t})\). This is the expected discounted cumulative reward of a policy. **Definition 2.4** (Action-Value (\(Q\)) Function).: The _action-value (\(Q\)) function_\(Q^{\pi}_{M}:\mathcal{S}\times\mathcal{A}\rightarrow[0,1/(1-\gamma)]\) of a policy \(\pi\) with respect to the MDP \(M\) is given by \[Q^{\pi}_{M}(s,a):=r_{M}(s,a)+\gamma\cdot\sum_{s^{\prime}\in\mathcal{S}}P_{M}(s ^{\prime}\mid s,a)\cdot V^{\pi}_{M}(s^{\prime}).\] We write \(N:=\sum_{s\in\mathcal{S}}\lvert\mathcal{A}^{s}\rvert\) to denote the number of state-action pairs. We denote by \(\pi^{\star}\) the _optimal_ policy that maximizes the value function, i.e., \(\forall\pi,s\in\mathcal{S}\): \(V^{\star}(s):=V^{\pi^{\star}}(s)\geq V^{\pi}(s)\). We also define \(Q^{\star}(s,a):=Q^{\pi^{\star}}(s,a)\). This quantity is well defined since the fundamental theorem of RL states that there exists a (deterministic) policy \(\pi^{\star}\) that simultaneously maximizes \(V^{\pi}(s)\) among all policies \(\pi\), for all \(s\in\mathcal{S}\) (see e.g. Puterman (2014)). Since estimating the optimal policy from samples when \(M\) is unknown could be an impossible task, we aim to compute an \(\varepsilon\)-_approximately_ optimal policy for \(M\). **Definition 2.5** (Approximately Optimal Policy).: Let \(\varepsilon\in(0,1).\) We say that the policy \(\pi\) is \(\varepsilon\)-approximately optimal if \(\lVert V^{\star}-V^{\pi}\rVert_{\infty}\leq\varepsilon\). In the above definition, \(||\cdot||_{\infty}\) denotes the infinity norm of the vector, i.e., its maximum element in absolute value. **Generative Model.** Throughout this work, we assume we have access to a _generative model_ (first studied in Kearns and Singh (1998)) or a _sampler_\(G_{M}\), which takes as input a state-action pair \((s,a)\) and provides a sample \(s^{\prime}\sim P_{M}(\cdot\mid s,a)\). This widely studied fundamental RL setting allows us to focus on the sample complexity of planning over a long horizon without considering the additional complications of exploration. Since our focus throughout this paper is on the _statistical_ complexity of the problem, our goal is to achieve the desired algorithmic performance while minimizing the number of samples from the generator that the algorithm requires. 
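As a concrete illustration of Definitions 2.1–2.4 and of the generative model, the following sketch evaluates a fixed policy on a small synthetic MDP. For simplicity it assumes the same action set at every state; the sizes, the randomly drawn MDP, and the linear-solve evaluation are illustrative choices and not part of the algorithms analyzed in this work.

```python
import numpy as np

S, A, gamma = 4, 3, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(S, A))  # P[s, a] is a distribution over next states
r = rng.random((S, A))                      # deterministic rewards in [0, 1]
pi = rng.integers(0, A, size=S)             # a deterministic stationary policy

# Definition 2.3: V^pi solves the linear Bellman system (I - gamma * P_pi) V = r_pi.
P_pi = P[np.arange(S), pi]
r_pi = r[np.arange(S), pi]
V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)

# Definition 2.4: Q^pi(s, a) = r(s, a) + gamma * sum_{s'} P(s' | s, a) * V^pi(s').
Q = r + gamma * P @ V

def generative_model(s, a):
    # One call to the sampler G_M: draw s' ~ P_M(. | s, a).
    return rng.choice(S, p=P[s, a])
```

Of course, the learner studied below does not know \(P\) and can only access it through calls such as `generative_model(s, a)`, which is exactly why the sample complexity of policy estimation is the central quantity of interest.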
**Approximately Optimal Policy Estimator.** We now define what it means for an algorithm \(\mathscr{A}\) to be an approximately optimal policy estimator. **Definition 2.6** (\((\varepsilon,\delta)\)-Optimal Policy Estimator).: Let \(\varepsilon,\delta\in(0,1)^{2}\). A (randomized) algorithm \(\mathscr{A}\) is called an \((\varepsilon,\delta)\)-optimal policy estimator if there exists a number \(n:=n(\varepsilon,\delta)\in\mathbb{N}\) such that, for any MDP \(M\), when it is given at least \(n(\varepsilon,\delta)\) samples from the generator \(G_{M}\), it outputs a policy \(\hat{\pi}\) such that \(\left\|V^{\hat{\pi}}-V^{\star}\right\|_{\infty}\leq\varepsilon\) with probability at least \(1-\delta\). Here, the probability is over random draws from \(G_{M}\) and the internal randomness of \(\mathscr{A}\). Approximately optimal \(V\)-function estimators and \(Q\)-function estimators are defined similarly. _Remark 2.7_.: In order to allow flexibility to the algorithm, we do not restrict it to request the same amount of samples for every state-action pair. Thus \(n(\varepsilon,\delta)\) is a bound on the total number of samples that \(\mathscr{A}\) receives from \(G_{M}\). The algorithms we design request the same number of samples for every state-action pair, however, our lower bounds are stronger and hold without this restriction. When the MDP \(M\) is clear from context, we omit the subscript in all the previous quantities. ### Replicability **Definition 2.8** (Replicable Algorithm; (Impagliazzo et al., 2022)).: Let \(\mathscr{A}:\mathcal{I}^{n}\to\mathcal{O}\) be an \(n\)-sample randomized algorithm that takes as input elements from some domain \(\mathcal{I}\) and maps them to some co-domain \(\mathcal{O}\). Let \(\mathcal{R}\) denote the internal distribution over binary strings that \(\mathscr{A}\) uses. For \(\rho\in(0,1)\), we say that \(\mathscr{A}\) is \(\rho\)_-replicable_ if for any distribution \(\mathcal{D}\) over \(\mathcal{I}\) it holds that \[\mathbb{P}_{\bar{S},\bar{S}^{\prime}\sim\mathcal{D}^{n},\bar{r}\sim\mathcal{R }}\big{\{}\mathscr{A}(\bar{S};\bar{r})=\mathscr{A}(\bar{S}^{\prime};\bar{r}) \big{\}}\geq 1-\rho\,,\] where \(\mathscr{A}(\bar{S};\bar{r})\) denotes the (deterministic) output of \(\mathscr{A}\) when its input is \(\bar{S}\) and the realization of the internal random string is \(\bar{r}\). In the context of our work, we should think of \(\mathscr{A}\) as a randomized mapping that receives samples from the generator \(G\) and outputs policies. Thus, even when \(\bar{S}\) is fixed, \(\mathscr{A}(\bar{S})\) should be thought of as a random variable, whereas \(\mathscr{A}(\bar{S};\bar{r})\) is the _realization_ of this variable given the (fixed) \(\bar{S},\bar{r}\). We should think of \(\bar{r}\) as the shared randomness between the two executions, which can be implemented as a shared random seed. One of the most elementary statistical operations we may wish to make replicable is mean estimation. This operation can be phrased using the language of _statistical queries_. **Definition 2.9** (Statistical Query Oracle; (Kearns, 1998)).: Let \(\mathcal{D}\) be a distribution over the domain \(\mathcal{X}\) and \(\phi:\mathcal{X}^{n}\to\mathbb{R}\) be a statistical query with true value \[v^{\star}:=\lim_{n\to\infty}\phi(X_{1},\ldots,X_{n})\in\mathbb{R}.\] Here \(X_{i}\sim_{i.i.d.}\mathcal{D}\) and the convergence is understood in probability or distribution. Let \(\varepsilon,\delta\in(0,1)^{2}\). 
A _statistical query (SQ) oracle_ outputs a value \(v\) such that \(|v-v^{\star}|\leq\varepsilon\) with probability at least \(1-\delta\). The simplest example of a statistical query is the sample mean \[\phi(X_{1},\ldots,X_{n})=\frac{1}{n}\sum_{i=1}^{n}X_{i}.\] Impagliazzo et al. (2022) designed a replicable SQ oracle for sample mean queries with bounded co-domain (cf. Theorem B.1). The following definition is the formal instantiation of Definition 2.8 in the setting we are studying.

**Definition 2.10** (Replicable Policy Estimator).: Let \(\rho\in(0,1)\). A policy estimator \(\mathscr{A}\) that receives samples from a generator \(G\) and returns a policy \(\pi\) using internal randomness \(\mathcal{R}\) is \(\rho\)-replicable if for any MDP \(M\), when two sequences of samples \(\bar{S},\bar{S}^{\prime}\) are generated independently from \(G\), it holds that \[\mathbb{P}_{\bar{S},\bar{S}^{\prime}\sim G,\bar{r}\sim\mathcal{R}}\big\{\mathscr{A}(\bar{S};\bar{r})=\mathscr{A}(\bar{S}^{\prime};\bar{r})\big\}\geq 1-\rho.\]

To give the reader some intuition about the type of problems for which replicable algorithms under Definition 2.8 exist, we consider the fundamental task of estimating the mean of a random variable. Impagliazzo et al. (2022) provided a replicable mean estimation algorithm when the variable is bounded (cf. Theorem B.1). Esfandiari et al. (2023) generalized the result to simultaneously estimate the means of multiple random variables with unbounded co-domain under some regularity conditions on their distributions (cf. Theorem B.2). The idea behind both results is to use a rounding trick introduced in Impagliazzo et al. (2022), which allows one to sacrifice some accuracy of the estimator in favor of the replicability property. The formal statements of both results, which are useful for our work, are deferred to Appendix B.1.

### Local Randomness

Our algorithms in Section 3 satisfy a property which we call _locally random_. This roughly means that for every decision an algorithm makes based on external and internal randomness, the internal randomness is used once and discarded immediately after.

**Definition 2.11** (Locally Random).: Let \(\mathscr{A}=(\mathscr{A}^{(1)},\ldots,\mathscr{A}^{(N)}):\mathcal{I}^{n}\to\mathbb{R}^{N}\) be an \(n\)-sample randomized algorithm that takes as input elements from some domain \(\mathcal{I}\) and maps them to \(\mathbb{R}^{N}\). We say that \(\mathscr{A}\) is _locally random_ if:

1. The \(i\)-th output component \(\mathscr{A}^{(i)}(\bar{S};\bar{r}^{(i)})\) is a function of all samples \(\bar{S}\) but only its own internal random string \(\bar{r}^{(i)}\).
2. The sources \(\bar{r}^{(i)}\) of internal randomness are independent of each other and of the external samples \(\bar{S}\).

We will see that by restricting ourselves to locally random algorithms, it is necessary and sufficient to incur a sample cost of \(\tilde{\Theta}(N^{3})\) for replicable \(Q\)-estimation. However, by relaxing this restriction and allowing for internal randomness that is correlated, we can achieve \(\tilde{O}(N^{2})\) sample complexity.

## 3 Replicable \(Q\)-Function & Policy Estimation

Our aim in this section is to understand the sample complexity overhead that the replicability property imposes on the task of computing an \((\varepsilon,\delta)\)-approximately optimal policy. Without this requirement, Sidford et al. (2018), Agarwal et al. (2020), Li et al.
(2020) showed that \(\tilde{O}\left(N\log(\nicefrac{{1}}{{\delta}})/((1-\gamma)^{3}\varepsilon^{2 })\right)\) samples suffice to estimate such a policy, value function, and \(Q\)-function. Moreover, since Gheshlaghi Azar et al. (2013) provided matching lower bounds4, the sample complexity for this problem has been settled. Our main results in this section are tight sample complexity bounds for locally random \(\rho\)-replicable \((\varepsilon,\delta)\)-approximately optimal \(Q\)-function estimation as well as upper and lower bounds for \(\rho\)-replicable \((\varepsilon,\delta)\)-approximately policy estimation that differ by a factor of \(\nicefrac{{1}}{{(1-\gamma)^{2}}}.\) The missing proofs for this section can be found in Appendix C. We remark that in both the presented algorithms and lower bounds, we assume local randomness. For example, we assume that the internal randomness is drawn independently for each state-action pair for replicable \(Q\)-estimation. In the case where we allow for the internal randomness to be correlated across estimated quantities, we present an algorithm that overcomes our present lower bound in Section 4.3. However, the running time of this algorithm is exponential in \(N\). ### Computationally Efficient Upper Bound on the Sample Complexity We begin by providing upper bounds on the sample complexity for replicable estimation of an approximately optimal policy and \(Q\)-function. On a high level, we follow a two-step approach: 1. Start with black-box access to some \(Q\)-estimation algorithm that is not necessarily replicable (cf. Theorem C.2) to estimate some \(\widehat{Q}\) such that \(\left\lVert Q^{\star}-\widehat{Q}\right\rVert_{\infty}\leq\varepsilon_{0}\). 2. Apply the replicable rounding algorithm from Theorem B.2 as a post-processing step. The rounding step incurs some loss of accuracy in the estimated \(Q\)-function. Therefore, in order to balance between \(\rho\)-replicability and \((\varepsilon,\delta)\)-accuracy, we need to call the black-box oracle with an accuracy smaller than \(\varepsilon\), i.e. choose \(\varepsilon_{0}<O(\varepsilon\rho)\). This yields an increase in the sample complexity which we quantify below. For the proof details, see Appendix C.1. Recall that \(N\) is the number of state-action pairs of the MDP. **Theorem 3.1**.: _Let \(\varepsilon,\rho\in(0,1)^{2}\) and \(\delta\in(0,\nicefrac{{\rho}}{{3}})\). There is a locally random \(\rho\)-replicable algorithm that outputs an \(\varepsilon\)-optimal \(Q\)-function with probability at least \(1-\delta\). Moreover, it has time and sample complexity_ \[\widetilde{O}\left(\frac{N^{3}}{(1-\gamma)^{3}\varepsilon^{2}\rho^{2}}\log \frac{1}{\delta}\right).\] So far, we have provided a replicable algorithm that outputs an approximately optimal \(Q\) function. The main result of Singh and Yee (1994) shows that if \(\left\lVert\widehat{Q}-Q^{\star}\right\rVert_{\infty}\leq\varepsilon\), then the greedy policy with respect to \(\widehat{Q}\), i.e., \(\forall s\in\mathcal{S},\widehat{\pi}(s):=\operatorname*{argmax}_{a\in \mathcal{A}^{\star}}\widehat{Q}(s,a)\), is \(\nicefrac{{\varepsilon}}{{(1-\gamma)}}\)-approximately optimal (cf. Theorem C.3). Thus, if we want to obtain an \(\varepsilon\)-approximately optimal policy, it suffices to obtain a \((1-\gamma)\varepsilon\)-approximately optimal \(Q\)-function. This is formalized in Corollary 3.2. **Corollary 3.2**.: _Let \(\varepsilon,\rho\in(0,1)^{2}\) and \(\delta\in(0,\nicefrac{{\rho}}{{3}})\). 
There is a locally random \(\rho\)-replicable algorithm that outputs an \(\varepsilon\)-optimal policy with probability at least \(1-\delta\). Moreover, it has time and sample complexity_ \[\widetilde{O}\left(\frac{N^{3}}{(1-\gamma)^{5}\varepsilon^{2}\rho^{2}}\log \frac{1}{\delta}\right).\] Again, we defer the proof to Appendix C.1. ### Lower Bounds for Replicable \(Q\)-Function & Policy Estimation We now move on to the lower bounds and our approaches to obtain them. First, we describe a sample complexity lower bound for locally random \(\rho\)-replicable algorithms that seek to estimate \(Q^{\star}\). Then, we reduce policy estimation to \(Q\)-estimation. Since the dependence of the sample complexity on the confidence parameter \(\delta\) of the upper bound is at most polylogarithmic, the main focus of the lower bound is on the dependence on the size of the state-action space \(N\), the error parameter \(\varepsilon\), the replicability parameter \(\rho\), and the discount factor \(\gamma\). #### 3.2.1 Intuition of the \(Q\)-Function Lower Bound Our MDP construction that witnesses the lower bound relies on the sample complexity lower bound for locally random algorithms that replicably estimate the biases of _multiple independent_ coins. Impagliazzo et al. (2022) showed that any \(\rho\)-replicable algorithm that estimates the bias of a _single_ coin with accuracy \(\varepsilon\) requires at least \(\Omega(\nicefrac{{1}}{{\rho^{2}\varepsilon^{2}}})\) samples (cf. Theorem C.4). We generalize this result and derive a lower bound for any locally random \(\rho\)-replicable algorithm that estimates the biases of \(N\) coins with accuracy \(\varepsilon\) and constant probability of success. We discuss our approach in Section 3.2.2. Next, given some \(\varepsilon,\rho,\gamma,N\), we design an MDP for which estimating an approximately optimal \(Q\)-function is at least as hard as estimating \(N\) coins. The main technical challenge for this part of the proof is to establish the correct dependence on the parameter \(\gamma\) since it is not directly related to the coin estimation problem. We elaborate on it in Remark 3.6. _Remark 3.3_.: Our construction, combined with the non-replicable version of the coin estimation problem, can be used to simplify the construction of the non-replicable \(Q\)-estimation lower bound from Gheshlaghi Azar et al. (2013). #### 3.2.2 The Replicable Coin Estimation Problem Formally, the estimation problem, without the replicability requirement, is defined as follows. **Problem 3.4** (Multiple Coin Problem).: Fix \(q,\varepsilon,\delta\in(0,1)^{3}\) such that \(q-\varepsilon\in(\nicefrac{{1}}{{2}},1)\). Given sample access to \(N\) independent coins each with a bias of either \(q\) or \(q-\varepsilon\), determine the bias of every coin with confidence at least \(1-\delta\). We now informally state our main result for the multiple coin estimation problem, which could be useful in deriving replicability lower bounds beyond the scope of our work. See Theorem C.7 for the formal statement. Intuitively, this result generalizes Theorem C.4 to multiple instances. **Theorem 3.5** (Informal).: _Suppose \(\mathscr{A}\) is a locally random \(\rho\)-replicable algorithm for the multiple coin problem with a constant probability of success. 
Then, the sample complexity of \(\mathscr{A}\) is at least_ \[\Omega\left(\frac{N^{3}q(1-q)}{\varepsilon^{2}\rho^{2}}\right).\] Recall Yao's min-max principle (Yao, 1977), which roughly states that the expected cost of a randomized algorithm on its worst-case input is at least as expensive as the expected cost of any deterministic algorithm on random inputs chosen from some distribution. It is not clear how to apply Yao's principle directly, but we take inspiration from its essence and reduce the task of reasoning about a randomized algorithm with shared internal randomness to reasoning about a deterministic one with an additional layer of external randomness on top of the random flips of the coins. Consider now a deterministic algorithm \(g\) for distinguishing the bias of a single coin where the input bias is chosen uniformly in \([q-\varepsilon,q]\). That is, we first choose \(p\sim U[q-\varepsilon,q]\), then provide i.i.d. samples from \(\mathrm{Be}(p)\) to \(g\). We impose some boundary conditions: if \(p=q-\varepsilon\), \(g\) should output "-" with high probability and if \(p=q\), the algorithm should output "+" with high probability. We show that the probability of \(g\) outputting "+" varies smoothly with respect to the bias of the input coin. Thus, there is an interval \(I\subseteq(q-\varepsilon,q)\) such that \(g\) outputs "-" or "+" with almost equal probability and so the output of \(g\) is inconsistent across two executions with constant probability when \(p\) lands in this interval. By the choice of \(p\sim U[q-\varepsilon,q]\), if \(\ell(I)\) denotes the length of \(I\), then the output of \(g\) is inconsistent across two executions with probability at least \(\Omega\big{(}\nicefrac{{\ell(I)}}{{\varepsilon}}\big{)}\). Quantifying \(\ell(I)\) and rearranging yields the lower bound for a single coin. For the case of \(N\) independent coins, we use the pigeonhole principle to reduce the argument to the case of a single coin. The formal statement and proof of Theorem 3.5 is deferred to Appendix C.2. _Remark 3.6_.: The lower bound from Impagliazzo et al. (2022) for the single-coin estimation problem holds for the regime \(q,q-\varepsilon\in(\nicefrac{{1}}{{4}},\nicefrac{{3}}{{4}})\). We remove this constraint by analyzing the dependence of the lower bound on \(q\). When reducing \(Q\)-function estimation to the multiple coin problem, the restricted regime yields a lower bound proportional to \((1-\gamma)^{-2}\). In order to derive the stronger lower bound of \((1-\gamma)^{-3}\), we must be able to choose \(q\approx\gamma\) which can be arbitrarily close to \(1\). In Section 4, we show that allowing for non-locally random algorithms enables us to shave off a factor of \(N\) in the sample complexity. We also conjecture that this upper bound is tight. **Conjecture 3.7**.: Suppose \(\mathscr{A}(\bar{c}^{(1)},\ldots,\bar{c}^{(N)};\bar{\tau})\) is a randomized \(\rho\)-replicable algorithm for the multiple coin problem and has a constant probability of success. Then, the sample complexity of \(\mathscr{A}\) is at least \[\Omega\left(\frac{N^{2}q(1-q)}{\varepsilon^{2}\rho^{2}}\right).\] #### 3.2.3 A Lower Bound for Replicable \(Q\)-Function Estimation We now present the MDP construction that achieves the desired sample complexity lower bound. We define a family of MDPs \(\mathbb{M}\) as depicted in Figure 3.1. This particular construction was first presented by Mannor and Tsitsiklis (2004) and generalized by Gheshlaghi Azar et al. (2013); Feng et al. (2019). 
Any MDP \(M\in\mathbb{M}\) is parameterized by positive integers \(K_{M},L_{M}\), and some \(p_{M}^{(k,\ell)}\in[0,1]\) for \(k\in[K_{M}],\ell\in[L_{M}]\). The state space of \(M\) is the disjoint union5 of three sets \(\mathcal{S}=\mathcal{X}\sqcup\mathcal{Y}\sqcup\mathcal{Z}\), where \(\mathcal{X}\) consists of \(K\) states \(\{x_{1},\ldots,x_{K}\}\) and each of them has \(L\) available actions \(\{a_{1},\ldots,a_{L}\}=:\mathcal{A}\). All states in \(\mathcal{Y},\mathcal{Z}\) have a single action that the agent can take. Remark that each \(M\in\mathbb{M}\) has \(N=\sum_{s\in S}\lvert\mathcal{A}^{s}\rvert=4K_{M}L_{M}\).

Footnote 5: Denoted by \(\sqcup\).

For \(x\in\mathcal{X}\), by taking action \(a\in\mathcal{A}\), the agent transitions to a state \(y(x,a)\in\mathcal{Y}\) with probability \(1\). Let \(p_{M}(x_{k},a_{\ell}):=p_{M}^{(k,\ell)}\). For state \(y(x,a)\in\mathcal{Y}\), we transition back to \(y(x,a)\) with probability \(p_{M}(x,a)\) and to \(z(x,a)\in\mathcal{Z}\) with probability \(1-p_{M}(x,a)\). Finally, the agent always returns to \(z(x,a)\) for all \(z(x,a)\in\mathcal{Z}\). The reward function is \(r_{M}(s,a)=1\) if \(s\in\mathcal{X}\cup\mathcal{Y}\) and \(0\) otherwise. We remark that for every \(x\in\mathcal{X},a\in\mathcal{A}\), its \(Q^{\star}\) function can be computed in closed form by solving the Bellman optimality equation \[Q_{M}^{\star}(x,a)=1+\gamma\left[p_{M}(x,a)\cdot Q_{M}^{\star}(x,a)+(1-p_{M}(x,a))\cdot 0\right]=\frac{1}{1-\gamma p_{M}(x,a)}.\] Recall that we write \(N:=\sum_{s\in\mathcal{S}}\lvert\mathcal{A}^{s}\rvert\) to denote the total number of state-action pairs. Our main result in this section is the following.

Figure 3.1: The class of MDPs considered to prove the lower bound in Theorem 3.8.

**Theorem 3.8**.: _Let \(\rho,\varepsilon\in(0,1)^{2}\), \(\gamma\in(\nicefrac{{1}}{{2}},1)\), and \(\delta=\nicefrac{{1}}{{4}}\). Suppose \(\mathscr{A}\) is a locally random \(\rho\)-replicable algorithm that returns an estimate \(\widehat{Q}\) for any MDP with discount factor \(\gamma\) such that \(|\widehat{Q}(s,a)-Q^{\star}(s,a)|\leq\varepsilon\) with probability at least \(1-\nicefrac{{\delta}}{{6}}\) for each \(s\in\mathcal{S},a\in\mathcal{A}^{s}\). Then \(\mathscr{A}\) has a sample complexity of at least_ \[\Omega\left(\frac{N^{3}}{(1-\gamma)^{3}\varepsilon^{2}\rho^{2}}\right).\]

_Remark 3.9_.: If Conjecture 3.7 holds, we obtain a sample complexity lower bound of \[\Omega\left(\frac{N^{2}}{(1-\gamma)^{3}\varepsilon^{2}\rho^{2}}\right)\] for general randomized \(\rho\)-replicable algorithms for \(Q\) estimation.

On a high level, we argue that a locally random \(\rho\)-replicable algorithm \(\mathscr{A}\) for estimating the \(Q\) function of arbitrary MDPs up to accuracy \(\varepsilon\approx\nicefrac{{\varepsilon_{0}}}{{(1-\gamma)^{2}}}\) yields a locally random \(\rho\)-replicable algorithm for the multiple coin problem (cf. Problem 3.4) with tolerance approximately \(\varepsilon_{0}\approx(1-\gamma)^{2}\varepsilon\) when we choose \(q\approx\gamma\) in Theorem 3.5. We can then directly apply Theorem 3.5 to conclude the proof. See Appendix C.3 for details.

#### 3.2.4 A Lower Bound for Replicable Policy Estimation

Having established the lower bound for locally random replicable \(Q\)-function estimation, we now present our lower bound for deterministic replicable policy estimation.
We argue that a deterministic \(\rho\)-replicable algorithm for optimal policy estimation yields a locally random \(\rho\)-replicable algorithm for optimal \(Q\)-function estimation after some post-processing that has sample complexity \(\tilde{o}\left(N^{3}/\varepsilon^{2}\rho^{2}(1-\gamma)^{3}\right)\). It follows that the sample complexity lower bound we derived for \(Q\)-function estimation holds for policy estimation as well. In order to describe the post-processing step, we employ a locally random replicable rounding algorithm (cf. Theorem B.2) that is provided in Esfandiari et al. (2023b). Intuitively, we show that estimating the value function \(V^{\pi}\) of \(\pi\) reduces to estimating the optimal \(Q\)-function of some single-action MDP. Given such an estimate \(\hat{V}^{\pi}\), we can then estimate \(Q^{\pi}\) using the simple sample mean query given sufficient samples from the generative model. Lastly, the locally random replicable rounding subroutine from Theorem B.1 is used as a post-processing step. We now state the formal lower bound regarding the sample complexity of deterministic replicable policy estimation. Its proof follows by combining the \(Q\)-function estimation lower bound and the reduction we described above. For the full proof, see Appendix C.4.

**Theorem 3.10**.: _Let \(\varepsilon,\rho\in(0,1)^{2}\) and \(\delta=\nicefrac{{1}}{{4}}\). Suppose \(\mathscr{A}\) is a deterministic \(\rho\)-replicable algorithm that outputs a randomized policy \(\pi\) such that \(|V^{\pi}(s)-V^{\star}(s)|\leq\varepsilon\) with probability at least \(1-\nicefrac{{\delta}}{{12}}\) for each \(s\in\mathcal{S}\). Then \(\mathscr{A}\) has a sample complexity of at least_ \[\Omega\left(\frac{N^{3}}{(1-\gamma)^{3}\varepsilon^{2}\rho^{2}}\right).\]

_Remark 3.11_.: If Conjecture 3.7 holds, we obtain a sample complexity lower bound of \[\Omega\left(\frac{N^{2}}{(1-\gamma)^{3}\varepsilon^{2}\rho^{2}}\right)\] for general randomized \(\rho\)-replicable algorithms for policy estimation.

## 4 TV Indistinguishable Algorithms for \(Q\)-Function and Policy Estimation

In this section, we present an algorithm with an improved sample complexity for replicable \(Q\)-function estimation and policy estimation. Our approach consists of several steps. First, we design a computationally efficient SQ algorithm for answering \(d\) statistical queries that satisfies the _total variation (TV) indistinguishability_ property (Kalavasis et al., 2023) (cf. Definition 4.1), which can be viewed as a relaxation of replicability. The new SQ algorithm has an improved sample complexity compared to its replicable counterpart we discussed previously. Using this oracle, we show how we can design computationally efficient \(Q\)-function estimation and policy estimation algorithms that satisfy the TV indistinguishability definition and have an improved sample complexity by a factor of \(N\) compared to the ones in Section 3.1. Then, by describing a specific implementation of its _internal_ randomness, we make the algorithm replicable. Unfortunately, this step incurs an exponential cost in the computational complexity of the algorithm with respect to the cardinality of the state-action space. We emphasize that the reason we are able to circumvent the lower bound of Section 3.2 is that we use a specific source of internal randomness that creates correlations across the random choices of the learner. Our result reaffirms the observation made by Kalavasis et al.
(2023) that the same learning algorithm, i.e., input \(\rightarrow\) output mapping, can be replicable under one implementation of its internal randomness but not replicable under a different one. First, we state the definition of TV indistinguishability from Kalavasis et al. (2023).

**Definition 4.1** (TV Indistinguishability; (Kalavasis et al., 2023)).: A learning rule \(\mathscr{A}\) is \(n\)-sample \(\rho\)-TV indistinguishable if for any distribution over inputs \(\mathcal{D}\) and two independent samples \(S,S^{\prime}\sim\mathcal{D}^{n}\) it holds that \[\mathbb{E}_{S,S^{\prime}\sim\mathcal{D}^{n}}[d_{\mathrm{TV}}(\mathscr{A}(S),\mathscr{A}(S^{\prime}))]\leq\rho\,.\]

In their work, Kalavasis et al. (2023) showed how to transform any \(\rho\)-TV indistinguishable algorithm to a \(2\rho/(1+\rho)\)-replicable one when the input domain is _countable_. Importantly, this transformation does not change the input \(\rightarrow\) output mapping that is induced by the algorithm. A similar transformation for finite domains can also be obtained by the results in Bun et al. (2023). We emphasize that neither of these two transformations is computationally efficient. Moreover, Bun et al. (2023) give cryptographic evidence that there might be an inherent computational hardness to obtaining the transformation.

### TV Indistinguishable Estimation of Multiple Statistical Queries

We are now ready to present a TV-indistinguishable algorithm for estimating \(d\) independent statistical queries. The high-level approach is as follows. First, we estimate each statistical query up to accuracy \(\nicefrac{{\varepsilon\rho}}{{\sqrt{d}}}\) using black-box access to the SQ oracle and we get an estimate \(\widehat{\mu}_{1}\in[0,1]^{d}\). Then, the output of the algorithm is drawn from \(\mathcal{N}(\widehat{\mu}_{1},\varepsilon^{2}I_{d})\). Since the estimated mean of each query is accurate up to \(\nicefrac{{\varepsilon\rho}}{{\sqrt{d}}}\) and the variance is \(\varepsilon^{2}\), we can see that, with high probability, the estimate of each query will be accurate up to \(O(\varepsilon)\). To argue about the TV indistinguishability property, we first notice that, with high probability across the two executions, the estimate \(\widehat{\mu}_{2}\in[0,1]^{d}\) satisfies \(||\widehat{\mu}_{1}-\widehat{\mu}_{2}||_{\infty}\leq 2\rho\cdot\varepsilon/\sqrt{d}\). Then, we can bound the TV distance of the output of the algorithm as \(d_{\mathrm{TV}}\left(\mathcal{N}(\widehat{\mu}_{1},\varepsilon^{2}I_{d}),\mathcal{N}(\widehat{\mu}_{2},\varepsilon^{2}I_{d})\right)\leq O(\rho)\) (Gupta, 2020). We underline that this behavior is reminiscent of the advanced composition theorem in the Differential Privacy (DP) literature (see e.g., Dwork et al. (2014)) and our algorithm can be viewed as an extension of the Gaussian mechanism from the DP line of work to the replicability setting. This algorithm has applications outside the scope of our work since multiple statistical query estimation is a subroutine widely used in the replicability line of work (Impagliazzo et al., 2022; Esfandiari et al., 2023a,b; Bun et al., 2023; Kalavasis et al., 2023). This discussion is formalized in the following theorem.

**Theorem 4.2** (TV Indistinguishable SQ Oracle for Multiple Queries).: _Let \(\varepsilon,\rho\in(0,1)^{2}\) and \(\delta\in(0,\nicefrac{{\rho}}{{5}})\). Let \(\phi_{1},\ldots,\phi_{d}\) be \(d\) statistical queries with co-domain \([0,1]\)._
_Assume that we can simultaneously estimate the true values of all \(\phi_{i}\)'s with accuracy \(\varepsilon\) and confidence \(\delta\) using \(n(\varepsilon,\delta)\) total samples. Then, there exists a \(\rho\)-\(\mathrm{TV}\) indistinguishable algorithm (Algorithm 4.1) that requires at most_ \[n\left(\frac{\varepsilon\rho}{2\sqrt{8d\cdot\log(4d/\delta)}},\frac{\delta}{2}\right)\] _many samples to output estimates \(\widehat{v}_{1},\ldots,\widehat{v}_{d}\) of the true values \(v_{1},\ldots,v_{d}\) that guarantee_ \[\max_{i\in[d]}\lvert\widehat{v}_{i}-v_{i}\rvert\leq\varepsilon\,,\] _with probability at least \(1-\delta\)._

```
1:\(\widehat{\mu}=(\widehat{\mu_{1}},\ldots,\widehat{\mu_{d}})\leftarrow\) StatisticalQueryOracles \(\left(\frac{\varepsilon\rho}{2\sqrt{8d\cdot\log(4d/\delta)}},\frac{\delta}{2}\right)\)
2:Sample \(\widehat{v}\sim\mathcal{N}(\widehat{\mu},\varepsilon^{2}/(8\cdot\log(4d/\delta))\cdot I_{d})\)
3:Output \(\widehat{v}\)
```
**Algorithm 4.1** TV Indistinguishable Oracle for Multiple Query Estimation

### TV Indistinguishable \(Q\)-Function and Policy Estimation

Equipped with Algorithm 4.1, we are now ready to present a TV-indistinguishable algorithm for \(Q\)-function estimation and policy estimation with superior sample complexity compared to the one in Section 3.1. The idea is similar to the one in Section 3.1. We start with black-box access to an algorithm for \(Q\)-function estimation, and then we apply the Gaussian mechanism (Algorithm 4.1). We remark that the running time of this algorithm is polynomial in all the parameters of the problem. Recall that \(N\) is the number of state-action pairs of the MDP.

**Theorem 4.3**.: _Let \(\varepsilon,\rho\in(0,1)^{2}\) and \(\delta\in(0,\nicefrac{{\rho}}{{5}})\). There is a \(\rho\)-\(\mathrm{TV}\) indistinguishable algorithm that outputs an \(\varepsilon\)-optimal \(Q\)-function with probability at least \(1-\delta\). Moreover, it has time and sample complexity_ \[\widetilde{O}\left(\frac{N^{2}}{(1-\gamma)^{3}\varepsilon^{2}\rho^{2}}\log\frac{1}{\delta}\right).\]

Proof.: The proof follows by combining the guarantees of Sidford et al. (2018a) (Theorem C.2) and Theorem 4.2. To be more precise, Theorem C.2 shows that in order to compute some \(\widehat{Q}\) such that \[\left\|\widehat{Q}-Q\right\|_{\infty}\leq\varepsilon\,,\] one needs \(\widetilde{O}\left(\frac{N}{\varepsilon^{2}(1-\gamma)^{3}}\log(1/\delta)\right)\) samples. Thus, in order to apply Theorem 4.2, the sample complexity becomes \[\widetilde{O}\left(\frac{N^{2}}{(1-\gamma)^{3}\varepsilon^{2}\rho^{2}}\log\frac{1}{\delta}\right)\,.\]

Next, we describe a TV indistinguishable algorithm that enjoys similar sample complexity guarantees. Similarly as before, we use the main result of Singh and Yee (1994), which shows that if \(\left\|\widehat{Q}-Q^{\star}\right\|_{\infty}\leq\varepsilon\), then the greedy policy with respect to \(\widehat{Q}\), i.e., \(\forall s\in\mathcal{S},\widehat{\pi}(s):=\operatorname*{argmax}_{a\in\mathcal{A}^{s}}\widehat{Q}(s,a)\), is \(\nicefrac{{\varepsilon}}{{(1-\gamma)}}\)-approximately optimal (cf. Theorem C.3). Thus, if we want to obtain an \(\varepsilon\)-approximately optimal policy, it suffices to obtain a \((1-\gamma)\varepsilon\)-approximately optimal \(Q\)-function. The indistinguishability guarantee follows from the data-processing inequality. This is formalized in Corollary 4.4.

**Corollary 4.4**.: _Let \(\varepsilon,\rho\in(0,1)^{2}\) and \(\delta\in(0,\nicefrac{{\rho}}{{5}})\)._
_There is a \(\rho\)-\(\mathrm{TV}\) indistinguishable algorithm that outputs an \(\varepsilon\)-optimal policy with probability at least \(1-\delta\). Moreover, it has time and sample complexity_ \[\widetilde{O}\left(\frac{N^{2}}{(1-\gamma)^{5}\varepsilon^{2}\rho^{2}}\log\frac{1}{\delta}\right).\]

### From TV Indistinguishability to Replicability

We now describe how we can transform the TV indistinguishable algorithms we provided into replicable ones. As we alluded to before, this transformation does not hurt the sample complexity, but requires exponential time in the state-action space. Our transformation is based on the approach proposed by Kalavasis et al. (2023), which holds when the input domain is _countable_. Its main idea is that when two random variables follow distributions that are \(\rho\)-close in TV distance, then there is a way to couple them using only _shared randomness_. The implementation of this coupling is based on the _Poisson point process_ and can be thought of as a generalization of von Neumann's rejection-based sampling to handle more general domains. We underline that in general spaces without structure it is not yet known how to obtain such a coupling. However, even though the input domain of the Gaussian mechanism is _uncountable_ and the result of Kalavasis et al. (2023) does not apply directly in our setting, we are able to obtain a similar transformation as they did. The main step required to perform this transformation is to find a reference measure with respect to which the algorithm is _absolutely continuous_. We provide these crucial measure-theoretic definitions below.

**Definition 4.5** (Absolute Continuity).: Consider two measures \(P,\mathcal{P}\) on a \(\sigma\)-algebra \(\mathcal{B}\) of subsets of \(W\). We say that \(P\) is absolutely continuous with respect to \(\mathcal{P}\) if for any \(E\in\mathcal{B}\) such that \(\mathcal{P}(E)=0\), it holds that \(P(E)=0\).

Recall that \(\mathscr{A}(S)\) denotes the _distribution_ over outputs, when the input to the algorithm is \(S\).

**Definition 4.6**.: Given a learning rule \(\mathscr{A}\) and reference probability measure \(\mathcal{P}\), we say that \(\mathscr{A}\) is absolutely continuous with respect to \(\mathcal{P}\) if for any input \(S\), \(\mathscr{A}(S)\) is absolutely continuous with respect to \(\mathcal{P}\).

We emphasize that this property should hold for every fixed sample \(S\), i.e., the randomness of the samples is not taken into account. We now define what it means for two learning rules to be _equivalent_.

**Definition 4.7** (Equivalent Learning Rules).: Two learning rules \(\mathscr{A},\mathscr{A}^{\prime}\) are _equivalent_ if for every fixed sample \(S\), it holds that \(\mathscr{A}(S)\overset{d}{=}\mathscr{A}^{\prime}(S)\), i.e., for the same input they induce the same distribution over outputs.

Using a coupling technique based on the Poisson point process, we can convert the TV indistinguishable learning algorithms we have proposed so far to equivalent ones that are replicable. See Algorithm 4.2 for a description of how to output a sample from this coupling. Let us view \(\mathscr{A}(S;r),\mathscr{A}(S^{\prime};r)\) as random vectors with small TV distance. The idea is to implement the shared internal randomness \(r\) using rejection sampling so that the "accepted" sample will be the same across two executions with high probability.
```
1:Input: collection of random vectors \(\mathcal{S}=\{X\}\) absolutely continuous with respect to a \(\sigma\)-finite measure \(\mu\), with densities \(f_{X}:\mathbb{R}^{d}\to\mathbb{R}\), some \(X\in\mathcal{S}\)
2:Let \(\mathcal{R}\) denote the Poisson point process over \(\mathbb{R}^{d}\times\mathbb{R}_{+}\times\mathbb{R}_{+}\) with intensity \(\mu\times\mathrm{Leb}\times\mathrm{Leb}\).
3:Sample \(r:=\{(x_{i},y_{i},t_{i}):i\in\mathbb{N}\}\sim\mathcal{R}\).
4:Let \(i^{\star}\leftarrow\operatorname*{argmin}_{i\in\mathbb{N}}\{t_{i}:f_{X}(x_{i})>y_{i}\}\).
5:Output \(x_{i^{\star}}\) as a sample for \(X\).
```
**Algorithm 4.2** Sampling from Pairwise Optimal Coupling; (Angel and Spinka, 2019)

For some background regarding the Poisson point process and the technical tools we use, we refer the reader to Appendix B.2. Importantly, for every \(S\), the output \(\mathscr{A}(S)\) of the algorithms we have proposed in Section 4.1 and Section 4.2 follows a Gaussian distribution, which is absolutely continuous with respect to the Lebesgue measure. Furthermore, the Lebesgue measure is \(\sigma\)-finite, so we can use the coupling algorithm (cf. Algorithm 4.2) of Angel and Spinka (2019), whose guarantees are stated in Theorem B.4. We are now ready to state the result regarding the improved \(\rho\)-replicable SQ oracle for multiple queries. Its proof is an adaptation of the main result of Kalavasis et al. (2023).

**Theorem 4.8** (Replicable SQ Oracle for Multiple Queries).: _Let \(\varepsilon,\rho\in(0,1)^{2}\) and \(\delta\in(0,\nicefrac{{\rho}}{{5}})\). Let \(\phi_{1},\ldots,\phi_{d}\) be \(d\) statistical queries with co-domain \([0,1]\). Assume that we can simultaneously estimate the true values of all \(\phi_{i}\)'s with accuracy \(\varepsilon\) and confidence \(\delta\) using \(n(\varepsilon,\delta)\) total samples. Then, there exists a \(\rho\)-replicable algorithm that requires at most_ \[n\left(\frac{\varepsilon\rho}{4\sqrt{8d\cdot\log(4d/\delta)}},\frac{\delta}{2}\right)\] _many samples to output estimates \(\widehat{v}_{1},\ldots,\widehat{v}_{d}\) of the true values \(v_{1},\ldots,v_{d}\) with the guarantee that_ \[\max_{i\in[d]}\lvert\widehat{v}_{i}-v_{i}\rvert\leq\varepsilon\,,\] _with probability at least \(1-\delta\)._

By using an identical argument, we can obtain \(\rho\)-replicable algorithms for \(Q\)-function estimation and policy estimation. Recall that \(N\) is the number of state-action pairs of the MDP.

**Theorem 4.9**.: _Let \(\varepsilon,\rho\in(0,1)^{2}\) and \(\delta\in(0,\nicefrac{{\rho}}{{4}})\). There is a \(\rho\)-replicable algorithm that outputs an \(\varepsilon\)-optimal \(Q\)-function with probability at least \(1-\delta\). Moreover, it has sample complexity_ \[\widetilde{O}\left(\frac{N^{2}}{(1-\gamma)^{3}\varepsilon^{2}\rho^{2}}\log\frac{1}{\delta}\right).\]

**Corollary 4.10**.: _Let \(\varepsilon,\rho\in(0,1)^{2}\) and \(\delta\in(0,\nicefrac{{\rho}}{{4}})\). There is a \(\rho\)-replicable algorithm that outputs an \(\varepsilon\)-optimal policy with probability at least \(1-\delta\). Moreover, it has sample complexity_ \[\widetilde{O}\left(\frac{N^{2}}{(1-\gamma)^{5}\varepsilon^{2}\rho^{2}}\log\frac{1}{\delta}\right).\]

_Remark 4.11_ (Coordinate-Wise Coupling).: Since our algorithms add independent Gaussian noise to each of the estimates, a first approach to achieve the coupling using only shared randomness would be to construct a pairwise coupling between each estimate.
In the context of multiple statistical query estimation, this would mean that we couple the estimate of the \(i\)-th query in the first execution with the estimate of the \(i\)-th query in the second execution. Unfortunately, even though this coupling is computationally efficient to implement, it does not give us the desired sample complexity guarantees. To see this, notice that when the TV distance of the estimates across each coordinate is \(O(\rho)\), under this pairwise coupling the probability that at least one of the estimates will be different across the two executions is \(O(d\cdot\rho)\). However, the TV distance of the \(d\)-dimensional Gaussians is \(O(\sqrt{d}\cdot\rho)\), and this is the reason why the more complicated coupling we propose achieves better sample complexity guarantees. Our results reaffirm the observation made by Kalavasis et al. (2023) that the replicability property and the sample complexity of an algorithm are heavily tied to the implementation of its internal randomness, which can lead to a substantial computational overhead. ## 5 Approximately Replicable Policy Estimation The definitions of replicability (cf. Definition 2.10, Definition 4.1) we have discussed so far suffer from a significant sample complexity blow-up in terms of the cardinality of the state-action space, which can be prohibitive in many settings of interest. In this section, we propose _approximate replicability_, a relaxation of these definitions, and show that this property can be achieved with a significantly milder sample complexity compared to (exact) replicability. Moreover, this definition does not require shared internal randomness across the executions of the algorithm. First, we define a general notion of _approximate_ replicability as follows. **Definition 5.1** (Approximate Replicability).: Let \(\mathcal{X},\mathcal{Y}\) be the input and output domains, respectively. Let \(\kappa:\mathcal{Y}\times\mathcal{Y}\to\mathbb{R}_{\geq 0}\) be some distance function on \(\mathcal{Y}\) and let \(\rho_{1},\rho_{2}\in(0,1)^{2}\). We say that an algorithm \(\mathscr{A}\) is \((\rho_{1},\rho_{2})\)-approximately replicable with respect to \(\kappa\) if for any distribution \(\mathcal{D}\) over \(\mathcal{X}\) it holds that \[\mathbb{P}_{S,S^{\prime}\sim\mathcal{D}^{n},r,r^{\prime}\sim\mathcal{R}}\{ \kappa(\mathscr{A}(S;r),\mathscr{A}(S^{\prime};r^{\prime}))\geq\rho_{1}\} \leq\rho_{2}\,.\] In words, this relaxed version of Definition 2.8 requires that the outputs of the algorithm, when executed on two sets of i.i.d. data, using _independent_ internal randomness across the two executions, are close under some appropriate distance measure. In the context of our work, the output of the learning algorithm is some policy \(\pi:\mathcal{S}\to\Delta(\mathcal{A})\), where \(\Delta(\mathcal{A})\) denotes the probability simplex over \(\mathcal{A}\). Thus, it is natural to instantiate \(\kappa\) as some _dissimilarity measure_ of distributions, like the total variation (TV) distance or the Renyi divergence. For the exact definition of these dissimilarity measures, we refer the reader to Appendix A. We now state the definition of an approximately replicable policy estimator. **Definition 5.2** (Approximately Replicable Policy Estimator).: Let \(\mathscr{A}\) be an algorithm that takes as input samples of state-action pair transitions and returns a policy \(\pi\). Let \(\kappa\) be some dissimilarity measure on \(\Delta(\mathcal{A})\) and let \(\rho_{1},\rho_{2}\in(0,1)^{2}\). 
We say that \(\mathscr{A}\) is \((\rho_{1},\rho_{2})\)-approximately replicable if for any MDP \(M\) it holds that \[\mathbb{P}_{S,S^{\prime}\sim G,r,r^{\prime}\sim\mathcal{R}}\left\{\max_{s\in \mathcal{S}}\kappa(\pi(s),\pi^{\prime}(s))\geq\rho_{1}\right\}\leq\rho_{2}\,,\] where \(G\) is the generator of state-action pair transitions, \(\mathcal{R}\) is the source of internal randomness of \(\mathscr{A}\), \(\pi\) is the output of \(\mathscr{A}\) on input \(S,r\), and \(\pi^{\prime}\) is its output on input \(S^{\prime},r^{\prime}\). To the best of our knowledge, the RL algorithms that have been developed for the model we are studying do not satisfy this property. Nevertheless, many of them compute an estimate \(Q\) with the promise that \(\left\|Q-Q^{\star}\right\|_{\infty}\leq\varepsilon\) [22, 23, 24]. Thus, it is not hard to see that if we run the algorithm twice on independent data with independent internal randomness, we have that \(\left\|Q-Q^{\prime}\right\|_{\infty}\leq 2\varepsilon\). This is exactly the main property that we need in order to obtain approximately replicable policy estimators. The key idea is that instead of outputting the greedy policy with respect to this \(Q\)-function, we output a policy given by some _soft-max_ rule. Such a rule is a mapping \(\mathbb{R}_{\geq 0}^{\mathcal{A}}\to\Delta(\mathcal{A})\) that achieves two desiderata: (i) the distribution over the actions is "stable" with respect to perturbations of the \(Q\)-function, and (ii) for every \(s\in\mathcal{S}\), the value \(V^{\pi}(s)\) of the policy that is induced by this mapping is "close" to \(V^{\star}(s)\). Formally, the stability of the soft-max rule is captured through its Lipschitz constant (cf. Definition A.3). In our setting, this means that whenever the two functions \(Q,Q^{\prime}\) are close under some distance measure (e.g., the \(\ell_{\infty}\) norm), the policies that are induced by the soft-max rule are close under some (potentially different) dissimilarity measure. The approximation guarantees of soft-max rules are captured by the following definition. **Definition 5.3** (Soft-Max Approximation; [10]).: Let \(\varepsilon>0.\) A soft-max function \(f:\mathbb{R}_{\geq 0}^{\mathcal{A}}\to\Delta(\mathcal{A})\) is _\(\varepsilon\)-approximate_ if for all \(x\in\mathbb{R}^{\mathcal{A}}\), \(\langle f(x),x\rangle\geq\max_{a\in\mathcal{A}}x_{a}-\varepsilon\). In this work, we focus on the soft-max rule that is induced by the exponential function (ExpSoftMax), which has been studied in several application domains (Gibbs, 1902; McSherry and Talwar, 2007; Huang and Kannan, 2012; Dwork et al., 2014; Gao and Pavel, 2017). Recall that \(\pi(s,a)\) denotes the probability mass that policy \(\pi\) puts on action \(a\in\mathcal{A}^{s}\) in state \(s\in\mathcal{S}\). Given some \(\lambda>0\) and \(Q(s,a)\in\mathbb{R}^{\mathcal{S}\times\mathcal{A}}\), the induced randomized policy \(\pi\) is given by \[\pi(s,a)=\frac{\exp\{\lambda Q(s,a)\}}{\sum_{a^{\prime}\in\mathcal{A}^{s}}\exp \{\lambda Q(s,a^{\prime})\}}\,. \tag{1}\] For a discussion of the advantages of using more sophisticated soft-max rules, like the one developed in Epasto et al. (2020), we refer the reader to Appendix E.1. We now describe our results when we consider approximate replicability with respect to the Renyi divergence and the total variation (TV) distance. At a high level, our approach is divided into two steps: 1. Run some \(Q\)-learning algorithm (e.g., 
Sidford et al., 2018; Agarwal et al., 2020; Li et al., 2020) to estimate some \(\widehat{Q}\) such that \(\left\|Q^{\star}-\widehat{Q}\right\|_{\infty}\leq\varepsilon\). 2. Estimate the policy using some soft-max rule. One advantage of this approach is that it allows for flexibility and different implementations of these steps that better suit the application domain. An important lemma we use is the following. **Lemma 5.4** (Exponential Soft-Max Approximation Guarantee; (McSherry and Talwar, 2007)).: _Let \(\varepsilon\in(0,1),\alpha,p\geq 1\), and set \(\lambda=\nicefrac{{\log(d)}}{{\varepsilon}}\), where \(d\) is the ambient dimension of the input domain. Then, ExpSoftMax with parameter \(\lambda\) is \(\varepsilon\)-approximate and \(2\lambda\)-Lipschitz continuous (cf. Definition A.3) with respect to \((\ell_{p},D_{\alpha})\), where \(D_{\alpha}\) is the Renyi divergence of order \(\alpha\)._ This is an important building block of our proof. However, it is not sufficient on its own to bound the gap between the ExpSoftMax policy and the optimal one. This is handled in the next lemma, whose proof is postponed to Appendix E. Essentially, it can be viewed as an extension of the result in Singh and Yee (1994) that handles the soft-max policy instead of the greedy one. **Lemma 5.5** (Soft-Max Policy vs Optimal Policy).: _Let \(\varepsilon_{1},\varepsilon_{2}\in(0,1)^{2}\). Let \(\widehat{Q}\in\mathbb{R}^{\mathcal{S}\times\mathcal{A}}\) be such that \(\left\|\widehat{Q}-Q^{\star}\right\|_{\infty}\leq\varepsilon_{1}\). Let \(\hat{\pi}\) be the ExpSoftMax policy with respect to \(\widehat{Q}\) using parameter \(\lambda=\nicefrac{{\log|\mathcal{A}|}}{{\varepsilon_{2}}}\). Then, \(\left\|V^{\hat{\pi}}-V^{\star}\right\|_{\infty}\leq\frac{2\varepsilon_{1}+ \varepsilon_{2}}{1-\gamma}\)._ Combining Lemma 5.4 and Lemma 5.5 yields the approximate replicability guarantees we seek. The formal proof of the following result is postponed to Appendix E. Recall that we write \(N:=\sum_{s\in\mathcal{S}}\lvert\mathcal{A}^{s}\rvert\) to denote the total number of state-action pairs. **Theorem 5.6**.: _Let \(\alpha\geq 1,\gamma,\delta,\rho_{1},\rho_{2}\in(0,1)^{4}\), and \(\varepsilon\in\left(0,(1-\gamma)^{-1/2}\right)\). There is a \((\rho_{1},\rho_{2})\)-approximately replicable algorithm \(\mathscr{A}\) with respect to the Renyi divergence \(D_{\alpha}\) such that given access to a generator \(G\) for any MDP \(M\), it outputs a policy \(\hat{\pi}\) for which \(\left\|V^{\hat{\pi}}-V^{\star}\right\|_{\infty}\leq\varepsilon\) with probability at least \(1-\delta\). Moreover, \(\mathscr{A}\) has time and sample complexity_ \[\widetilde{O}\left(\frac{N}{(1-\gamma)^{5}\varepsilon^{2}\rho_{1}^{2}}\log \frac{1}{\min\{\delta,\rho_{2}\}}\right)\,.\] _Remark 5.7_ (Replicability Under TV Distance).: It is known that the TV distance between two probability distributions is upper bounded by \(D_{\infty}\). Thus, Theorem 5.6 provides the same guarantees when we want to establish replicability with respect to the TV distance. _Remark 5.8_ (Sample Complexity Dependence on Replicability Parameters).: Notice that the sample complexity in Theorem 5.6 depends differently on the two replicability parameters of Definition 5.2. In particular, the dependence on \(\rho_{2}\) is \(\text{polylog}(\nicefrac{{1}}{{\rho_{2}}})\), whereas the dependence on \(\rho_{1}\) is \(\text{poly}(\nicefrac{{1}}{{\rho_{1}}})\). 
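To make the two-step recipe concrete, the following is a minimal Python sketch of the approximately replicable estimator. It assumes a tabular \(\widehat{Q}\) coming from any off-the-shelf method with an \(\ell_{\infty}\) guarantee (step 1), assumes for simplicity the same action set in every state, and outputs the ExpSoftMax policy of Equation 1 (step 2); the numerical values are illustrative only.

```python
import numpy as np

def expsoftmax_policy(Q_hat, eps2):
    """ExpSoftMax policy (Equation 1) with lambda = log|A| / eps2.

    Q_hat: array of shape (n_states, n_actions) estimating Q*;
    returns pi with pi[s, a] proportional to exp(lambda * Q_hat[s, a]).
    """
    lam = np.log(Q_hat.shape[1]) / eps2
    z = lam * Q_hat
    z -= z.max(axis=1, keepdims=True)        # stabilize the exponentials
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

# Two executions on independent data give Q, Q' with ||Q - Q'||_inf <= 2*eps1;
# by the Lipschitz continuity of ExpSoftMax (Lemma 5.4), the induced policies
# are close, without any shared internal randomness.
rng = np.random.default_rng(0)
Q = np.array([[1.00, 0.50], [0.20, 0.80]])           # hypothetical estimate
Qp = Q + 0.01 * rng.standard_normal(Q.shape)         # second, perturbed estimate
pi, pip = expsoftmax_policy(Q, 0.5), expsoftmax_policy(Qp, 0.5)
print(np.abs(pi - pip).sum(axis=1).max() / 2)        # max per-state TV distance
```

By Lemma 5.5, the value of the returned policy is also close to \(V^{\star}\), so the relaxation from greedy to soft-max costs only an additive approximation term.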
## 6 Guarantees under Different Replicability Notions Since we have studied three different replicability notions in this work, we believe it is informative to discuss the advantages and drawbacks of each. Our discussion is centered on four axes: the replicability guarantees that each notion provides, the sample complexity required to satisfy each definition, the running time of the underlying algorithms, and the ability to test whether the algorithms have the desired replicability properties. The definition of Impagliazzo et al. (2022) (Definition 2.8) provides the strongest replicability guarantees, since it requires that the two outputs are exactly the same across the two executions. It is also computationally efficient to verify. Even though it is statistically equivalent to the definition of TV indistinguishability, our results, along with those of Bun et al. (2023) and Kalavasis et al. (2023), indicate that there might be a computational separation between these two notions. Moreover, this notion is tightly coupled to the way the internal randomness of the algorithm is implemented, a property not exhibited by any other notion of stability we are aware of, which can be problematic in some applications. The definition of TV indistinguishability of Kalavasis et al. (2023) (Definition 4.1) provides strong replicability guarantees, in the sense that someone who observes the outputs of the algorithm under two executions with inputs \(S,S^{\prime}\) cannot distinguish which of \(S,S^{\prime}\) was responsible for generating each output. Moreover, this definition does _not_ depend on the way the internal randomness of the algorithm is implemented. On the other hand, testing whether an algorithm has this property is more subtle compared to Definition 2.8. In the case of the Gaussian mechanism-based algorithms we discuss in this work, the following holds: if the output of the algorithm is _promised_ to be drawn from a Gaussian distribution, it is computationally and statistically efficient to test whether the outputs under two different datasets \(S,S^{\prime}\) are close in TV distance. However, it is not clear how one can test whether the outputs are indeed drawn from a Gaussian distribution. Finally, the notion of approximate replicability (Definition 5.1) we introduce is a further relaxation of the TV indistinguishability property in the following sense: both the replicability and the TV indistinguishability definitions treat the outputs in a "binary" manner, caring only about whether the outputs are exactly the same across the two executions, whereas this definition takes a more nuanced approach and considers a notion of distance between the outputs that is not binary. As a result, it provides the weakest replicability guarantees, which, however, could be sufficient in most RL applications. Moreover, as our results indicate, there might be an inherent advantage in terms of the sample complexity required to achieve this notion compared to (strict) replicability or TV indistinguishability, which can be crucial in RL applications with a large state-action space. Moreover, as with the replicability definition, it is efficient to test whether an algorithm has this property. 
To sum up, even though we have not completely characterized the sample and computational complexity of each definition, we believe that the following is the complete picture: the replicability property is statistically equivalent to the TV indistinguishability property, and the approximate replicability property has a sample complexity that is smaller by a factor of \(N\). Moreover, we believe that there is a computational gap between the notions of replicability and TV indistinguishability. We underline that under Conjecture 3.7, the results of our work give a complete characterization6 of the sample complexity of these problems with respect to \(N.\) Footnote 6: Up to poly-logarithmic factors. ## 7 Conclusion In this work, we establish sample complexity bounds for several notions of replicability in the context of RL. We believe that our work can open several directions for future research. One immediate next step would be to verify our lower bound conjecture for replicable estimation of multiple independent coins (cf. Conjecture 3.7). Moreover, it would be very interesting to extend our results to different RL settings, e.g., offline RL with linear MDPs, offline RL with finite horizon, and online RL. ## Acknowledgements We thank Yuval Dagan for an insightful discussion regarding the Gaussian mechanism.
We initiate the mathematical study of replicability as an algorithmic property in the context of reinforcement learning (RL). We focus on the fundamental setting of discounted tabular MDPs with access to a generative model. Inspired by Impagliazzo et al. [2022], we say that an RL algorithm is replicable if, with high probability, it outputs the exact same policy after two executions on i.i.d. samples drawn from the generator when its internal randomness is the same. We first provide an efficient $\rho$-replicable algorithm for $(\varepsilon, \delta)$-optimal policy estimation with sample and time complexity $\widetilde{O}\left(\frac{N^3\cdot\log(1/\delta)}{(1-\gamma)^5\cdot\varepsilon^2\cdot\rho^2}\right)$, where $N$ is the number of state-action pairs. Next, for the subclass of deterministic algorithms, we provide a lower bound of order $\Omega\left
2309.14393
LLMCarbon: Modeling the end-to-end Carbon Footprint of Large Language Models
The carbon footprint associated with large language models (LLMs) is a significant concern, encompassing emissions from their training, inference, experimentation, and storage processes, including operational and embodied carbon emissions. An essential aspect is accurately estimating the carbon impact of emerging LLMs even before their training, which heavily relies on GPU usage. Existing studies have reported the carbon footprint of LLM training, but only one tool, mlco2, can predict the carbon footprint of new neural networks prior to physical training. However, mlco2 has several serious limitations. It cannot extend its estimation to dense or mixture-of-experts (MoE) LLMs, disregards critical architectural parameters, focuses solely on GPUs, and cannot model embodied carbon footprints. Addressing these gaps, we introduce \textit{LLMCarbon}, an end-to-end carbon footprint projection model designed for both dense and MoE LLMs. Compared to mlco2, LLMCarbon significantly enhances the accuracy of carbon footprint estimations for various LLMs. The source code is released at \url{https://github.com/SotaroKaneda/MLCarbon}.
Ahmad Faiz, Sotaro Kaneda, Ruhan Wang, Rita Osi, Prateek Sharma, Fan Chen, Lei Jiang
2023-09-25T14:50:04
http://arxiv.org/abs/2309.14393v2
# LLMCarbon: Modeling the End-to-End Carbon Footprint of Large Language Models ###### Abstract The carbon footprint associated with large language models (LLMs) is a significant concern, encompassing emissions from their training, inference, experimentation, and storage processes, including operational and embodied carbon emissions. An essential aspect is accurately estimating the carbon impact of emerging LLMs even before their training, which heavily relies on GPU usage. Existing studies have reported the carbon footprint of LLM training, but only one tool, mlco2, can predict the carbon footprint of new neural networks prior to physical training. However, mlco2 has several serious limitations. It cannot extend its estimation to dense or mixture-of-experts (MoE) LLMs, disregards critical architectural parameters, focuses solely on GPUs, and cannot model embodied carbon footprints. Addressing these gaps, we introduce _LLMCarbon_, an end-to-end carbon footprint projection model designed for both dense and MoE LLMs. Compared to mlco2, LLMCarbon significantly enhances the accuracy of carbon footprint estimations for various LLMs. The source code is released at [https://github.com/SotaroKaneda/MLCarbon](https://github.com/SotaroKaneda/MLCarbon). ## 1 Introduction Large language models (LLMs) have established their supremacy in addressing a wide spectrum of natural language processing tasks (Brown et al., 2020). However, the proliferation of these models, coupled with increasingly expansive datasets (Sanderson, 2023; Anil et al., 2023), has woven LLM inferences into the fabric of everyday life (Campello de Souza et al., 2023). This surge in LLM adoption has, in turn, exacerbated the already considerable environmental impacts associated with machine learning (ML) (Thompson et al., 2021). For instance, the creation of a transformer with 213 million parameters through neural architecture search has been likened to the carbon dioxide equivalent (CO2eq) emissions of five cars over their entire lifespans (Strubell et al., 2019). Given the ecological implications of LLMs, it becomes essential for both cloud service providers and regular users to gain a profound understanding of the carbon footprint of emerging LLMs. This awareness is particularly critical before embarking on resource-intensive training endeavors that entail the utilization of thousands of GPUs. During the initial design phase, key parameters such as the LLM's parameter count, hardware configurations, and the energy efficiency of the hosting data center need to be factored into a robust carbon footprint projection model. This model should possess the capability to swiftly and accurately estimate the carbon footprint, encompassing both _operational_ and _embodied_ carbon emissions. Moreover, it should provide valuable insights into metrics like test loss, training duration, and inference latency, all crucial aspects of LLM performance. The existence of such a carbon footprint projection model empowers cloud providers to intelligently explore the trade-off between test loss and carbon footprint when designing new LLMs. Additionally, it encourages everyday users to adopt practices that mitigate LLM carbon footprints by facilitating quantitative comparisons across various LLM configurations. Currently, _there is a notable void in the availability of a comprehensive end-to-end carbon footprint projection model tailored specifically for LLMs_. 
Prior research efforts (Henderson et al., 2020; Wu et al., 2022; Anthony et al., 2020; Schwartz et al., 2020; Patterson et al., 2021; Dodge et al., 2022; Strubell et al., 2019; Lakim et al., 2022) have predominantly focused on recording and reporting the carbon footprint associated with the training phase of ML models. To date, only one tool, mlco2 (Lacoste et al., 2019), has emerged that is capable of predicting the carbon footprint of an ML task based on parameters like GPU usage, training duration, and data center efficiency. However, mlco2 exhibits several serious limitations. Firstly, it is confined to convolutional neural networks (CNNs) and cannot extend its estimations to include the carbon footprint of LLMs. Secondly, mlco2 neglects crucial architectural aspects of ML models, such as parameter counts, resulting in overestimated projections. Thirdly, it exclusively considers GPUs, disregarding specialized ML hardware like TPUs (Jouppi et al., 2017), and assumes uniform peak computing throughput across GPUs, leading to inaccuracies in its carbon footprint assessments. Lastly, although the embodied carbon footprint of an ML task holds equal significance to its operational carbon footprint (Wu et al., 2022), mlco2 is incapable of modeling the embodied carbon footprint of an LLM based on its hardware resources. In this paper, we propose an end-to-end carbon footprint projection model, _LLMCarbon_, which can accurately predict the carbon footprint of both dense and MoE LLMs during their training, inference, experimentation, and storage phases. LLMCarbon incorporates critical LLM, hardware, and data center parameters, such as LLM parameter count, hardware type, system power, chip area, and data center efficiency, to model both operational and embodied carbon footprints of an LLM. When validated against Google's published LLM carbon footprints, the results generated by LLMCarbon exhibit differences of only \(\leq 8.2\%\), and thus are more accurate than those of mlco2. ## 2 Background **LLM Carbon Footprint.** The carbon footprint of an LLM comprises two fundamental components (Gupta et al., 2022): the operational footprint, encompassing emissions stemming from hardware energy consumption, and the embodied footprint, encapsulating emissions arising from hardware manufacturing. Previous investigations (Henderson et al., 2020; Wu et al., 2022; Anthony et al., 2020; Schwartz et al., 2020; Patterson et al., 2022; Dodge et al., 2022; Strubell et al., 2019) have predominantly focused on recording and reporting the operational carbon footprint of various ML tasks. A notable exception is Wu et al. (2022), which delved into the embodied carbon footprint of ML tasks and revealed that within a Meta data center, the embodied carbon footprint of an LLM constitutes \(\sim 50\%\) of its operational carbon footprint. **Neural Scaling Law**. The Neural Scaling Law (Kaplan et al., 2020) delineates a power-law relationship linking an LLM's test loss to three key factors: the number of model parameters, the scale of the training dataset, and the computational resources utilized during training. This relationship holds across diverse architectures and downstream ML tasks, spanning zero-shot, prompted, and fine-tuned scenarios (Caballero et al., 2023). **Reducing LLM Carbon Footprint**. Efforts to reduce LLM carbon footprints have been channeled into four domains. 
Firstly, sparse MoE architectures (Fedus et al., 2022) have been proposed to enhance LLM performance by increasing model parameters while maintaining a similar computational load. Secondly, the adoption of specialized ML hardware, such as TPUs (Jouppi et al., 2017), has emerged as a more energy-efficient alternative to power-hungry GPUs. Thirdly, ML-focused data centers have optimized their facilities into large-scale systems, reducing cooling and infrastructure overhead to enhance power usage effectiveness (PUE) (Liu et al., 2020). Lastly, these data centers are transitioning to renewable energy sources like solar and wind power (Acun et al., 2023) to mitigate the operational carbon footprint of LLMs. However, the recent proliferation of ML-specific hardware within these data centers, driven by the diverse demands of ML tasks, is widening the gap between operational and embodied carbon footprints (Wu et al., 2022). **Parallelism in LLM Processing**. Effective processing of LLMs necessitates the utilization of multiple computing devices, such as GPUs or TPUs, owing to significant LLM parameter counts. Four types of parallelism, i.e., data, tensor, pipeline, and expert, are commonly employed to enhance hardware efficiency, quantified as actual throughput relative to peak throughput. * **Data Parallelism**: In data parallelism (Xing et al., 2015), the full LLM model is distributed to each computing device, while the input dataset is divided among these devices. Periodic gradient aggregation ensures that all devices maintain consistent model weights. * **Tensor Parallelism**: Tensor parallelism (Narayanan et al., 2021) involves partitioning the computations within each of an LLM's layers across multiple devices. Within a transformer layer, the self-attention block partitions key, query, and value matrices through column-wise division. The output linear layer directly handles the attention operation's partitioned output, with weight matrix partitioning by rows. In the two-layer MLP, the first layer is divided along columns, and the second along rows. Efficient data coordination among partitions on different devices is achieved through two all-reduce operations in forward and backward passes. * **Pipeline Parallelism**: In pipeline parallelism (Narayanan et al., 2021), an LLM's layers are distributed across multiple devices. Each device handles an equal number of layers, and microbatches split a batch for pipelined execution. Synchronous weight updates are ensured through pipelining. However, periodic pipeline flushes to synchronize steps across devices introduce "pipeline bubbles" at batch starts and ends, which need to be minimized for efficient pipeline model parallelism. * **Expert Parallelism**: Expert parallelism (Kim et al., 2021) is tailored for parallelizing the training of MoE LLMs. This approach involves distributing distinct experts across various devices, enabling parallel execution. However, due to the separation of experts across multiple computing devices, explicit communication using all-to-all primitives becomes essential. ## 3 Related Work Table 1 provides a comparison between LLMCarbon and existing research endeavors. The predominant focus of prior studies (Henderson et al., 2020; Wu et al., 2022; Anthony et al., 2020; Schwartz et al., 2020; Dodge et al., 2022; Strubell et al., 2019) has been the measurement and reporting of carbon footprints associated with the actual training phase of ML models, denoted as "others" in the table. 
Notably, only one previous model, mlco2 (Lacoste et al., 2019), possesses the capability to predict the carbon footprint of an LLM task based on metrics like GPU utilization, training duration, and data center efficiency. Nevertheless, mlco2 encounters four significant limitations. Firstly, mlco2 cannot estimate the carbon footprint of LLMs, particularly sparse MoE LLMs. Secondly, it overlooks essential architectural attributes of LLMs, such as LLM parameter count, resulting in exaggerated predictions. Thirdly, mlco2 exclusively considers GPUs and neglects specialized ML hardware like TPUs (Jouppi et al., 2017), assuming uniform peak computing throughput across all GPUs, thereby yielding imprecise carbon footprint estimations. Lastly, mlco2 cannot model the embodied carbon footprint of an LLM based on its hardware configuration. ## 4 LLMCarbon ### Overview Figure 1 presents an overview of LLMCarbon for predicting the carbon footprint of an LLM. The inputs to LLMCarbon encompass the LLM's architectural description, data center specification, and hardware configuration. To output the LLM's carbon footprint, LLMCarbon employs a series of models, each processing specific input details. LLMCarbon can use the parameter model to determine the LLM's parameter count based on its architectural attributes, or directly accept the LLM's parameter count as input. With the LLM's parameter count and training token count, LLMCarbon calculates the test loss by the neural scaling law (Kaplan et al., 2020), and employs the FLOP model to estimate the volume of FLOPs required for LLM processing. Through the parameter count, LLMCarbon generates the optimal data, tensor, pipeline, and expert parallelism setting. Taking into account the parallelism setting and hardware configuration, LLMCarbon's hardware efficiency model computes the hardware efficiency, representing the real computing throughput divided by the peak computing throughput. Utilizing data center details, hardware efficiency, and FLOP count, LLMCarbon applies the operational carbon model to derive the LLM's operational carbon footprint. Similarly, by considering the hardware configuration, LLMCarbon's embodied carbon model yields the LLM's embodied carbon footprint. The overall carbon footprint of the LLM is then computed by summing both the operational and embodied carbon footprints. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{**scheme**} & predictive & MoE & architectural & specialized & operational & embodied \\ & modeling & support & parameters & hardware & carbon & carbon \\ \hline mlco2 & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ \\ others & ✗ & ✗ & ✗ & ✗ & ✓ & ✓ \\ **LLMCarbon** & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: The comparison of LLMCarbon against prior work. Figure 1: The overview of LLMCarbon. ### Parameter Model Among all LLM architectural attributes, the LLM parameter count has the largest impact on test loss (Kaplan et al., 2020). To reduce projection errors, LLMCarbon can take the parameter count as direct input, or estimate the parameter count by the parameter model. The parameter model's input comprises the LLM's architectural parameters including the hidden size (\(h\)), the number of layers (\(l\)), the vocabulary size (\(V\)), and the number of experts (\(N_{e}\)). For a dense LLM, we calculate its parameter count (\(P_{d}\)) by Equation 1 (Narayanan et al., 2021). 
An MoE LLM (Rajbhandari et al., 2022) replaces a fraction \(\rho\) (\(\rho\in(0,1]\)) of the feed-forward layers in its dense counterpart with MoE layers. An MoE layer's parameter count is the sum of the expert parameter count (\(P_{exp}=8h^{2}N_{e}\)) and the self-attention parameter count (\(P_{att}=4h^{2}\)), so the parameter count (\(P_{e}\)) of an MoE LLM can be computed using Equation 2. The parameter model of LLMs adopting an encoder-decoder architecture can be viewed in Appendix A. \[P_{d}\approx 12lh^{2}+Vh \tag{1}\] \[P_{e}\approx(1-\rho)P_{d}+\rho(4h^{2}+8h^{2}N_{e})l \tag{2}\] ### Neural Scaling Law The neural scaling law (Kaplan et al., 2020) predicts an LLM's test loss based on its parameter count \(P\) and the training dataset size \(D\). For ensuring the comparability of test losses across various models, sizes, and datasets, we adopt the Chinchilla scaling law (Hoffmann et al., 2022) formulated as Equation 3, where \(A\), \(B\), \(\alpha\), \(\beta\), and \(E\) are fitting constants. The test loss \(L\) equals the sum of an irreducible term \(E\) and a reducible term diminishing through the scaling of \(P\) and \(D\). \[L(P,D)=\frac{A}{P^{\alpha}}+\frac{B}{D^{\beta}}+E \tag{3}\] ### Flop Model The FLOP model receives two inputs: the count of parameters (\(P\)) and the number of tokens (\(D\)) processed by the LLM. The primary component of the FLOPs is the multiply-accumulate operations involving LLM weights and intermediate results. Within our FLOP model, the FLOP count necessary for training a dense LLM (\(TC\)) is estimated using Equation 4. For dense LLM inferences, the FLOP count (\(IC\)) is approximated as per Equation 5. \[TC\approx 6PD \tag{4}\] \[IC\approx 2PD \tag{5}\] To compute the FLOP count for MoE LLM processing, we input the parameter number of the dense base model (Rajbhandari et al., 2022) of the MoE LLM into Equations 4 and 5. ### Hardware Efficiency Model Efficient processing of LLMs relies on achieving high hardware efficiency, which is calculated as the actual computing throughput divided by the peak throughput. This efficiency is largely determined by the optimal configuration of data, tensor, pipeline, and expert parallelism, along with the number of devices used for the task. Using too few or too many devices or improperly configuring parallelism can lead to reduced hardware efficiency. For example, achieving optimal parallelism for GPT-3 with 175 billion parameters requires 1.5K V100 GPUs, resulting in a hardware efficiency of 47% (Narayanan et al., 2021). Conversely, an unoptimized configuration using 10K V100 GPUs yields a substantially lower hardware efficiency of only 19.7% (Patterson et al., 2021). **Optimal Parallelism Setting.** The optimal parallelism configuration is represented as \((p,t,d,e)\), where each variable corresponds to a degree of pipeline, tensor, data, and expert parallelism, respectively. For dense LLMs, optimal settings are derived from (Narayanan et al., 2021), depicted in Figure 2, where \(e=1\) is omitted. Initially, we increase tensor parallelism (\(t\)) up to \(z\) (e.g., \(z=8\)) when employing \(z\)-device servers (Narayanan et al., 2021), each containing \(z\) interconnected devices. This increment in \(t\) is confined to avoid exceeding communication bandwidth limits. Once \(z\) is reached, further scaling for larger LLMs involves increasing pipeline parallelism (\(p\)) (Narayanan et al., 2021). 
However, the product of \(t\) and \(p\) (\(t\cdot p\)) must not exceed a certain threshold to ensure that LLM parameters and intermediate data fit into device memory. The number of devices required to achieve optimal hardware efficiency for dense LLM processing is calculated as \(n=t\cdot p\cdot d\) (Narayanan et al., 2021). A polynomial regression model is used to predict the optimal hardware efficiency based on these parameters. For MoE LLMs, the optimal parallelism settings are adopted from (Chen et al., 2023). Assuming 64 experts within an MoE LLM, expert parallelism (\(e\)) is always set to 64, intertwining \(d\) and \(e\) for a uniform expert distribution. To reduce inter-device all-to-all communications, \(d\) is fixed at 1. Scaling MoE LLM parallelism is achieved by increasing pipeline parallelism (\(p\)). The number of devices required for optimal hardware efficiency in MoE LLM processing is also calculated as \(n=t\cdot p\cdot d\). MoE LLMs require fewer devices compared to dense LLMs with equivalent parameter counts due to their lower computational overhead. The optimal hardware efficiency during MoE LLM processing is represented in Figure 5. MoE LLMs achieve \(\sim 80\%\) (Chen et al., 2023) of the optimal hardware efficiency of their dense base models, due to extra host-device memory swaps. **Fewer or More Computing Devices**. When the number of computing devices is not equal to \(t\cdot p\cdot d\), the hardware efficiency decreases. The efficiency (\(\mathit{eff}_{re}\)) with \(re\) devices can be calculated using Equation 6, where \(\gamma_{0}\sim\gamma_{2}\) are fitting constants, \(\mathit{eff}_{n}\) means the highest hardware efficiency, and \(n\) indicates the number of devices that can achieve \(\mathit{eff}_{n}\). \[\mathit{eff}_{re}=\begin{cases}\gamma_{0}\cdot\frac{\mathit{re}}{n}\cdot \mathit{eff}_{n}&re<n\\ \gamma_{1}\cdot\frac{n}{re}\cdot\mathit{eff}_{n}+\gamma_{2}&re>n\end{cases} \tag{6}\] ### Operational Carbon Model By using the FLOP count (\(\mathit{TFLOP}\)), the hardware efficiency (\(\mathit{eff}\)), and the computing device number (\(n_{dev}\)), we can determine the execution time \(t\) of a device through Equation 7, where \(\mathit{FLOP}_{peak}\) represents the device peak throughput. \[t=\frac{\mathit{TFLOP}}{n_{dev}\cdot\mathit{FLOP}_{peak}\cdot\mathit{eff}} \tag{7}\] The total energy (\(\mathit{energy}_{hard}\)) consumed by all hardware units can be calculated using Equation 8, where \(P_{i}\) denotes the peak power of hardware unit \(i\); \(\mathit{eff}_{i}\) represents the hardware efficiency of hardware unit \(i\); \(n_{i}\) indicates the count of hardware unit \(i\); and \(t_{i}\) means the execution time of hardware unit \(i\). Hardware units encompass a range of components, including CPUs, LLM computing devices, memories, SSDs, and others. \[\mathit{energy}_{hard}=\sum_{i\in hardware\_set}(P_{i}\cdot\mathit{eff}_{i} \cdot n_{i}\cdot t_{i}) \tag{8}\] \[\mathit{energy}_{oper}=\mathit{energy}_{hard}\cdot\mathit{PUE} \tag{9}\] \[\mathit{CO2eq}_{oper}=\mathit{energy}_{oper}\cdot\mathit{carb\_int} \tag{10}\] **PUE**. Power Usage Effectiveness (PUE) (Henderson et al., 2020) serves as the industry-standard metric for evaluating a data center's energy efficiency. It is defined as the ratio of the total energy consumption of the data center, including all auxiliary components like cooling, to the energy consumed solely by the computing hardware within the data center. 
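Putting these pieces together, the following is a minimal Python sketch of the operational-carbon pipeline of Equations 1, 4, and 7-10. It is an illustration, not the released LLMCarbon code; in particular, it uses the measured average per-device system power in place of the per-unit peak-power breakdown of Equation 8, and the GPT-3-like inputs (taken from the validation in Section 5 and the publicly known GPT-3 architecture) are indicative only.

```python
def dense_param_count(l, h, V):
    """Equation 1: parameter count of a dense LLM, P_d ~ 12*l*h^2 + V*h."""
    return 12 * l * h ** 2 + V * h

def training_operational_co2(P, D, n_dev, peak_flops, eff,
                             avg_system_power_w, pue, carb_int):
    """Equations 4 and 7-10: return (training days, operational tCO2eq)."""
    total_flop = 6.0 * P * D                          # Eq. 4: TC ~ 6*P*D
    t_sec = total_flop / (n_dev * peak_flops * eff)   # Eq. 7: execution time
    energy_hard_kwh = avg_system_power_w * n_dev * t_sec / 3.6e6   # Eq. 8
    energy_oper_kwh = energy_hard_kwh * pue           # Eq. 9: apply PUE
    co2_kg = energy_oper_kwh * carb_int               # Eq. 10: kgCO2eq/kWh
    return t_sec / 86400.0, co2_kg / 1000.0

# GPT-3-like setting: 10K V100s at 19.7% efficiency, 330 W average system
# power per device, PUE 1.1, and 0.429 kgCO2eq/kWh carbon intensity.
P = dense_param_count(l=96, h=12288, V=50257)         # ~175B parameters
days, tco2 = training_operational_co2(
    P=P, D=300e9, n_dev=10_000, peak_flops=125e12, eff=0.197,
    avg_system_power_w=330.0, pue=1.1, carb_int=0.429)
print(f"{days:.1f} training days, {tco2:.0f} tCO2eq")  # ~14.8 days, ~550 tCO2eq
```

The resulting figures land close to the published GPT-3 values discussed in the validation section, which is the behavior such a back-of-the-envelope pipeline is meant to reproduce.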
The operational energy (\(\mathit{energy}_{oper}\)) associated with LLM processing can be calculated using Equation 9, where \(\mathit{energy}_{hard}\) denotes the energy used by the computing hardware within a data center, and \(\mathit{PUE}\) indicates the PUE of the specific data center. **Carbon Intensity**. Carbon intensity is a metric that assesses the environmental impact of a data center's energy consumption. Carbon-free energy (CFE) denotes the proportion of renewable, carbon-free energy utilized within a data center. As a data center increases its utilization of renewable energy, it experiences an increase in CFE and a corresponding decrease in carbon intensity. Table 2 provides insights into the carbon intensity and CFE values for some data centers. The operational carbon footprint (\(\mathit{CO2eq}_{oper}\)) attributed to LLM processing is calculated using Equation 10, where \(\mathit{energy}_{oper}\) represents the operational energy for LLM processing, and \(\mathit{carb\_int}\) denotes the carbon intensity of the specific data center. ### Embodied Carbon Model To quantify the chip's embodied carbon footprint (\(\mathit{CO2eq}_{chip}\)) within a specific hardware unit, Equation 11 is employed, where \(\mathit{area}\) represents the chip's area. \[\mathit{CO2eq}_{chip}=\mathit{area}\cdot\mathit{CPA} \tag{11}\] The Carbon emitted Per unit Area (\(\mathit{CPA}\)) is contingent on various semiconductor fabrication parameters, including yield, energy consumption per unit area during manufacturing, emissions from chemicals utilized in hardware production, and emissions associated with raw material sourcing for fabrication. Specific values of area and CPA for distinct hardware units are elaborated in Table 3, where area values for CPU, DRAM, SSD, TPU, and GPU are drawn from sources such as (Singh et al., 2020), (Choe, 2021), (Wiki, 2023b), and (Wiki, 2023a), and CPA values for Micron, Samsung, and TSMC are extracted from (Garcia Bardon et al., 2020) and (TSMC, 2019). The total embodied carbon footprint (\(\mathit{CO2eq}_{emb}\)) originating from all hardware units involved in LLM processing is assessed using Equation 12, where \(\mathit{CO2eq}_{chip_{i}}\) denotes the chip's embodied carbon footprint of hardware unit \(i\), \(\mathit{lifetime}_{i}\) means the lifespan of hardware unit \(i\), and \(t_{i}\) represents the execution duration of hardware unit \(i\). \[\mathit{CO2eq}_{emb}=\sum_{i\in hardware\_set}\frac{t_{i}\cdot\mathit{CO2eq}_ {chip_{i}}}{\mathit{lifetime}_{i}} \tag{12}\] The hardware units mentioned in Equation 12 include CPUs, LLM computing devices, memories, SSDs, and other components. Notably, Meta's data centers achieve an average utilization rate of \(60\%\) throughout the 5-year lifespan of hardware units (Wu et al., 2022). ### Total Carbon Footprint The total carbon footprint (\(\mathit{CO2eq}\)) resulting from LLM processing is determined using Equation 13, where \(\mathit{CO2eq}_{oper}\) indicates the operational carbon footprint of the LLM, and \(\mathit{CO2eq}_{emb}\) denotes the embodied carbon footprint of the LLM. \[\mathit{CO2eq}=\mathit{CO2eq}_{oper}+\mathit{CO2eq}_{emb} \tag{13}\] ## 5 Validation We employ LLMCarbon to compute the operational footprints of five LLMs, including dense and MoE architectures, developed by Google, OpenAI, and Meta during their training phases. We also compute the operational footprint of another LLM, Noor (Lakim et al., 2022), during its storage phase. To validate the predictions of LLMCarbon, we compare our calculated operational footprint values with the previously published data for these LLMs. 
Moreover, we utilize LLMCarbon to predict the embodied footprint of an LLM developed by Meta and validate the result by comparing it with the actual embodied footprint data. ### Operational Carbon Footprint Validation **Training Phase**. Table 4 presents the validation results of LLMCarbon's predictions of the training operational carbon footprint. To validate the training operational carbon footprint estimations yielded by LLMCarbon, we selected five LLMs: T5 (Raffel et al., 2020), GPT-3 (Brown et al., 2020), GShard (Lepikhin et al., 2021), Switch (Fedus et al., 2022), and XLM (Conneau et al., 2020). We list the inputs and outputs of LLMCarbon in Table 4. Within the table, "device TDP (W)" indicates the chip thermal design power of a computing device, while "avg. system power (W)" conveys the average system power of a computing device, including TPU/GPU, host CPU, DRAM, and network interface. The inputs on the parameters of LLMs, hardware, and data centers, and the actual training operational carbon footprint values of these LLMs, were collected from (Patterson et al., 2021) and (Wu et al., 2022). Since the parameter count of an LLM is treated as an architectural parameter of the LLM in (Patterson et al., 2021) and (Wu et al., 2022), we skipped the parameter model and directly used the parameter count as an input to LLMCarbon. The validation of the parameter model of LLMCarbon can be found in Appendix B. Owing to the adoption of suboptimal parallelism settings, the hardware efficiencies for training these LLMs hover within the range of \(19.7\%\) to \(39\%\), lower than the hardware efficiencies achieved with optimal parallelism configurations. \begin{table} \begin{tabular}{c c c} \hline \hline data center name & carbon-free energy & carbon intensity (\(gCO_{2}eq/kWh\)) \\ \hline asia-east2 & 28\% & 360 \\ europe-north1 & 91\% & 127 \\ us-central1 & 97\% & 394 \\ us-south1 & 40\% & 296 \\ \hline \hline \end{tabular} \end{table} Table 2: The data center efficiency. \begin{table} \begin{tabular}{c c c c} \hline \hline hardware & description & unit & CPA \\ \hline CPU & TSMC 16nm & 147 \(mm^{2}\) & 1 _kgCO2_/_cm2_ \\ \hline DRAM & Micron 18nm & 256 GB & 0.024 _kgCO2_/_GB_ \\ \hline SSD & Samsung 20nm & 32 TB & 0.4 _kgCO2_/_GB_ \\ \hline TPUv3 & TSMC 16nm & 700 \(mm^{2}\) & 1 _kgCO2_/_cm2_ \\ TPUv4 & TSMC 7nm & 400 \(mm^{2}\) & 1.6 _kgCO2_/_cm2_ \\ \hline V100 & TSMC 12nm & 815 \(mm^{2}\) & 1.2 _kgCO2_/_cm2_ \\ H100 & TSMC 4nm & 814 \(mm^{2}\) & 1.8 _kgCO2_/_cm2_ \\ \hline \hline \end{tabular} \end{table} Table 3: The area and CPA values of various hardware units. Comparing the predicted operational carbon footprints to actual data, LLMCarbon's projections display disparities of \(\leq 8.2\%\). When predicting the operational carbon footprint during the training of MoE LLMs, LLMCarbon incurs a higher margin of error, due to the intricacy of MoE architectures. On the contrary, when compared to actual data, the training operational carbon footprint estimations made by mlco2 (Lacoste et al., 2019) suffer from huge disparities of more than \(69\%\), because mlco2 assumes all devices consistently operate at the peak computing throughput and consume the peak power. **Inference Phase**. To validate the operational carbon footprint predictions generated by LLMCarbon, we consider the inferences of GPT3 with 175B parameters (Yu et al., 2022). These inferences were carried out on 16 A100 GPUs, using a batch size of 32 and an input size of 128 tokens (Yu et al., 2022). 
According to the hardware efficiency model, this specific hardware configuration yields a hardware efficiency of 9.26%. Achieving the optimal hardware efficiency for GPT3 requires \(\sim\)1.5K GPUs, which is significantly more than what was used for these inferences. LLMCarbon's predicted latency for this inference batch is 3.1s, while the actual latency is 3s (Yu et al., 2022). We assume the inference experiments took place in a data center with a PUE of 1.1 and a carbon intensity of 0.429 \(CO_{2}eq/KWh\). The difference between the predicted and actual inference operational carbon footprints does not exceed \(+3.3\%\). **Storage Phase**. The typical power consumption of cloud storage is reported as 11.3W/TB (Posani et al., 2018), while the power consumption for data transfer within a data center is around 1.48W/TB (Baliga et al., 2011). Over a six-month storage phase, the Noor LLM (Lakim et al., 2022) encompasses 32.7TB of storage data, comprising curated data, bulk data, and the model. Additionally, it transfers a data volume of 277.4TB. Based on LLMCarbon's estimations, the storage data energy is predicted as 1.596MWh (compared to the actual 1.69MWh (Lakim et al., 2022)), while the energy consumption attributed to data transfer is projected to be 1.77MWh (compared to 1.8MWh (Lakim et al., 2022)). Notably, the projection accuracy of LLMCarbon regarding the operational energy during the storage phase showcases an error margin of less than 3.6%. **Experimentation Phase**. The experimentation phase consists of various training, inference, and storage activities (Wu et al., 2022), each of which we have validated in the previous sections. \begin{table} \begin{tabular}{l c c c c c} \hline \hline LLM & T5 & GPT3 & GShard & Switch & XLM \\ \hline reference & \multicolumn{4}{c}{(Patterson et al., 2021)} & (Wu et al., 2022) \\ developer & Google & OpenAI & Google & Google & Meta \\ type & dense & dense & MoE & MoE & dense \\ parameter \# (B) & 11 & 175 & 619 & 1500 & 0.55 \\ base model param. \# (B) & - & - & 2.3 & 7.41 & - \\ token \# (B) & 500 & 300 & 1K & 2K & 7K \\ \(CO_{2}eq/KWh\) & 0.545 & 0.429 & 0.177 & 0.33 & 0.413 \\ PUE & 1.12 & 1.1 & 1.09 & 1.1 & 1.1 \\ computing device & TPUv3 & V100 & TPUv3 & TPUv3 & V100 \\ device TDP (W) & 450 & 300 & 450 & 450 & 300 \\ avg. system power (W) & 310 & 330 & 288 & 245 & 342 \\ peak TFLOPs/s & 123 & 125 & 123 & 123 & 125 \\ achieved TFLOPs/s & 45.6 & 24.6 & 48 & 34.4 & 26.5 \\ hardware efficiency & 37\% & 19.7\% & 39\% & 28\% & 21.2\% \\ device \# & 512 & 10K & 1K & 1K & 512 \\ total zettaFLOPs & 40.5 & 314 & 13.3 & 82.2 & 23.9 \\ training days & 20 & 14.8 & 3.1 & 27 & 20.4 \\ \hline actual \(tCO_{2}eq\) & 46.7 & 552.1 & 4.3 & 59.1 & 39 \\ \hline mlco2 predicted \(tCO_{2}eq\) & 89.4 & 955.2 & 8.4 & 137.3 & 66.96 \\ mlco2 \(\Delta\) & \(+91.3\%\) & \(+73\%\) & \(+95.3\%\) & \(+132\%\) & \(+69\%\) \\ \hline **LLMCarbon predicted \(tCO_{2}eq\)** & 45.66 & 553.87 & 4.46 & 63.9 & 37.6 \\ **LLMCarbon \(\Delta\)** & \(\mathbf{-2.22\%}\) & \(\mathbf{+0.32\%}\) & \(\mathbf{+3.8\%}\) & \(\mathbf{+8.2\%}\) & \(\mathbf{-3.54\%}\) \\ \hline \hline \end{tabular} \end{table} Table 4: The validation on the operational carbon footprints of various LLMs. ### Embodied Carbon Footprint Validation Table 5 presents the validation results of the embodied carbon footprint estimated by LLMCarbon in comparison to the published data of XLM (Wu et al., 2022). 
This is the only publicly available data regarding the embodied carbon footprint of an LLM training hardware infrastructure, to the best of our knowledge. The setup consists of 512 V100 GPUs organized into 64 8-GPU servers, each equipped with a CPU, a 32TB SSD disk, and a 256GB DRAM main memory system. Using the unit and CPA data from Table 3, we computed the values of \(\mathit{CO2eq}_{\mathit{chip}}\) presented in Table 5. The training duration of XLM is 20.4 days, and Wu et al. (2022) assumed a hardware unit lifetime of 5 years. Consequently, the \(\frac{time}{lifetime}\) values for all hardware units were determined to be \(1.12\%\). Apart from CPU, GPU, SSD, and DRAM, other hardware components (others) such as the motherboard, chassis, and PSU collectively contribute \(15\%\) (Tannu and Nair, 2022) of the anticipated total embodied carbon footprint. In contrast to the reported embodied carbon footprint of XLM (Wu et al., 2022), the predictions produced by LLMCarbon reveal a disparity of \(-3.05\%\). ## 6 Case Studies Using LLMCarbon We used LLMCarbon to demonstrate the following case studies. **Large Embodied Carbon Footprint**. The embodied carbon footprint throughout the life-cycle of an LLM is significant. Even when no computing activities occur, the LLM still incurs embodied carbon overhead due to the idle hardware allocated to the LLM. As illustrated in Figure 6, the embodied carbon footprint of an LLM across its entire life-cycle contributes approximately \(24\%\sim 35\%\) of the overall carbon footprint (including embodied, training, inference, experimentation, and storage carbon footprints) of the LLM. We adopted the ratio between training, inference, and experimentation activities from (Wu et al., 2022). Furthermore, as data centers progressively shift towards adopting renewable energy sources, the embodied carbon footprint of an LLM will dominate the entire life-cycle carbon footprint of the LLM in the near future. For instance, 97% of the operational energy in a Meta data center (Wu et al., 2022) is provided by renewable sources. The embodied carbon footprints of diverse LLMs operating within this data center constitute \(92\%\sim 95\%\) of their entire life-cycle carbon footprints. This underscores the pivotal role of accounting for embodied carbon in the sustainability evaluation of LLMs. **Optimal Parallelism Setting**. As discussed in Section 5.1, the training processes of the LLMs used in our validation lacked optimized parallelism settings. By using LLMCarbon, we pinpoint the optimal configurations for data, tensor, pipeline, and expert parallelism pertaining to these three LLMs. As illustrated in Figure 6, the adoption of these optimal parallelism settings leads to a noteworthy decrease (i.e., \(16\%\sim 39\%\)) in their operational carbon footprints. **New Accelerators**. When employing different computing devices for LLM processing, the operational carbon footprints of an LLM tend to differ, while the embodied carbon footprints remain similar. Figure 6: The carbon footprint of three LLMs in case studies. Figure 7: The carbon footprint of GPT3 trained by different computing devices. 
\begin{table} \begin{tabular}{l l l l l} \hline \hline hardware & number & \(\mathit{CO2eq}_{chip}\) (\(kgCO_{2}eq\)) & \(\frac{time}{lifetime}\) & \(\mathit{CO2eq}_{emb}\) (\(tCO_{2}eq\)) \\ \hline GPU & 512 & 9.78 & 1.12\% & 0.056 \\ CPU & 64 & 1.47 & 1.12\% & 0.0018 \\ SSD & 64 & 576 & 1.12\% & 0.412 \\ DRAM & 64 & 102.4 & 1.12\% & 0.073 \\ others & 64 & 148.2 & 1.12\% & 0.096 \\ **predicted sum** & & & & 0.64 \\ \hline \hline \multicolumn{5}{c}{actual 0.66 \(tCO_{2}eq\), \(\boldsymbol{\Delta}-\)3.05\%} \\ \hline \hline \end{tabular} \end{table} Table 5: The embodied carbon footprint validation against Meta XLM. Figure 7 showcases the outcomes derived from training, inferring, and experimenting with three LLMs utilizing V100 GPU, H100 GPU, TPUv3, and TPUv4. Their embodied carbon footprints exhibit similarity, as the embodied carbon emissions of SSD and DRAM dominate their total embodied carbon footprints. However, compared to V100 GPUs, the operational carbon footprints of these LLMs are notably curtailed by 71% and 41% when employing H100 and TPUv4 accelerators, respectively. Embracing novel computing devices for LLMs presents a pragmatic path to mitigate their operational carbon footprints. **Training Carbon Footprint Scaling**. In addition to the LLMs (i.e., T5, GPT3, GShard, Switch, XLM, and Noor) we used in our validations, we also included other LLMs in our analysis, such as PaLM (Chowdhery et al., 2022), Gopher (Rae et al., 2021), Chinchilla (Hoffmann et al., 2022), LaMDA (Thoppilan et al., 2022), Jurassic-1 (Lieber et al., 2021), MT-NLG (Smith et al., 2022), Bloom (Scao et al., 2022), YaLM (Yandex, 2022), GLM (Zeng et al., 2023), GLaM (Du et al., 2022), FB-MoE (Artetxe et al., 2021), ST-MoE (Zoph et al., 2022), and PR-MoE (Rajbhandari et al., 2022). Among these LLMs, GShard, Switch, GLaM, FB-MoE, ST-MoE, and PR-MoE use sparse MoE architectures, while the other LLMs adopt dense architectures. We do not aim to directly compare the accuracy and carbon emissions of these original LLMs, since they were trained on different datasets and in different data centers. Instead, we study the test losses and training operational carbon footprints of some new LLM designs adopting the same architectures as these LLMs. We assume these new LLM designs are trained using the same dataset and the same hardware infrastructure in the same data center. We present the test losses and training operational carbon footprints of these LLMs in Figure 8. To compute the test loss, we adopt the fitting constants including \(\alpha=0.34\), \(\beta=0.28\), \(A=406.4\), \(B=410.7\), and \(E=1.69\) for Equation 3 from (Hoffmann et al., 2022). Since the test loss of an MoE LLM with \(P\) parameters is similar to that of its dense counterpart with only \(P/8\) parameters (Rajbhandari et al., 2022), we replace \(P\) with \(P/8\) for MoE LLMs in Equation 3. The training processes of all LLMs use their optimal parallelism settings and the corresponding numbers of V100 GPUs hosted by a data center where PUE is 1.1 and \(\mathit{CO_{2}eq}/\mathit{KWh}\) is 0.431. Overall, an LLM with a larger number of parameters and trained on more tokens achieves a lower test loss but also consumes a larger training operational carbon footprint. 
Compared to dense LLMs, the Pareto front of MoE LLMs is closer to the origin point, indicating that an MoE LLM can obtain a lower test loss for the same training carbon footprint. ## 7 Conclusion In this paper, we propose LLMCarbon, an end-to-end carbon footprint modeling tool for dense and MoE LLMs, whose training, inference, experimentation, and storage processes contribute significantly to carbon emissions. LLMCarbon can accurately assess the operational and embodied carbon footprints of an LLM, enabling efficient exploration of the design space by considering the trade-off between carbon footprint and test loss. It also promotes the adoption of carbon footprint reduction measures by facilitating quantitative comparisons among various LLM configurations.
The carbon footprint of large language models (LLMs) is a significant concern, encompassing the emissions produced by their training, inference, experimentation, and storage processes, including both operational and embodied carbon emissions. An essential aspect is accurately estimating the carbon impact of emerging LLMs even before their training, which depends heavily on GPU usage. Existing studies have reported the carbon footprint of LLM training, but mlco2 is the only tool that can predict the carbon footprint of a new neural network before physical training. However, mlco2 has several serious limitations: it cannot extend its estimates to dense or mixture-of-experts (MoE) LLMs, it ignores critical architectural parameters, it focuses solely on GPUs, and it cannot model embodied carbon footprints. Addressing these gaps, we introduce LLMCarbon, an end-to-end carbon footprint projection model designed for both dense and MoE LLMs.
2303.17819
An Efficient Off-Policy Reinforcement Learning Algorithm for the Continuous-Time LQR Problem
In this paper, an off-policy reinforcement learning algorithm is designed to solve the continuous-time LQR problem using only input-state data measured from the system. Different from other algorithms in the literature, we propose the use of a specific persistently exciting input as the exploration signal during the data collection step. We then show that, using this persistently excited data, the solution of the matrix equation in our algorithm is guaranteed to exist and to be unique at every iteration. Convergence of the algorithm to the optimal control input is also proven. Moreover, we formulate the policy evaluation step as the solution of a Sylvester-transpose equation, which increases the efficiency of its solution. Finally, a method to determine a stabilizing policy to initialize the algorithm using only measured data is proposed.
Victor G. Lopez, Matthias A. Müller
2023-03-31T06:30:23
http://arxiv.org/abs/2303.17819v1
# An Efficient Off-Policy Reinforcement Learning Algorithm for the Continuous-Time LQR Problem ###### Abstract In this paper, an off-policy reinforcement learning algorithm is designed to solve the continuous-time LQR problem using only input-state data measured from the system. Different from other algorithms in the literature, we propose the use of a specific persistently exciting input as the exploration signal during the data collection step. We then show that, using this persistently excited data, the solution of the matrix equation in our algorithm is guaranteed to exist and to be unique at every iteration. Convergence of the algorithm to the optimal control input is also proven. Moreover, we formulate the policy evaluation step as the solution of a Sylvester-transpose equation, which increases the efficiency of its solution. Finally, a method to determine a stabilizing policy to initialize the algorithm using only measured data is proposed. ## I Introduction Reinforcement learning (RL) is a set of iterative algorithms that allow a system to learn its optimal behavior as it interacts with its environment [1, 2]. In the context of linear optimal control, RL has been used in the last few decades to solve the linear quadratic regulator (LQR) problem in continuous time [3, 4, 5, 6, 7, 8] and in discrete time [9, 10, 11, 12]. For applications of RL procedures to nonlinear systems and other extensions, the reader is referred to the surveys [13, 14, 15] and the references therein. In the continuous-time linear time-invariant (CT-LTI) case, several RL algorithms with attractive properties have been designed. Although the first proposed algorithms required at least partial knowledge of the system model (e.g., [3]), completely data-based methods are now well known [4, 5, 6, 7]. These data-based algorithms replace the need for model knowledge by measuring persistently excited data directly from the system. Most of these data-based methods are _on-policy_ algorithms, meaning that they require the application (or simulation) of an exciting input to the system at every iteration, such that a new set of data can be collected. In contrast, the authors in [8] proposed a data-based _off-policy_ RL algorithm. This method has the advantage of requiring data to be collected from the system only once; every iteration of the algorithm is then performed using the same batch of measurements. The method in [8], as well as most on-policy methods, is formulated as the problem of determining the values of certain unknown matrices from a set of equations derived from the Bellman equation. Taking advantage of the properties of the Kronecker product, this problem is then expressed as a set of linear equations that can be easily solved. However, the Kronecker product formulation generates matrices of large size, and this procedure presents a high computational burden that increases rapidly with the system dimension. Another important issue in the existing learning-based control literature is the selection of a proper persistently exciting (PE) input. In most of the above literature, heuristic approaches for persistence of excitation are employed, often designing exciting inputs by adding sinusoidal, exponential and/or random signals [14]. A different approach for persistence of excitation was studied in [16], where conditions for the design of a discrete-time PE input are formally established.
It is shown in [16] that their definition of persistence of excitation provides data measurements that are so rich in information that every possible trajectory of a controllable discrete-time linear system can be expressed in terms of such data. This result is now known as Willems' lemma, and has been successfully used in recent years in data-based analysis, estimation and control of discrete-time systems (see, e.g., the survey [17] and the references therein). In [6], it was proposed to use a PE signal as defined in [16] to excite a continuous-time system during a Q-learning procedure, which guarantees solvability of their policy evaluation step. However, the method in [6] is an _on-policy_ algorithm and the authors require persistence of excitation of a signal composed of both the input and the state of the system. This contrasts with our objective of considering a PE signal in terms of the input only. Moreover, in [6] a high order of persistence of excitation is needed. The contributions of this paper are as follows. We propose a novel data-based _off-policy_ RL algorithm to solve the LQR problem for continuous-time systems. As in [8], we perform the policy evaluation and policy improvement steps simultaneously. Different from the existing algorithms, we formulate a Sylvester-transpose equation that can be efficiently solved using known methods [18, 19, 20]. This avoids the use of the Kronecker product and the ensuing large matrices in our computations. Moreover, we use the results in [21], where a continuous-time version of Willems' lemma was proposed. This allows us to design a PE input that guarantees the solvability of the Sylvester-transpose equation in a data-based fashion. In our formulation, persistence of excitation depends only on the input of the system, and we require the use of a PE input of lower order compared to [6]. Finally, we propose a method to determine the required initial stabilizing policy for the proposed algorithm using only measured data. Different from [7], this method does not require the solution of linear matrix inequalities (LMIs). In the following, Section II introduces the preliminary results that are used throughout the paper. The development of the proposed efficient RL algorithm and its theoretical analysis are shown in Section III. Section IV analyses the computational efficiency of the proposed algorithm and presents a procedure to compute the initial stabilizing gain. In Section V, we illustrate the theoretical results with numerical examples, and Section VI concludes the paper. ## II Preliminaries In this section, we present existing results from the literature that are relevant for the remainder of this paper. ### _Matrix definitions for continuous-time data_ Consider the integer \(N\in\mathbb{N}\) and the positive scalar \(T\ \in\ \mathbb{R}_{+}\). Let \(\xi:[0,NT]\rightarrow\mathbb{R}^{\sigma}\), with \([0,NT]\subset\mathbb{R}\), denote a continuous-time signal of length \(NT\). Using the trajectory \(\xi\), we define the following matrix \[\mathcal{H}_{T}(\xi(t)):=\left[\begin{array}{cccc}\xi(t)&\xi(t+T)&\cdots&\xi (t+(N-1)T)\end{array}\right] \tag{1}\] for \(0\leq t\leq T\). Notice that (1) is a time-varying matrix defined on the interval \(t\in[0,T]\). Now, consider the following CT-LTI system \[\dot{x}(t)=Ax(t)+Bu(t), \tag{2}\] where \(x\in\mathbb{R}^{n}\) and \(u\in\mathbb{R}^{m}\) are the state and input vectors of the system, respectively. The pair \((A,B)\) is assumed to be controllable throughout the paper. 
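To make the construction in (1) concrete, here is a minimal NumPy sketch (our own illustration; the system matrices, input, and variable names are hypothetical) that simulates a toy instance of (2) under a piecewise constant input and assembles \(\mathcal{H}_{T}\) at a given sample index:

```python
import numpy as np

# Toy instance of system (2); A, B and the input are illustrative only.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
n, m = 2, 1
N, T, dt = 7, 0.2, 1e-4            # N segments of length T, sampled every dt
steps_T = round(T / dt)

rng = np.random.default_rng(0)
u = np.repeat(rng.standard_normal((N, m)), steps_T, axis=0)  # piecewise constant
x = np.zeros((N * steps_T + 1, n))
for k in range(N * steps_T):        # crude forward-Euler simulation of (2)
    x[k + 1] = x[k] + dt * (A @ x[k] + B @ u[k])

def H_T(signal, k):
    """Matrix (1) evaluated at sample index k, 0 <= k <= steps_T:
    columns are signal(t), signal(t+T), ..., signal(t+(N-1)T)."""
    return np.column_stack([signal[k + j * steps_T] for j in range(N)])

print(H_T(x, 0).shape)              # (n, N), i.e. H_T(x(t)) at t = 0
```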
Suppose that the input signal \(u:[0,NT]\rightarrow\mathbb{R}^{m}\) is applied to (2), and the resulting state trajectory \(x:[0,NT]\rightarrow\mathbb{R}^{n}\) is collected. From (2) and the definition in (1), we can write \[\mathcal{H}_{T}(\dot{x}(t))=A\mathcal{H}_{T}(x(t))+B\mathcal{H}_{T}(u(t)).\] Since it is unusual to have the state derivative \(\dot{x}\) available as a measurement, integrate the expression above to obtain \[\mathcal{H}_{T}(x(T)) -\mathcal{H}_{T}(x(0))\] \[=A\int_{0}^{T}\mathcal{H}_{T}(x(\tau))d\tau+B\int_{0}^{T} \mathcal{H}_{T}(u(\tau))d\tau.\] For convenience of notation, define the matrices \[\tilde{X}=\mathcal{H}_{T}(x(T))-\mathcal{H}_{T}(x(0)), \tag{3}\] \[X=\int_{0}^{T}\mathcal{H}_{T}(x(\tau))d\tau,\quad U=\int_{0}^{T} \mathcal{H}_{T}(u(\tau))d\tau.\] Notice that the matrix \(X\) (and similarly \(U\)) only requires the computation of integrals of the form \(\int_{0}^{T}x(\tau+jT)d\tau\), \(j=0,\ldots,N-1\). This is simpler than the integrals computed in the existing RL literature [8, 6, 7]. By definition, the following expression holds \[\tilde{X}=AX+BU. \tag{4}\] ### _Persistence of excitation for discrete-time systems_ Define the integer constants \(L,N\in\mathbb{N}\). The Hankel matrix of depth \(L\) of a discrete-time sequence \(\left\{\mu_{k}\right\}_{k=0}^{N-1}=\left\{\mu_{0},\,\mu_{1},\,\ldots,\,\mu_{N -1}\right\}\), \(\mu_{k}\in\mathbb{R}^{m}\), is defined as \[H_{L}(\mu):=\left[\begin{array}{cccc}\mu_{0}&\mu_{1}&\cdots&\mu_{N-L}\\ \mu_{1}&\mu_{2}&\cdots&\mu_{N-L+1}\\ \vdots&\vdots&\ddots&\vdots\\ \mu_{L-1}&\mu_{L}&\cdots&\mu_{N-1}\end{array}\right].\] In [16], the following definition of a PE input for discrete-time systems is made. **Definition 1**: _The discrete sequence \(\left\{\mu_{k}\right\}_{k=0}^{N-1}\), \(\mu_{k}\in\mathbb{R}^{m}\), is said to be persistently exciting of order \(L\) if its Hankel matrix of depth \(L\) has full row rank, i.e.,_ \[\text{rank}(H_{L}(\mu))=mL. \tag{5}\] It is important to highlight the fact that Definition 1 provides a condition that enables a straightforward design of a PE input and that is easy to verify for any discrete sequence. **Remark 1**: _A necessary condition for (5) to hold is that \(N\geq(m+1)L-1\). This provides a minimum length for a PE input sequence._ ### _Persistence of excitation for continuous-time systems_ It is shown in [21] that a piecewise constant input designed by exploiting Definition 1 is persistently exciting for the continuous-time system (2). This class of inputs is formally described in the following definition. **Definition 2** (Piecewise constant PE input): _Consider a time interval \(T>0\) such that_ \[T\neq\frac{2\pi k}{|\mathcal{I}_{m}(\lambda_{i}-\lambda_{j})|},\qquad\forall k \in\mathbb{Z}. \tag{6}\] _where \(\lambda_{i}\) and \(\lambda_{j}\) are any two eigenvalues of matrix \(A\) in (2), and \(\mathcal{I}_{m}(\cdot)\) is the imaginary part of a complex number. A piecewise constant persistently exciting (PCPE) input of order \(L\) for continuous-time systems is defined as \(u(t+iT)=\mu_{i}\) for all \(0\leq t<T\), \(i=0,\ldots,N-1\), where \(\left\{\mu_{i}\right\}_{i=0}^{N-1}\) is a sequence of constant vectors \(\mu_{i}\in\mathbb{R}^{m}\) that is persistently exciting of order \(L\) in the sense of Definition 1._ **Remark 2**: _Notice that the condition (6) is not restrictive, even with no knowledge of the system model (2). 
This is because the values of \(T\) that make this condition fail form a set of measure zero and are unlikely to be encountered in practice._ When a PCPE input is applied to system (2), the obtained input-state data set satisfies an important rank condition, as shown below. **Lemma 1** ([21]): _Consider system (2), let the pair \((A,B)\) be controllable, and let \(u\) be a PCPE input of order \(n+1\) as defined in Definition 2. Then, the rank condition_ \[\text{rank}\left(\left[\begin{array}{c}\mathcal{H}_{T}(x(t))\\ \mathcal{H}_{T}(u(t))\end{array}\right]\right)=\text{rank}\left(\left[\begin{array} []{c}\mathcal{H}_{T}(x(t))\\ H_{1}(\mu)\end{array}\right]\right)=m+n \tag{7}\] _holds for all \(0\leq t\leq T\)._ **Remark 3**: _In [21], the result in Lemma 1 was presented considering persistence of excitation of any order \(L\). For simplicity of notation, we presented Lemma 1 directly for PE inputs of order \(n+1\). This is the only order of persistence of excitation used throughout the paper._ ### _The LQR problem and Kleinman's algorithm_ For a CT-LTI system (2), the infinite-horizon LQR problem concerns determining the control input \(u\) that minimizes a cost function of the form \[J(x(0),u):=\int_{0}^{\infty}\left(x^{\top}(t)Qx(t)+u^{\top}(t)Ru(t)\right)dt, \tag{8}\] where \(Q\succeq 0\) and \(R\succ 0\). Throughout the paper, we assume that the pair \((A,Q^{1/2})\) is observable. This, together with the assumed controllability of \((A,B)\), implies that the optimal control input is given by \(u^{*}(x)=-K^{*}x\), where \[K^{*}=R^{-1}B^{\top}P^{*}\] and the matrix \(P^{*}\succ 0\) solves the algebraic Riccati equation \[Q+P^{*}A+A^{\top}P^{*}-P^{*}BR^{-1}B^{\top}P^{*}=0.\] In [22], Kleinman proposed a model-based iterative algorithm to solve the LQR problem. This algorithm starts by selecting an initial stabilizing matrix \(K_{0}\), i.e., a matrix such that \(A-BK_{0}\) is Hurwitz stable. At every iteration \(i\), the Lyapunov equation \[P_{i}(A-BK_{i})+(A-BK_{i})^{\top}P_{i}+Q+K_{i}^{\top}RK_{i}=0 \tag{9}\] is solved for \(P_{i}\). Then, a new feedback matrix is defined as \[K_{i+1}=R^{-1}B^{\top}P_{i}. \tag{10}\] The algorithm iterates the equations (9) and (10) until convergence. With the main drawback of being a model-based method, Kleinman's algorithm otherwise possesses highly attractive features. Namely, at each iteration the matrix \(K_{i+1}\) is stabilizing, the algorithm converges such that \[\lim_{i\rightarrow\infty}K_{i+1}=K^{*},\] and convergence occurs at a quadratic rate [22]. The following section presents the main developments of this paper. ## III An Efficient Data-Based Algorithm for the CT LQR Problem In this section, we present an efficient data-based off-policy RL algorithm to determine the optimal controller that minimizes (8). We show that the proposed procedure is equivalent to Kleinman's algorithm (9)-(10), and therefore preserves all of its theoretical properties. For the clarity of exposition, we introduce first a model-based algorithm that is then used as the basis of our data-based method. ### _A model-based algorithm_ Combining (9) and (10), we readily obtain the following expressions \[P_{i}A-K_{i+1}^{\top}RK_{i}+A^{\top}P_{i}-K_{i}^{\top}RK_{i+1}+Q+K_{i}^{\top}RK _{i}=0\] and \(B^{\top}P_{i}-RK_{i+1}=0\). 
Therefore, the matrix equation \[\left[\begin{array}{cc}A&B\\ -RK_{i}&-R\end{array}\right]^{\top}\left[\begin{array}{cc}P_{i}\\ K_{i+1}\end{array}\right]\left[\begin{array}{cc}I_{n}&0\end{array}\right]\\ +\left[\begin{array}{cc}I_{n}\\ 0\end{array}\right]\left[\begin{array}{cc}P_{i}\\ K_{i+1}\end{array}\right]^{\top}\left[\begin{array}{cc}A&B\\ -RK_{i}&-R\end{array}\right]\\ +\left[\begin{array}{cc}Q+K_{i}^{\top}RK_{i}&0\\ 0&0\end{array}\right]=0 \tag{11}\] holds, where \(I_{n}\) is an \(n\times n\) identity matrix and \(0\) represents a matrix of zeros with appropriate dimensions. Denoting the fixed matrices as \[\Phi_{i}:=\left[\begin{array}{cc}A&B\\ -RK_{i}&-R\end{array}\right],\quad E:=\left[\begin{array}{cc}I_{n}&0\end{array}\right],\] \[\bar{Q}_{i}:=\left[\begin{array}{cc}Q+K_{i}^{\top}RK_{i}&0\\ 0&0\end{array}\right] \tag{12}\] and the unknown matrix as \[\Theta_{i+1}:=\left[\begin{array}{c}P_{i}\\ K_{i+1}\end{array}\right], \tag{13}\] we can write (III-A) in the compact form \[\Phi_{i}^{\top}\Theta_{i+1}E+E^{\top}\Theta_{i+1}^{\top}\Phi_{i}+\bar{Q}_{i}=0. \tag{14}\] The matrix \(\Theta_{i+1}\in\mathbb{R}^{(n+m)\times n}\) consists of the unknown matrices in Kleinman's algorithm, \(P_{i}\) and \(K_{i+1}\). It is of our interest to design a method in which solving a matrix equation as in (14) for \(\Theta_{i+1}\) corresponds to solving both (9) and (10) simultaneously. However, it can be noted that (14), as it is formulated, in general does not have a unique solution \(\Theta_{i+1}\). To address this issue, first express the unknown submatrices of \(\Theta_{i+1}\) as \[\Theta_{i+1}=\left[\begin{array}{c}\Theta_{i+1}^{1}\\ \Theta_{i+1}^{2}\end{array}\right], \tag{15}\] with \(\Theta_{i+1}^{1}\in\mathbb{R}^{n\times n}\) and \(\Theta_{i+1}^{2}\in\mathbb{R}^{m\times n}\). In the following lemma, we show that there exists only one matrix \(\Theta_{i+1}\) that solves (14) such that the submatrix \(\Theta_{i+1}^{1}\) is symmetric. **Lemma 2**: _Consider the equation (14) with the matrices \(\Phi_{i}\), \(E\) and \(\bar{Q}_{i}\) defined as in (12). Moreover, let the matrix \(K_{i}\) be stabilizing. Then, there exists a unique solution (15) to this equation for which \(\Theta_{i+1}^{1}=(\Theta_{i+1}^{1})^{\top}\)._ Considering the partition in (15), notice that (14) holds for any matrix \(\Theta_{i+1}\) such that \[A^{\top}\Theta_{i+1}^{1}-K_{i}^{\top}R\Theta_{i+1}^{2}+(\Theta_{ i+1}^{1})^{\top}A-(\Theta_{i+1}^{2})^{\top}RK_{i}\\ +Q+K_{i}^{\top}RK_{i}=0,\] and \[B^{\top}\Theta_{i+1}^{1}-R\Theta_{i+1}^{2}=0.\] From the second equation it is clear that \(\Theta_{i+1}^{2}=R^{-1}B^{\top}\Theta_{i+1}^{1}\). Substituting this and the fact that \(\Theta_{i+1}^{1}=(\Theta_{i+1}^{1})^{\top}\) in the first equation, we get \[(A-BK_{i})^{\top}\Theta_{i+1}^{1}+\Theta_{i+1}^{1}(A-BK_{i})+Q+K_{i}^{\top}RK _{i}=0. \tag{16}\] Since \(K_{i}\) is stabilizing, we use Lyapunov arguments to conclude that \(\Theta^{1}_{i+1}\) (and therefore also \(\Theta^{2}_{i+1}\)) is unique. Lemma 2 implies that constraining the solution of (14) to include a symmetric submatrix \(\Theta^{1}_{i+1}\) leads to the desired solution (13). The following lemma shows that we achieve this by properly modifying \(\Phi_{i}\) in (12). 
**Lemma 3**: _Consider the matrix equation_ \[(\Phi_{i}^{-})^{\top}\Theta_{i+1}E+E^{\top}\Theta^{\top}_{i+1}\Phi_{i}^{+}+ \bar{Q}_{i}=0, \tag{17}\] _where_ \[\Phi_{i}^{+}:=\left[\begin{array}{cc}A+I&B\\ -RK_{i}&-R\end{array}\right],\quad\Phi_{i}^{-}:=\left[\begin{array}{cc}A-I&B \\ -RK_{i}&-R\end{array}\right], \tag{18}\] _and the matrices \(E\) and \(\bar{Q}_{i}\) are defined as in (12). Moreover, let the matrix \(K_{i}\) be stabilizing. Then, the solution (15) of (17) is unique, and \(\Theta^{1}_{i+1}=(\Theta^{1}_{i+1})^{\top}\). Moreover, the solution of (17) is also a solution of (14)._ First, define the matrix \[S=\left[\begin{array}{cc}\Theta^{1}_{i+1}-(\Theta^{1}_{i+1})^{\top}&0\\ 0&0\end{array}\right].\] Using this definition, it is straightforward to express (17) in terms of the matrix \(\Phi_{i}\) in (12) as \[\Phi_{i}^{\top}\Theta_{i+1}E+E^{\top}\Theta^{\top}_{i+1}\Phi_{i}+\bar{Q}_{i}=S.\] Notice that the left-hand side of this expression is symmetric, and therefore so must be \(S\). Now, \(S\) is symmetric if and only if \(\Theta^{1}_{i+1}=(\Theta^{1}_{i+1})^{\top}\), that is, \(S=0\). This implies both that the solution of (17) also solves (14) and, by Lemma 2, that this solution is unique. **Remark 4**: _Equation (17) is a case of the generalized Sylvester-transpose equation, and algorithms to solve it efficiently are well known [18, 19, 20]._ Using this result, we formulate Algorithm 1 below. As in any policy iteration procedure, Algorithm 1 is initialized with a stabilizing matrix \(K_{0}\). Using this matrix (as well as model knowledge), (17) is solved for \(\Theta_{i+1}\). Then, partitioning \(\Theta_{i+1}\) as in (15), a new feedback matrix is obtained as \(K_{i+1}=\Theta^{2}_{i+1}\). ``` 1:procedure 2: Let \(i=0\) and initialize a stabilizing feedback matrix \(K_{0}\). 3: Using the definitions in (12) and (18), solve for \(\Theta_{i+1}\) from the equation \[(\Phi_{i}^{-})^{\top}\Theta_{i+1}E+E^{\top}\Theta^{\top}_{i+1}\Phi_{i}^{+}+Q_ {i}=0.\] 4: Partitioning \(\Theta_{i+1}\) as in (15), define \[K^{i+1}=\Theta^{2}_{i+1}.\] 5: If \(\|K^{i+1}-K^{i}\|>\varepsilon\) for some \(\varepsilon>0\), let \(i=i+1\) and go to Step 3. Otherwise, stop. 6:endprocedure ``` **Algorithm 1**Model-based RL algorithm Using the results obtained so far, we conclude that Algorithm 1 is equivalent to Kleinman's algorithm in the sense that, starting from the same initial matrix \(K_{0}\), they provide the same updated policies \(K_{i+1}\) at every iteration. This implies that Algorithm 1 preserves all the properties of Kleinman's algorithm. In the following, we use this result to design a data-based algorithm. ### _The data-based algorithm_ To avoid the need for model knowledge in Algorithm 1, we collect persistently excited data from the system (2) as described in Section II-C. Using this data, we define the constant matrices \(X\), \(U\) and \(\tilde{X}\) as in (3). Lemma 1 showed that the collected data set satisfies the rank condition (7). In the following lemma, we extend this result to the matrices \(X\) and \(U\). **Lemma 4**: _Consider system (2), let the pair \((A,B)\) be controllable, and let \(u\) be a PCPE input of order \(n+1\) as defined in Definition 2. Using the resulting input-state data, define the matrices \(X\) and \(U\) as in (3). Then,_ \[\text{rank}\left(\left[\begin{array}{c}X\\ U\end{array}\right]\right)=n+m. 
\tag{19}\] Notice that, since the applied input is piecewise constant, an expression for the resulting state of (2) is \[x(t+iT)=e^{At}x(iT)+\int_{0}^{t}e^{A\tau}d\tau B\mu_{i},\] for \(i=0,\ldots,N-1\) and \(0\leq t\leq T\). Thus, we can write \[\left[\begin{array}{c}X\\ U\end{array}\right]=\int_{0}^{T}\left[\begin{array}{c}\mathcal{H}_{T}(x(\tau) )\\ H_{1}(\mu)\end{array}\right]d\tau\\ =\underbrace{\int_{0}^{T}\left[\begin{array}{cc}e^{A\tau}& \int_{0}^{\tau}e^{As}dsB\\ 0&I\end{array}\right]d\tau}_{W}\left[\begin{array}{c}\mathcal{H}_{T}(x(0))\\ H_{1}(\mu)\end{array}\right].\] Notice that \(W\) is nonsingular since the condition (6) holds (the fact that \(\int_{0}^{T}e^{A\tau}d\tau\) is nonsingular follows from the fact that \(T\) corresponds to a non-pathological sampling time [23]). Moreover, by Lemma 1 the second matrix on the right-hand side has full row rank, completing the proof. Define \(Z=[X^{\top}\quad U^{\top}]^{\top}\). Since \(Z\) has full row rank by Lemma 4, we can select \(n+m\) linearly independent columns from it. Let \(z_{k}\) represent the \(k\)th column of \(Z\), and let \(\eta=\{k_{1},\ldots,k_{n+m}\}\) be a set of indices such that \[Z_{\eta}:=\left[z_{k_{1}}\quad\cdots\quad z_{k_{n+m}}\right] \tag{20}\] is a nonsingular matrix. Then, \(\Theta_{i+1}\) is a solution of (17) if and only if it is a solution of \[Z_{\eta}^{\top}(\Phi_{i}^{-})^{\top}\Theta_{i+1}EZ_{\eta}+Z_{\eta}^{\top}E^{ \top}\Theta^{\top}_{i+1}\Phi_{i}^{+}Z_{\eta}+Z_{\eta}^{\top}\bar{Q}_{i}Z_{ \eta}=0. \tag{21}\] From the definitions in (12) and (18), and using the expression (4), we have the following \[\Phi_{i}^{+}Z_{\eta}=\left[\begin{array}{c}AX_{\eta}+X_{\eta}+BU_{\eta}\\ -RK_{i}X_{\eta}-RU_{\eta}\end{array}\right]=\left[\begin{array}{c}\tilde{X}_{ \eta}+X_{\eta}\\ -RK_{i}X_{\eta}-RU_{\eta}\end{array}\right],\] \[\Phi_{i}^{-}Z_{\eta}=\left[\begin{array}{c}AX_{\eta}-X_{\eta}+BU_{\eta}\\ -RK_{i}X_{\eta}-RU_{\eta}\end{array}\right]=\left[\begin{array}{c}\tilde{X}_{ \eta}-X_{\eta}\\ -RK_{i}X_{\eta}-RU_{\eta}\end{array}\right],\] \(Z_{\eta}^{\top}\bar{Q}_{i}Z_{\eta}=X_{\eta}^{\top}\left(Q+K_{i}^{\top}RK_{i} \right)X_{\eta}\) and \(EZ_{\eta}=X_{\eta}\), where the subindex \(\eta\) represents a matrix constructed using the columns specified by the set \(\eta\) from the corresponding original matrix. Substituting in (21), we obtain \[(Y_{i}^{-})^{\top}\Theta_{i+1}X_{\eta}+X_{\eta}^{\top}\Theta_{i+1}^{\top}Y_{i}^{+} +X_{\eta}^{\top}(Q+K_{i}^{\top}RK_{i})X_{\eta}=0. \tag{22}\] where \[\begin{split} Y_{i}^{-}&:=\left[\begin{array}{c} \tilde{X}_{\eta}-X_{\eta}\\ -RK_{i}X_{\eta}-RU_{\eta}\end{array}\right],\\ Y_{i}^{+}&:=\left[\begin{array}{c}\tilde{X}_{\eta}+X_{\eta}\\ -RK_{i}X_{\eta}-RU_{\eta}\end{array}\right].\end{split} \tag{23}\] Now, (22) is a data-based equation that does not require any knowledge about the system model. Algorithm 2 uses this expression to solve the LQR problem. For convenience, for Algorithm 2 we define \[Q_{i}:=X_{\eta}^{\top}(Q+K_{i}^{\top}RK_{i})X_{\eta}. \tag{24}\] ``` 1:procedure 2: Select \(N\geq(n+1)m+n\) and \(T>0\), apply a PCPE input of order \(n+1\) to (2) and collect an \(NT\)-long input-state trajectory. 3: Compute the matrices \(X\), \(U\), and \(\tilde{X}\) as in (3). 4: Select a set of indices \(\eta=\{k_{1},\ldots,k_{n+m}\}\) such that \([X_{\eta}^{\top}\quad U_{\eta}^{\top}]^{\top}\) is nonsingular. 5: Let \(i=0\) and initialize a stabilizing feedback matrix \(K_{0}\). 
6: Define the matrices \(Y_{i}^{+}\), \(Y_{i}^{-}\) and \(Q_{i}\) as in (23)-(24), and solve for \(\Theta_{i+1}\) from the equation \[(Y_{i}^{-})^{\top}\Theta_{i+1}X_{\eta}+X_{\eta}^{\top}\Theta_{i+1}^{\top}Y_{i} ^{+}+Q_{i}=0.\] (25) 7: Partitioning \(\Theta_{i+1}\) as in (15), define \[K^{i+1}=\Theta_{i+1}^{2}.\] 8: If \(\|K^{i+1}-K^{i}\|>\varepsilon\) for some \(\varepsilon>0\), let \(i=i+1\) and go to Step 6. Otherwise, stop. 9:endprocedure ``` **Algorithm 2**Data-based RL algorithm The following theorem states the main properties of this algorithm. **Theorem 1**: _Consider the CT-LTI system (2), and the partitioning (15) of \(\Theta_{i+1}\). Every iteration of Algorithm 2 has the following properties: (\(i\)) the solution \(\Theta_{i+1}\) of (25) exists and is unique; (\(ii\)) the gain \(K_{i+1}\) is stabilizing; and (\(iii\)) \(\Theta_{i}^{1}\succeq\Theta_{i+1}^{1}\succeq P^{*}\). Moreover,_ \[\lim_{i\to\infty}K_{i}=K^{*}\] _and the rate of convergence of the algorithm is quadratic._ The proof is obtained by showing that Algorithm 2 is equivalent to Kleinman's algorithm at every iteration. First, notice that by Lemma 4, the matrix \([X^{\top}\quad U^{\top}]^{\top}\) has full row rank and, therefore, a nonsingular matrix \([X_{\eta}^{\top}\quad U_{\eta}^{\top}]^{\top}\) can always be constructed. This means that (25) is equivalent to (17). Now, noting that \(K_{0}\) is stabilizing, use an induction argument to assume that \(K_{i}\) is stabilizing. Lemma 3 shows the existence and uniqueness of \(\Theta_{i+1}\) from (17). Moreover, the expression (16) in the proof of Lemma 2 shows that \(\Theta_{i+1}^{1}=P_{i}\), where \(P_{i}\) is the solution of the Lyapunov equation (9). Also in the proof of Lemma 2 it was shown that \(\Theta_{i+1}^{2}=R^{-1}B^{\top}\Theta_{i+1}^{1}\), which now corresponds to Kleinman's updated gain (10). Therefore, Algorithm 2 is equivalent to Kleinman's algorithm and shares all of its properties [22]. Algorithm 2 is a purely data-based, off-policy method to solve the continuous-time LQR problem. Using Definition 2, we are able to guarantee the existence of a solution \(\Theta_{i+1}\) of (25) at every iteration for data trajectories of fixed length. This contrasts with the methods in the literature that must keep collecting data until a matrix attains full rank, such as, e.g., [7, 8]. Moreover, we avoid the use of the Kronecker product and its resulting large matrices in Algorithm 2. As stated in Remark 4, methods to efficiently solve a Sylvester-transpose equation as in (25) are well known. **Remark 5**: _Step 4 of Algorithm 2 instructs us to select \(n+m\) linearly independent columns of \([X^{\top}\quad U^{\top}]^{\top}\). This step is performed for the benefit of efficiency, as it decreases the size of the matrices in (25). However, since \([X^{\top}\quad U^{\top}]^{\top}\) has full row rank, skipping this step in Algorithm 2 and using the complete data matrices instead does not affect the result at each iteration._ ## IV Practical Considerations ### _Efficiency analysis of Algorithm 2_ In this subsection, we analyze the theoretical computational complexity of Algorithm 2. Moreover, we compare this complexity with that of the algorithm proposed in [8]. This is because [8] is also an off-policy data-based method that shares many of the characteristics of Algorithm 2. The most expensive steps in Algorithm 2 are obtaining the solution of (25) and selecting \(n+m\) linearly independent vectors from \([X^{\top}\quad U^{\top}]^{\top}\).
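As an illustration of these two steps, the following sketch (ours; variable names are hypothetical) selects the \(n+m\) independent columns with a pivoted QR factorization and solves (25) by brute-force least-squares vectorization over a basis of \(\Theta\). The dedicated Sylvester-transpose solvers [18, 19, 20] advocated in the paper should be preferred in practice; vectorization is used here only for clarity:

```python
import numpy as np
from scipy.linalg import qr

def select_columns(Z, k):
    """Indices of k linearly independent columns of Z, via pivoted QR.
    Usage: eta = select_columns(np.vstack([X, U]), n + m)."""
    _, _, piv = qr(Z, pivoting=True)
    return np.sort(piv[:k])

def solve_eq25(Ym, Yp, Xe, Qi):
    """Least-squares solution of (Ym^T Θ Xe) + (Ym^T replaced by Yp in the
    transposed term) + Qi = 0, i.e. equation (25), noting that
    Xe^T Θ^T Yp = (Yp^T Θ Xe)^T.  Θ has shape (n+m) x n."""
    p, q = Ym.shape[0], Xe.shape[0]
    M = np.zeros((p * p, p * q))
    for k in range(p * q):
        E = np.zeros((p, q)); E.flat[k] = 1.0      # basis matrix for Θ
        Fk = Ym.T @ E @ Xe + (Yp.T @ E @ Xe).T
        M[:, k] = Fk.ravel()
    theta = np.linalg.lstsq(M, -Qi.ravel(), rcond=None)[0]
    return theta.reshape(p, q)

def iterate(K, Xe, Ue, Xte, Q, R):
    """One pass of Steps 6-7: build Y_i^± and Q_i as in (23)-(24),
    solve (25), and return K_{i+1} = Θ² from the partition (15)."""
    Ym = np.vstack([Xte - Xe, -R @ K @ Xe - R @ Ue])
    Yp = np.vstack([Xte + Xe, -R @ K @ Xe - R @ Ue])
    Qi = Xe.T @ (Q + K.T @ R @ K) @ Xe
    Theta = solve_eq25(Ym, Yp, Xe, Qi)
    return Theta[Xe.shape[0]:, :]
```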
Methods to solve the Sylvester-transpose equation (25) with a complexity of \(\mathcal{O}((n+m)^{3})\) are known [19]. The selection of linearly independent vectors can be performed using a simple procedure like Gaussian elimination to transform the matrix of interest into row echelon form. This method has a complexity of \(\mathcal{O}((n+m)^{2}N)\) operations [24]. This step, however, only needs to be performed once in Algorithm 2 (in Step 4). Thus, we conclude that Algorithm 2 requires \(\mathcal{O}((n+m)^{2}N)\) floating point operations once and then \(\mathcal{O}((n+m)^{3})\) floating point operations in each iteration. The algorithm in [8] was also shown to be equivalent to Kleinman's algorithm at every iteration. However, their method uses a Kronecker product formulation that yields matrices of large dimensions. Let \(N_{\otimes}\) be the number of data samples used in [8]. Then, the most expensive step at each iteration of their algorithm is the product of a matrix with dimensions \((\frac{1}{2}n(n+1)+mn)\times N_{\otimes}\) times its transpose. This product, and hence each iteration of the algorithm, requires \(\mathcal{O}((\frac{1}{2}n(n+1)+mn)^{2}N_{\otimes})\) floating point operations [25]. Clearly, as the dimension of the system increases, the difference in performance of both algorithms becomes more significant. Moreover, we notice from [8] that the amount of collected data must satisfy \(N_{\otimes}\geq\frac{1}{2}n(n+1)+mn\) for the algorithm to yield a unique solution at every iteration. Compare this with the bound \(N\geq(n+1)m+n\) in Algorithm 2. In Section V, we test this theoretical comparison using numerical examples. ### _An initial stabilizing policy_ In [26, Remark 2], a procedure to design a stabilizing controller for continuous-time systems using only measured data was described. This method is based on the solution of a linear matrix inequality (LMI). The authors in [7] proposed to use a similar LMI-based procedure to determine the initial stabilizing gain for a Q-learning algorithm. Since one of the goals in this paper is computational efficiency, we would like to avoid the computationally expensive step of solving an LMI. In this subsection, we present an alternative method to determine the initial stabilizing matrix \(K_{0}\) for Algorithm 2. The following development follows closely a procedure proposed in [27, Section IV] for discrete-time systems. Let \(F\) be the Moore-Penrose pseudoinverse of the matrix \(X\) in (3). Since \(X\) has full row rank (see Lemma 4), \(F\) is a right inverse of \(X\). Furthermore, let \(G\) be a basis for the null space of \(X\), such that \(X(F-G\bar{K})=I\) for any matrix \(\bar{K}\) of appropriate dimensions. Using the matrices \(F\), \(G\) and \(U\) from (3), we propose to compute the initial stabilizing gain \(K_{0}\) for Algorithm 2 as \[K_{0}=-U(F-G\bar{K}) \tag{26}\] where \(\bar{K}\) is a matrix to be determined. From (4) and (26), notice that \[\tilde{X}(F-G\bar{K})=[A\quad B]\left[\begin{array}{c}X\\ U\end{array}\right](F-G\bar{K})=[A\quad B]\left[\begin{array}{c}I\\ -K_{0}\end{array}\right]=A-BK_{0}.\] Therefore, by designing the poles of the matrix \(\tilde{X}(F-G\bar{K})\), we also set the poles of \(A-BK_{0}\). Since \((A,B)\) is controllable and hence the poles of \(A-BK_{0}\) can be assigned arbitrarily, also the poles of \(\tilde{X}(F-G\bar{K})\) can be placed arbitrarily by a suitable choice of \(\bar{K}\).
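A minimal data-based sketch of this initialization (ours; it assumes SciPy's `pinv`, `null_space` and `place_poles`, with hypothetical target poles — any pole-placement routine that handles the resulting multi-input virtual pair can be substituted):

```python
import numpy as np
from scipy.linalg import null_space, pinv
from scipy.signal import place_poles

def initial_gain(Xd, X, U, target_poles):
    """Data-based stabilizing gain K0 = -U (F - G K̄) as in (26):
    F is a right inverse of X, G a basis of its null space, and K̄
    places the poles of the virtual pair (X̃F, X̃G)."""
    F = pinv(X)              # right inverse, since X has full row rank
    G = null_space(X)        # basis of the null space of X
    A_bar = Xd @ F           # virtual system matrices, Xd plays X-tilde
    B_bar = Xd @ G
    K_bar = place_poles(A_bar, B_bar, target_poles).gain_matrix
    return -U @ (F - G @ K_bar)

# e.g. K0 = initial_gain(Xtilde, X, U, target_poles=[-1.0, -2.0])
```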
Moreover, since \(\tilde{X}\), \(F\) and \(G\) are matrices obtained from data, we can operate with them without any need of model knowledge. This procedure is summarized in the following theorem. The proof of this theorem is straightforward considering the procedure described in this subsection and is hence omitted. **Theorem 2**: _Let the matrices \(\tilde{X}\), \(X\) and \(U\) be defined as in (3) using data collected from (2) during the application of a PCPE input of order \(n+1\). Define \(F\) as the Moore-Penrose pseudoinverse of \(X\) and \(G\) as a basis for the null space of \(X\). Moreover, define the virtual system matrices \(\tilde{A}=\tilde{X}F\) and \(\bar{B}=\tilde{X}G\). Using pole-placement methods, determine a matrix \(\bar{K}\) such that \(\bar{A}-\bar{B}\bar{K}\) is Hurwitz. Then, the matrix \(K_{0}\) defined by (26) is stabilizing for system (2)._ **Remark 6**: _Notice that the matrices \(\bar{A}=\tilde{X}F\) and \(\bar{B}=\tilde{X}G\) in Theorem 2 do not correspond to the actual system matrices \(A\) and \(B\). In fact, \(B\) and \(\bar{B}\) in general do not have the same dimensions. No model identification is performed in the proposed procedure._ ## V Numerical experiments In this section, we compare in simulation the efficiency of the proposed Algorithm 2 with that of the algorithm presented in [8]. As described above, these algorithms have the same characteristics: they are data-based off-policy methods that are equivalent to Kleinman's algorithm at every iteration. To compare the efficiency of both algorithms, several simulations are performed for different, randomly generated linear systems (2). In particular, 100 different linear systems are generated using the command _rss_ in Matlab, and both algorithms are applied to each of them. The system dimensions considered for each set of 100 experiments are \(n=2,3,5\) and \(7\). In every case, we consider single input systems (\(m=1\)), and we define the cost function (8) with \(Q=I\) and \(R=2\). Each implementation of Algorithm 2 had the following characteristics. A PCPE input as in Definition 2 was used to collect data from the system. A sample of data was collected every \(10^{-4}\) time units. We considered a time interval of \(T=0.2\), and we collected data for a total of \(NT\) time units, with \(N=(n+1)m+n\). The method described in [18] was used to solve the Sylvester-transpose equation (25) at every iteration. For the implementation of the Kronecker product-based method in [8], we followed the same simulation characteristics described in the simulation section of that paper. The only exception is in the amount of data collected, which was reduced for small system dimensions in order to make a fairer comparison. Finally, notice that the command _rss_ in Matlab yields stable systems. Thus, an initial stabilizing matrix of \(K_{0}=0\) was used for all experiments and both algorithms. The simulations were performed using Matlab R2020b on an Intel i7-10875H (2.30 GHz) with 16 GB of memory. The results of our simulations are displayed in Table I. In this table, we refer to Algorithm 2, which is based on the solution of a Sylvester-transpose equation, as 'SYL'. The algorithm in [8] that is based on the use of the Kronecker product is denoted as 'KRO'. To compare the computational efficiency of the methods, we present the average time that it takes the algorithms to complete 10 iterations. 
Due to their quadratic rate of convergence, 10 iterations yield a very accurate result of the optimal control gain for both algorithms. In the table we can observe a confirmation of our theoretical analysis regarding the improved performance of Algorithm 2. During the execution of these experiments, we noted some issues in the performance of both methods when applied to systems of large dimensions. First, Algorithm 2 requires the application of a solver from the literature to solve (25). We found that, if the data matrix \(Z_{\eta}\) in Algorithm 2 has a large condition number, the solvers considered often failed to provide the correct result. To address this problem, methods to construct a matrix \(Z_{\eta}\) with a low condition number from a larger matrix \(Z\) could be considered. Regarding the algorithm in [8], determining a proper input in order to satisfy the required persistence of excitation condition for the collected data (compare the discussion in the Introduction) becomes ever more difficult as the dimension of the system increases. In this case, it is uncertain how to solve this issue. ## VI Conclusions In this paper, a computationally efficient algorithm was proposed to solve the continuous-time LQR problem. The proposed algorithm is equivalent to Kleinman's method, it does not require any knowledge of the system model, and it requires collecting data from the system only once. We presented a persistently exciting input that guarantees that the matrix equation (25) in our algorithm has a unique solution at every iteration. Finally, we showed a method to determine an initial stabilizing feedback matrix that uses only measured data and does not require solving LMIs. Simulation results show that our algorithm significantly improves on the performance of an algorithm with similar properties in the literature.
In this paper, an off-policy RL algorithm is designed to solve the continuous-time LQR problem using input-state data measured from the system. Unlike other algorithms in the literature, a specific persistently exciting input is used as the exploration signal during the data collection step. It is then shown that, using this persistently excited data, the solution of the matrix equation in the algorithm is guaranteed to exist and to be unique at every iteration. Convergence of the algorithm to the optimal control input is also proven. Furthermore, formulating the policy evaluation step as the solution of a Sylvester-transpose equation improves the efficiency of its solution. Finally, a method is proposed to determine a stabilizing policy for initializing the algorithm using only measured data.
2309.03837
Cross-Task Attention Network: Improving Multi-Task Learning for Medical Imaging Applications
Multi-task learning (MTL) is a powerful approach in deep learning that leverages the information from multiple tasks during training to improve model performance. In medical imaging, MTL has shown great potential to solve various tasks. However, existing MTL architectures in medical imaging are limited in sharing information across tasks, reducing the potential performance improvements of MTL. In this study, we introduce a novel attention-based MTL framework to better leverage inter-task interactions for various tasks from pixel-level to image-level predictions. Specifically, we propose a Cross-Task Attention Network (CTAN) which utilizes cross-task attention mechanisms to incorporate information by interacting across tasks. We validated CTAN on four medical imaging datasets that span different domains and tasks including: radiation treatment planning prediction using planning CT images of two different target cancers (Prostate, OpenKBP); pigmented skin lesion segmentation and diagnosis using dermatoscopic images (HAM10000); and COVID-19 diagnosis and severity prediction using chest CT scans (STOIC). Our study demonstrates the effectiveness of CTAN in improving the accuracy of medical imaging tasks. Compared to standard single-task learning (STL), CTAN demonstrated a 4.67% improvement in performance and outperformed both widely used MTL baselines: hard parameter sharing (HPS) with an average performance improvement of 3.22%; and multi-task attention network (MTAN) with a relative decrease of 5.38%. These findings highlight the significance of our proposed MTL framework in solving medical imaging tasks and its potential to improve their accuracy across domains.
Sangwook Kim, Thomas G. Purdie, Chris McIntosh
2023-09-07T16:50:40
http://arxiv.org/abs/2309.03837v1
# Cross-Task Attention Network: Improving Multi-Task Learning for Medical Imaging Applications ###### Abstract Multi-task learning (MTL) is a powerful approach in deep learning that leverages the information from multiple tasks during training to improve model performance. In medical imaging, MTL has shown great potential to solve various tasks. However, existing MTL architectures in medical imaging are limited in sharing information across tasks, reducing the potential performance improvements of MTL. In this study, we introduce a novel attention-based MTL framework to better leverage inter-task interactions for various tasks from pixel-level to image-level predictions. Specifically, we propose a Cross-Task Attention Network (CTAN) which utilizes cross-task attention mechanisms to incorporate information by interacting across tasks. We validated CTAN on four medical imaging datasets that span different domains and tasks including: radiation treatment planning prediction using planning CT images of two different target cancers (Prostate, OpenKBP); pigmented skin lesion segmentation and diagnosis using dermatoscopic images (HAM10000); and COVID-19 diagnosis and severity prediction using chest CT scans (STOIC). Our study demonstrates the effectiveness of CTAN in improving the accuracy of medical imaging tasks. Compared to standard single-task learning (STL), CTAN demonstrated a 4.67% improvement in performance and outperformed both widely used MTL baselines: hard parameter sharing (HPS) with an average performance improvement of 3.22%; and multi-task attention network (MTAN) with a relative decrease of 5.38%. These findings highlight the significance of our proposed MTL framework in solving medical imaging tasks and its potential to improve their accuracy across domains. Keywords: Multi-Task Learning, Cross Attention, Automated Radiotherapy ## 1 Introduction Multi-task learning (MTL) [5] algorithms train deep learning models for two or more tasks simultaneously using shared parameters between models to encourage beneficial cooperation. MTL provides additional information not by explicitly adding more datasets for model training but by implicitly extracting training signals from multiple related tasks in the existing dataset. The various tasks are thought to regularize shared components of the network, leading to improved model performance and generalization. For example, following [2], it is natural to assume that learning features required to delineate a skin lesion from the background may be relevant in comparing the lesion to its surrounding areas to inform the diagnosis.

Figure 1: (Top) Cross-task attention network (CTAN) and other MTL model architectures: hard parameter sharing (HPS) [1] and multi-task attention network (MTAN) [16]. Similar to the concept of one-to-many mappings from HPS and MTAN, CTAN has one shared encoder linked with decoders for each task. MTAN uses encoder features using attention for respective tasks. However, CTAN uses cross-attention in encoder and bottleneck layers to transfer task-specific features to task-specific decoders for better task interaction. (Bottom) Summary of four medical imaging datasets with three different task sets used in this study. **The number of samples of the train, validation, and test splits is shown below each dataset.** Test datasets without complete segmentation labels and clinical information were excluded from the original datasets in OpenKBP and HAM10000, respectively.
Previous studies have demonstrated that learning two relevant tasks can improve model performance using MTL in medical imaging [4, 6, 7, 8, 26, 27]. Sainz et al. show the application and improvement of model performance using MTL in breast cancer screening by training classification and detection of abnormal mammography findings [6]. Chen et al. utilize MTL to improve atrial segmentation and classification using MRI [7]. Weninger et al. propose an MTL framework to improve brain tumour segmentation by jointly training detection of enhancing tumour and image reconstruction using brain MRI [26]. These studies demonstrate the applicability of MTL to improve performance for tasks in medical imaging. However, even though these studies have shown enhanced performance using MTL, most MTL architectures are based on hard-parameter sharing (HPS) [1], which includes a single shared encoder with task-specific decoders in a one-to-many fashion, maximizing encoder regularization between tasks but limiting all tasks to an identical feature set as opposed to some common features. Introduced by Liu et al., the multi-task attention network (MTAN) [16] also employs a one-to-many mapping but adds task-specific independent attention mechanisms that can adapt the shared embedding per task yet are themselves unable to share any information across tasks. With the introduction of MTAN, there have been studies using attention in MTL for automating binding between task features within the network architectures [17, 28]. However, most existing MTL studies using non-medical images focus on scenarios where all tasks are at the pixel level. This is often impractical in the medical imaging domain, since acquiring pixel-level labels in medical images is labour-intensive. Thus, we focus on solving multi-task learning in hybrid scenarios including both pixel- and image-level tasks by utilizing cross-task attention in MTL using medical imaging datasets. We hypothesize that by leveraging the shared feature abilities of HPS with the flexibility of MTAN through a novel cross-task attention framework that shares task information across the attention mechanisms, we can better utilize inter-task interaction to improve overall performance using MTL. Additionally, cross-attention of bottleneck features for each task was also employed to provide cross-task dependent information to the decoders for each task. We validated our approach using three distinct pairs of tasks from four medical imaging datasets. CTAN shows broad applicability with mixes of tasks at both the pixel and image level. #### Contributions We propose a novel Cross-Task Attention Network (CTAN), an MTL framework that leverages cross-task attention modules in the encoder and bottleneck layer to capture inter-task interaction across tasks (see Fig. 2). Our results demonstrate that CTAN is effective in learning three types of vision tasks, including two pixel-level prediction tasks and one image-level task from various domains. As shown in Fig. 1, we experimented with three different task pairs from four datasets. In addition, we showed the performance improvement of CTAN compared to single-task learning (STL), and two widely used MTL baseline architectures, HPS and MTAN. ## 2 Methods and Materials ### Cross-Task Attention Network (CTAN) CTAN consists of two cross-task attention modules: the cross-task attention encoder (CTAE) and the cross-task attention bottleneck (CTAB) (see Fig. 2).
Figure 2: Overview of the architecture of the cross-task attention network (CTAN), including the encoder and two decoders for image-level and pixel-level tasks. Convolution blocks are shown on the right, along with the two cross-task attention modules: (a) cross-task attention encoder (CTAE), and (b) cross-task attention bottleneck (CTAB).

CTAE is employed within the encoder layers by calculating the attentive mask, and uses two pieces of information targeted for each task. CTAE enables the shared encoder to extract task-specific information. It encodes and decodes the input features to highlight and extract significant features. The attention module in CTAE resembles the attention module in [16], wherein for each task Liu et al. calculate attention maps using one attention block per task and multiply them with the feature maps during a forward pass with data from that task. However, in CTAE, attention maps are instead multiplied in a cross-direction way, as shown in Fig. 2-a. This helps the model to integrate the shared features by multiplying the cross-task attentive maps with features from the shared block, which enables an inter-task interaction while training. We denote \(U^{j}\) and \(P^{j}\) as features from the \(j^{th}\) layer of the shared encoder, and \(t\) as the task index. Note that \(P^{j}\) refers to the output of two convolution blocks using \(U^{j}\) as the input. \(S^{j-1}\) denotes the input of the \(j^{th}\) layer in the shared encoder, which is the output of the shared block in the \(j-1^{th}\) layer for \(j>1\). In contrast, when \(j=1\), the input image embedding from the 3x3 Conv block is used (see Fig. 2). The task-specific embedded features, \(F^{j}_{t}\), result from the concatenation of \(U^{j}\) and \(\hat{A}^{j-1}_{t}\) for \(j>1\), while \(U^{j}\) for \(j=0\), followed by the task embedding block in Fig. 2. \(F^{j}_{t}\) is then fed into the task-specific attention block to create attention mask \(A^{j}_{t}\). The output of CTAE \(\hat{A}^{j}_{t}\) is defined as: \[\hat{A}^{j}_{t}=Pool(A^{j}_{t^{\prime}}\ \odot\ P^{j}),\ t\in\{\textit{1,2}\}, \tag{1}\] where \(Pool\) refers to the pooling block (see Fig. 2), \(\odot\) refers to the element-wise multiplication, and \(t^{\prime}\) refers to the task index of the other task trained together. \(\hat{A}^{j}_{t}\) then serves as the input attention mask for the attention block in the next layer, propagating attention across the encoder (\(\hat{A}^{j-1}_{t}\) is set to all zero for the first layer). We propose CTAB as shown in Fig. 2-b, in which we calculate and multiply the cross-task attention of two task-embedded features to the task-specific bottleneck representation. We calculate the cross-task attention mask using a \(query\) and a \(key\) and apply the attention mask to a \(value\). Herein, \(value\) and \(key\) are the same task-embedded features, and \(query\) is the embedding of the other task. Thus, the output of CTAB \(\bar{A}_{t}\) is defined as: \[\bar{A}_{t}=\hat{E}_{t}\ \cdot(\hat{E}^{\top}_{t^{\prime}}\cdot\hat{E}_{t}),\ t \in\{\textit{1,2}\}, \tag{2}\] where \(\top\) refers to the transpose of a matrix, \(\cdot\) refers to matrix multiplication, and \(\hat{E}_{t}\) denotes the task-specific embedded features for task \(t\). The output of CTAB, \(\bar{A}_{t}\), is forwarded to the task-specific decoders. #### Encoder and Decoder We utilize a ResNet-50 [12] pre-trained with ImageNet [9] as the encoder backbone, with identical architecture across all experiments.
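For concreteness, here is a minimal NumPy sketch of CTAB as written in (2) (ours, not the authors' code; the (channels, positions) shape convention and the sizes are assumptions, and no normalization is applied since (2) uses plain matrix products):

```python
import numpy as np

def ctab(e_t, e_other):
    """Cross-task attention bottleneck, Eq. (2): the other task's
    embedding acts as the query; e_t serves as both key and value.
    e_t, e_other: task-embedded bottleneck features, shape (C, HW)."""
    attn = e_other.T @ e_t          # (HW, HW) cross-task attention mask
    return e_t @ attn               # \bar{A}_t, sent to the decoder of task t

# Toy check with hypothetical bottleneck sizes (C=8 channels, HW=16 positions).
e1, e2 = np.random.rand(8, 16), np.random.rand(8, 16)
a1, a2 = ctab(e1, e2), ctab(e2, e1)
print(a1.shape, a2.shape)           # (8, 16) (8, 16)
```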
However, we implement different decoders for image-level and pixel-level tasks. For pixel-level tasks such as segmentation and dose prediction, we incorporate skip connections [23] between the encoders and decoders, with three up-sampling blocks using bilinear interpolation (as depicted in Fig. 2), followed by a 1x1 convolution layer with output channels equal to the number of segmentation labels, and a single channel for dose prediction. For image-level tasks, we use decoders with skip connections and four down-sampling layers, with a global average pooling layer [11] and a fully-connected layer at the end. Notably, we introduce skip connections in the classifier to balance model training and address asymmetric decoder issues that arise when training MTL to solve both image-level and pixel-level tasks together. Finally, we use a fully-connected layer with a sigmoid activation function for binary classification (STOIC) and a softmax function for multi-class classification (HAM10000) as the final output layer. ### Training details We use the Adam optimizer [14] with a learning rate of \(10^{-4}\) and a weight decay of \(10^{-5}\). We use task-specific losses (see Table 1). Dynamic Weight Averaging [16] was utilized to stabilize the combined training losses of all tasks. A batch size of 32 was used for the Prostate dataset, and 8 for the rest. We conducted experiments using PyTorch (ver 1.9.0) [20], on an NVIDIA A100 GPU with 40 GB memory. ### Evaluation We used task-specific metrics to evaluate the model performance for each task: Dice similarity coefficient (%) for segmentation; mean absolute error (Gy) between ground truth and predicted dose distribution maps for dose prediction; accuracy (%) for classification of HAM10000; and the area under the receiver operating characteristic curve (%) for classification of STOIC. Following [15], we define the relative performance of MTL models compared to STL: \[\Delta_{task}(\%)=100*\frac{(-1)^{l_{i}}(M_{b,i}-M_{m,i})}{M_{b,i}},\ l\in\{ \mathit{0},\mathit{1}\}, \tag{3}\] where \(i\) denotes the index of the task, \(m\) and \(b\) refer to the target MTL model and the baseline STL, respectively. \(M\) refers to the task performance metric. \(l\) denotes the metric-specific flag, which is 1 if a higher metric value is better and 0 otherwise. We can then calculate the average of the relative difference of all task-specific metrics for each experiment. A positive relative performance indicates that MTL performs better than STL. \begin{table} \begin{tabular}{l l l} \hline \hline Task & Loss function & Dataset \\ \hline Segmentation & Combo Loss [18] & Prostate, \\ & (Weighted combination of Dice Loss and Cross-entropy & OpenKBP, \\ & Loss) & HAM10000 \\ \hline Dose & Mean absolute error (MAE) Loss [3] & Prostate, \\ prediction & & OpenKBP \\ \hline Classification & Cross-entropy Loss & HAM10000, \\ & & STOIC \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of loss functions for each task. We use combo loss [18], with weights 0.3 and 0.7 for the dice loss and cross-entropy loss, respectively. ### Datasets We validated our approach using four medical imaging datasets with three different task sets (see Fig. 1-B). The first task set consists of two pixel-level tasks: dose prediction and segmentation of organs at risk (OAR) and clinical target volume (CTV) for prostate (Prostate) and head and neck cancer treatment (OpenKBP) ([https://www.aapm.org/GrandChallenge/OpenKBP](https://www.aapm.org/GrandChallenge/OpenKBP), [3]).
Segmentation labels for the Prostate dataset are rectum, bladder, left and right femur, while brain stem, spinal cord, left and right parotid are used in OpenKBP. For the second task set, which contains one image-level and one pixel-level task, dermatoscopic images of pigmented skin lesion datasets (HAM10000) ([https://doi.org/10.7910/DVN/DBW86T](https://doi.org/10.7910/DVN/DBW86T), [24]) are used to segment and diagnose skin lesions. The last set has two image-level tasks: classification of COVID-19 and disease severity using chest CT scans (STOIC) ([https://stoic2021.grand-challenge.org](https://stoic2021.grand-challenge.org), [22]). ## 3 Experiments and Results As shown in Table 2, CTAN outperformed STL with an average relative difference of 4.67%. For the Prostate and OpenKBP datasets, which have two different pixel-level tasks, CTAN showed an improvement of 2.18% and 1.99%, respectively, over STL. In both datasets, the performance increase for the dose prediction task was larger than that for the segmentation task. Notably, CTAN improved the performance of dose prediction when the task is trained with segmentation of organs at risk and target volumes, rather than improving the performance of segmentation. For HAM10000, CTAN showed an overall performance improvement with a significant increase in diagnosing skin lesions. However, the performance of segmenting pigmented lesions improved only marginally compared to the classification task. For STOIC, CTAN resulted in an average relative difference of 5.72% for both image-level tasks, with a significant increase in diagnosing severe cases but a decrease in diagnosing COVID-19. As shown in Table 2, CTAN outperformed both HPS and MTAN, which achieved an average relative improvement of 3.22% and a relative decrease of 5.38% compared to STL, respectively. Unlike the other MTL baselines, CTAN showed performance improvements regardless of the task groups and the task levels they combine. However, there were cases where CTAN did not outperform other baselines at the single task level. For instance, for the Prostate dataset's segmentation task, HPS outperformed CTAN with a relative difference of 1.74% while CTAN showed only a 0.54% increase. Nevertheless, the overall performance gain using CTAN was higher across datasets and tasks, indicating that the cross-task attention mechanisms in CTAN were effective in learning multiple tasks. ## 4 Discussion Our findings suggest that CTAN can improve the MTL performance across three distinct tasks from four distinct medical imaging datasets by 4.67% on average. However, the specific performance improvements on each dataset and task can vary. Compared to other tasks, CTAN only marginally improves performance on segmentation tasks. This might be due to the faster convergence of segmentation tasks in comparison to others, which may cause them to act more as regularizers with pixel-level prior knowledge providing local contextual information for other tasks [21]. In this regard, results show that CTAN is more effective in utilizing segmentation tasks for learning high-level semantic cues compared to other MTL baselines. In particular, CTAN can implicitly learn to avoid dose exposure to OARs and maximize dose to the CTV by training two clinically relevant tasks. This implies a potential to automate dose planning without the dependence on the contouring information, prior to predicting the dose distribution.
This approach can ensure robustness against the variability of human annotators and improve automated planning quality for clinical care [19].

\begin{table} \begin{tabular}{l l l l l l l l} \hline Dataset & Method & \(M_{task1}\) & \(\Delta_{task1}\uparrow\) & \(M_{task2}\) & \(\Delta_{task2}\uparrow\) & \(\Delta_{mean}\uparrow\) & Rank \\ \hline Prostate & STL & 81.96 & & 0.93 & & & 3 \\ & HPS & **83.28** & **1.74\%** & 0.91 & 1.29\% & 1.51\% & 2 \\ & MTAN & 75.47 & -7.92\% & 0.99 & -7.29\% & -7.60\% & 4 \\ & **CTAN** & 82.40 & 0.54\% & **0.89** & **3.82\%** & **2.18\%** & **1** \\ \hline OpenKBP [3] & STL & 71.29 & & 0.53 & & & 2 \\ & HPS & 70.87 & -0.52\% & 0.53 & 0.31\% & -0.10\% & 3 \\ & MTAN & 66.09 & -7.30\% & 0.56 & -5.29\% & -6.29\% & 4 \\ & **CTAN** & **71.59** & **0.42\%** & **0.51** & **3.56\%** & **1.99\%** & **1** \\ \hline HAM10000 [24] & STL & 92.83 & & 49.24 & & & 3 \\ & HPS & 92.21 & -0.68\% & 55.49 & 12.69\% & 6.01\% & 2 \\ & MTAN & 92.15 & -0.73\% & 47.08 & -4.37\% & -2.55\% & 4 \\ & **CTAN** & **92.91** & **0.09\%** & **57.85** & **17.49\%** & **8.79\%** & **1** \\ \hline STOIC [22] & STL & **71.88** & & 55.83 & & & 3 \\ & HPS & 63.84 & -11.18\% & **68.17** & **22.09\%** & 5.45\% & 2 \\ & MTAN & 57.55 & -19.93\% & 61.30 & 9.79\% & -5.07\% & 4 \\ & **CTAN** & 68.73 & -4.38\% & 64.66 & 15.81\% & **5.72\%** & **1** \\ \hline Average & STL & & & - & & & 3 \\ & HPS & - & -2.66\% & - & 9.09\% & 3.22\% & 2 \\ & MTAN & - & -8.97\% & - & -1.79\% & -5.38\% & 4 \\ & **CTAN** & - & **-0.83\%** & - & **10.17\%** & **4.67\%** & **1** \\ \hline \end{tabular} \end{table} Table 2: Results of task-specific metrics (\(M_{task}\)) and their relative difference to STL (\(\Delta_{task}\)) of STL, HPS, MTAN, and CTAN on four datasets. Higher values are better for all metrics, except for \(M_{task2}\) in the Prostate and OpenKBP datasets. Best and second-best results are bolded and underlined, respectively. Average values are only calculated for the relative performance differences of the MTL methods.

We observed a performance drop in COVID-19 classification in STOIC due to the intricate nature of the task, as diagnosing severity depends on the COVID-19 diagnosis and causes per-task gradient collision during training. However, CTAN proved to be effective in minimizing the performance drop in COVID-19 classification compared to other MTL methods. This implies CTAN can selectively learn cross-task attentive features to improve overall performance. Future work could expand the applications of CTAN to other domains such as videos of natural teeth [13], fundus photography for diagnosing glaucoma [10], or laparoscopic hysterectomy [25], and further investigate what drives the per-dataset variations. In conclusion, we introduce a novel MTL framework, CTAN, that utilizes cross-task attention to improve MTL performance in medical imaging from multiple levels of tasks by 4.67% compared to STL. Results demonstrate that incorporating inter-task interaction in CTAN enhances the overall performance on three medical imaging task sets from four distinct datasets, surpassing STL and two widely-used baseline MTL methods. This highlights CTAN's effectiveness and potential to improve MTL performance in the field of medical imaging.
Multi-task learning (MTL) is a powerful approach in deep learning that improves model performance by leveraging information from multiple tasks during training. In medical imaging, MTL has shown great promise for solving a variety of tasks; however, existing MTL architectures restrict the sharing of information between tasks and are insufficient to fully realize the performance gains of MTL. In this work, we introduce a novel attention-based MTL framework that more effectively exploits the interactions between diverse tasks, spanning pixel-level to image-level predictions. Specifically, we propose the Cross-Task Attention Network (CTAN), which uses cross-task attention mechanisms to exchange and integrate information between different tasks. CTAN was validated on four medical imaging datasets covering different domains and tasks. These include the planning CT used for predicting radiotherapy treatment plans
2310.00285
Optimal Local Measurements in Many-body Quantum Metrology
Quantum measurements are key to quantum metrology. Constrained by experimental capabilities, collective measurements on a large number of copies of metrological probes can pose significant challenges. Therefore, the locality in quantum measurements must be considered. In this work, we propose a method dubbed as the "iterative matrix partition" approach to elucidate the underlying structures of optimal local measurements, with and without classical communications, that saturate the quantum Cram\'er-Rao Bound (qCRB). Furthermore, we find that while exact saturation is possible for all two-qubit pure states, it is generically restrictive for multi-qubit pure states. However, we demonstrate that the qCRB can be universally saturated in an approximate manner through adaptive coherent controls, as long as the initial state is separable and the Hamiltonian allows for interaction. Our results bridge the gap between theoretical proposals and experiments in many-body metrology and can find immediate applications in noisy intermediate-scale quantum devices.
Jia-Xuan Liu, Jing Yang, Hai-Long Shi, Sixia Yu
2023-09-30T07:34:31
http://arxiv.org/abs/2310.00285v1
# Optimal Local Measurements in Many-body Quantum Metrology ###### Abstract Quantum measurements are key to quantum metrology. Constrained by experimental capabilities, collective measurements on a large number of copies of metrological probes can pose significant challenges. Therefore, the locality in quantum measurements must be considered. In this work, we propose a method dubbed the "iterative matrix partition" approach to elucidate the underlying structures of optimal local measurements, with and without classical communications, that saturate the quantum Cramér-Rao bound (qCRB). Furthermore, we find that while exact saturation is possible for all two-qubit pure states, it is generically restrictive for multi-qubit pure states. However, we demonstrate that the qCRB can be universally saturated in an approximate manner through adaptive coherent controls, as long as the initial state is separable and the Hamiltonian allows for interaction. Our results bridge the gap between theoretical proposals and experiments in many-body metrology and can find immediate applications in noisy intermediate-scale quantum devices. _Introduction.--_ Locality plays a crucial role in various branches of physics, encompassing high-energy physics [1; 2; 3], condensed matter physics [4; 5], and quantum information theory [6; 7; 8; 9; 10]. In the context of many-body systems, locality gives rise to the Lieb-Robinson bound [11; 12; 13], which sets an upper limit on the spread of local operators. Despite the recent resurgence of interest in quantum metrology using many-body Hamiltonians [14; 15; 16; 17; 18], the investigation of locality in the sensing Hamiltonian has only recently been undertaken [19; 20; 21]. On the other hand, at the fundamental as well as the practical level, locality in quantum measurements has remained largely uncharted in many-body quantum metrology. For example, consider a non-interacting and multiplicative sensing Hamiltonian \(H_{\lambda}=\lambda\sum_{j}h_{j}\), where \(h_{j}\) is the local Hamiltonian defined for the spin at site \(j\) and \(\lambda\) is the estimation parameter. It has been shown in Ref. [22] that if the initial state is prepared in a GHZ (Greenberger-Horne-Zeilinger)-like state, the precision is maximized among all possible initial states, and local measurements (LM) suffice to saturate the quantum Cramér-Rao bound (qCRB). However, it is worth emphasizing that, to the best of our knowledge, even for this non-interacting Hamiltonian, little is known about whether LM can saturate the qCRB for other initial states, not to mention that \(H_{\lambda}\) can in general contain many-body interactions and have generic parametric dependence. Additionally, for pure states, Zhou et al. [23] proved that rank-1 projective local measurements with classical communications (LMCC) can be constructed to saturate the qCRB. However, due to the classical communications between particles, the total number of measurement bases scales exponentially with the number of particles, which requires an exponential amount of experimental resources and is thus difficult to implement. In contrast, the total number of bases in LM scales linearly with the number of particles, which is feasible for experimental implementation. As such, in this work we present a systematic study of qCRB-saturating LM. We address the following main questions: (i) Can LM universally saturate the qCRB? (ii) If not, under which circumstances does a qCRB-saturating LM exist?
(iii) If one allows generic positive operator-valued measure (POVM) LM, the number of measurement bases is unlimited and can thus be made exponentially large, like in LMCC. It is therefore natural to ask whether POVM LM can help saturate the qCRB. (iv) If exact saturation with LM is very restrictive, is it possible to identify regimes where approximate saturation is possible? We shall develop a comprehensive understanding of these questions subsequently. _The Optimal Measurement Condition.--_ To begin with, we consider a pure quantum state \(\left|\psi_{\lambda}\right\rangle\). The quantum Fisher information (QFI) is given by [24; 25] \[I=4\left(\langle\partial_{\lambda}\psi_{\lambda}|\partial_{\lambda}\psi_{\lambda}\rangle-\left|\langle\psi_{\lambda}|\partial_{\lambda}\psi_{\lambda}\rangle\right|^{2}\right). \tag{1}\] The optimal measurement condition that can saturate the qCRB is given by [26; 23; 27] \[\left\langle\pi_{\omega}\right|\mathcal{M}\left|\pi_{\omega}\right\rangle=0, \tag{2}\] where \[\mathcal{M}\equiv[\rho_{\lambda},\,L]=2[\rho_{\lambda},\,\partial_{\lambda}\rho_{\lambda}], \tag{3}\] \(L\) is the symmetric logarithmic derivative defined through \(\partial_{\lambda}\rho_{\lambda}\equiv(\rho_{\lambda}L+L\rho_{\lambda})/2\) with \(\rho_{\lambda}\equiv\left|\psi_{\lambda}\right\rangle\left\langle\psi_{\lambda}\right|\), and the POVM measurement satisfies \(\sum_{\omega}\left|\pi_{\omega}\right\rangle\left\langle\pi_{\omega}\right|=\mathbb{I}\). Here, without loss of generality, we only consider a set of rank-1 POVM operators [27]. We would like to emphasize that in Ref. [23] the optimal condition is divided into two cases according to whether \(\text{Tr}(\rho_{\lambda}\left|\pi_{\omega}\right\rangle\left\langle\pi_{\omega}\right|)\) vanishes or not. Using the results on multi-parameter estimation [28], we argue in Sec. I of the Supplemental Material [27] that such a division is unnecessary and that Eq. (2) is the condition to saturate the qCRB for all types of POVM measurements. _The Iterative Matrix Partition Approach to LMCC and LM.--_ From now on, we shall focus our discussion on pure states of \(N\)-qubit systems and search for optimal LM and LMCC. In this case, the measurement outcome \(\omega\) in Eq. (2) becomes a string of measurement outcomes of the individual qubits, denoted as \(\omega=(\omega_{1},\,\omega_{2},\,\cdots,\,\omega_{N})\). Zhou et al. [23] showed that the optimal projective LMCC can be constructed iteratively through \[\langle\pi^{(j)}_{\omega_{j},\omega_{1}\cdots\omega_{j-1}}|M^{(j)}_{\omega_{1}\cdots\omega_{j-1}}|\pi^{(j)}_{\omega_{j},\omega_{1}\cdots\omega_{j-1}}\rangle=0. \tag{4}\] The superscripts on the basis states and operators in Eq. (4) indicate the subsystems over which they are defined, and \[M^{(j)}_{\omega_{1}\cdots\omega_{j-1}}\equiv\langle\pi^{(1)}_{\omega_{1}}|\otimes\cdots\langle\pi^{(j-1)}_{\omega_{j-1},\omega_{1}\cdots\omega_{j-2}}|\mathrm{Tr}_{(j+1\cdots N)}\mathcal{M}|\pi^{(1)}_{\omega_{1}}\rangle\otimes\cdots|\pi^{(j-1)}_{\omega_{j-1},\omega_{1}\cdots\omega_{j-2}}\rangle \tag{5}\] is an operator defined on the \(j\)-th qubit for \(j\geq 2\), where the subscripts in the "\(\mathrm{Tr}\)" notation indicate the subsystems that are traced over. For \(j=1\), \(M^{(1)}\equiv\mathrm{Tr}_{(2\cdots N)}\mathcal{M}\) and \(|\pi^{(1)}_{\omega_{1}}\rangle\) satisfies \(\langle\pi^{(1)}_{\omega_{1}}|M^{(1)}|\pi^{(1)}_{\omega_{1}}\rangle=0\). In Sec.
II of the Supplemental Material [27], we show that these properties follow naturally from the optimal measurement condition (2), and that for optimal projective LM they reduce to \[\langle\pi^{(j)}_{\omega_{j}}|M^{(j)}|\pi^{(j)}_{\omega_{j}}\rangle=0, \tag{6}\] where \(M^{(j)}\equiv\mathrm{Tr}_{(1\cdots\not{j}\cdots N)}\mathcal{M}\) and the slashed index \(\not{j}\) indicates that the \(j\)-th qubit is not traced over. A few comments are in order: (i) Since \(M^{(j)}_{\omega_{1}\cdots\omega_{j-1}}\) and \(M^{(j)}\) are traceless, the measurement bases in Eqs. (4, 6) can be found through the "hollowization" process: a traceless matrix can always be brought to a hollow matrix, i.e., a matrix with zero diagonal entries, through unitary transformations [27; 29; 30]. (ii) While Eq. (4) is also sufficient to guarantee the optimal measurement condition (2), this is no longer true for Eq. (6). To resolve this issue, we propose the "_iterative matrix partition_" (IMP) approach, which not only produces the LMCC but also illuminates the intuition behind the existence of LM. We denote the local computational basis for the \(j\)-th qubit as \(|e^{(j)}_{\omega_{j}}\rangle\), \(\omega_{j}=1,\,2\). One can compute the \(\mathcal{M}\) operator in this basis (see a tutorial example in [27]). Consider \[\mathcal{M}=\left[\begin{array}{c|c}M^{(1)}_{11}&M^{(1)}_{12}\\ \hline M^{(1)}_{21}&M^{(1)}_{22}\end{array}\right]\,, \tag{7}\] where for fixed \(\omega_{1}\) and \(\mu_{1}\), \(M^{(1)}_{\omega_{1}\mu_{1}}\equiv\langle e^{(1)}_{\omega_{1}}|\mathcal{M}|e^{(1)}_{\mu_{1}}\rangle\) is a \(2^{N-1}\times 2^{N-1}\) matrix that acts on all the qubits except the first. Since \(\mathcal{M}\) is anti-Hermitian, so are the diagonal block matrices \(M^{(1)}_{11}\) and \(M^{(1)}_{22}\). Furthermore, since \(\mathcal{M}\) is traceless, the traces of the two diagonal blocks can also be brought to zero through a unitary transformation on the first qubit (see Observation 3 in [27]). More precisely, \[\mathcal{M}=\sum_{\omega_{1}\mu_{1}}W^{(1)}_{\omega_{1}\mu_{1}}\,|\pi^{(1)}_{\omega_{1}}\rangle\,\langle\pi^{(1)}_{\mu_{1}}|, \tag{8}\] where \(|\pi^{(1)}_{\omega_{1}}\rangle\equiv U^{(1)}\,|e^{(1)}_{\omega_{1}}\rangle\), \(W^{(1)}_{\omega_{1}\mu_{1}}\equiv U^{(1)}M^{(1)}_{\omega_{1}\mu_{1}}U^{(1)\dagger}\), and \(U^{(1)}\) is chosen such that \(\mathrm{Tr}\,W^{(1)}_{11}=\mathrm{Tr}\,W^{(1)}_{22}=0\). Note that \(W^{(1)}_{11}\) and \(W^{(1)}_{22}\) are also anti-Hermitian matrices. Figure 1: LMCC can be constructed through IMP using "block hollowization", where the trace of the diagonal blocks of a matrix is transformed to zero through local unitary transformations with classical communications. The goal is to perform a full "hollowization" procedure, where all the diagonal matrix elements of the operator \(\mathcal{M}\) are brought to zero. The IMP provides a feasible approach; see details in the main text and the Supplemental Material [27]. Next, we decompose \(W^{(1)}_{11}\) and \(W^{(1)}_{22}\) in the local computational basis of the second qubit, i.e., \[W^{(1)}_{\omega_{1}\omega_{1}}=\sum_{\omega_{2},\,\mu_{2}}M^{(2)}_{\omega_{2}\mu_{2},\,\omega_{1}}\,|e^{(2)}_{\omega_{2}}\rangle\,\langle e^{(2)}_{\mu_{2}}|\,, \tag{9}\] where \(M^{(2)}_{\omega_{2}\mu_{2},\,\omega_{1}}\), analogous to \(M^{(1)}_{\omega_{1}\mu_{1}}\), is the block-matrix representation of \(W^{(1)}_{\omega_{1}\omega_{1}}\) in the local computational basis of the second qubit.
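(Before iterating further, the single-qubit "hollowization" primitive used at each step of this construction can be made concrete with a short numerical sketch — an illustration of the underlying linear-algebra fact, not code from the paper or its supplement.)

```python
import numpy as np

def hollowizing_unitary(H: np.ndarray) -> np.ndarray:
    """Return a unitary U such that U^dagger @ H @ U has zero diagonal,
    for a traceless Hermitian 2x2 matrix H. (For the anti-Hermitian
    blocks appearing in the IMP construction, write M = i*H and apply
    the same U.) Construction: in the eigenbasis {v1, v2} of H, the
    states (v1 +/- v2)/sqrt(2) have expectation value tr(H)/2 = 0.
    """
    _, V = np.linalg.eigh(H)                  # columns: orthonormal eigenvectors
    u_plus, u_minus = V[:, 0] + V[:, 1], V[:, 0] - V[:, 1]
    return np.column_stack([u_plus, u_minus]) / np.sqrt(2)

# demo on a random traceless Hermitian matrix
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (A + A.conj().T) / 2
H -= (np.trace(H).real / 2) * np.eye(2)       # remove the trace
U = hollowizing_unitary(H)
print(np.round(np.diag(U.conj().T @ H @ U), 12))  # -> [0.+0.j, 0.+0.j]
```

In the IMP construction, the same idea is applied blockwise to the traces of the diagonal blocks, which is what the "block hollowization" of Fig. 1 refers to.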
For fixed \(\omega_{1}\), one can iterate the "block-hollowization" process for \(W^{(1)}_{\omega_{1}\omega_{1}}\), leading to \[W^{(1)}_{\omega_{1}\omega_{1}}=\sum_{\omega_{2},\,\mu_{2}}W^{(2)}_{\omega_{2}\mu_{2},\,\omega_{1}}\,|\pi^{(2)}_{\omega_{2},\,\omega_{1}}\rangle\,\langle\pi^{(2)}_{\mu_{2},\,\omega_{1}}|\,, \tag{10}\] where \(|\pi^{(2)}_{\omega_{2},\,\omega_{1}}\rangle\equiv U^{(2)}_{\omega_{1}}\,|e^{(2)}_{\omega_{2}}\rangle\) and \(W^{(2)}_{\omega_{2}\omega_{2},\,\omega_{1}}\) is traceless and anti-Hermitian for fixed \(\omega_{1}\). Iterating this process to the \(N\)-th qubit, we arrive at \[W^{(N-1)}_{\omega_{N-1}\omega_{N-1},\,\omega_{1}\cdots\omega_{N-2}}=\sum_{\omega_{N},\,\mu_{N}}M^{(N)}_{\omega_{N}\mu_{N},\,\omega_{1}\cdots\omega_{N-1}}\,|e^{(N)}_{\omega_{N}}\rangle\,\langle e^{(N)}_{\mu_{N}}|, \tag{11}\] where for fixed \(\omega_{1},\,\cdots,\,\omega_{N-1}\), \(M^{(N)}_{\omega_{N}\mu_{N},\,\omega_{1}\cdots\omega_{N-1}}\) is a \(2\times 2\) anti-Hermitian traceless matrix. Finally, we perform the "hollowization" and obtain \[W^{(N-1)}_{\omega_{N-1}\omega_{N-1},\,\omega_{1}\cdots\omega_{N-2}}=\sum_{\omega_{N},\,\mu_{N}}W^{(N)}_{\omega_{N}\mu_{N},\,\omega_{1}\cdots\omega_{N-1}}\,|\pi^{(N)}_{\omega_{N},\,\omega_{1}\cdots\omega_{N-1}}\rangle\,\langle\pi^{(N)}_{\mu_{N},\,\omega_{1}\cdots\omega_{N-1}}|. \tag{12}\] By virtue of Theorem 3, it suffices to focus on projective LM. If optimal projective LM cannot be found, then it is impossible to reach the qCRB using POVM LM with a large number of measurement bases. In this sense, generic POVM LM does not help in reaching the qCRB. However, this does not exclude its other possible utilities. As we have shown before, in the projective LM basis, applying IMP to the GHZ state leads to the property of self-similarity. It is an interesting open question to search for states that display self-similarity in a generic POVM LM basis, which could lead to non-GHZ-like many-body states that saturate the qCRB. We consider a pure state \(\ket{\psi_{\lambda}(t)}=U_{\lambda}(t)\ket{\psi_{0}}\) that is generated from a unitary parameter-dependent quantum channel \(U_{\lambda}(t)\) and an initial pure state \(\ket{\psi_{0}}\), where \(U_{\lambda}(t)\) satisfies the Schrödinger equation \(\mathrm{i}\dot{U}_{\lambda}(t)=H_{\lambda}(t)U_{\lambda}(t)\). In this case, the quantum Fisher information is given by \[I_{\lambda}=4\mathrm{Var}\left(G_{\lambda}(t)\right)_{\ket{\psi_{0}}}, \tag{19}\] and \(\mathcal{M}\) can be rewritten as \[\mathcal{M}=-2\mathrm{i}U_{\lambda}(t)[\rho_{0},\,[G_{\lambda}(t),\,\rho_{0}]]U_{\lambda}^{\dagger}(t), \tag{20}\] where the metrological generator is defined as [31; 14] \[G_{\lambda}(t)\equiv\mathrm{i}U_{\lambda}^{\dagger}(t)\partial_{\lambda}U_{\lambda}(t)=\int_{0}^{t}U_{\lambda}^{\dagger}(s)\partial_{\lambda}H_{\lambda}(s)U_{\lambda}(s)\,ds.
\tag{21}\] So we have the following theorem [27]: **Theorem 4**.: _Given a pair of an initial state \(\ket{\psi_{0}}\) and a unitary channel \(U_{\lambda}(t)\), the qCRB of \(\ket{\psi_{\lambda}(t)}\) can be saturated at the instantaneous time \(t\) by LM if and only if_ \[\text{Cov}\left(\mathcal{N}_{\alpha}^{(\mathrm{H})}(t)G_{\lambda}(t)\right)_{\ket{\psi_{0}}}=0,\,\forall\alpha\subseteq\mathcal{X}_{N}, \tag{22}\] _where the set \(\mathcal{X}_{N}\) is the same as in Theorem 2, \(\mathcal{N}_{\alpha}^{(\mathrm{H})}(t)\equiv U_{\lambda}^{\dagger}(t)\mathcal{N}_{\alpha}U_{\lambda}(t)\) is the Heisenberg evolution of \(\mathcal{N}_{\alpha}\), and \(\text{Cov}(AB)_{\ket{\psi_{0}}}\equiv\frac{1}{2}\langle\{A,\,B\}\rangle_{\ket{\psi_{0}}}-\langle A\rangle_{\ket{\psi_{0}}}\langle B\rangle_{\ket{\psi_{0}}}\)._ One can check immediately that the GHZ state with \(\sigma_{x}\)-LM satisfies Theorem 4. We are now in a position to give a minimal 3-qubit counterexample that fails to saturate the qCRB under LM. Consider \(H_{\lambda}=\lambda H_{0}\), where \(H_{0}\equiv\sum_{\alpha=x,y}(\sigma_{\alpha}^{(1)}\sigma_{\alpha}^{(2)}+\sigma_{\alpha}^{(2)}\sigma_{\alpha}^{(3)})\), and the initial state is the W state, i.e., \(\ket{\psi_{0}}=(\ket{100}+\ket{010}+\ket{001})/\sqrt{3}\). We assume the true value of \(\lambda\) is zero, so that \(\ket{\psi_{\lambda}(t)}=\ket{\psi_{0}}\). It should be clarified that, although the state does not change over time in this case, this does not mean that the parameter cannot be estimated accurately. In fact, it is straightforward to see that the QFI is \(4t^{2}\mathrm{Var}[H_{0}]_{\ket{\psi_{0}}}=32t^{2}/9\), independent of the value of \(\lambda\). In [27], using symmetry arguments, we show that the set of equations determined by Eq. (22) cannot be mutually consistent. Therefore, neither projective LM nor generic POVM LM exists, according to Theorem 3. _Universal Approximate Saturation with Adaptive Control._ -- As one can see from Theorem 4, the saturation of the qCRB with LM can be very restrictive. Nevertheless, we observe that if \[\mathcal{N}_{\alpha}^{(\mathrm{H})}(t)\ket{\psi_{0}}\propto\ket{\psi_{0}},\,\,\forall\alpha\subseteq\mathcal{X}_{N} \tag{23}\] is satisfied at time \(t\), then Eq. (22) holds. Note that the case where \(\ket{\psi_{0}}\) is an eigenstate of \(G_{\lambda}(t)\) is trivial, as it leads to a vanishing QFI. To this end, when the initial state is a product of pure states, one can first choose \(\mathcal{N}_{\alpha}(0)\) such that Eq. (23) holds at \(t=0\). As time evolves, \(\mathcal{N}_{\alpha}(t)\) will spread and Eq. (23) will no longer hold. However, one can take advantage of our prior knowledge and apply a proper control Hamiltonian such that the dynamics is frozen or at least very slow. That is, \[\delta H(t)=H_{\lambda}(t)+H_{1}(t), \tag{24}\] where the control Hamiltonian is \(H_{1}(t)=-H_{\lambda_{*}}(t)\) and \(\lambda_{*}\) is our prior knowledge of the estimation parameter. Then \(\mathcal{N}_{\alpha}^{(\mathrm{H})}(t)\) remains close to \(\mathcal{N}_{\alpha}(0)\) for quite a long time, as long as \(\lambda_{*}\) is close to \(\lambda\). It is worth noting that in local estimation theory, adaptive estimation is usually exploited, where some refined knowledge of the estimation parameter is known a priori [32; 33; 34]. Quantum control has been explored in quantum metrology before, but with the aim of boosting the QFI [35; 36; 18; 37] and overcoming measurement noise [38; 39].
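(Returning briefly to the 3-qubit counterexample above: as a quick numerical cross-check — a sketch for illustration, not code from the paper — the quoted QFI \(4t^{2}\mathrm{Var}[H_{0}]_{\ket{\psi_{0}}}=32t^{2}/9\) requires \(\mathrm{Var}[H_{0}]_{\ket{\psi_{0}}}=8/9\) on the W state, which a few lines of NumPy confirm.)

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
I2 = np.eye(2, dtype=complex)

def op(paulis):
    """Tensor product of single-qubit operators (qubit 1 x qubit 2 x qubit 3)."""
    return reduce(np.kron, paulis)

# H0 = sum_{a=x,y} (sigma_a^(1) sigma_a^(2) + sigma_a^(2) sigma_a^(3))
H0 = sum(op(p) for p in [(sx, sx, I2), (sy, sy, I2), (I2, sx, sx), (I2, sy, sy)])

# W state (|100> + |010> + |001>)/sqrt(3); basis index = bitstring q1 q2 q3
w = np.zeros(8, dtype=complex)
w[[0b100, 0b010, 0b001]] = 1 / np.sqrt(3)

var = (w.conj() @ H0 @ H0 @ w - (w.conj() @ H0 @ w) ** 2).real
print(var, 8 / 9)   # both ~0.8889, so QFI = 4 t^2 Var = 32 t^2 / 9
```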
It is remarkable that the quantum controls here, which facilitate the saturation of the qCRB by LM, are fully consistent with the QFI-boosting controls of Refs. [36; 31; 35]. Finally, we note that as long as \(\lambda_{*}\) is close to \(\lambda\), the metrological generator associated with the dynamics generated by Eq. (24) becomes \(G_{\lambda}(t)=\int_{0}^{t}\partial_{\lambda}H_{\lambda}(s)\,ds\), and the QFI is still given by Eq. (19). Let us consider the following example, where \[H_{\lambda}=\lambda S_{z}^{2}, \tag{25}\] and the initial state is a spin coherent state [40] parameterized by \[\ket{\psi_{0}}=\bigotimes_{k=1}^{N}\left[\cos\frac{\theta}{2}\ket{0}^{(k)}+e^{i\phi}\sin\frac{\theta}{2}\ket{1}^{(k)}\right]. \tag{26}\] Equation (25) is nonlinear and non-local. It has been shown previously that a precision beyond the shot-noise scaling of classical sensing [14; 19; 41] can be achieved. However, the optimal LM that reaches such a non-classical precision is still missing in the literature. To this end, we apply the coherent control \(H_{1}=-\lambda_{*}S_{z}^{2}\), so that \(\delta H=\delta\lambda\,S_{z}^{2}\) with \(\delta\lambda\equiv\lambda-\lambda_{*}\). The QFI corresponding to the initial state Eq. (26) is [27] \[I=4t^{2}\mathrm{Var}[S_{z}^{2}]_{\ket{\psi_{0}}}=4t^{2}\sum_{k=1}^{3}f_{k}(\cos\theta)N^{k}, \tag{27}\] and scales cubically in \(N\), surpassing the Heisenberg limit. In Fig. 2, we compare the QFI and the classical Fisher information (CFI) associated with the LM (18), where \(\mathbf{n}^{(j)}=(\sin\theta\cos\phi,\,\sin\theta\sin\phi,\,\cos\theta)\). One can readily see that the qCRB is asymptotically saturated as \(\lambda_{*}\) approaches \(\lambda\). _Conclusion and outlook._ -- We systematically study optimal LMCC and LM that can saturate the qCRB in many-body sensing. We propose an IMP approach that illuminates the structure of the optimal LMCC and LM, and we provide several fundamental theorems on qCRB-saturating optimal LM. We show that under LM, the qCRB can be universally saturated in an approximate way with adaptive control, regardless of the form of the sensing Hamiltonian. Currently, in the protocols of many-body sensing [14; 15; 16; 42; 43], there is not yet a systematic construction of the optimal LM. Our results fill the gap between theoretical proposals of many-body sensing and their experimental realization. We expect to see their near-term implementation in noisy intermediate-scale quantum devices [44; 45; 46]. Future work includes the generalization to qudits, continuous-variable systems, and qubit-cavity systems, applications to entanglement detection [47; 48; 49] and spin squeezing [50; 51; 40], the investigation of the effect of decoherence, etc. _Acknowledgement._ -- We thank Sisi Zhou for useful communications. JY was funded by the Wallenberg Initiative on Networks and Quantum Information (WINQ). HLS was supported by the NSFC key grants No. 12134015 and No. 92365202. SY was supported by the Key-Area Research and Development Program of Guangdong Province, Grant No. 2020B0303010001.
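As a small numerical illustration of the cubic scaling in Eq. (27) (again a sketch, not from the paper): since \(S_{z}\) is diagonal in the computational basis and the initial state (26) is a product state, the outcome distribution of \(S_{z}\) is a shifted binomial with single-qubit probability \(\cos^{2}(\theta/2)\), so \(\mathrm{Var}[S_{z}^{2}]\) can be evaluated directly.

```python
import numpy as np
from math import comb

def var_Sz2(N: int, theta: float) -> float:
    """Var[S_z^2] in the spin coherent state: S_z is diagonal, so its
    moments follow the binomial distribution of qubits found in |0>."""
    p = np.cos(theta / 2) ** 2               # prob. of |0> (s = +1/2) per qubit
    ks = np.arange(N + 1)                    # number of qubits in |0>
    probs = np.array([comb(N, k) for k in ks]) * p**ks * (1 - p) ** (N - ks)
    sz = ks - N / 2                          # S_z eigenvalue
    return probs @ sz**4 - (probs @ sz**2) ** 2   # E[Sz^4] - E[Sz^2]^2

for N in [4, 8, 16, 32]:
    print(N, var_Sz2(N, np.pi / 3))          # grows ~N^3 at fixed theta
```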
Quantum measurements are key to quantum metrology. Constrained by experimental capabilities, collective measurements on a large number of copies of metrological probes pose significant challenges; therefore, locality in quantum measurements must be considered. In this work, we propose a method called the "iterative matrix partition" approach to elucidate the underlying structures of optimal local measurements, with and without classical communications, that saturate the quantum Cramér-Rao bound (qCRB). Furthermore, we find that while exact saturation is achievable for all two-qubit pure states, it is generically restrictive for multi-qubit pure states. However, as long as the initial state is separable and the Hamiltonian allows for interaction, the qCRB can be approximately saturated through adaptive coherent controls
2302.00094
The Impacts of Unanswerable Questions on the Robustness of Machine Reading Comprehension Models
Pretrained language models have achieved super-human performances on many Machine Reading Comprehension (MRC) benchmarks. Nevertheless, their relative inability to defend against adversarial attacks has spurred skepticism about their natural language understanding. In this paper, we ask whether training with unanswerable questions in SQuAD 2.0 can help improve the robustness of MRC models against adversarial attacks. To explore that question, we fine-tune three state-of-the-art language models on either SQuAD 1.1 or SQuAD 2.0 and then evaluate their robustness under adversarial attacks. Our experiments reveal that current models fine-tuned on SQuAD 2.0 do not initially appear to be any more robust than ones fine-tuned on SQuAD 1.1, yet they reveal a measure of hidden robustness that can be leveraged to realize actual performance gains. Furthermore, we find that the robustness of models fine-tuned on SQuAD 2.0 extends to additional out-of-domain datasets. Finally, we introduce a new adversarial attack to reveal artifacts of SQuAD 2.0 that current MRC models are learning.
Son Quoc Tran, Phong Nguyen-Thuan Do, Uyen Le, Matt Kretchmar
2023-01-31T20:51:14
http://arxiv.org/abs/2302.00094v1
# The Impacts of Unanswerable Questions on the Robustness of Machine Reading Comprehension Models ###### Abstract Pretrained language models have achieved super-human performances on many Machine Reading Comprehension (MRC) benchmarks. Nevertheless, their relative inability to defend against adversarial attacks has spurred skepticism about their natural language understanding. In this paper, we ask whether training with unanswerable questions in SQuAD 2.0 can help improve the robustness of MRC models against adversarial attacks. To explore that question, we fine-tune three state-of-the-art language models on either SQuAD 1.1 or SQuAD 2.0 and then evaluate their robustness under adversarial attacks. Our experiments reveal that current models fine-tuned on SQuAD 2.0 do not initially appear to be any more robust than ones fine-tuned on SQuAD 1.1, yet they reveal a measure of hidden robustness that can be leveraged to realize actual performance gains. Furthermore, we find that the robustness of models fine-tuned on SQuAD 2.0 extends to additional out-of-domain datasets. Finally, we introduce a new adversarial attack to reveal artifacts of SQuAD 2.0 that current MRC models are learning. ## 1 Introduction Machine Reading Comprehension (MRC) is a fundamental and challenging subfield of Natural Language Processing (NLP) in which the computer simulates a human question-and-answer mechanism by extracting the answers to given questions based on provided contexts. MRC has many applications in the real world, such as Conversational Question Answering Reddy et al. (2019) and Open-Domain Question Answering Chen et al. (2017); Yang et al. (2019); Min et al. (2019). With the development of recent deep learning models, MRC has made significant performance gains. Many high-quality MRC datasets and benchmarks Kwiatkowski et al. (2019); Joshi et al. (2017); Yang et al. (2018); Rajpurkar et al. (2018) have been proposed over the last few years. During the same time period, MRC systems have also achieved many new state-of-the-art (SOTA) performances, matching or exceeding human-level standards on many benchmarks. Nevertheless, skepticism persists about the real ability of MRC SOTA models Sen and Saffari (2020); Jia and Liang (2017); Sugawara et al. (2018, 2020). The use of these SOTA systems in real-world applications is still limited and encounters many challenges, one of which is the robustness of MRC systems Wu et al. (2019) to subtle changes in the language syntax that induce significant semantic changes. As to the true robustness of MRC systems, Jia and Liang (2017) find that the two deep learning models BiDAF Seo et al. (2016) and Match-LSTM Wang and Jiang (2016) trained on SQuAD 1.1 Rajpurkar et al. (2016) achieve impressive performance but lose much of that performance when facing adversarial attacks. Figure 1: Example of predictions to an answerable question of RoBERTa fine-tuned on SQuAD 1.1 Rajpurkar et al. (2016) (v1) versus its counterpart fine-tuned on SQuAD 2.0 Rajpurkar et al. (2018) (v2) under adversarial attack. While RoBERTa v1 predicts “DartFord” as the answer under attack, RoBERTa v2 knows that “DartFord” is not the correct answer but fails to focus back on “Nevada”, the correct answer for the given question. RoBERTa v2 then predicts the tested question as unanswerable.
The adversarial examples proposed by Jia and Liang (2017) insert into the context sentences that have significant lexical overlap with the question, in order to distract models from predicting the correct answers (see Figure 1). The need to maintain performance under adversarial attacks in real-world applications motivates the pursuit of more robust MRC systems. Rajpurkar et al. (2018) developed SQuAD 2.0, featuring the same scenarios and questions as SQuAD 1.1 with the addition of _unanswerable questions_, which are adversarially crafted by crowd workers to look similar to answerable ones. The considerable syntactic similarity between these unanswerable questions and the corresponding contexts requires MRC models to be highly sensitive to small but important changes in the questions in order to determine their answerability. Therefore, we ask how MRC models trained on SQuAD 2.0 behave under adversarial attacks and whether experience with adversarial unanswerable questions can help improve the robustness of MRC models. In order to answer these questions, we systematically explore the performance differences between SOTA models Devlin et al. (2019); Liu et al. (2019); Joshi et al. (2020) fine-tuned on SQuAD 1.1 and those fine-tuned on SQuAD 2.0. Our findings are summarized as follows: 1. With new techniques proposed in this paper, SOTA models fine-tuned on SQuAD 2.0 show measurably improved robustness against adversarial attacks on answerable questions in comparison with those fine-tuned on SQuAD 1.1. Furthermore, this superior robustness of models fine-tuned on SQuAD 2.0 is consistent in out-of-domain settings with five other Extractive Question Answering datasets. 2. We introduce a new attack to better understand MRC model functionality and to reveal artifacts in model learning that can be targeted for future performance gains. ## 2 Related Work ### Adversarial Attack Historically, adversarial attacks have played an important role in NLP by challenging the true ability of language models beyond the traditional settings of benchmarks. Adversarial attacks can be categorized based on the type of input perturbation (sentence, word, or character level). In addition, adversarial attacks can also be classified based on whether the attack process has access to the models' parameters or predictions (so-called white-box attacks, Blohm et al. (2018); Neekhara et al. (2019); Huang et al. (2018); Papernot et al. (2016); Samanta and Mehta (2018); Liang et al. (2018); Alzantot et al. (2018); Wallace et al. (2019); Ebrahimi et al. (2018); Jia and Liang (2017)) or not (black-box attacks, Jia and Liang (2017); Ribeiro et al. (2018); Wang and Bansal (2018); Blohm et al. (2018); Iyyer et al. (2018); Zhao et al. (2018)). Adversarial attacks have recently been applied to evaluating the robustness of deep learning models on MRC tasks. Tang et al. (2021) designed the DuReader\({}_{\text{robust}}\) benchmark in Chinese MRC to challenge Chinese MRC models on the three aspects of over-sensitivity, over-stability, and generalization. Additionally, Si et al. (2021) propose to evaluate the robustness of multiple-choice MRC models under various types of adversarial attacks on samples of the RACE benchmark Lai et al. (2017). In addition, Morris et al. (2020), Zhang et al. (2020), and Wang et al. (2022) provide thorough surveys of adversarial attacks and of methods for measuring the robustness of NLP models.
### Unanswerable Questions in MRC In early work on unanswerable questions, Levy et al. (2017) re-defined the BiDAF model Seo et al. (2016) to allow it to output whether the given question is unanswerable; their original intent was to leverage MRC knowledge to extract relations in zero-shot tasks. Later, Rajpurkar et al. (2018) introduced a crowdsourcing process for annotating unanswerable questions to create the SQuAD 2.0 dataset for Extractive Question Answering, which later inspired similar work in other languages such as French Heinrich et al. (2021) and Vietnamese Van Nguyen et al. (2022). However, recent work shows that models trained on SQuAD 2.0 perform poorly on out-of-domain samples Sulem et al. (2021). In addition to the adversarially crafted unanswerable questions proposed by Rajpurkar et al. (2018), Natural Questions Kwiatkowski et al. (2019) and TyDi QA Clark et al. (2020) propose more naturally constructed unanswerable questions. While recent language models surpass human performance on the adversarial unanswerable questions of SQuAD 2.0, the natural unanswerable questions in Natural Questions and TyDi QA remain challenging Asai and Choi (2021). ## 3 Tasks and Models ### Extractive Question Answering In the task of Extractive Question Answering (EQA), a machine learns to create a list of prospective outputs (answers) to a given question, each of which is associated with a probability indicating the machine's confidence in that answer. When unanswerable questions are included in the dataset, a valid response can be an "empty" response, indicating that the question is unanswerable. The model outputs the answer (including no-answer) with the highest probability as its final response to the question. The metric typically used to evaluate an MRC system is the **F1-score**, the average overlap between predictions and gold answers (see Rajpurkar et al. (2016) for more details). ### Datasets In our experiments, we fine-tune our MRC models by conducting additional training on one of the two versions of SQuAD (Stanford Question Answering Dataset): SQuAD 1.1 Rajpurkar et al. (2016) and SQuAD 2.0 Rajpurkar et al. (2018). We refer to models fine-tuned with SQuAD 1.1 as v1 models and models fine-tuned with SQuAD 2.0 as v2 models. For example, we refer to the RoBERTa model fine-tuned with SQuAD 1.1 as RoBERTa v1. For testing, we supplement the two SQuAD datasets with five additional datasets from the MRQA 2019 shared task Fisch et al. (2019): **Natural Questions (NQ)** Kwiatkowski et al. (2019), **HotpotQA** Yang et al. (2018), **SearchQA** Dunn et al. (2017), **NewsQA** Trischler et al. (2017), and **TriviaQA** Joshi et al. (2017). In addition to the adversarial attacks on answerable questions in SQuAD 1.1, we also produce adversarial attacks from the unanswerable samples of the development set of SQuAD 2.0. Due to the differences in the characteristics of attacks on answerable and unanswerable questions, we analyze the performance of models on each type of attack separately. While we evaluate v2 models under attacks on both answerable and unanswerable questions, we only evaluate v1 models under attacks on answerable questions, since v1 models have never seen unanswerable questions. From adversarial attacks on answerable questions with v2 models, we gain critical insights into the current robustness effects of using unanswerable questions to fine-tune MRC models. ### Models We evaluate three pre-trained state-of-the-art transformer models (BERT Devlin et al.
(2019), RoBERTa Liu et al. (2019), and SpanBERT Joshi et al. (2020)) in our work. **BERT** Devlin et al. (2019), the pioneering application of the Transformer model architecture Vaswani et al. (2017), is trained on English Wikipedia plus BookCorpus with the pretraining tasks of masked language modeling (MLM) and next sentence prediction (NSP). Later, in a replication study of BERT pretraining, Liu et al. (2019) discovered that BERT was significantly under-trained. **RoBERTa** Liu et al. (2019) improves over BERT mainly by increasing the pretraining time and the size of the pretraining data. To empower BERT to better represent and predict spans of text, **SpanBERT** Joshi et al. (2020) masks random contiguous spans and replaces NSP with a span boundary objective (SBO). These three models are fine-tuned on SQuAD 1.1 or SQuAD 2.0 before assessing their performance, both on the original (unattacked) datasets and on the attacked versions of the datasets in §3.2. ## 4 Adversarial Attacks ### Robustness Evaluation An EQA problem is given by a test set \(\mathcal{D}\) of triplets \((c,q,a)\), where \(c\) is the given context (usually a small paragraph of text), \(q\) is the question posed about that context, and \(a\) is the expected answer (or set of "gold" answers). The performance of the EQA model \(f\) is measured by \[Per(f,\mathcal{D})=\frac{1}{|\mathcal{D}|}\sum_{(c,q,a)\in\mathcal{D}}v(a,f(c,q)),\] where \(v\) is either the F1 or EM metric. We create an algorithm \(\mathcal{A}\) to transform the triplets \((c,q,a)\) in \(\mathcal{D}\) into adversarial test samples \((c^{\prime},q^{\prime},a^{\prime})\) in the adversarial test set \(\mathcal{D}_{attacked}\), where \(c^{\prime}\), \(q^{\prime}\), and \(a^{\prime}\) are the modified (attacked) versions of \(c\), \(q\), and \(a\). The robustness of a model is then computed as the difference between the performance of the model on the original test set and on the attacked test set: \[\Delta=Per(f,\mathcal{D})-Per(f,\mathcal{D}_{attacked}).\] This framework was originally developed to assess robustness on answerable questions [17]. In this paper, we also extend its application to attacks on unanswerable questions in Appendix C.1 and discover challenges in this extended domain. ### Attack Construction Our algorithm constructs adversarial problems from original problems in a way similar to the AddOneSent attack of Jia and Liang (2017) and the AddText-Adv attack of Chen et al. (2022). Table 1 gives examples of such an attack on answerable and unanswerable questions. The additional sentence that is appended to the context has significant lexical overlap with the question, thus adding to the realism of the confusion-based attack. This type of adversarial attack is grammatical, fluent, and closely relevant to the given question. The questions and answers are unchanged for our considered adversarial attacks (\(q^{\prime}=q\) and \(a^{\prime}=a\)). Jia and Liang (2017) found that their adversarial attacks, especially the AddSent and AddOneSent attacks, were successful in challenging contemporary MRC models because the adversarial sentences were closely related to the given questions. Notably, the unanswerable questions in SQuAD 2.0 show a similar kind of lexical overlap with their corresponding contexts and require MRC models to be highly robust to subtle syntactic changes in order to determine the answerability of the given questions. Therefore, we hypothesize that models fine-tuned with SQuAD 2.0 are equipped to perform better against adversarial attacks.
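To make the evaluation protocol of §4.1 concrete before turning to the results, here is a minimal Python sketch of the SQuAD-style token-level F1 and the robustness gap \(\Delta\); it follows the standard definition (Rajpurkar et al., 2016) up to details such as answer normalization and taking the maximum over multiple gold answers, which are omitted for brevity (the helper names are illustrative, not from the authors' code).

```python
from collections import Counter

def f1(prediction: str, gold: str) -> float:
    """Token-overlap F1 between a predicted and a gold answer span
    (whitespace tokenization; the official script also lowercases and
    strips punctuation and articles, which we skip here)."""
    pred, ref = prediction.split(), gold.split()
    if not pred or not ref:          # an empty answer means "unanswerable"
        return float(pred == ref)
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def performance(model, dataset):
    """Per(f, D): mean F1 of model f over (context, question, answer) triplets."""
    return sum(f1(model(c, q), a) for c, q, a in dataset) / len(dataset)

# robustness gap: Delta = Per(f, D) - Per(f, D_attacked)
# delta = performance(model, dataset) - performance(model, attacked_dataset)
```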
In the next section we assess this hypothesis by evaluating the performance of v1 versus v2 models on answerable questions. ## 5 Attacks on Answerable Questions: Results ### Adversarial Performance Table 2 shows the performance of models on original (not attacked) and adversarial (attacked) problems with answerable questions. When attack sentences are added to the context, the performance of all v1 and v2 models decreases significantly. \begin{table} \begin{tabular}{l l l l} \hline Question Type & Question & Attacked Context & Answer \\ \hline Answerable & What is the name of the water body that is found to the east? & To the east is the Colorado Desert and the Colorado River at the border with Arizona, and the Mojave Desert at the border with the state of Nevada. To the south is the Mexico–United States border. **Sea is the name of the water body that is found to the west.** & **Colorado River** \\ \hline Unanswerable & What desert is to the south near Arizona? & To the east is the Colorado Desert and the Colorado River at the border with Arizona, and the Mojave Desert at the border with the state of Nevada. To the south is the Mexico–United States border. **The desert of edmonet desert is to the north near Burbank.** & **Colorado River** \\ \hline \end{tabular} \end{table} Table 1: Examples of the adversarial attack on answerable and unanswerable questions. The adversarial sentence is highlighted in red. In constructing the adversarial sentence, we follow the work of Jia and Liang (2017) by replacing nouns and adjectives with antonyms, and by changing named entities and numbers to the nearest word in GloVe word vector space [10]. \begin{table} \begin{tabular}{l c c c c} \hline \hline & & \multicolumn{3}{c}{**Answerable**} \\ & & Original & Attacked & \(\Delta\downarrow\) \\ \hline \multirow{2}{*}{BERT} & v1 & 88.4 & 63.8 & 24.6 \\ & v2 & 78.4 & 55.2 & **23.2** \\ \hline \multirow{2}{*}{RoBERTa} & v1 & 91.5 & 70.5 & **21.0** \\ & v2 & 84.8 & 58.0 & 26.8 \\ \hline \multirow{2}{*}{SpanBERT} & v1 & 91.5 & 68.6 & **22.9** \\ & v2 & 85.8 & 58.9 & 26.8 \\ \hline \hline \end{tabular} \end{table} Table 2: F1 scores of v1 and v2 models under adversarial attacks on answerable questions. We refer to models fine-tuned on SQuAD 1.1 and SQuAD 2.0 as v1 and v2 models, accordingly. Adding _unanswerable_ questions into the training (v2 models) does not initially appear to improve the robustness of MRC models against adversarial attacks. In fact, v2 models appear to be less robust than v1 models, both on the original and on the attacked questions. However, there is a deeper story here worth investigating. To further explain the poor performance of v2 models, we consider the types of v2 answers to answerable questions in the next section. ### Categories of Responses Table 3 shows the different categories of answers produced by v1 and v2 models to answerable questions. We use a 50% F1-score threshold to determine a model's correctness on a question (correct if the F1 score is above 50%, incorrect otherwise). Considering attacks on answerable questions, we observe four categories of responses under attack: **"I" (incorrect)** are answerable questions that models originally got wrong (or originally predicted as unanswerable, for v2 models).
**"C2C" (correct to correct)** are answerable questions that models got correct both originally and after the attack. **"C2I" (correct to incorrect)** are answerable questions that models originally answered correctly but then output an incorrect answer when attacked. **"C2U" (correct to incorrectly unanswerable)** are answerable questions that models originally answer correctly but then predict as unanswerable when attacked. The C2I and C2U together account for the performance decline of models when attacked. We see that v2 models, especially RoBERTa and SpanBERT, are particularly susceptible to the C2U challenge; they initially output a correct answer, but when attacked, decide (incorrectly) the question is now unanswerable. This is in contrast to the v1 models, which not being trained on unanswerable questions and do not have the option of responding "unanswerable". The v2 models' refusal to output an incorrect answer (opting instead to reply "unanswerable") indicates that their additional training on unanswerable questions has possibly provided them more depth to handle the confusion introduced by the attack. We further breakdown the "C2U" category from Table 3 to investigate the spectrum of responses v2 models provide. Recall that models produce multiple responses to a MRC sample, each accompanied by a confidence score reflecting the models' confidence in that response. In this analysis, to evaluate the difficulty of questions in category "C2U" of each v2 model, we use the corresponding v1 model as baseline. Then, to answer the question whether v2 models prefer correct answers to incorrect answers, we evaluate the second most confident response of v2 models for questions in category "C2U". Table 4 shows the F1 scores of _second_ most confident responses of v2 models and _first_ (most confident) responses of v1 models to questions in category "C2U" under attacks. We observe that v2 models often have fairly good answers for questions in category "C2U" given that performance of v2 models lag significantly behind that of v1 models when attacked. However, v2 models fail to put forward the correct answers (their second option) ahead of the "unanswerable" responses (their first option). From these analyses, we hypothesize that models with additional training on _unanswerable_ questions have the ability to perceive the attacks on \begin{table} \begin{tabular}{l c c c} \hline \hline & & \multicolumn{3}{c}{**C2U**} \\ & & Attacked & \# Questions \\ \hline \multirow{2}{*}{BERT} & v1 & **46.1** & \multirow{2}{*}{871} \\ & v2 & 42.5 & \\ \hline \multirow{2}{*}{RoBERTa} & v1 & **50.3** & \multirow{2}{*}{1212} \\ & v2 & 44.7 & \\ \hline \multirow{2}{*}{SpanBERT} & v1 & 46.1 & \multirow{2}{*}{1194} \\ & v2 & **47.6** & \\ \hline \hline \end{tabular} \end{table} Table 4: F1 scores of second most confident responses of v2 models and most confident responses of v1 models to questions in category “C2U” of v2 models in Table 3. For each language model, we extract a set of “C2U” questions and then evaluate corresponding v1 and v2 models on this set of questions. 
\begin{table} \begin{tabular}{c c c|c c|c} \hline & & I & C2I & C2U & C2C \\ \hline \multirow{2}{*}{BERT} & v1 & 10.9 & 28.7 & - & 60.4 \\ & v2 & 21.3 & 10.9 & 14.7 & 53.2 \\ \hline \multirow{2}{*}{RoBERTa} & v1 & 8.0 & 24.5 & - & 67.7 \\ & v2 & 14.5 & 8.0 & 20.5 & 57.1 \\ \hline \multirow{2}{*}{SpanBERT} & v1 & 8.0 & 26.7 & - & 65.4 \\ & v2 & 13.8 & 8.3 & 20.1 & 57.8 \\ \hline \hline \end{tabular} \end{table} Table 3: The percentage of answerable questions by type of answer produced by v1 and v2 models before and after adversarial attacks. ### Force To Answer The comparison of v1 and v2 models on answerable questions has a built-in bias, because v2 models carry the "penalty" of being able to respond "unanswerable" even though this is never a legitimate response. Furthermore, we have just shown that the v2 models often produce the correct answer, even under attack, but fail to put forward that correct output ahead of the "unanswerable" output in which they have more confidence. In this section, we re-run the analysis but this time eliminate the option for v2 models to output "unanswerable" (to answerable questions), so that we can better ascertain the robustness of v1 and v2 models to attacks. Table 5 shows the results of this experiment. We can now see in this table that both v1 and v2 models exhibit similar performance on original answerable questions. When we introduce adversarial attacks on these same questions, the v2 models (being forced to answer) now exhibit noticeably stronger performance than their v1 counterparts. The additional training afforded to v2 models on unanswerable questions has given them a performance advantage over the v1 models. The robustness of v2 models against adversarial attacks is hidden in normal testing circumstances but can be realized by forcing the v2 models to output a non-empty response in settings with only answerable questions. ## 6 Attacks in Out-Of-Domain Settings: Results We now seek to determine whether this additional robustness of v2 models extends to other, out-of-domain test sets. In particular, we evaluate our v1 and v2 models on the development sets of other Extractive Question Answering datasets. We summarize the characteristics of the five out-of-domain datasets of MRQA 2019 in Table 6. Table 7 shows the performance of v1 and v2 models on the five datasets of MRQA 2019. Similarly to the experiments in Section 5, we measure performance on both original problems and adversarially attacked problems. First, the performance on original (unattacked) problems shows that the adversarial unanswerable questions in SQuAD 2.0 have little negative effect on the generalization performance of MRC models. While the performance of v2 models is higher than that of v1 models on TriviaQA and SearchQA, v1 models outperform v2 models slightly on Natural Questions (0.8\(\%\)) and NewsQA (0.2\(\%\)), and considerably on HotpotQA (6.5\(\%\)). On average, the generalization performance of v2 models on out-of-domain unattacked problems is slightly worse than that of v1 models (53.7\(\%\) vs. 54.5\(\%\)). However, on problems with adversarial attacks, v2 models significantly outperform v1 models on four out of the five datasets. Specifically, on average, v2 models significantly outperform v1 models by 2.9\(\%\) on NewsQA, 4.7\(\%\) on Natural Questions, 4.8\(\%\) on SearchQA, and 5.2\(\%\) on TriviaQA.
Although v2 models do not show superior performance to v1 models on HotpotQA, the performance gap between v2 and v1 models after attacks decreases significantly thanks to the superior robustness of v2 models. Overall, we conclude from Table 7 that the adversarial unanswerable questions of SQuAD 2.0 do not have negative effects on the generalization of v2 models to out-of-domain datasets, and that the robustness of v2 models against adversarial attacks is consistently superior to that of v1 models. ## 7 New Attack In this section, we explore _why_ v2 models often put forward "unanswerable" as an (incorrect) response to answerable questions under adversarial attacks. We hypothesize that MRC models trained with SQuAD 2.0 have learned to identify target sentences with significant lexical overlap to decide whether the corresponding questions are unanswerable; the models rely _primarily_ on that target sentence to determine their output. This undesirable behavior of MRC systems may prevent them from using the whole paragraph to accurately determine the best response to a question, and it diminishes the practical usefulness of adversarial unanswerable questions. \begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{**Answerable**} \\ & & Original & Attacked & \(\Delta\downarrow\) \\ \hline \multirow{2}{*}{BERT} & v1 & 88.4 & 63.8 & 24.6 \\ & v2 & 88.5 & 69.6 & **18.9** \\ \hline \multirow{2}{*}{RoBERTa} & v1 & 91.5 & 70.5 & 21.0 \\ & v2 & 91.4 & 75.1 & **16.4** \\ \hline \multirow{2}{*}{SpanBERT} & v1 & 91.5 & 68.6 & 22.9 \\ & v2 & 91.3 & 75.8 & **15.5** \\ \hline \hline \end{tabular} \end{table} Table 5: The performance of v1 and v2 models (when forced to output non-empty answers to answerable questions) before and after adversarial attacks. To further test this hypothesis, we introduce a _negation attack_, a new adversarial attack that attempts to fool models into giving incorrect "unanswerable" responses. In particular, we construct an attack statement that significantly overlaps with the question yet is easily identified as contradicting it; we form our negation attack by inserting "not" in front of the adjective. Our attack (see Table 8) differs from previous adversarial attacks in that it is designed to elicit an unanswerable response instead of an incorrect answer. Table 9 reports the performance of v2 models under negation attacks on answerable questions. We observe that our negation attack is highly effective in revealing the weaknesses of v2 models, as the performance of all three considered v2 models drops by almost 60\(\%\) F1 when we introduce the negation attack. We then examine the shifts in the answers of v2 models under the negation attack. Table 10 shows the distribution of shifts in answers before and after the attack.
We observe that the most significant drop in performance under the negation attack is in the “C2U” category (around 40\(\%\) F1). \begin{table} \begin{tabular}{l l c l c c} \hline \hline **Dataset** & **Question (Q)** & **Distant Supervision** & **Context (C)** & **Q \(\perp\) C** & **Dev** \\ \hline SQuAD & Crowdsourced & ✗ & Wikipedia & ✗ & 10,507 \\ HotpotQA & Crowdsourced & ✗ & Wikipedia & ✗ & 5,904 \\ TriviaQA & Trivia & ✓ & Web snippets & ✓ & 7,785 \\ SearchQA & Jeopardy & ✓ & Web snippets & ✓ & 16,980 \\ NewsQA & Crowdsourced & ✗ & News articles & ✓ & 4,212 \\ Natural Questions & Search logs & ✗ & Wikipedia & ✓ & 12,836 \\ \hline \hline \end{tabular} \end{table} Table 6: Characteristics of each dataset used in our out-of-domain experiments. Distant supervision is True if the dataset used distant supervision to match questions and contexts. **Q \(\perp\) C** is True if questions in the dataset are written independently from the passage used for context. Table adapted from the MRQA 2019 shared task (Fisch et al., 2019). \begin{table} \begin{tabular}{c c c c c|c c c|c c c} \hline \hline & \multicolumn{4}{c|}{**Natural Questions**} & \multicolumn{4}{c|}{**HotpotQA**} & \multicolumn{4}{c}{**TriviaQA**} \\ & & Original & Attacked & \(\Delta\downarrow\) & Original & Attacked & \(\Delta\downarrow\) & Original & Attacked & \(\Delta\downarrow\) \\ \hline \multirow{2}{*}{BERT} & v1 & **54.6** & 20.1 & 34.5 & **61.6** & 45.5 & 16.1 & **59.4** & 48.9 & 10.5 \\ & v2 & 52 & **23.7** & **28.3** & 58.9 & **47.4** & **11.5** & 58.9 & **53.3** & **5.6** \\ \hline \multirow{2}{*}{RoBERTa} & v1 & 62.1 & 28.3 & 33.8 & **67.4** & 46.3 & 21.1 & 64.1 & 55 & 9.1 \\ & v2 & **63.5** & **33.2** & **30.3** & 65 & **49.8** & **15.2** & **65.5** & **59.2** & **6.3** \\ \hline \multirow{2}{*}{SpanBERT} & v1 & **65** & 34.5 & 30.5 & **66.2** & **46.4** & 19.8 & **63.2** & 51.9 & 11.3 \\ & v2 & 63.9 & **40.2** & **23.7** & 51.9 & 32.3 & **19.6** & 62.9 & **58.8** & **4.1** \\ \hline \multirow{2}{*}{Average} & v1 & **60.6** & 27.6 & 33 & **65.1** & **46.1** & 19 & 62.2 & 51.9 & 10.3 \\ & v2 & 59.8 & **32.3** & **27.5** & 58.6 & 43.2 & **15.4** & **62.4** & **57.1** & **5.3** \\ \hline \hline & \multicolumn{4}{c|}{**SearchQA**} & \multicolumn{4}{c|}{**NewsQA**} & \multicolumn{4}{c}{**Average**} \\ & & Original & Attacked & \(\Delta\downarrow\) & Original & Attacked & \(\Delta\downarrow\) & Original & Attacked & \(\Delta\downarrow\) \\ \hline \multirow{2}{*}{BERT} & v1 & **30.4** & 25.5 & 4.9 & 53.6 & 41.8 & 11.8 & **51.9** & 36.4 & 15.5 \\ & v2 & 28.6 & **26.7** & **1.9** & **53.9** & **46.2** & **7.7** & 50.5 & **39.5** & **11** \\ \hline \multirow{2}{*}{RoBERTa} & v1 & 22.8 & 20.3 & 2.5 & **61.2** & **54.2** & **7** & 55.5 & 40.8 & 14.7 \\ & v2 & **33** & **31.6** & **1.4** & 60.6 & 52.5 & 8.1 & **57.5** & **45.3** & **12.2** \\ \hline \multirow{2}{*}{SpanBERT} & v1 & 28.1 & 26.9 & 1.2 & **58.2** & 44.1 & 14.1 & **56.1** & 40.8 & 15.3 \\ & v2 & **29.4** & **28.8** & **0.6** & 58 & **50** & **8** & 53.2 & **42** & **11.2** \\ \hline \multirow{2}{*}{Average} & v1 & 27.1 & 24.2 & 2.9 & **57.7** & 46.7 & 11 & **54.5** & 39.3 & 15.2 \\ & v2 & **30.3** & **29** & **1.3** & 57.5 & **49.6** & **7.9** & 53.7 & **42.3** & **11.4** \\ \hline \hline \end{tabular} \end{table} Table 7: Robustness of MRC models fine-tuned on SQuAD 1.1 (v1) and SQuAD 2.0 (v2) in out-of-domain settings. For models fine-tuned on SQuAD 2.0 (v2), we force models to output non-empty answers.
For each dataset, we report the average performance of the three evaluated models, and we also report the average performance of each model over the five considered datasets. This result is consistent with our hypothesis that v2 models rely on target sentences with significant lexical overlap to decide whether the corresponding questions are unanswerable. ## 8 Conclusion In this work, we investigate the effects of training MRC models with unanswerable questions on their robustness against adversarial attacks. We construct adversarial samples from answerable and unanswerable questions in SQuAD 2.0 and evaluate three MRC models fine-tuned on either SQuAD 1.1 (v1 models) or SQuAD 2.0 (v2 models) independently. Adversarial attacks on answerable questions reveal that v2 models initially show little improved robustness over v1 models, yet possess a latent ability to deal with these attacks that v1 models do not; the correct responses are often hidden as second-best answers, an indicator of the "hidden robustness" of v2 models resulting from additional training on unanswerable questions. By eliminating the "unanswerable" option and forcing v2 models to output an answer to any answerable question, we leverage this hidden robustness to improve the performance of MRC models under attacks on answerable questions. Furthermore, we also show that this robustness translates well to out-of-domain test sets. Finally, to encourage future work on evaluating the robustness of MRC models trained on both answerable and unanswerable questions, we introduce a new type of adversarial attack to reveal the shortcomings of MRC models. Our experiments with the _negation_ attack reveal that the performance of v2 MRC models drops significantly (by around 60% F1). We hypothesize that the decline in the performance of v2 models is mainly due to v2 models having learned to suboptimally identify target sentences in the context and to use them as their primary mechanism of response. ## 9 Future Work Our findings raise two critical messages for future research on the usage of adversarial unanswerable questions in NLP: First, our work highlights innovative ways to use adversarial unanswerable questions in training to improve the performance of MRC-based systems. MRC datasets are important sources of transfer learning for zero-shot settings in many other NLP tasks Wu et al. (2020); Levy et al. (2017); Lyu et al. (2021); Du and Cardie (2020); Li et al. (2019). Given that the improved robustness of v2 models from the additional training on unanswerable questions generalizes well to out-of-domain test sets, future research on using MRC knowledge in zero-shot settings can explore whether adversarial unanswerable questions improve the robustness of MRC models in these zero-shot settings. Second, we propose an open question about an undesirable behavior of MRC models fine-tuned on SQuAD 2.0. We find that simple negation attacks induce a considerable drop in the performance of MRC models fine-tuned on SQuAD 2.0 due to an undesirable behavior that is the product of artifacts in the training set. \begin{table} \begin{tabular}{c c|c c|c} \hline & I & C2U & C2I & C2C \\ \hline BERT v2 & 14.4 & 45.4 & 17.7 & 22.5 \\ RoBERTa v2 & 21.6 & 41.8 & 17.5 & 19.1 \\ SpanBERT v2 & 12.5 & 37.9 & 22.8 & 26.8 \\ \hline \end{tabular} \end{table} Table 10: The percentage of answerable questions by type of answer produced by v2 models before and after negation attacks.
\begin{table}
\begin{tabular}{l l}
\hline \hline
Question & In the effort of maintaining a level of abstraction, what choice is typically left **independent**? \\
\hline
Answer & **encoding** \\
\hline
Context & [...] one tries to keep the discussion abstract enough to be independent of the choice of **encoding**. [...] In the effort of maintaining a level of abstraction, base64 choice is typically left **not independent**. \\
\hline \hline
\end{tabular}
\end{table}
Table 8: An example of the Negation Attack on answerable questions. The adversarial sentence is highlighted in red color. In constructing the adversarial sentence, we negate the adjective "independent" to "not independent".

\begin{table}
\begin{tabular}{l c c c}
\hline \hline
 & Original & Attacked & \(\Delta\downarrow\) \\
\hline
BERT v2 & 84.8 & 24.2 & 60.6 \\
RoBERTa v2 & 78.1 & 21 & 57.1 \\
SpanBERT v2 & 87.3 & 28.6 & 58.7 \\
\hline \hline
\end{tabular}
\end{table}
Table 9: F1 score of v2 models before and after negation attacks on answerable questions. In this experiment, we do not force v2 models to output non-empty answers.

## Limitations

We acknowledge that there are a few aspects to which our findings are limited, including the dominant use of pretrained language models, the insufficiency of MRC datasets in other languages, and the limited types of adversarial attacks examined.

## Acknowledgements

We would like to thank Dr. Ashwin Lall for constructive feedback on an early version of this paper. We thank the anonymous reviewers for their constructive and insightful feedback. We want to thank The William G. and Mary Ellen Bowen Research Endowment and The Laurie and David Hodgson Faculty Support Endowment for supporting the first and third authors.
Training of language models has achieved superhuman performance on many machine reading comprehension (MRC) benchmarks. However, their relative vulnerability to adversarial attacks has raised skepticism about their natural language understanding. In this paper, we ask whether training with unanswerable questions in SQuAD 2.0 can improve the robustness of MRC models against adversarial attacks. To explore this question, we fine-tune three existing state-of-the-art language models on SQuAD 1.1 and SQuAD 2.0 independently and evaluate their robustness under adversarial attacks. Our experimental results show that models fine-tuned on SQuAD 2.0 initially exhibit lower robustness than models fine-tuned on SQuAD 1.1, but by uncovering their hidden robustness, substantial performance gains can be achieved. Furthermore, SQuAD
2310.20155
MLatom 3: Platform for machine learning-enhanced computational chemistry simulations and workflows
Machine learning (ML) is increasingly becoming a common tool in computational chemistry. At the same time, the rapid development of ML methods requires a flexible software framework for designing custom workflows. MLatom 3 is a program package designed to leverage the power of ML to enhance typical computational chemistry simulations and to create complex workflows. This open-source package provides plenty of choice to the users who can run simulations with the command line options, input files, or with scripts using MLatom as a Python package, both on their computers and on the online XACS cloud computing at XACScloud.com. Computational chemists can calculate energies and thermochemical properties, optimize geometries, run molecular and quantum dynamics, and simulate (ro)vibrational, one-photon UV/vis absorption, and two-photon absorption spectra with ML, quantum mechanical, and combined models. The users can choose from an extensive library of methods containing pre-trained ML models and quantum mechanical approximations such as AIQM1 approaching coupled-cluster accuracy. The developers can build their own models using various ML algorithms. The great flexibility of MLatom is largely due to the extensive use of the interfaces to many state-of-the-art software packages and libraries.
Pavlo O. Dral, Fuchun Ge, Yi-Fan Hou, Peikun Zheng, Yuxinxin Chen, Mario Barbatti, Olexandr Isayev, Cheng Wang, Bao-Xin Xue, Max Pinheiro Jr, Yuming Su, Yiheng Dai, Yangtao Chen, Lina Zhang, Shuang Zhang, Arif Ullah, Quanhao Zhang, Yanchi Ou
2023-10-31T03:41:39
http://arxiv.org/abs/2310.20155v1
# MLatom 3: Platform for machine learning-enhanced computational chemistry simulations and workflows

###### Abstract

Machine learning (ML) is increasingly becoming a common tool in computational chemistry. At the same time, the rapid development of ML methods requires a flexible software framework for designing custom workflows. MLatom 3 is a program package designed to leverage the power of ML to enhance typical computational chemistry simulations and to create complex workflows. This open-source package provides plenty of choice to the users, who can run simulations with the command-line options, input files, or with scripts using MLatom as a Python package, both on their computers and on the online XACS cloud computing service at XACScloud.com. Computational chemists can calculate energies and thermochemical properties, optimize geometries, run molecular and quantum dynamics, and simulate (ro)vibrational, one-photon UV/vis absorption, and two-photon absorption spectra with ML, quantum mechanical, and combined models. The users can choose from an extensive library of methods containing pre-trained ML models and quantum mechanical approximations such as AIQM1 approaching coupled-cluster accuracy. The developers can build their own models using various ML algorithms. The great flexibility of MLatom is largely due to the extensive use of the interfaces to many state-of-the-art software packages and libraries.

## 1 Introduction

Computational chemistry simulations are common in chemistry research thanks to abundant general-purpose software, most of which started as purely quantum mechanical (QM) and molecular mechanical (MM) packages. More recently, the rise of artificial intelligence (AI)/machine learning (ML) applications for chemical simulations has caused the proliferation of programs mostly focusing on specific ML tasks such as learning potential energy surfaces (PESs).[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17] The rift between the development of the traditional QM and MM packages on the one hand and ML programs on the other hand is bridged to some extent by the higher-level library ASE,[18] which enables usual computational tasks via interfacing heterogeneous software. The further integration of QM, MM, and ML has been prompted by the maturing of ML techniques and is evidenced by the growing trend of incorporating ML methods in the QM and MM computational chemistry software.[19, 20, 21]

Against this backdrop, the MLatom package started in 2013 as a pure stand-alone ML package to provide a general-purpose experience for computational chemists akin to the black-box QM packages.[22] The early MLatom could be used for training, testing, and using ML models and their combinations with QM methods (e.g., \(\Delta\)-learning[23] and learning of Hamiltonian parameters[24]), accurate representation of PESs,[25, 26] sampling of points from data sets,[26] ML-accelerated nonadiabatic dynamics,[27] and materials design[28]. The fast pace of method and software development in QM, MM, ML, and other computational science domains led to MLatom 2, which started to include interfaces to third-party packages.[29] Such an approach provided a unique opportunity for the package users to choose one of the many established ML models - similar to the users of the traditional QM software who can choose one of the many QM methods.
MLatom 2 could perform training of the ML models, evaluate their accuracy, and then use the models for geometry optimizations and frequency calculations. Special workflows were also implemented, such as the acceleration of UV/vis absorption spectra calculations with ML[30] and the prediction of two-photon absorption spectra[31]. In addition, MLatom 2 could be used to perform simulations with the general-purpose AI-enhanced QM method[32] AIQM1 and the universal machine learning potentials of the ANI family,[33, 34, 35, 2] together with the accurate scheme developed for calculating heats of formation[36] with uncertainty quantification.

With time, the need to develop increasingly complex workflows that incorporate ML and QM for a broad range of applications has necessitated the rethink and redesign of MLatom to enable the rapid development of highly customized routines. These additional design requirements for MLatom to serve not just as a black-box general-purpose package but also as a flexible platform for developers resulted in a significant extension, redesign, and rewrite of the program. The subsequent upgrade has allowed the use of MLatom through the versatile Python API (MLatom PyAPI) and also included the implementation of more simulation tasks, such as molecular and quantum dynamics, and the support of QM methods and composite schemes based on combinations of QM and ML models. This upgrade was released[37] as MLatom 3 in 2023 - ten years after the start of the project. During this decade, MLatom went through a drastic transformation from a pure Fortran package to a predominantly Python package with one-third of the code written in Fortran for efficient implementations of critical parts. MLatom 3 comes under the open-source permissive MIT license (modified to request proper citations). Here we give an overview of the capabilities of MLatom 3 and provide examples of its applications.

## 2 Overview

MLatom merges the functionality from typical quantum chemical and other atomistic simulation packages with the capabilities of disparate ML packages, with a strong focus on molecular systems. The user can choose from a selection of ready-to-use QM and ML models and design and train ML models to perform the required simulations. The bird's-eye view of the MLatom capabilities is best given in Figure 1.

Figure 1: Overview of the MLatom 3 capabilities.

One of the current main goals of MLatom is to enable simulation tasks of interest for a computational chemist with generic types of models that can be based on ML, QM, and their combinations (see Section 4). These tasks include single-point calculations, optimization of geometries of minima and transition states (which can be followed by intrinsic reaction coordinate (IRC) analysis[38]), frequency and thermochemical property calculations, molecular and quantum dynamics, rovibrational (infrared (IR) and power) spectra, and ML-accelerated UV/vis absorption and two-photon absorption spectra simulations. This part of MLatom is more similar to traditional QM and MM packages but with much more flexibility in model choice and unique tasks. A dedicated Section 5 will give a more detailed account of the simulations.

Enabling the users to create their own ML models was MLatom's original main focus, and it continues to play a major role. MLatom supports a range of carefully selected representative ML algorithms that can learn the desired properties as a function of the 3D atomistic structure.
Typically, these algorithms are used for, but not limited to, learning PESs and hence, for simplicity, can often be called ML (interatomic) potentials (MLPs)[39, 40, 41, 42, 43]. One particular specialization of MLatom is the original implementation of kernel ridge regression (KRR) algorithms for learning any property as a function of any user-provided input vectors or XYZ molecular coordinates[22]. In addition, the user can create custom multi-component models based on the concepts of \(\Delta\)-learning[23], hierarchical ML[25], and self-correction[26]. These models may consist of ML and QM methods. MLatom provides standardized means for training, hyperparameter optimization, and evaluation of the models, so that switching from one model type to another may need just one keyword change[29]. This allows one to easily experiment with different models and choose the most appropriate for the task.

The data is as important as choosing and training the ML algorithms. MLatom 3 provides several data structures specialized for computational chemistry needs, mainly based on versatile Python classes for atoms, molecules, molecular databases, and dynamics trajectories. These classes allow not just storing the data in a clearly structured format, but also handling it by, e.g., converting to different molecular representations and data formats and splitting and sampling the data sets into the training, validation, and test subsets. Because data is a central concept both in the age of data-driven models and in MLatom as a package, we describe data structures in Section 3 before describing models, simulations, and machine learning.

How the user interacts with the program is also important, and ideally the features should be easily accessible and their use intuitive. MLatom calculations can be requested by providing command-line options either directly or through the input file. Alternatively, MLatom can be used as a Python module which can be imported and used for creating calculation workflows of varying complexity. A side-by-side comparison of these two approaches is given in Figure 2. More examples highlighting different use cases of MLatom are interspersed throughout this article.

MLatom as an open-source package can be conveniently installed via PyPI, i.e., simply using the command pip install mlatom, or from the source code available on GitHub at [https://github.com/dralgroup/mlatom](https://github.com/dralgroup/mlatom). To additionally facilitate access to AI-enhanced computational chemistry, MLatom can be conveniently used in the XACS cloud computing service at [https://XACScloud.com](https://XACScloud.com), whose basic functionality is free for non-commercial uses such as education and research. Cloud computing eliminates the need for program installation and might be particularly useful for users with limited computational resources.

Figure 2: Side-by-side comparison of the usage of MLatom in both command-line mode and via the Python API for a common task of geometry optimization with one of the pre-trained ML models, ANI-1ccx.
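To make the Python side of Figure 2 concrete, a minimal sketch of such a geometry optimization is given below. The module layout follows the descriptions in this article, but the exact function and attribute names (e.g., `from_xyz_file`, `optimize_geometry`, `optimized_molecule`) should be treated as assumptions rather than a verbatim API reference.

```python
import mlatom as ml  # names below are modeled on the PyAPI described in this article

# Load the initial guess geometry into a molecule object (see Section 3)
init_mol = ml.data.molecule.from_xyz_file('init.xyz')

# Choose a ready-to-use method, here the pre-trained ANI-1ccx potential
model = ml.models.methods(method='ANI-1ccx')

# Optimize the geometry; the optimized structure is returned as a new molecule object
geomopt = ml.optimize_geometry(model=model, initial_molecule=init_mol)
final_mol = geomopt.optimized_molecule

# Dump the optimized structure back to an XYZ file
final_mol.write_file_with_xyz_coordinates(filename='final.xyz')
```

The equivalent command-line run would only require an input file requesting the geometry optimization task with the same method keyword.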
## 3 Data

In MLatom, everything revolves around operations on data: databases and data points of different types such as an atom, molecule, molecular database, and molecular trajectory (Figure 3). They are implemented as Python classes that contain many useful properties and provide different tools to load and dump these data-type objects using different formats. For example, the key type is a molecule, which can be loaded from an XYZ file or SMILES and then automatically parsed into the constituent atom objects. Atom objects contain information about the nuclear charge and mass as well as the nuclear coordinates. A molecule object is assigned a charge and multiplicity. The information about molecular and atomic properties can be passed on to perform simulations, e.g., MD, with models that update and create new molecule objects with calculated quantum mechanical properties such as energies and energy gradients. See Figure 2 for an example of loading a molecule object init_mol from the file init.xyz, used as the initial guess for the geometry optimization, which returns the optimized geometry as a new molecule object final_mol, saved into the final.xyz file.

Data objects can be directly accessed and manipulated via the MLatom Python API. When using MLatom in the command-line mode, many similar operations are done under the hood so that the user often just needs to prepare input files in standard formats such as files with XYZ coordinates.

Figure 3: Overview of different data types in MLatom.

Molecule objects can be combined into or created by parsing the molecular database, which has, e.g., functions to split it into the different subsets needed for training and validation of ML models. The databases can be loaded and dumped in plain-text (i.e., several files including XYZ coordinates, labels, and XYZ derivatives), JSON, and npz formats.

Another data type is the molecular trajectory, which consists of steps containing molecules and other information. Molecular trajectory objects are created during geometry optimizations and MD simulations; in the latter case, a step is a snapshot of the MD trajectory, containing information about the time, nuclear coordinates and velocities, atomic numbers and masses, energy gradients, kinetic, potential, and total energies, and, if available, dipole moments and other properties. The trajectories can be loaded and dumped in JSON, H5MD,[44] and plain-text formats.

Molecules for which XYZ coordinates are provided can be transformed into several supported descriptors: inverse internuclear distances and their version normalized relative to the equilibrium structure (RE),[26] the Coulomb matrix,[45, 46] and their variants.[29] MLatom also has separate statistics routines to calculate different error measures and perform other data analyses.[29] Routines for preparing common types of plots, such as scatter plots and spectra, are available too.
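The data classes just described lend themselves to short scripts. The following is a minimal sketch of such data handling; the attribute and method names follow the textual description in this section and are assumptions rather than a verbatim API reference.

```python
import mlatom as ml  # method names below are assumptions based on this section

# A molecule object loaded from an XYZ file is parsed into its constituent
# atom objects with nuclear charges, masses, and coordinates
mol = ml.data.molecule.from_xyz_file('ethanol.xyz')
mol.charge, mol.multiplicity = 0, 1
for atom in mol.atoms:
    print(atom.element_symbol, atom.xyz_coordinates)

# Molecule objects combine into a molecular database, which can be dumped
# and loaded in plain-text, JSON, or npz formats
mol_db = ml.data.molecular_database.from_xyz_file('geometries.xyz')
mol_db.dump(filename='geometries.json', format='json')
```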
## 4 Models and methods

Any of the simulations needs a model that provides the required output for a given input. The architecture and algorithms behind the models can be designed by an expert or chosen from the available selection. ML models typically require training to find their parameters before they can be used for simulations. Some of these models, such as the universal MLPs of the ANI family,[33, 34, 2, 35] are already pre-trained, so the user does not have to train them. This is similar to QM methods, which are commonly used out-of-the-box without tuning their parameters. In MLatom, we call a _method_ any model that can be used out-of-the-box for simulations. Both pre-trained ML models and QM methods belong to the methods in MLatom's terminology, which is reflected in the keyword names. This model type also includes hybrid pre-trained ML and QM methods. Below, we overview the models available in MLatom at the time of writing this article, the selection of available methods and models with provided architectures that need to be trained, and the ways to design custom models (Figure 4).

Figure 4: Overview of different model types in MLatom.

### Methods

MLatom provides access to a broad range of methods through interfaces to many third-party state-of-the-art software packages (a usage sketch follows the list below):

* Pre-trained ML models:
  * Universal potentials ANI-1ccx[34], ANI-1x[33], ANI-2x[35], ANI-1x-D4, and ANI-2x-D4. ANI-1ccx is the most accurate and approaches gold-standard CCSD(T) accuracy. We have seen an example of its use in geometry optimization in Figure 2. The other methods approach the density functional theory (DFT) level. ANI-1ccx and ANI-1x are limited to CHNO elements, while ANI-2x can be used for CHNOFClS elements. We allow the user to use D4-dispersion-corrected universal ANI potentials, which might be useful for noncovalent complexes. The D4 correction[47] is taken for the \(\omega\)B97X functional[48] used to generate the data for pre-training ANI-1x and ANI-2x. ANI models are provided via an interface to TorchANI[2] and D4 corrections via the interface to dftd4[49]. These methods are limited to predicting energies and forces for neutral closed-shell compounds in their ground state. MLatom reports uncertainties for calculations with these methods based on the standard deviation between neural network (NN) predictions[36].
  * A special ML-TPA model for predicting two-photon absorption (TPA) cross sections [31].
* Hybrid QM/ML methods: AIQM1, AIQM1@DFT, and AIQM1@DFT* [32]. These are more transferable and accurate than pre-trained ML models but slower (the speed of semi-empirical QM methods, which are still much faster than DFT). AIQM1 approaches gold-standard CCSD(T) accuracy, while AIQM1@DFT and AIQM1@DFT* target the DFT accuracy for neutral, closed-shell molecules in their ground state. All these methods are limited to the CHNO elements. AIQM1 and AIQM1@DFT include explicit D4 dispersion corrections for the \(\omega\)B97X functional, while AIQM1@DFT* does not. They also include modified ANI-type networks and the modified semi-empirical QM method ODM2 [50] (ODM2*, provided by either the MNDO [51] or Sparrow [52] program). These methods can also be used to calculate charged species, radicals, excited states, and other QM properties such as dipole moments, charges, oscillator strengths, and nonadiabatic couplings. MLatom reports uncertainties for calculations with these methods based on the standard deviation between NN predictions [36].
* A range of established QM methods from _ab initio_ (e.g., HF, MP2, coupled cluster, _etc._) to DFT (e.g., B3LYP,[53, 54] \(\omega\)B97X,[48] etc.) via interfaces to PySCF [55] and Gaussian [55].
* A range of semi-empirical QM methods (GFN2-xTB [56], OM2 [57], ODM2 [50], AM1 [58], PM6 [59], _etc._) via interfaces to the xtb [60], MNDO [51], and Sparrow [52] programs.
* A special composite method CCSD(T)*/CBS [34] extrapolating CCSD(T) to the complete basis set via an interface to Orca [61, 62]. This method is relatively fast and accurate. It allows the user to check the quality of calculations with other methods and to generate robust reference data for ML. This method was used to generate the reference data for AIQM1 and ANI-1ccx.
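As a usage illustration of these out-of-the-box methods, a single-point AIQM1 calculation via the PyAPI might look as follows. The method keyword matches the list above, while the exact call signature and attribute names are assumptions based on this article's description rather than a verbatim API reference.

```python
import mlatom as ml  # call signature and attribute names are assumptions

mol = ml.data.molecule.from_xyz_file('mol.xyz')

# AIQM1 = ODM2* baseline + ANI-type NN ensemble + D4 dispersion correction
aiqm1 = ml.models.methods(method='AIQM1')
aiqm1.predict(molecule=mol,
              calculate_energy=True,
              calculate_energy_gradients=True)

# Calculated properties are attached to the molecule object; the standard
# deviation of the NN-ensemble predictions serves as the uncertainty estimate
print(mol.energy)  # total energy in Hartree
```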
### Available standard models needing training

The field of MLPs is very rich in models. Hence, the user can often choose one of the popular MLP architectures reported in the literature rather than developing a new one. MLatom provides a toolset of MLPs of different types (see Ref. [39] for an overview and Ref. [29] for implementation details). These supported types can be categorized in a simplified scheme as:

* Models based on kernel methods (KMs)[63] with global descriptors, to which (p)KREG,[63, 26] sGDML,[65] and KRR-CM[45, 46] belong, as well as with local descriptors, represented only by GAP[66]-SOAP[67].
* Models based on neural networks (NNs) with fixed local descriptors, to which ANI-type MLPs[2] and DPMD[68] belong, and with learned local descriptors, represented by PhysNet[69] and DeepPot-SE[70].

Any of these models can be trained and used for simulations, e.g., geometry optimizations or dynamics. MLatom also supports hyperparameter optimization with many algorithms, including grid search,[22] Bayesian optimization via the hyperopt package,[71, 72] and standard optimization algorithms available in SciPy[73]. Generalization errors of the resulting models can also be evaluated in standard ways (hold-out and cross-validation). More on this in the dedicated Section 6.

### Custom models based on kernel methods

MLatom also provides the flexibility of training custom models based on kernel ridge regression (KRR) for a given set of input vectors \(\mathbf{x}\) or XYZ coordinates and any labels \(\mathbf{y}\).[74, 75] If XYZ coordinates are provided, they can be transformed into one of the several supported descriptors (e.g., inverse internuclear distances and their version normalized relative to the equilibrium structure (RE), and the Coulomb matrix). The user can choose one of the implemented kernel functions, including the linear,[75, 22, 76] Gaussian,[75, 22, 76] exponential,[75, 76] Laplacian,[75, 22, 76] and Matérn[75, 76, 22] as well as the periodic[76, 78, 79] and decaying periodic[76, 78, 80] functions, which are summarized in Table 1. These kernel functions \(k\big{(}\mathbf{x},\mathbf{x}_{j};\mathbf{h}\big{)}\) are the key components required to solve the KRR problem of finding the regression coefficients \(\alpha_{j}\) of the approximating function \(\hat{f}(\mathbf{x};\mathbf{h})\) of the input vector \(\mathbf{x}\):[74, 75]

\[\hat{f}(\mathbf{x};\mathbf{h})=\sum_{j=1}^{N_{\text{tr}}}\alpha_{j}k\big{(}\mathbf{x},\mathbf{x}_{j};\mathbf{h}\big{)}. \tag{1}\]

The kernel function, in most cases, has hyperparameters \(\mathbf{h}\) to tune, and it can be viewed as measuring the similarity between the input vector \(\mathbf{x}\) and each of the \(N_{\text{tr}}\) training points \(\mathbf{x}_{j}\) (both vectors should be of the same length \(N_{x}\)). In addition to the hyperparameters in the kernel function, all KRR models have at least one more regularization hyperparameter \(\lambda\) used during training to improve the generalizability.
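Equation 1 and the associated training problem are compact enough to sketch in a few lines of NumPy. The toy example below is a generic KRR implementation with the Gaussian kernel of Table 1, not MLatom's optimized code: training solves the regularized linear system \((\mathbf{K}+\lambda\mathbf{I})\boldsymbol{\alpha}=\mathbf{y}\), and prediction evaluates Eq. 1.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma):
    # k(x, x') = exp(-sum_s (x_s - x'_s)^2 / (2 sigma^2)), cf. Table 1
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def train_krr(X, y, sigma, lam):
    # Solve (K + lambda I) alpha = y for the regression coefficients in Eq. (1)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict_krr(Xnew, Xtrain, alpha, sigma):
    # f(x) = sum_j alpha_j k(x, x_j), Eq. (1)
    return gaussian_kernel(Xnew, Xtrain, sigma) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))   # toy input vectors (e.g., a molecular descriptor)
y = np.sin(X).sum(axis=1)               # toy labels
alpha = train_krr(X, y, sigma=1.0, lam=1e-8)
print(predict_krr(X[:5], X, alpha, sigma=1.0))  # closely reproduces the training labels
```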
\begin{table}
\begin{tabular}{l l l}
\hline
Kernel function & Formula & Hyperparameters in kernel function \\
\hline
linear & \(k(\mathbf{x},\mathbf{x}_{j})=\mathbf{x}^{\mathsf{T}}\mathbf{x}_{j}\) & -- \\
Gaussian & \(k(\mathbf{x},\mathbf{x}_{j})=\exp\left(-\dfrac{1}{2\sigma^{2}}\sum_{s}\big{(}x_{s}-x_{j,s}\big{)}^{2}\right)\) & \(\sigma>0\), length scale \\
exponential & \(k(\mathbf{x},\mathbf{x}_{j})=\exp\left(-\dfrac{1}{\sigma}\left[\sum_{s}\big{(}x_{s}-x_{j,s}\big{)}^{2}\right]^{1/2}\right)\) & \(\sigma>0\), length scale \\
Laplacian & \(k(\mathbf{x},\mathbf{x}_{j})=\exp\left(-\dfrac{1}{\sigma}\sum_{s}\big{|}x_{s}-x_{j,s}\big{|}\right)\) & \(\sigma>0\), length scale \\
Matérn & \(k(\mathbf{x},\mathbf{x}_{j})=\exp\left(-\dfrac{d}{\sigma}\right)\sum_{k=0}^{n}\dfrac{(n+k)!}{(2n)!}\binom{n}{k}\left(\dfrac{2d}{\sigma}\right)^{n-k}\) & \(\sigma>0\), length scale; \(n\), integer order \\
periodic & \(k(\mathbf{x},\mathbf{x}_{j})=\exp\left(-\dfrac{2}{\sigma^{2}}\sin^{2}\left(\dfrac{\pi d}{p}\right)\right)\) & \(\sigma>0\), length scale; \(p\), period \\
decaying periodic & \(k(\mathbf{x},\mathbf{x}_{j})=\exp\left(-\dfrac{d^{2}}{2\sigma^{2}}-\dfrac{2}{\sigma_{p}^{2}}\sin^{2}\left(\dfrac{\pi d}{p}\right)\right)\) & \(\sigma,\sigma_{p}>0\), length scales; \(p\), period \\
\hline
\end{tabular}
\end{table}
Table 1: Summary of the available kernel functions for solving the kernel ridge regression problem (Eq. 1) as implemented in MLatom. Here \(d=\left[\sum_{s}\big{(}x_{s}-x_{j,s}\big{)}^{2}\right]^{1/2}\) denotes the Euclidean distance between \(\mathbf{x}\) and \(\mathbf{x}_{j}\).

### Composite models

Often, it is beneficial to combine several models. One example of such composite models is based on \(\Delta\)-learning [23], where a low-level QM method is used as a baseline, which is corrected by an ML model to approach the accuracy of the target higher-level QM method. Another example is ensemble learning [81], where multiple ML models are created and their predictions are averaged during the simulations to obtain more robust results and for use in the query-by-committee strategy of active learning [82]. Both of these concepts can also be combined in more complex workflows, as exemplified by the AIQM1 method [32], which uses the NN ensemble as a correcting \(\Delta\)-learning model and a semi-empirical QM method as the baseline. To easily implement such workflows, MLatom allows the construction of composite models as model trees; see an example for AIQM1 in Figure 5. Other examples of possible composite models are hierarchical ML [25], which combines several (correcting) ML models trained on (differences between) QM levels, and self-correction [26], where each next ML model corrects the prediction of the previous one.

Figure 5: Composite models can be constructed as a model tree in MLatom. Here, an example is shown for the AIQM1 method, where the root parent node comprises 3 children: the semi-empirical QM method ODM2*, the NN ensemble, and the additional D4 dispersion correction. The NN ensemble is in turn a parent of 8 ANI-type NN children. Predictions of parents are obtained by applying an operation ('average' or 'sum') to the children's predictions. The code snippets are shown, too.
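The model-tree idea of Figure 5 can be illustrated generically in a few lines of Python. The sketch below is not MLatom's actual class hierarchy: the node class, the toy stand-ins for ODM2*, the NN ensemble, and the D4 correction are all hypothetical, but they show how parent nodes combine children's predictions with a 'sum' or 'average' operation.

```python
import numpy as np

class ModelTreeNode:
    def __init__(self, children=None, operation='sum', model=None):
        self.children, self.operation, self.model = children or [], operation, model

    def predict(self, x):
        if self.model is not None:               # leaf node: an ML or QM model
            return self.model(x)
        preds = np.array([c.predict(x) for c in self.children])
        return preds.mean(0) if self.operation == 'average' else preds.sum(0)

# toy stand-ins for the AIQM1-like structure in Figure 5 (illustration only)
nets = [lambda x, s=s: s * 0.001 * x for s in (1, 2, 3)]       # "NN ensemble" members
baseline = ModelTreeNode(model=lambda x: -40.0 + 0 * x)        # "ODM2*" baseline
nn_ens = ModelTreeNode(children=[ModelTreeNode(model=f) for f in nets],
                       operation='average')
d4 = ModelTreeNode(model=lambda x: -0.01 + 0 * x)              # "D4" correction
aiqm1_like = ModelTreeNode(children=[baseline, nn_ens, d4], operation='sum')
print(aiqm1_like.predict(np.array([1.0])))                     # baseline + ensemble avg + correction
```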
## 5 Simulations

MLatom supports a range of simulation tasks such as single-point calculations, geometry optimizations, frequency and thermochemistry calculations, molecular and quantum dynamics, and one- and two-photon absorption and (ro)vibrational spectra simulations (Figure 6). Most of them need any model that can provide energies and energy derivatives (gradients and Hessians).

Figure 6: Overview of simulation tasks in MLatom. The inset in one-photon UV/vis spectra is reproduced from Ref. [29] under the CC-BY-4.0 license.

### Single-point calculations

Single-point calculations are calculations of quantum mechanical properties -- mostly energies and energy gradients, but also Hessians, charges, dipole moments, _etc._ -- for a single geometry. These calculations are very common in ML research in computational chemistry, as they are used both to generate the reference data with QM methods for training and validating ML models and to make inferences with ML to validate the trained model and generate the required data for new geometries. MLatom is a convenient tool to perform single-point calculations not just for a single geometry, as in many QM packages, but for data sets with many geometries.

### Geometry optimizations

Locating stationary points on a PES, such as energy minima and transition states, is crucial for understanding molecular structure and reactivity. Hence, geometry optimizations are among the most important and frequent tasks in computational chemistry. MLatom can locate energy minima and transition states (TS) with any models providing energies and gradients. An example of a geometry optimization is given in Figure 2. Hessians are also required for the Berny TS optimization algorithm. Once the TS is located, the user can follow the intrinsic reaction coordinate (IRC)[38] to check its nature. Geometry optimizations can be performed with many algorithms provided by the interfaces to SciPy[73], ASE[18], or Gaussian[55]. The TS search can be performed with the dimer method[83] in ASE and the Berny algorithm[84] in Gaussian. IRC calculations can only be performed with the interface to Gaussian.

The seamless integration of the variety of QM and ML methods for performing geometry optimizations is advantageous because it allows the use of methods from interfaced programs that do not implement some such simulation tasks themselves. For example, MLatom can be used to perform a TS search with the GFN2-xTB method via an interface to the xtb program, while there is no option for a TS search in the latter program. Similarly, Sparrow, which provides access to many semi-empirical methods, can only be used for single-point calculations. Since analytical gradients and Hessians are not available for many models and implementations, MLatom also implements finite-difference numerical differentiation, further expanding the applicability of the models for geometry optimizations.

### Frequency calculations

Simulation of vibrational frequencies is another common and important task in computational chemistry, as it is useful to additionally verify the nature of stationary points, visualize molecular vibrations, calculate the zero-point vibrational energy (ZPE) and thermochemical properties, as well as obtain spectroscopic information, which can be compared to experimental vibrational spectra. These calculations can be performed within the rigid-rotor harmonic approximation via an adapted TorchANI implementation[2] and the Gaussian[55] interface.
The latter also allows the calculation of anharmonic frequencies using the second-order perturbative approach[85]. Similarly to geometry optimizations, MLatom can perform these simulations with any model -- ML, QM, or their combination -- that provides energies. The calculations also need the Hessian, and wherever available, the analytical Hessian is used. If it is unavailable, a semi-analytical Hessian (from analytical gradients) or a fully numerical one can be calculated.

### Thermochemistry calculations

Thermochemical properties such as enthalpies, entropies, and Gibbs free energies can be derived from frequency calculations. In turn, enthalpies can be used to calculate heats (enthalpies) of formation. MLatom uses a scheme analogous to those employed in _ab initio_[86] and semi-empirical QM calculations[50] to derive heats of formation:

\[\Delta H_{\mathrm{f},T}=\left[\sum_{A}\Delta H_{\mathrm{f},T}(A)\right]-\Delta H_{\mathrm{at},T}, \tag{2}\]

where \(\Delta H_{\mathrm{f},T}(A)\) is the experimental enthalpy of formation of the free atom A, and \(\Delta H_{\mathrm{at},T}\) is the atomization enthalpy. In AIQM1 and ANI-1ccx, we use the same \(\Delta H_{\mathrm{f},T}(A)\) values as other semi-empirical QM methods, i.e., 52.102, 170.89, 113.00, and 59.559 kcal/mol for the elements H, C, N, and O, respectively.[51] The atomization enthalpy \(\Delta H_{\mathrm{at},T}\) can be obtained from the difference between the molecular absolute enthalpy \(H_{T}\) and the atomic absolute enthalpies \(H_{T}(A)\):

\[\Delta H_{\mathrm{at},T}=\left[\sum_{A}H_{T}(A)\right]-H_{T}. \tag{3}\]

Analogous to _ab initio_ methods, the harmonic-oscillator and rigid-rotor approximations are explicitly considered in the calculation of the absolute enthalpies:

\[H_{T}=E_{\mathrm{tot}}+\mathrm{ZPVE}+E_{\mathrm{trans},T}+E_{\mathrm{rot},T}+E_{\mathrm{vib},T}+RT, \tag{4}\]

\[H_{T}(A)=E(A)+E_{\mathrm{trans},T}(A)+RT, \tag{5}\]

where \(E_{\mathrm{tot}}\) and \(E(A)\) are the total energies of the molecule and the free atom, respectively, and ZPVE is the zero-point vibrational energy. \(E_{\mathrm{trans},T}\), \(E_{\mathrm{rot},T}\), and \(E_{\mathrm{vib},T}\) are the translational, rotational, and vibrational thermal contributions, and \(R\) is the gas constant.

The scheme requires the knowledge of the free-atom energies \(E(A)\). Any model able to calculate them can be used for predicting heats of formation. This is straightforward for QM methods but not for ML-based models, which are usually trained on molecular species. We have previously fitted the free-atom energies (see Table 2) of the AIQM1 and ANI-1ccx methods to the experimental data set.[32, 36] As a result, both methods can provide heats of formation close to chemical accuracy with a speed orders of magnitude higher than that of alternative high-accuracy QM methods. In addition, we provide an uncertainty quantification scheme based on the deviation of the NN predictions in these methods to tell the users when the predictions are confident. This was useful to find errors in the experimental data set of heats of formation [36].

An example of using MLatom to calculate heats of formation with the AIQM1 and B3LYP/6-31G* methods is shown in Figure 7. AIQM1 is both faster and more accurate than B3LYP, as can be seen by comparing the values with the experiment. This is also consistent with our previous benchmark [36].

Figure 7: Calculation of heats of formation of ethylene with AIQM1 and B3LYP/6-31G* (from the interface to PySCF) compared to the experiment [87].

\begin{table}
\begin{tabular}{c c c}
\hline
Element & AIQM1 & ANI-1ccx \\
\hline
H & -0.50088038 & -0.50088088 \\
C & -37.79221710 & -37.79199048 \\
N & -54.53360298 & -54.53379230 \\
O & -75.00986203 & -75.00968205 \\
\hline
\end{tabular}
\end{table}
Table 2: The atomic energies (in Hartree) of AIQM1 and ANI-1ccx used in heats of formation calculations [32, 36].
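Equations 2-5 can be worked through in plain Python. The sketch below uses the experimental atomic enthalpies of formation quoted above and the fitted AIQM1 free-atom energies from Table 2; the molecular absolute enthalpy \(H_T\) (Eq. 4) is assumed to come from a frequency calculation, and the placeholder value passed at the end is purely illustrative, not a real result.

```python
HARTREE2KCAL = 627.5094740631
R = 1.987204259e-3  # gas constant, kcal/(mol K)

dHf_exp_atom = {'H': 52.102, 'C': 170.89, 'N': 113.00, 'O': 59.559}  # kcal/mol
E_atom = {'H': -0.50088038, 'C': -37.79221710,
          'N': -54.53360298, 'O': -75.00986203}  # Hartree, AIQM1 (Table 2)

def heat_of_formation(atoms, H_T_mol_hartree, T=298.15):
    # Eq. (5): H_T(A) = E(A) + E_trans,T(A) + RT, with E_trans,T(A) = (3/2)RT
    H_T_atoms = sum(E_atom[a] * HARTREE2KCAL + 2.5 * R * T for a in atoms)
    # Eq. (3): atomization enthalpy from molecular and atomic absolute enthalpies
    dH_at = H_T_atoms - H_T_mol_hartree * HARTREE2KCAL
    # Eq. (2): heat of formation
    return sum(dHf_exp_atom[a] for a in atoms) - dH_at

# hypothetical molecular enthalpy (Hartree) for ethylene, for illustration only
print(heat_of_formation(['C', 'C', 'H', 'H', 'H', 'H'], H_T_mol_hartree=-78.50))
```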
### Molecular dynamics

Molecular dynamics propagates the nuclear motion based on the equations of motion of classical mechanics.[88] This requires the knowledge of the forces acting on the nuclei, which, for conservative forces, are typically derived as the negative of the potential energy gradients (i.e., the negative of the derivatives of the model for potential energies). Due to the high cost of the approach, it is most commonly used with molecular mechanics force fields,[89] but often, calculations based on QM methods are possible in variants called _ab initio_ or Born-Oppenheimer MD (BOMD).[88] The proliferation of ML potentials makes it possible to perform BOMD-quality dynamics at a cost comparable to molecular mechanics force fields, or much faster than commonly used DFT-based BOMD.[39, 40, 41, 42, 43] For example, the AIQM1 method is faster than DFT, and the IR spectra obtained from AIQM1 MD are of higher quality (Figure 8).[90]

MLatom has a native implementation of MD supporting any kind of model that provides forces, not necessarily conservative ones [90]. Currently, simulations in the NVE and NVT ensembles [92], based on the velocity Verlet algorithm [93], are possible. NVT simulations can be carried out with the Andersen [92, 94] and Nosé-Hoover [95, 96] thermostats. Trajectories can be saved in different formats, including plain text, JSON, and the more compact H5MD [29] database format.

The Nosé-Hoover thermostat is a deterministic thermostat that couples the system to a thermal bath through extra terms in the Hamiltonian. Its theory and implementation details are described elsewhere [90]. Here, we briefly mention the relevant methodology [92, 94] used in the Andersen thermostat. In this thermostat, the system is coupled to a heat bath by stochastically changing the velocity of each atom. The changing frequency (or collision frequency) is controlled by a tunable parameter \(\nu\). The collisions follow the Poisson distribution, so that the probability of changing the velocity of each atom during a time step \(\Delta t\) is \(\nu\Delta t\). If an atom collides, a new velocity is assigned to it, sampled from the Maxwell-Boltzmann distribution at the target temperature \(T\).

MD trajectories can be propagated in parallel, dramatically speeding up the calculations. In addition, we made an effort to better integrate the KREG model implemented in Fortran into the main Python-based MLatom code, which makes MD with KREG very efficient. Note that MD can also be propagated without forces using the concept of 4D-spacetime AI atomistic models, which directly predict the nuclear configurations as a function of time [79]. Our realization of this concept, called the GICnet model, is currently available in a publicly available development version of MLatom [79]. The above implementations can propagate MD on an adiabatic potential energy surface, i.e., typically, for ground-state dynamics.
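The Andersen scheme just described is simple enough to sketch generically. The following is a toy velocity-Verlet step with Andersen collisions in atomic units, not MLatom's implementation: each atom collides with the bath with probability \(\nu\Delta t\) per step and then receives a fresh Maxwell-Boltzmann velocity at the target temperature.

```python
import numpy as np

rng = np.random.default_rng(0)
kB = 3.166811563e-6  # Boltzmann constant in Hartree/K (atomic units assumed)

def md_step_andersen(r, v, m, forces, dt, T, nu):
    """One velocity-Verlet step followed by Andersen collisions.

    r, v: (N, 3) positions and velocities; m: (N,) masses;
    forces: callable mapping positions to an (N, 3) force array.
    """
    v_half = v + 0.5 * dt * forces(r) / m[:, None]
    r_new = r + dt * v_half
    v_new = v_half + 0.5 * dt * forces(r_new) / m[:, None]
    collide = rng.random(len(m)) < nu * dt       # Poisson collisions, prob. nu*dt
    if collide.any():
        sigma = np.sqrt(kB * T / m[collide])     # Maxwell-Boltzmann width per atom
        v_new[collide] = rng.normal(size=(collide.sum(), 3)) * sigma[:, None]
    return r_new, v_new

# toy usage: two particles in a harmonic trap
r = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
v = np.zeros_like(r); m = np.array([1837.0, 1837.0])
for _ in range(100):
    r, v = md_step_andersen(r, v, m, lambda x: -0.01 * x, dt=10.0, T=300.0, nu=0.001)
```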
Nonadiabatic MD based on trajectory surface hopping algorithms can also be performed with the help of MLatom, currently via Newton-X [96]'s interface to MLatom [97, 98, 27]. MLatom also supports quantum dissipative dynamics as described in the next Section 5.6.

### Quantum dissipative dynamics

It is often necessary and beneficial to treat the entire system quantum mechanically and also include the environmental effects [100]. This is possible via many quantum dissipative dynamics (QD) algorithms, and an increasing number of ML techniques have been suggested to accelerate such simulations [98]. MLatom allows performing several unique ML-accelerated QD simulations using either a recursive scheme based on KRR [101], a conceptually different AI-QD approach[102] predicting the trajectories as a function of time, or the OSTL technique[103] outputting the entire trajectories in one shot. These approaches are enabled via an interface to the specialized program MLQD[104].

In the recursive KRR scheme, a KRR model is trained to establish a map between future and past dynamics. This KRR model, when provided with a brief snapshot of the current dynamics, can be leveraged to forecast the future dynamics. In the AI-QD approach, a convolutional neural network (CNN) model is trained to map simulation parameters and time to the corresponding system's state. Using the trained CNN model, the state of the system can be predicted at any time without the need to explicitly simulate the dynamics. Similarly, the ultra-fast OSTL method utilizes a CNN-based architecture and, based on the simulation parameters, predicts the future dynamics of the system's state up to a predefined time in a single shot. In addition, as optimization is a key component in training, users can optimize both KRR and CNN models using MLatom's grid-search functionality for KRR and Bayesian optimization via the hyperopt[71] library for CNN. Moreover, we also incorporate an auto-plotting functionality, where the predicted dynamics is plotted against the provided reference trajectory.

### Rovibrational (infrared and power) spectra

Rovibrational spectra can be calculated in several ways with MLatom. The simplest one is by performing frequency calculations on an optimized molecular geometry. This requires any model providing Hessians and, preferably, dipole moments. Another one is performing molecular dynamics simulations with any model providing energy gradients and then post-processing the trajectories. Both frequency calculations and the MD-based approach require the model to also provide dipole moments to calculate the absorption intensities. If no dipole moments are provided, only frequencies are available or, in the case of MD, only power spectra rather than IR spectra can be obtained. The IR spectra are obtained via the fast Fourier transform of the autocorrelation function of the dipole moment[104, 105] with our own implementation[90]. The power spectra only need the fast Fourier transform[105], which is also implemented[79] in MLatom. We have previously shown[90] that the high quality of the AIQM1 method results in rather accurate IR spectra obtained from MD simulations compared to spectra obtained with a representative DFT (which is also substantially slower; see the example in Figure 8) or a semi-empirical QM method.
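The MD-based route can be summarized in a short standalone sketch, not MLatom's implementation: compute the dipole autocorrelation function, apply a window, and Fourier-transform it; a power spectrum is obtained the same way from velocities instead of dipoles.

```python
import numpy as np

def ir_spectrum(dipoles, dt_fs):
    """IR lineshape from the dipole autocorrelation function of an MD trajectory.

    dipoles: array of shape (n_steps, 3); dt_fs: MD time step in femtoseconds.
    Returns wavenumbers (cm^-1) and unnormalized intensities.
    """
    mu = dipoles - dipoles.mean(axis=0)
    n = len(mu)
    # autocorrelation summed over Cartesian components (lags 0 .. n-1)
    acf = sum(np.correlate(mu[:, k], mu[:, k], mode='full')[n - 1:] for k in range(3))
    acf = acf / acf[0]
    spectrum = np.abs(np.fft.rfft(acf * np.hanning(n)))        # window reduces leakage
    freq_cm = np.fft.rfftfreq(n, d=dt_fs * 1e-15) / 2.99792458e10  # Hz -> cm^-1
    return freq_cm, spectrum

# toy usage: a 1500 cm^-1 oscillation sampled every 0.5 fs
t = np.arange(20000) * 0.5e-15
mu = np.zeros((20000, 3)); mu[:, 0] = np.cos(2 * np.pi * 2.99792458e10 * 1500 * t)
freq, spec = ir_spectrum(mu, dt_fs=0.5)
print(freq[np.argmax(spec)])  # peaks near 1500 cm^-1
```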
### One-photon UV/vis absorption spectra

UV/vis absorption spectra simulations are computationally intensive because they require calculating excited-state properties. In addition, better-quality spectra can be obtained via the nuclear ensemble approach (NEA)[107], which necessitates the calculation of excited-state properties for thousands of geometries to reach high precision. MLatom implements an interpolation ML-NEA scheme[30] that improves the precision of the spectra at a fraction of the computational cost of traditional NEA simulations. Currently, the ML-NEA calculations are based on interfaces to Newton-X[96] and Gaussian[55] and utilize the sampling of geometries from a harmonic Wigner distribution[108]. This scheme also automatically determines the optimal number of required reference calculations, providing a user-friendly, black-box implementation of the algorithm[29].

### Two-photon absorption

Beyond one-photon absorption, MLatom has an implementation of a unique ML approach for calculating two-photon absorption (TPA) cross sections of molecules based just on their SMILES strings[44] (which are converted into the required descriptors using the interface to RDKit[109]) and solvent information[31]. This ML-TPA approach is very fast, with accuracy comparable to much more computationally intensive QM methods. We provide an ML model pre-trained on experimental data. ML-TPA was tested in real laboratory settings and shown to provide a good estimate for new molecules not present in the training experimental database.

## 6 Machine learning

In Sections 4 and 5, we discussed the supported types of models and how they can be applied to simulations. Here, we briefly overview the general considerations for training and validating the ML models with MLatom. The models share MLatom's standard conventions for input, output, training, hyperparameter optimization, and testing, which allows one to conveniently switch from one model to another and benchmark them.

### Training

To create an ML model, the user has to choose and train the ML model and prepare the data. MLatom provides many tools for the different stages of this process. The model can be either chosen from a selection of provided types of ML models with pre-defined architectures or customized based on available algorithms and preset models. Once a model is chosen, it must be trained, and, in many cases, it is advisable or even required (particularly in the case of the kernel methods) to optimize its hyperparameters, which can be done as explained in Section 6.2.

For training, the data set should be appropriately prepared. MLatom has strict naming conventions for data set splits to avoid any confusion when changing and comparing different model types. All the data that is used directly or indirectly for creating an ML model is called the training set. This means that the validation set, which can be used for hyperparameter optimization or early stopping during NN training, is a subset of the training set. Thus, the part of the training set remaining after excluding the validation set is called the sub-training set and is actually used for training the model, i.e., optimizing the model parameters (weights in NN terminology and regression coefficients in kernel-methods terminology). MLatom can split the training data set into the sub-training and validation subsets or create a collection of these subsets via cross-validation [24, 29]. The sampling into the subsets can be performed randomly or using furthest-point or structure-based sampling.
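In the PyAPI, these conventions might translate into something like the following sketch of a KREG training run. The loader, splitting, and training call names are assumptions modeled on the descriptions in this article, not a verbatim API reference.

```python
import mlatom as ml  # function and keyword names are assumptions based on this section

# the training set: geometries with energy (and energy-gradient) labels,
# which would be attached from accompanying label files in practice
train_db = ml.data.molecular_database.from_xyz_file('training_geoms.xyz')

# split the training set into sub-training and validation subsets
subtraining_db, validation_db = train_db.split(
    fraction_of_points_in_splits=[0.8, 0.2], sampling='random')

# train a KREG model on the sub-training set
kreg = ml.models.kreg()
kreg.train(molecular_database=subtraining_db,
           property_to_learn='energy',
           xyz_derivative_property_to_learn='energy_gradients')
```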
In the case of kernel methods, the final model in MLatom is typically trained on the entire training set after the hyperparameter optimization. This is possible because the kernel methods have a closed, analytical solution for finding their regression coefficients, and after the hyperparameters are appropriately chosen, overfitting can be mitigated to a great extent. In the case of NNs, the final model is the one trained on the sub-training set, because it would be too dangerous to train on the entire training set without any validation subset to check for signs of overfitting.

#### 6.1.1 Training pre-defined types of ML models

Most pre-defined types of ML models, such as ANI-type or KREG models, expect XYZ molecular coordinates as input. These should either be provided by the user or can be obtained using MLatom's conversion routines, e.g., from SMILES strings [10], which rely on OpenBabel [111]'s Pybel API. These models have a default set of hyperparameters, but, especially in the case of kernel methods such as KREG, it is still strongly advised to optimize them. The models can, in principle, be trained on any molecular property. Most often, they are used to learn PESs and, hence, require energy labels in the training set. The PES model accuracy can be greatly improved if the energy gradients are also provided for training; thus, the increased training time is usually justified [39, 112]. An example of training and testing the KREG and DPMD models on a data set with energies and energy gradients for the urea molecule from the WS22 database[113] is shown in Figure 9. The KREG model is both faster to train and more accurate, which is a typical situation for small molecular databases, while for larger databases, NN-based models might be preferable[39].

Figure 9: Side-by-side comparison of the usage of MLatom in both command-line mode and via the Python API for training and testing the KREG and DeepPot-SE models on a 1000-point data set of the urea molecular PES randomly sampled from the WS22 database. The hyperparameter optimization required for the KREG model is also shown. Calculations were run on 36 Intel(R) Xeon(R) Gold 6240 CPUs @ 2.60GHz.

#### 6.1.2 Designing and training custom ML models

MLatom's user can also create models on any set of input vectors and labels using a variety of KRR kernel functions. In this case, hyperparameter optimization is strongly advised too. In all other aspects, training of such KRR models is similar to training the pre-defined models, i.e., the preparation of the data set is also performed by splitting it into the required subsets for training and validation. Importantly, the user can construct models of varying complexity by using the model tree implementation. Special cases of such composite models are \(\Delta\)-learning and self-correcting models, and they can be trained similarly to other ML models by supplying input vectors or XYZ coordinates and labels. In the case of \(\Delta\)-learning, the user needs to supply the baseline values. For other, more complicated models, the user must train and combine each component separately.

### Hyperparameter optimization

The performance of ML models strongly depends on the chosen hyperparameters, such as the regularization parameters and the number of layers in NNs. Hence, it is often necessary to optimize the hyperparameters to achieve reasonable results and to improve the accuracy. Hyperparameter optimization commonly requires multiple trainings, making it an expensive endeavor, and caution must be exercised in balancing performance against cost. MLatom can optimize hyperparameters by minimizing the validation loss using one of the many available algorithms.
The validation loss is usually based on the error on the validation set, which can be a single hold-out validation set or a combined cross-validation error. For a few hyperparameters, a robust grid search on the log or linear scale can be used to find the optimal values. It is a common choice for kernel methods (see Figure 9 for an example of optimizing the hyperparameters of the KREG model, which is a kernel method). For a larger number of hyperparameters, other algorithms are recommended instead. Popular choices are Bayesian optimization with the Tree-structured Parzen Estimator (TPE)[72] and many SciPy optimizers.

The choice of the validation loss also matters. In most cases, MLatom minimizes the root-mean-squared error (RMSE) for the labeled data. However, when multiple labels are provided, i.e., energies and energy gradients for learning PESs, a choice should be made on how to combine them in the validation loss. By default, MLatom calculates the geometric mean of the RMSEs for energies and gradients[29]. The users can also choose a weighted sum of RMSEs, but in this case, they must choose the weight. In addition, the user can supply MLatom with any custom validation loss function, which can be arbitrarily complicated.

### Evaluating models

Once the model has been trained, it is common to evaluate its generalization ability before deploying it in production simulations. MLatom provides dedicated options for such evaluations. The simplest and one of the most widespread approaches is calculating the error on an independent hold-out test set not used in the training. To emphasize, in MLatom terminology, the test set has no overlap with the training set, which might consist of the sub-training and validation subsets[29]. Alternatively, cross-validation and its variant, leave-one-out cross-validation, are recommended whenever computationally affordable, especially for small data sets.

MLatom provides a broad range of error measures for the test set, including the RMSE, mean absolute error (MAE), mean signed error, the Pearson correlation coefficient, the R\({}^{2}\) value, outliers, _etc._[29] The testing can be performed together with the training and hyperparameter optimization for most models, including \(\Delta\)-learning and self-correcting models. Since the errors depend on the size of the training set, learning curves showing this dependence are very useful for comparing different models[29]. MLatom can generate the learning curves, which have been instrumental in preparing guidelines for choosing the ML interatomic potential[39].
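A log-scale grid search of the kind described in Section 6.2 is easy to sketch for a toy KRR model. The standalone example below (not MLatom code) scores each \((\sigma,\lambda)\) pair by the hold-out validation RMSE and keeps the best pair.

```python
import numpy as np

def validation_rmse(sigma, lam, X_sub, y_sub, X_val, y_val):
    # Gaussian-kernel KRR trained on the sub-training set, scored on the validation set
    k = lambda A, B: np.exp(-((A[:, None] - B[None, :]) ** 2).sum(-1) / (2 * sigma**2))
    alpha = np.linalg.solve(k(X_sub, X_sub) + lam * np.eye(len(X_sub)), y_sub)
    return np.sqrt(np.mean((k(X_val, X_sub) @ alpha - y_val) ** 2))

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (300, 2)); y = np.sin(2 * X).sum(1)   # toy data set
X_sub, y_sub, X_val, y_val = X[:240], y[:240], X[240:], y[240:]

# grid search on a logarithmic scale, as is common for kernel methods
grid = [(s, l) for s in 2.0 ** np.arange(-3, 6) for l in 10.0 ** np.arange(-12, -1, 2)]
best = min(grid, key=lambda h: validation_rmse(*h, X_sub, y_sub, X_val, y_val))
print('best sigma, lambda:', best)
```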
## Summary

MLatom 3 is a unique software package combining machine learning and quantum mechanical models for accelerating and improving the accuracy of computational chemistry simulations. It can be used as a black-box package accepting input files with a simple structure or as a transparent Python module enabling custom workflows. MLatom provides access to pre-trained models such as AIQM1 and ANI-1ccx aiming at the high accuracy of the coupled-cluster level, making them more accurate and much faster than common DFT approaches for ground-state properties of closed-shell organic molecules. Another special pre-trained model can be used to simulate two-photon absorption spectra. The user of MLatom has the option to create their own models. Pre-defined ML architectures of the ANI-type, KREG, PhysNet, GAP-SOAP, DPMD, or sGDML kind make this easier. Alternatively, custom models of varying complexity, based on combinations of both ML and QM models such as \(\Delta\)-learning, can be easily built with the package. MLatom provides a toolset for training, hyperparameter optimization, and performance analysis of the models. This wide variety of models can be used for single-point calculations on large data sets, geometry optimizations, calculation of rovibrational (frequencies, IR spectra) and thermochemical (enthalpies, entropies, heats of formation) properties, molecular dynamics, and UV/vis absorption spectra. The ML models can also be trained and used for quantum dissipative dynamics simulations. The rich functionality of MLatom is available open source and can be exploited on the XACS cloud computing service. The package is accompanied by extensive and detailed manuals and tutorials that are developed and improved in close connection with teaching computational chemistry and machine learning in regular workshops and university courses.

## Data availability

No data was generated for this article.

## Code availability

The MLatom code is open-source and available both on GitHub ([https://github.com/dralgroup/mlatom](https://github.com/dralgroup/mlatom)) and on PyPI (i.e., it can be installed via the command pip install mlatom). The simulations can also be run on the MLatom@XACS cloud computing service at [https://XACScloud.com](https://XACScloud.com).

## Author contributions

P.O.D. is the lead designer, developer, and maintainer of MLatom. F.G. is co-maintaining the MLatom package, implemented the interfaces to third-party machine learning packages (PhysNet, DeePMD-kit, TorchANI, and GAP-SOAP) and to hyperopt, wrote the code for learning curves, and made numerous other improvements in MLatom. Y.F.H. co-implemented the KREG model, implemented molecular dynamics and vibrational spectra simulations, and improved many other parts of the code such as the interfaces. P.Z. implemented AIQM1 and the ANI family of models (ANI-1ccx, ANI-2x, ANI-1x, and their dispersion-corrected variants) through interfaces to third-party packages (MNDO, TorchANI, Sparrow) as well as geometry optimizations and frequency and thermochemistry simulations via interfaces to Gaussian, ASE, and TorchANI. Y.X.X.C. implemented the interfaces to PySCF and Orca and extended the thermochemical calculations to many methods. M.B. contributed to planning the implementation of MLPs and the methodology behind the ML-NEA approach. O.I. contributed to the research involving the AIQM1 methods and ANI universal potentials. C.W. led the development of the ML-TPA methodology. B.X.X. implemented the ML-NEA approach and the initial argument parsing routines. M.P.J. helped implement the interfaces to TorchANI, PhysNet, DeePMD-kit, and Newton-X. Y.S., Y.D., and Y.T.C. implemented the ML-TPA approach. L.Z. implemented routines for nonadiabatic dynamics and extensions of the MNDO interface to excited-state properties. S.Z. contributed to the atomic properties collection and implemented some of the NN-based approaches. A.U. interfaced MLQD to MLatom. Q.Z. contributed to the program documentation and tests. Y.O. contributed to the plotting routines. P.O.D. wrote the original manuscript, and all authors revised and commented on the manuscript. F.G., Y.F.H., Y.X.X.C., and P.O.D. prepared the figures.

## Acknowledgments

P.O.D. acknowledges funding by the National Natural Science Foundation of China (No.
22003051 and funding via the Outstanding Youth Scholars (Overseas, 2021) project), the Fundamental Research Funds for the Central Universities (No. 20720210092), and via the Lab project of the State Key Laboratory of Physical Chemistry of Solid Surfaces. This project is supported by the Science and Technology Projects of the Innovation Laboratory for Sciences and Technologies of Energy Materials of Fujian Province (IKKEM) (No. RD2022070103). M.B. and M.P.J. are financially supported by the European Union's Horizon 2020 research and innovation program under an ERC advanced grant (grant agreement No. 832237, SubNano). M.B. also acknowledges the Centre de Calcul Intensif d'Aix-Marseille. O.I. acknowledges support from the National Science Foundation (NSF), CHE-2154447. O.I. acknowledges the Extreme Science and Engineering Discovery Environment (XSEDE) Award CHE200122, which is supported by NSF Grant Number ACI-1053575. C.W. acknowledges funding support from the National Key R&D Program of China (2021YFA1502500), the National Natural Science Foundation of China (22071207, 22121001, 21721001, and 22003051), NFFTBS (No. J1310024), and the Fundamental Research Funds for the Central Universities (Nos. 20720220128 and 20720220011).
Machine learning (ML) is becoming increasingly widespread in computational chemistry; at the same time, the rapid development of ML methods calls for a flexible software framework for designing customized workflows. MLatom 3 is a program package that leverages the power of ML to enhance typical computational chemistry simulations and to create complex workflows. This open-source package offers users many choices for running simulations via command-line options, input files, or scripts using MLatom as a Python package, both on their own computers and on the online XACS cloud computing service (XACScloud.com). Computational chemists can calculate energies and thermochemical properties, optimize geometries, run molecular and quantum dynamics, and simulate (ro)vibrational, one-photon UV/vis absorption, and two-photon absorption spectra with ML, quantum mechanical
2309.14387
Exploring Robot Morphology Spaces through Breadth-First Search and Random Query
Evolutionary robotics offers a powerful framework for designing and evolving robot morphologies, particularly in the context of modular robots. However, the role of query mechanisms during the genotype-to-phenotype mapping process has been largely overlooked. This research addresses this gap by conducting a comparative analysis of query mechanisms in the brain-body co-evolution of modular robots. Using two different query mechanisms, Breadth-First Search (BFS) and Random Query, within the context of evolving robot morphologies using CPPNs and robot controllers using tensors, and testing them in two evolutionary frameworks, Lamarckian and Darwinian systems, this study investigates their influence on evolutionary outcomes and performance. The findings demonstrate the impact of the two query mechanisms on the evolution and performance of modular robot bodies, including morphological intelligence, diversity, and morphological traits. This study suggests that BFS is both more effective and efficient in producing high-performing robots. It also reveals that robot diversity was initially higher with BFS than with Random Query, but that in the Lamarckian system it declines faster, converging to superior designs, while in the Darwinian system BFS led to higher diversity at the end of the process.
Jie Luo
2023-09-25T06:46:19
http://arxiv.org/abs/2309.14387v1
# Exploring Robot Morphology Spaces through Breadth-First Search and Random Query ###### Abstract Evolutionary robotics offers a powerful framework for designing and evolving robot morphologies, particularly in the context of modular robots. However, the role of query mechanisms during the genotype-to-phenotype mapping process has been largely overlooked. This research addresses this gap by conducting a comparative analysis of query mechanisms in the brain-body co-evolution of modular robots. Using two different query mechanisms, Breadth-First Search (BFS) and Random Query, within the context of evolving robot morphologies using CPPNs and robot controllers using tensors, and testing them in two evolutionary frameworks, Lamarckian and Darwinian systems, this study investigates their influence on evolutionary outcomes and performance. The findings demonstrate the impact of the two query mechanisms on the evolution and performance of modular robot bodies, including morphological intelligence, diversity, and morphological traits. This study suggests that BFS is both more effective and efficient in producing high-performing robots. It also reveals that robot diversity was initially higher with BFS than with Random Query, but that in the Lamarckian system it declines faster, converging to superior designs, while in the Darwinian system BFS led to higher diversity at the end of the process. evolutionary robotics, artificial life, morphological evolution, query mechanism, CPPN, mapping, breadth-first search ## I Introduction Evolutionary robotics empowers the design and evolution of robot morphologies through a process of genotype to phenotype mapping. In the context of modular robots, the challenge lies in determining the presence or absence of specific components at precise positions within the robot body and striking a balance between exploring and exploiting the design space. Several genotype-to-phenotype mapping techniques have been employed in various research studies, including L-systems [1], CPPNs (Compositional Pattern-Producing Networks) [2, 3, 4], and Direct Mapping [5]. However, scant attention has been given to the query mechanism utilized in these mapping processes, despite its pivotal role in shaping the resultant robot bodies. This research aims to address this open research area by investigating different query mechanisms in the field of evolutionary robotics. The primary objective is to conduct a comparative analysis of query mechanisms and their influence on the evolution and performance of modular robot bodies. These investigations focus on understanding how different query mechanisms affect the key characteristics of evolved robot morphologies in evolutionary robot systems. To achieve this objective, we design and implement an experimental setup where we evolve modular robot morphologies using CPPNs with one commonly used query mechanism, Breadth-First Search (BFS) [6], and compare it with our own design, Random Query [7]. We test these two query mechanisms in two evolutionary systems to evolve both the body and the brain. The main contributions of this research are threefold. Firstly, we provide a comprehensive analysis of the influence of two different query mechanisms on the evolution and performance of modular robot morphologies. Secondly, we contribute to the understanding of genotype to phenotype mapping in modular robotics by highlighting the importance of the query mechanism and its impact on the diversity and complexity of evolved robot morphologies.
Our findings can inform the development of more effective approaches for evolving robot bodies and contribute to the advancement of adaptive and versatile robotic systems. Finally, we evaluate the efficiency and convergence properties of the query mechanisms, considering the computational resources required for generating desirable robot body configurations. This analysis provides valuable insights for researchers and practitioners working on evolutionary robotics, enabling them to make informed decisions regarding the choice of query mechanism based on their specific requirements and constraints. Overall, this research enhances our understanding of query mechanisms in genotype to phenotype mapping for modular robots and sheds light on key aspects of evolutionary robotics. ## II Evolution+Learning A search space comprises distinct layers that stack upon one another. At its foundational level lies the phenotype space, while one layer above it resides the genotype space, which may not always have a straightforward one-to-one correspondence with the phenotype layer. Numerous factors influence our search process, including reproduction operators and selection mechanisms, among others. Our particular focus revolves around examining how the query mechanisms employed for mapping the body genotype to the robot's morphology impact the exploration of the morphological search space. ### _Robot Phenotype_ #### II-A1 Robot Morphology We adopt RoboGen's components as the robot body's phenotype. RoboGen [8] is a popular open-source platform for evolving robots, offering modular components: a core component, one or more brick components, and active hinges. The phenotype follows a tree structure, with the core module as the root node, enabling 3D morphologies through 90-degree rotations. #### II-A2 Robot Controller We employ Central Pattern Generators (CPGs) for driving modular robots, a proven method for controlling various robot types [3, 9]. Each robot joint has an associated CPG consisting of three neurons: an \(x_{i}\)-neuron, a \(y_{i}\)-neuron, and an \(out_{i}\)-neuron. The \(x_{i}\) and \(y_{i}\) neuron states change over time by multiplying the activation value of the opposing neuron by a corresponding weight: \(\dot{x}_{i}=w_{i}y_{i}\) and \(\dot{y}_{i}=-w_{i}x_{i}\). To simplify, we set \(w_{x_{i}y_{i}}\) equal to \(-w_{y_{i}x_{i}}\), denoting their absolute value as \(w_{i}\). Initial states of all \(x\) and \(y\) neurons are \(\frac{\sqrt{2}}{2}\) to create a sine wave with an amplitude of 1, matching joint rotation limits. To allow complex output patterns, we implement connections between neighboring joint CPGs. For the \(i_{th}\) joint and \(\mathcal{N}_{i}\) as the set of neighboring joint indices, with \(w_{ij}\) representing the connection weight between \(x_{i}\) and \(x_{j}\) (also set to \(-w_{ji}\)), the system of differential equations becomes: \[\begin{split}\dot{x}_{i}&=w_{i}y_{i}+\sum_{j\in \mathcal{N}_{i}}w_{ji}x_{j}\\ \dot{y}_{i}&=-w_{i}x_{i}\end{split} \tag{1}\] Due to this addition, \(x\) neurons are no longer bounded within \([-1,1]\). To handle this, we use the hyperbolic tangent function (_tanh_) as the activation function for \(out_{i}\)-neurons; a short numerical sketch of these dynamics is given below. ### _Robot Genotype_ #### II-B1 Body Genotype The phenotype of bodies is encoded in a Compositional Pattern Producing Network (CPPN), which was introduced by Stanley [2] and has been successfully applied to the evolution of both 2D and 3D robot morphologies in prior studies [10].
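As a concrete illustration of the controller of Sec. II-A2, the sketch below integrates Eq. (1) with a simple Euler scheme; the coupling values and step size are illustrative assumptions, not values taken from the experiments.

```python
import numpy as np

def cpg_step(x, y, w, W, dt=0.01):
    """One Euler step of Eq. (1): dx_i = w_i*y_i + sum_j w_ji*x_j, dy_i = -w_i*x_i.

    x, y : neuron states, one pair per joint
    w    : internal CPG weights w_i
    W    : antisymmetric neighbour-coupling matrix with W[j, i] = w_ji
    """
    dx = w * y + W.T @ x   # neighbour sum: sum_j w_ji * x_j
    dy = -w * x
    return x + dt * dx, y + dt * dy

# Two coupled joints, initial states sqrt(2)/2 as stated in the text.
x = np.full(2, np.sqrt(2) / 2)
y = np.full(2, np.sqrt(2) / 2)
w = np.array([1.0, 1.0])                   # illustrative internal weights
W = np.array([[0.0, 0.3], [-0.3, 0.0]])   # w_ij = -w_ji
for _ in range(1000):
    x, y = cpg_step(x, y, w, W)
out = np.tanh(x)  # bounded joint commands from the tanh output neurons
```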
The structure of the CPPN has four inputs and five outputs. The first three inputs are the x, y, and z coordinates of a component, and the fourth input is the distance from that component to the core component in the tree structure. The first three outputs are the probabilities of the module being a brick, a joint, or empty space, and the last two outputs are the probabilities of the module being rotated 0 or 90 degrees. For both module type and rotation, the output with the highest probability is always chosen; randomness is not involved. #### II-B2 Brain Genotype We utilize an array-based structure for the brain's genotypic representation to map the CPG weights. This is achieved via direct encoding, a method chosen specifically for its potential to enable reversible encoding in future stages. We have seen how every modular robot can be represented in a 3D grid in which the core module occupies the central position and each module's position is given by a triple of coordinates. When building the controller from our genotype, we use the coordinates of the joints in the grid to locate the corresponding CPG weight. To reduce the size of our genotype, instead of the 3D grid we use a simplified 2D grid in which the third dimension is removed. For this reason, some joints might end up with the same coordinates and will be dealt with accordingly. Since our robots have a maximum of 10 modules, every robot configuration can be represented in a grid of \(21\times 21\). Each joint in a robot can occupy any position of the grid except the center. For this reason, the possible positions of a joint in our morphologies are exactly \((21\cdot 21)-1=440\). We can represent all the internal weights of every possible CPG in our morphologies as a \(440\)-long array. When building the phenotype from this array, we can simply retrieve the corresponding weight starting from a joint's coordinates in the body grid. To represent the external connections between CPGs, we need to consider all the possible neighbours a joint can have. In the 2-dimensional grid, the number of cells in a distance-2 neighbourhood for each position is given by the Delannoy number \(D(2,2)=13\), including the central element. Each one of the neighbours can be identified using the relative position from the joint taken into consideration. Since our robots can assume 3D positions, we need to consider an additional connection for modules with the same 2D coordinates. To conclude, for each of the \(440\) possible joints in the body grid, we need to store 1 internal weight for its CPG, 12 weights for external connections, and 1 weight for connections with CPGs at the same coordinates, for a total of 14 weights. The genotype used to represent the robots' brains is therefore an array of size \(440\times 14\); a sketch of this indexing follows Fig. 1 below. An example of the brain genotype of a "+" shape robot is shown in Figure 2. It is important to notice that not all the elements of the genotype matrix are going to be used by each robot. This means that their brain's genotype can carry additional information that could be exploited by their children with different morphologies. ### _Query Mechanisms_ The query mechanism is a critical aspect of the genotype-to-phenotype translation process in designing robot bodies. It serves as the bridge between the genetic information encoded in the genotypes (such as CPPN, L-system, array) and the actual physical characteristics of the robot. Essentially, the query mechanism is a technique used to extract information from the genotypic representation to determine the composition and arrangement of modules in the resulting robot body. Fig. 1: Brain phenotype (CPG network) of a "+" shape robot. In our design, the topology of the brain is determined by the topology of the body.
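The bookkeeping behind the \(440\times 14\) brain genotype can be sketched as follows. The paper fixes only the matrix shape; the row-major ordering that skips the centre cell is an assumed convention for illustration.

```python
import numpy as np

GRID = 21                       # 21 x 21 body grid
N_SLOTS = GRID * GRID - 1       # 440 possible joint positions (centre excluded)
N_WEIGHTS = 14                  # 1 internal + 12 planar neighbours + 1 same-cell

brain_genotype = np.random.uniform(-1, 1, size=(N_SLOTS, N_WEIGHTS))

def slot_index(dx, dy):
    """Map a joint's coordinates relative to the core (dx, dy in -10..10)
    to a row of the genotype matrix (row-major, centre skipped)."""
    x, y = dx + GRID // 2, dy + GRID // 2
    flat = x * GRID + y
    center_flat = (GRID // 2) * GRID + GRID // 2
    if flat == center_flat:
        raise ValueError("the core occupies the centre; no joint allowed there")
    return flat if flat < center_flat else flat - 1

# Internal CPG weight of the joint at (1, 0) from Fig. 2 (column 0 assumed internal).
w_internal = brain_genotype[slot_index(1, 0), 0]
```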
To produce the phenotypes of the robot bodies, the core component is generated at the origin. Then, two different mechanisms are used to query the CPPN-based genotypes: Breadth-First Search: an algorithm for searching a tree data structure for a node that satisfies a given property [11]. It starts at the tree root and explores all nodes at the present depth prior to moving on to the nodes at the next depth level. We move outwards from the core component until there are no open sockets (breadth-first exploration), querying the CPPN network to determine whether a module will be placed at each location, its type, and its rotation. If a module would be placed in a location already occupied by a previous module, the module is simply not placed and the branch ends there. Random Query: an algorithm for searching a tree data structure that queries nodes randomly, with a given number of queries. All open sockets have an equal chance of being randomly selected to be queried, in no specific order. The CPPN network determines the type and rotation of each module. If a module would be placed in a location already occupied by a previous module, then this module is not expressed in the body. Nine queries are applied. For both methods, the coordinates of each module are integers; a module attached to the front of the core module will have coordinates (0,1,0). We stop when ten modules have been created. ### _Learning Algorithm_ We use Reversible Differential Evolution (RevDE) [12] as the learning algorithm because it has proven to be effective in previous research [3]. This method works as follows: 1. Initialize a population with \(\mu\) samples (\(n\)-dimensional vectors), \(\mathcal{P}_{\mu}\). 2. Evaluate all \(\mu\) samples. 3. Apply the reversible differential mutation operator and the uniform crossover operator. _The reversible differential mutation operator_: Three new candidates are generated by randomly picking a triplet from the population, \((\mathbf{w}_{i},\mathbf{w}_{j},\mathbf{w}_{k})\in\mathcal{P}_{\mu}\), then all three individuals are perturbed by adding a scaled difference. 4. Perform a selection over the population based on the fitness value and select \(\mu\) samples. 5. Repeat from step (2) until the maximum number of iterations is reached. As explained above, we apply RevDE here as a learning method for 'newborn' robots. In particular, it will be used to optimize the weights of the CPGs of our modular robots for the tasks during the Infancy stage. Algorithm 1 displays the pseudocode of the complete integrated process of evolution and learning. With the highlighted yellow code it is the Lamarckian system; without it, the Darwinian system. Note that for the sake of generality, we distinguish two types of quality testing depending on the context, evolution or learning.
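The description above only states that the three sampled individuals are "perturbed by adding a scaled difference". The sketch below uses the reversible update from the original RevDE paper [12]; the exact form and the scaling factor F are assumptions to be checked against that reference.

```python
import numpy as np

def revde_perturb(w_i, w_j, w_k, F=0.5):
    """Reversible differential mutation: three new candidates from one triplet
    (form assumed from the original RevDE paper)."""
    y1 = w_i + F * (w_j - w_k)
    y2 = w_j + F * (w_k - y1)
    y3 = w_k + F * (y1 - y2)
    return y1, y2, y3

# Perturb a random triplet from a population of flattened CPG weight vectors.
rng = np.random.default_rng(0)
pop = rng.uniform(-1, 1, size=(10, 440 * 14))
i, j, k = rng.choice(len(pop), size=3, replace=False)
candidates = revde_perturb(pop[i], pop[j], pop[k])
```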
```
 1: INITIALIZE robot population
 2: EVALUATE each robot
 3: while not STOP-EVOLUTION do
 4:   SELECT parents;
 5:   RECOMBINE+MUTATE parents' bodies;
 6:   MUTATE parents' brains;
 7:   CREATE offspring robot body;
 8:   CREATE offspring robot brain;
 9:   INITIALIZE brain(s) for the learning process;
10:   while not STOP-LEARNING do
11:     ASSESS offspring;
12:     GENERATE new brain for offspring;
13:   endwhile
14:   EVALUATE offspring with the learned brain;
15:   UPDATE brain genotype
16:   SELECT survivors / UPDATE population
17: endwhile
```
**Algorithm 1** Evolution+Learning ### _Task and Fitness function_ Point navigation is a closed-loop controller task which needs feedback (coordinates) from the environment to be passed to the controller to steer the robot. The coordinates are used to obtain the angle between the current position and the target. If the target is on the right, the right joints are slowed down and vice versa. A robot is spawned at the centre of a flat arena (10 \(\times\) 10 m\({}^{2}\)) to reach a sequence of target points \(P_{1},...,P_{N}\). In each evaluation, the robot has to reach as many targets in order as possible. Success in this task requires the ability to move fast to reach one target and then quickly change direction to another target in a short duration. A target point is considered to be reached if the robot gets within 0.01 meters from it. Considering the experimental time, we set the simulation time per evaluation to be 40 seconds, which allows robots to reach at least 2 targets \(P_{1}(1,-1),P_{2}(0,-2)\). Fig. 2: Brain genotype to phenotype mapping of a "+" shape robot. The left image (brain phenotype) shows the schema of the "+" shape robot with the coordinates of its joints in the 2D body grid. The right image (brain genotype) is the distance-2 neighbourhood of the joint at (1,0). The coordinates reported in the neighbourhood are relative to this joint. The CPG weight of the joint is highlighted in purple and its 2-distance neighbours are in blue. The data collected from the simulator is the following: * The coordinates of the core component of the robot at the start of the simulation, approximately \(P_{0}(0,0)\); * The coordinates of the robot, sampled during the simulation at 5 Hz, allowing us to plot and approximate the length of the travelled path \(L\); * The coordinates of the robot at the end of the simulation, \(P_{T}(x_{T},y_{T})\); * The coordinates of the target points \(P_{1}(x_{1},y_{1})\)... \(P_{n}(x_{n},y_{n})\). The fitness function for this task is designed to maximize the number of targets reached and minimize the path followed by the robot to reach the targets: \[F=\sum_{i=1}^{k}dist(P_{i},P_{i-1})+\big{(}dist(P_{k+1},P_{k})-dist(P_{T},P_{k+1})\big{)}-\omega\cdot L \tag{2}\] where \(k\) is the number of target points reached by the robot at the end of the evaluation, and \(L\) is the path travelled. The first term of the function is a sum of the distances between the target points the robot has reached. The second term is necessary when the robot has not reached all the targets; it credits the distance travelled toward the next unreached target \(P_{k+1}\). The last term is used to penalize longer paths, and \(\omega\) is a constant scalar that is set to 0.1 in the experiments.
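A direct transcription of Eq. (2), with dist as Euclidean distance and using the next-unreached-target convention described in the text; the function signature mirrors the data listed above (targets, number reached, final position, path length).

```python
import numpy as np

def dist(p, q):
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

def fitness(targets, k, p_final, path_length, omega=0.1):
    """Eq. (2): credit targets reached in order, credit progress toward the
    next unreached target, and penalize the travelled path length.

    targets : [P_0, P_1, ..., P_N] including the starting point P_0
    k       : number of target points reached
    """
    reached = sum(dist(targets[i], targets[i - 1]) for i in range(1, k + 1))
    progress = 0.0
    if k + 1 < len(targets):  # second term only if some target is unreached
        progress = dist(targets[k + 1], targets[k]) - dist(p_final, targets[k + 1])
    return reached + progress - omega * path_length

# Worked example from the text: both targets reached along the shortest path.
targets = [(0, 0), (1, -1), (0, -2)]
print(fitness(targets, k=2, p_final=(0, -2), path_length=2 * np.sqrt(2)))
# 2.546..., the value of approximately 2.54 quoted in the text
```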
For example, if a robot has just reached 2 targets, the maximum fitness value will be \(dist(P_{1},P_{0})+(dist(P_{2},P_{1})-dist(P_{T},P_{2}))-0.1\cdot L=\sqrt{2}+\sqrt{2}-0.2\cdot\sqrt{2}\approx 2.54\) (\(L\) is the shortest path length through \(P_{1}\) and \(P_{2}\), equal to \(2\sqrt{2}\), and \(P_{T}=P_{2}\), so the last distance term vanishes). ## III Experimental Setup The stochastic nature of evolutionary algorithms requires multiple runs under the same conditions and a sound statistical analysis [13]. We perform 10 runs for each query mechanism and evolutionary system, namely BFS Darwinian, BFS Lamarckian, Random Query Darwinian, and Random Query Lamarckian; in total, 40 experiments. Each experiment consists of 30 generations with a population size of 50 individuals and 25 offspring. A total of \(50+(25\cdot(30-1))=775\) morphologies and controllers are generated, and then the learning algorithm RevDE is applied to each controller. For RevDE we use a population of 10 controllers for 10 generations, for a total of \((10+30\cdot(10-1))=280\) performance assessments. The fitness measure used to guide the evolutionary process is the same as the performance measure used in the learning loop. For this reason, we use the same test process for both. The tests for the task of point navigation use 40 seconds of evaluation time with two target points at the coordinates \((1,-1)\) and \((0,-2)\). All the experiments are run with a Mujoco-based simulator wrapper called Revolve2 on a 64-core Linux computer, where they each take approximately 7 hours to finish. The code for replicating this work and carrying out the experiments is available online: [https://shorturl.at/aES26](https://shorturl.at/aES26). ## IV Results To compare the effects of BFS and Random Query, we consider two generic performance indicators, efficiency and efficacy, and we also look into the robots' morphologies. ### _Robot Performance_ #### IV-A1 Efficacy: the average fitness in the final generation. Figure 3 shows that both query mechanisms can produce robots able to solve the task, but robots queried by BFS are approximately 20% better. Moreover, around generation 14 the Lamarckian system had already significantly outperformed the result that the Darwinian system produced only by the end of the evolutionary process. This holds true for both query mechanisms. #### IV-A2 Efficiency: how much effort is needed to reach a given quality threshold (fitness level). It is calculated as the number of solution evaluations until the quality threshold is reached. BFS in the Lamarckian system is the most efficient, as it finds the best solution (maximum fitness) fastest (Figure 3). ### _Robot Morphologies_ #### IV-B1 Morphological intelligence: in this research, we consider a special property of robot morphology, morphological intelligence. Morphology influences how the brain learns: some bodies are more suitable for the brains to learn with than others, and how well the brain learns can be empowered by a better body. Therefore we define the intelligence of a body as a measure of how well it facilitates the brain to learn and achieve tasks.
To quantify this measure, we ran an extra experiment, using the fixed bodies of the 50 initial robots from the first generation of each run to evolve only their brains with these two methods; we then calculate the learning delta of each experiment, being the fitness value after the parameters were learned minus the fitness value before the parameters were learned. We finally quantify morphological intelligence by the delta of the learning deltas of each method, being the learning delta of the evolved body minus the learning delta of the fixed body (a minimal sketch of this bookkeeping is given at the end of this section). \begin{table} \begin{tabular}{l|c|l} \hline \hline Parameters & Value & Description \\ \hline Population size & 50 & Number of individuals per generation \\ Offspring size & 25 & Number of offspring produced per generation \\ Generations & 30 & Termination condition for each run \\ Learning trials & 280 & Number of evaluations performed by RevDE on each robot \\ Tournament size & 2 & Number of individuals used in parent selection (k-tournament) \\ Repetitions & 10 & Number of repetitions per experiment \\ \hline \hline \end{tabular} \end{table} TABLE I: Main experiment parameters. In Figure 4, we see that the average learning \(\Delta\) of both methods with evolved bodies grows steadily across the generations. This effect has been discovered previously in [3, 14], with different tasks, a different learning method, and a different representation, so the current results provide additional support that lifetime learning leads the evolutionary search towards morphologies with increasing learning potential. In contrast, the average learning \(\Delta\)s of both methods with fixed bodies show no significant change, which indicates that there is low morphological intelligence in the fixed robot bodies. The morphological intelligence in the Lamarckian system is 30% greater than that in the Darwinian system, as indicated by the higher delta of the learning delta. The delta of the learning delta in BFS is about 75% higher than in Random Query, which indicates more morphological intelligence in the bodies produced by BFS. #### IV-B2 Diversity: the morphological variety of each population, measured using tree-edit distance. It is measured in two steps: firstly, the measure of difference between any two robots, denoted as \(d(x,y)\); and secondly, the measure of diversity within a population, represented by the average distance along the evolutionary process. Figure 5 demonstrates that initially, robots generated by BFS exhibit greater diversity compared to those generated by Random Query. Moreover, the morphological diversity of the Lamarckian system using BFS diminishes at a notably faster rate than the other three methods, indicating a convergence toward superior body designs at a faster pace. In the case of the Darwinian system, employing BFS led to a higher diversity value at the conclusion of the evolutionary process. #### IV-B3 Morphological traits: we additionally examine the morphological characteristics of the robots, delving into eight specific traits (further information on the measurements can be found in [15]). Figure 8 illustrates that the differences among robots generated by the two evolutionary systems are notably larger when employing the Random Query method across all morphological traits, except for branching and symmetry, as opposed to using BFS. Except for 'rel_num_bricks', the values of all the other morphological traits from BFS are higher than those from Random Query.
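As referenced above, the morphological-intelligence measure reduces to a difference of differences; the sketch below makes the bookkeeping explicit, with random arrays standing in for measured fitness values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder fitness values for 50 robots, before and after learning,
# for evolved bodies vs. the fixed first-generation bodies.
before_evolved, after_evolved = rng.uniform(0, 1, 50), rng.uniform(1, 3, 50)
before_fixed, after_fixed = rng.uniform(0, 1, 50), rng.uniform(0.5, 1.5, 50)

# Learning delta: fitness after learning minus fitness before learning.
delta_evolved = (after_evolved - before_evolved).mean()
delta_fixed = (after_fixed - before_fixed).mean()

# Morphological intelligence: the delta of the learning deltas.
morphological_intelligence = delta_evolved - delta_fixed
print(morphological_intelligence)
```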
This means that robots produced by BFS are much more symmetrical, have more branching, more hinges, and fewer bricks compared to the ones produced by Random Query. Furthermore, a PCA analysis (Figure 7) employing these identical eight traits reveals no difference in the morphologies generated by the two evolutionary systems using BFS (subplot a). When employing the Random Query approach, there is a slight variation in the clustering circles (subplot b). Hence, when applying the same query mechanism, the distinctions in the robots produced by the two evolutionary systems are marginal, whereas the differences in the robot bodies resulting from the two query mechanisms are considerable. This is also supported by Figure 6, which displays the 10 best robots produced by each method. The morphologies of the best-performing robots using BFS mainly converged into a "+" shape, while using the Random Query, the morphologies predominantly converge into an "L" shape, irrespective of the evolution system used. The best morphologies evolved by BFS from both evolution systems typically feature three or four limbs, primarily consisting of hinges with either no bricks or just one. In contrast, those generated through the Random Query method tend to have a relatively higher likelihood of containing one or two bricks and consist of only two limbs. ## V Conclusions and Future Work In this research, we investigated the influence of two different query mechanisms used in genotype to phenotype mapping within two evolutionary robotics systems. Fig. 3: Mean (lines) and maximum (dots) fitness over 30 generations (averaged over 10 runs) for the Lamarckian system in purple and the Darwinian system in blue. Subfigure (a) exhibits mean average fitness for robots produced with BFS, and Subfigure (b) is for Random Query. The bands indicate the 95% confidence intervals (\(\pm 1.96\times SE\), Standard Error). Based on our analysis, we draw the following conclusions: Firstly, the choice of query mechanism significantly affects the evolution and performance of modular robot bodies. Robots queried by BFS exhibited approximately 20% better efficacy in solving the given task. Additionally, BFS in the Lamarckian system demonstrated superior efficiency, finding the best solution faster compared to Random Query. Secondly, the query mechanism plays a crucial role in shaping the morphological intelligence of evolved robot bodies. Our experiments showed that morphological intelligence, measured as the ability of the body to facilitate learning in the brain, was significantly higher in robots produced by BFS. This highlights the importance of the query mechanism in determining the learning potential and adaptability of the evolved robot morphologies. Furthermore, our analysis revealed that the query mechanism influenced the diversity and morphological traits of the evolved robot bodies. Robots produced by BFS exhibited higher diversity initially. In the Lamarckian system, it declines faster, converging to superior designs, while in the Darwinian system, BFS led to higher diversity at the end of the process.
Regarding morphological traits, for the same query mechanism the distinctions in the robots produced by the two evolutionary systems are marginal, whereas the differences in the robot bodies resulting from the two query mechanisms are considerable. In conclusion, BFS offers a systematic and deterministic approach, ensuring the exploration of every possible branch of the genotype tree. This results in increased stability and efficiency. On the contrary, the Random Query approach, in theory, introduces variability that might lead to innovative body designs - the primary rationale behind our initial choice. However, our experimental results do not definitively showcase any discernible advantages. As we move forward, there is scope to explore alternative query mechanisms within various evolutionary frameworks.
Evolutionary robotics provides a powerful framework, particularly for the design and evolution of modular robots. However, the influence of the query mechanism in the genotype-to-phenotype mapping process has been largely ignored. This study performs a comparative analysis of query mechanisms in the brain-body co-evolution of modular robots, using two different query mechanisms, BFS and Random Query, with CPPNs for the robot bodies and tensors for the robot controllers, tested in two evolutionary frameworks, a Lamarckian and a Darwinian system. The study investigates how these query mechanisms influence evolutionary outcomes and performance. The results show the impact of the two query mechanisms on the evolution and performance of modular robots, spanning morphological intelligence, diversity, and morphological traits. This study suggests that BFS is both more effective and efficient in producing high-performing robots.
2310.00291
Coexistence of insulating phases in confined fermionic chains with a Wannier-Stark potential
We study fermions on a finite chain, interacting repulsively when residing on the same and on nearest-neighbor sites, and subjected to a Wannier-Stark linearly-varying potential. Using the density matrix renormalization-group numerical technique to solve this generalized extended Hubbard model, the ground state exhibits a staircase of (quasi) plateaus in the average local site density along the chain, decreasing from being doubly-filled to empty as the potential increases. These `plateaus' represent locked-in commensurate phases of charge density waves together with band and Mott insulators. These phases are separated by incompressible regions with incommensurate fillings. It is suggested that experimental variations of the slope of the potential and of the range of the repulsive interactions will produce such a coexistence of phases which have been individually expected theoretically and observed experimentally for uniform systems.
N. Aucar Boidi, K. Hallberg, A. Aharony, O. Entin-Wohlman
2023-09-30T07:48:44
http://arxiv.org/abs/2310.00291v2
# Coexistence of insulating phases in confined fermionic chains with a Wannier-Stark potential ###### Abstract We study fermions on a finite chain, interacting repulsively when residing on the same and on nearest-neighbor sites, and subjected to a Wannier-Stark linearly-varying potential. Using the density matrix renormalization-group numerical technique to solve this generalized extended Hubbard model, the ground state exhibits a staircase of (quasi) plateaus in the average local site density along the chain, decreasing from being doubly-filled to empty as the potential increases. These 'plateaus' represent locked-in commensurate phases of charge density waves together with band and Mott insulators. These phases are separated by incompressible regions with incommensurate fillings. It is suggested that experimental variations of the slope of the potential and of the range of the repulsive interactions will produce such a coexistence of phases which have been individually expected theoretically and observed experimentally for uniform systems. _Introduction.--_ The complexity of quantum many-body systems originates from the interplay of strong interactions, quantum statistics, and the large number of quantum-mechanical degrees of freedom. This interplay generates a multitude of phases, e.g., insulating commensurate charge (CDW) and spin (SDW) density waves and compressible (metallic) phases. This complexity already shows up in one dimension, in which one can use (and test) a variety of theoretical and experimental tools for their study. The simplest picture for interacting particles in one dimension (1D) is given by the Hubbard Hamiltonian, which includes interactions, \(U\), only between particles residing on the same lattice site [1]. This interaction competes with the kinetic [nearest-neighbor (nn) tunneling] energy, \(t\), resulting, for instance, in antiferromagnetic structures [2]. However, this simple Hamiltonian cannot reproduce certain phases, like charge density waves. Those are generated, _e.g._, by the _extended Hubbard model_, which also includes nn interactions, \(V\). Its one-dimensional version reveals a rich phase diagram, which includes band and Mott insulating phases [3], SDW, CDW, and metallic phases [4; 5; 6; 7]. It has also been used to describe data collected in experiments performed on chains of cold atoms [8; 9]. In higher dimensions, it has been used to describe bulk and edge states in electronic insulators [10]. An exact analytic solution of the extended Hubbard Hamiltonian, in particular on a finite chain (the system amenable to cold-atom experiments), has not yet been found. It has been studied by a variety of numerical and approximate methods (e.g., Refs. [11; 12; 13; 14; 15; 16]), emphasizing the half-filled case, where one finds (for fermions in 1D) the insulating Mott antiferromagnetic phase [3] and CDW phases. Experiments on cold-atom arrays naturally involve finite samples. Numerical calculations performed on such systems used various boundary conditions: hard walls, periodic and open boundaries, or potentials representing confining harmonic traps [17; 9; 18]. These works concentrate mostly on the region around the 'center' of the confined structure, whose details are usually not sensitive to the particular form of the boundaries, and so its possible structures are determined by \(U,\;V\) and the particle density \(n\). Remarkably, experiments (e.g., on cold atoms) have observed some of the theoretically predicted phases [19; 20].
Less attention has been paid to the structures near the 'edges' of the samples and to their dependence on the details of the boundary conditions, in particular when the confinement is achieved by varying site energies. Such a confining scheme has been recently considered, using the self-consistent Hartree-Fock approximation, for the two-dimensional extended Hubbard Hamiltonian, where coexistence of various structures (phases) was found near the free ends of the samples [10]. In this Letter we generalize the extended Hubbard Hamiltonian to a 1D fermionic chain, confined by a _linear potential_, which mimics either edge configurations in bulk systems or cold-atom arrays placed in an electric field. Such a potential can be produced by a longitudinal electric field, as in the Wannier-Stark model [21]. Given the complex nature of the many-body problem associated with our system, we resort to one of the most accurate numerical methods for correlated systems, the density matrix renormalization group (DMRG) [22; 23; 24; 25; 26; 27], which uses quantum information to keep the most relevant states. As we show, the linear potential generates in the ground state the simultaneous existence of segments in which different phases coexist, each of which has been observed separately before, on long uniform chains. Our results are presented by plots of the local quantum-averaged density on the sites \(i\) of the chain, \(\langle n_{i}\rangle\), the nn density-density correlations \(\langle n_{i}n_{i+1}\rangle\), and the nn spin-spin correlations \(\langle s^{z}_{i}s^{z}_{i+1}\rangle\) (e.g., Fig. 2). Instead of a smooth decrease, the local average \(\langle n_{i}\rangle\) shows flat steps, corresponding to locked-in Mott or CDW structures (e.g., \(212121\dots\), \(101010\dots\) [28]). These locked-in steps are similar to those observed for commensurate wave vectors in the devil's staircase [29; 30]. Between these steps, \(\langle n_{i}\rangle\) decreases more smoothly, representing incommensurate regions, which can be thought of as 'domain walls' with varying lengths [31]. As shown below, the local density of states on these intermediate sites exhibits small energy gaps, which imply that they are incompressible (insulating), in spite of having incommensurate fillings. We will refer to them hereafter as incompressible incommensurate-filling ('IIF') phases. The specific sequence of phases, and their sizes, can be modified experimentally, e.g., by changing the slope of the potential. Neighboring structures in a sequence are often also neighboring in the phase diagrams found for uniform systems (which are not subjected to the linear potential). _Model.--_ We study the generalized 1D extended Hubbard Hamiltonian \[\mathcal{H}= -t\sum_{i,\sigma}\big{(}c^{\dagger}_{i,\sigma}c_{i+1,\sigma}+{\rm h.c.}\big{)}+\sum_{i}(\mu_{i}-\mu)n_{i}\] \[+U\sum_{i}\big{(}n_{i,\uparrow}-1/2\big{)}\big{(}n_{i,\downarrow}-1/2\big{)}\] \[+V\sum_{i}\big{(}n_{i}-1\big{)}\big{(}n_{i+1}-1\big{)}\, \tag{1}\] where \(i\) is the site index, \(i=0,\dots,L-1\) (we consider an odd number of sites without loss of generality). Here, \(\mu\) is the fixed external chemical potential, \(c^{\dagger}_{i,\sigma}\) creates an electron with spin \(\sigma(=\uparrow,\downarrow)\) at site \(i\), \(n_{i,\sigma}=c^{\dagger}_{i,\sigma}c_{i,\sigma}\), \(n_{i}=n_{i,\uparrow}+n_{i,\downarrow}\), while \(U\) and \(V\) are the repulsive interactions between electrons on the same and nn sites, respectively (see Fig. 1).
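Before specifying the confining potential, it is instructive to evaluate the interaction terms of Eq. (1) for a fixed occupation pattern. The sketch below computes the diagonal (atomic-limit) energy, with the hopping term deliberately omitted; it is a toy check of which charge patterns the \(U\) and \(V\) terms favour, not the DMRG calculation itself.

```python
import numpy as np

def diagonal_energy(n_up, n_dn, mu_i, mu=0.0, U=10.0, V=3.0):
    """Diagonal part of Eq. (1) for occupations n_{i,sigma} in {0, 1};
    the kinetic (t) term is omitted, i.e. the atomic limit."""
    n_up, n_dn = np.asarray(n_up, float), np.asarray(n_dn, float)
    n = n_up + n_dn
    e_pot = np.sum((mu_i - mu) * n)                    # site energies
    e_u = U * np.sum((n_up - 0.5) * (n_dn - 0.5))      # on-site repulsion
    e_v = V * np.sum((n[:-1] - 1.0) * (n[1:] - 1.0))   # nn repulsion
    return e_pot + e_u + e_v

L = 9
mu_i = np.zeros(L)  # uniform potential for this toy comparison
mott = diagonal_energy(np.ones(L), np.zeros(L), mu_i)                         # 111... pattern
cdw = diagonal_energy(np.tile([1, 0], 5)[:L], np.tile([1, 0], 5)[:L], mu_i)   # 2020... pattern
print(mott, cdw)
```

For sufficiently large \(V\) (around \(V>U/2\), up to boundary corrections on this short open chain) the \(202020\dots\) pattern wins over the Mott \(111\dots\) pattern in this toy comparison, in line with the phase competition discussed below.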
The site-dependent local energy (the Wannier-Stark potential) \(\mu_{i}\) describes a linear external potential, \[\mu_{i}=\mu_{0}[i/i_{c}-1]. \tag{2}\] The site \(i_{c}=(L-1)/2\) represents the center of the 'edge', where \(\mu_{i_{c}}=0\). The particular form of \(\mathcal{H}\) was chosen so that at \(\mu=0\) (up to a constant energy) it is particle-hole symmetric when \(i\to L-1-i\) and \(n_{i}\to 2-n_{i}\). In that case we always have \(n_{i_{c}}=1\). For an infinite chain, \(\mu_{i}\) is large and negative at large and negative \(i\), and therefore we expect all the sites there to be filled, i.e.,\(n_{i}=n_{i,\uparrow}+n_{i,\downarrow}=2\). Similarly, \(\mu_{i}\) is large and positive at large and positive \(i\), and therefore we expect all the sites there to be empty, i.e. \(n_{i}=0\), as drawn in Fig. 1. For a finite chain, as we use here, this is still expected for a large slope, \(\mu_{0}\gg 1\), when the whole 'edge' between the fully-occupied and empty 'phases' is confined within the chain. Indeed, this is confirmed by our calculations. However, the 'end' trivial phases disappear for small slopes, for which the observed structures depend on the open boundaries. _Results.--_ Unless otherwise stated, we use \(U/t\to U=10\), \(\mu=0\) and \(L=41\). All energies are measured in units of \(t\). The Hamiltonian is diagonalized exploiting the DMRG technique, with around \(m=500\) states and \(4\) to \(6\) finite-size sweeps, which leads to a precision of around \(10^{-10}\) in the energy. For a very steep potential (\(\mu_{0}\to\infty\)) we obtain only two coexisting 'phases': a completely filled band (\(n_{i}=2\)) up to the center point \(i_{c}\), and completely empty sites (\(n_{i}=0\)) above that point, as expected. Both regions are incompressible and insulating. As the slope \(\mu_{0}\) decreases (but remains large), these two 'phases' remain near the two ends of the system, but new structures ('phases') appear between them, in which \(\langle n_{i}\rangle\) decreases gradually from \(2\) to \(0\). Figure 2 presents typical results, for three values of \(V\). Note the electron-hole symmetry between the two sides of Figs. 2(a-c), which follows directly from Eq. (1) at \(\mu=0\). For \(V=0\) (i.e., the simplest Hubbard Hamiltonian, left column in Fig. 2), the system shows the following phases: for large (but finite) values of \(\mu_{0}\) it is a band insulator at both extremes, completely filled on the left and completely empty on the right. In the region located symmetrically around the center point \(i_{c}\), we find a Mott-insulating state (one particle per site, \(\langle n_{i}\rangle=1\)), and an antiferromagnetic spin-spin correlation function, Fig. 2(g). As seen in this figure, the spin correlation function, \(\langle s^{z}_{i}s^{z}_{i+1}\rangle\simeq-0.14\) (note: \(s^{z}_{i}\equiv(n_{i,\uparrow}-n_{i,\downarrow})/2\), the \(z-\)direction is arbitrarily chosen), agrees with its value of the infinite Mott phase [23]. The three insulating commensurate phases are separated by IIF regions with very small but finite gaps, see Fig. 3. These regions differ from the compressible regions found in Ref. [10], possibly because Ref. [10] explores 2D systems using the mean-field approximation. As \(\mu_{0}\) decreases, the band insulating phases on both ends disappear and the Mott region grows, as estimated below. 
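The profile of Eq. (2) and the Mott-window estimate derived below (Eq. (3)) can be checked with a few lines; the parameters follow the ones quoted in the text (\(U=10\), \(L=41\), \(\mu_{0}=10\)).

```python
import numpy as np

L, U, mu0 = 41, 10.0, 10.0
i = np.arange(L)
i_c = (L - 1) / 2
mu_i = mu0 * (i / i_c - 1.0)      # Eq. (2): mu_i vanishes at the centre site

# Mott region: sites whose local potential lies inside the Mott gap,
# -U/2 + 2 < mu_i < U/2 - 2 (the criterion used below to derive Eq. (3)).
mott_sites = i[np.abs(mu_i) < U / 2 - 2]
print(len(mott_sites), (U / 2 - 2) * (L - 1) / mu0)   # 11 sites vs. Eq. (3) = 12
```

The one-site discrepancy is just the discreteness of the lattice; both numbers scale as \(1/\mu_{0}\), as found numerically.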
These results are also consistent with the behavior of the density-density correlations, which vary between \(4\) on the left, via \(1\) in the Mott phase, to \(0\) on the right, Fig. 2(d). Figure 1: Schematic representation of the system considered, for \(L=9\) sites. For \(V=3\) (middle column in Fig. 2) the above three insulating 'phases' are supplemented by two regions with an incipient (doped) CDW order on the two sides of the Mott 'phase', with local mean fillings 'quasi-plateaus' around \(\overline{\langle n_{i}\rangle}\simeq 1.5\) and \(\overline{\langle n_{i}\rangle}\simeq 0.5\) (quarter filling of holes and of electrons, respectively). The bar indicates a local average over a few sites. Unlike the uniform case \(\mu_{i}=0\), the local average fillings in these regions are not exactly \(1.5\) and \(0.5\). Rather, they can be fitted by \(\langle n_{i}\rangle=A-Bi+C\cos(i\pi)\) (note that \(i\) is the site number!). The oscillating term corresponds to a CDW, with a wave vector \(q=\pi\) (our lattice constant is \(1\)) and structures \(212121\dots\) or \(101010\dots\)[28]. However, the term \(-Bi\) represents a linear decrease of the actual average, presumably in response to the linear potential. Without this linear 'background', such a CDW is consistent with the results of the density-density and spin-spin correlations and with previous results for the doped (non-half-filled) 1D extended Hubbard model [32] in a uniform potential, \(\mu_{i}=0\), for which there is a transition from a Tomonaga-Luttinger liquid to a CDW phase for intermediate values of \(2t\leq V<U/2\) and large values of \(U\) (\(U\gg t\)). In those cases this CDW phase is insulating and incompressible. As we discuss below, we also find that, in spite of the varying average local densities, the local density of states has a (small) gap at the Fermi energy, which is consistent with an incompressible state. As before, when \(\mu_{0}\) decreases, the Mott region grows, the incipient CDW regions move towards the boundaries and the band-insulating regions disappear. For \(V=6\) (right column in Fig. 2) the Mott region disappears and is replaced by a half-filled CDW, \(202020\dots\). For large \(\mu_{0}\)'s this phase exists in the center and coexists with doped CDW's at both sides, with fillings \(\overline{\langle n_{i}\rangle}\simeq 1.5\) and \(\overline{\langle n_{i}\rangle}\simeq 0.5\) respectively (black diamonds in Fig. 2(c)). This coexistence of two different CDW's has not been seen before and constitutes a situation which could be observed in cold-atom experiments. As before, the doped CDW's are accompanied by a very small gradual decrease of the local average occupation - 'quasi-plateaus' - presumably due to the slope in the potential. When \(\mu_{0}\) is lowered, the half-filled CDW occupies the whole chain. This is expected, since it is well known that when \(V>U/2\) and for a half-filled system, the uniform chain undergoes a transition from a Mott phase to a CDW [7; 32]. The results are consistent with the behavior of the density-density and spin-spin correlations. It is interesting to see a finite value of the spin-spin correlations at the phase boundaries between the half-filled and doped CDW's. It is also interesting to see that for \(V=3\) the average occupation \(\overline{\langle n_{i}\rangle}\) and the amplitude of the incipient CDW decrease gradually towards the central Mott or CDW region, but this decrease becomes abrupt for \(V=6\).
The width of the IIF region (domain wall) between the two CDW phases seems to shrink to zero above some 'critical' value of \(V\). The above results exhibited 'plateaus' only for \(1/2\), \(1/4\) and \(3/4\) fillings. We expect similar 'plateaus', corresponding to other simple fractions, e.g., \(1/8\). However, to see these one would need a much larger number of sites, and this is not possible with our present computer capabilities. Note, though, that calculations with a smaller number of sites do still show similar steps for these commensurate fillings. _Local Density of States.--_ To further explore the different phases, we have calculated the local, site-dependent density of states (LDOS), using the lesser and greater Green's functions; see details in Ref. [33]. In Fig. 3 we show the LDOS for particular sites of the chain for different parameters. Figure 3: (color online) Top: Local density profile \(\langle n_{i}\rangle\) showing the sites where the local density of states (LDOS) has been calculated, for \(V=0\) (\(\mu_{0}=10\)), \(V=3\) (\(\mu_{0}=16\)) and \(V=6\) (\(\mu_{0}=20\)). Bottom: LDOS showing gaps at the Fermi energy (at \(\omega=E_{F}=0\)) for all cases, using \(\eta=0.01\) (Eq. S1 in [33]). Figure 2: (color online) (a)-(c): The local density \(\langle n_{i}\rangle\); (d)-(f): the nn density-density correlations \(\langle n_{i}n_{i+1}\rangle\); (g)-(i): the nn spin-spin correlations \(\langle s_{i}^{z}s_{i+1}^{z}\rangle\), for \(V=0,3,6\) and different values of \(\mu_{0}\). The black diamonds in (c) indicate the mean value between neighboring sites. We observe that there is always a gap at \(E_{F}=0\), even for the partially filled sites (we have added the filling profile for comparison). The gaps corresponding to these sites are smaller than the corresponding gaps of the fully formed CDW (see the \(V=6\) case) and much smaller than those of the Mott region (see Fig. 4). These gaps indicate that these regions are incompressible (non-metallic). This is not a finite-size effect (since we would then have a finite LDOS at \(E_{F}\) for fractional densities), but a consequence of the linear potential. We also observe that the LDOS consists of a series of peaks separated by minigaps, a possible indication of Stark discretization [21]. Figure 4 shows a heatplot of the local density of states along the chain for \(V=0\), \(\mu=0\) and \(\mu_{0}=10\). The Fermi energy is marked by a white (dashed) line at \(\omega=0\). As the Hamiltonian is particle-hole symmetric around the middle of the chain, the density of states for the right half of the chain (\(20\leq i\leq 40\), not shown) is inverted as a function of \(\omega\) (see details in [33]). As mentioned above (Fig. 3), we always find a gap at \(E_{F}\), indicating an incompressible state. This gap is more than an order of magnitude smaller than the Mott gap. We also see a structure in the Hubbard bands in the form of three main substructures which evolve along the chain sites. Each substructure extends to around three neighboring sites, also an indication of Stark localization which requires future study [21]. An interesting result for \(V=0\) is the existence of a (negative) high-energy localized state in the IIF region (clearly seen in the density of states plots at the left of the chain, Fig. S2 in Ref. [33]).
We can see a small and narrow peak at energies around \(\omega\sim-14\) for the first sites of this region, which evolves to higher energies (following the increase of \(\mu_{i}\)) as we approach the Mott region, increasing its width. This state is reminiscent of the lower Hubbard band of the left regions. A similar state is seen for the right half of the chain, which is reminiscent of the upper Hubbard band (not shown). More results for the density of states, together with some calculations in the atomic limit, are presented in Ref. [33]. _Size of the Mott region.--_ In the electron-hole symmetric case (and \(V=0\)) the upper and lower Hubbard bands are centered at \(\pm\frac{U}{2}\) respectively, each with a total width of 4. For \(\mu=0\), the size of the Mott region can be estimated recalling that the Mott insulating state requires that the local \(\mu_{i}\) lies within the Mott gap, i.e., \(-U/2+2<\mu_{i}<U/2-2\). At the lower limit \(\mu_{\rm min}=-\frac{U}{2}+2\), yielding by Eq. (2) that \(i_{\rm min}\mu_{0}=i_{c}(\mu_{0}-\frac{U}{2}+2)\), while at \(\mu_{\rm max}=\frac{U}{2}-2\) one finds \(i_{\rm max}\mu_{0}=i_{c}(\mu_{0}+\frac{U}{2}-2)\). Consequently, assuming that the width of the Hubbard bands is not modified by the presence of the confining potential, the size of the Mott region is: \[L_{\rm Mott}=i_{\rm max}-i_{\rm min}=(U/2-2)\,(L-1)/\mu_{0}. \tag{3}\] As the confining potential slightly increases the width of the Hubbard bands (not shown), the gap in-between them and \(L_{\rm Mott}\) are slightly overestimated. To compare Eq. (3) with our numerical results, we have estimated the size of the Mott region by defining its boundaries at the points where the linear fits of the numerical derivative of the local occupation intercept 0 for each value of \(\mu_{0}\), using the results shown in Fig. 2(a). This procedure reveals that indeed the size of the Mott region is proportional to \(1/\mu_{0}\) (see Fig. S1 in Ref. [33]), and that it shrinks to zero for very steep potentials. _Changing the global chemical potential.--_ The coexisting phases are robust against changes in the global chemical potential \(\mu\). In Fig. 5 we show our results for two cases, \(V=0\) (with coexisting band and Mott insulators, separated by intermediate IIF regions), and \(V=4\) (with CDW's and Mott insulators). The different phases shift towards the right or the left with respect to their position for \(\mu=0\) but are otherwise not changed, except for the regions close to the boundaries, where they are affected by the open boundaries. _Discussion.--_ In this paper we study the one-dimensional extended Hubbard model, subject to a linearly-varying Wannier-Stark potential on a finite chain, applying the density-matrix renormalization group. We find an interesting sequence of several insulating electronic phases in the ground state, in which regions with commensurate charge density waves coexist with band and Mott insulating phases. These regions are separated by incompressible domain walls with incommensurate fillings, which were not reported before. The results are summarized in Fig. 6. Further research is needed to determine whether these incompressible walls are due to Stark many-body localization [34]. The steeper the slope of the external potential, the narrower the domain walls. These phases and domain walls can be moved around by varying a global chemical potential, thus providing a possible functionality for this kind of system. Figure 4: Heatplot of the local density of states at different sites (\(6\leq i\leq 20\)) for \(\mu_{0}=10\) and \(V=0\). The Fermi energy is marked by a white (dashed) line at \(\omega=0\).
Cold-atom chains placed in an external electric field are suggested as experimental realizations of our system. Acknowledgments.--NAB and KH acknowledge support from ICTP through the STEP and Associates Programmes, respectively, and from the PICT 2018-01546 grant of the ANPCyT. The authors thank Carlos Balseiro for useful discussions.
We study fermions on a finite chain, interacting repulsively when residing on the same and on nearest-neighbor sites, under a linear Wannier-Stark potential. We use the density matrix renormalization group algorithm to solve this generalized extended Hubbard model. The ground state exhibits a staircase of (quasi) plateaus in the average local density along the chain, going from doubly filled to empty as the potential increases. These 'plateaus' represent combinations of charge density waves with band and Mott insulators. These phases are separated by incompressible regions with incommensurate fillings. It is suggested that varying the slope of the potential and the range of the repulsive interactions will produce such a coexistence of phases, which have been individually expected theoretically and observed experimentally for uniform systems.
2310.20141
Contrastive Difference Predictive Coding
Predicting and reasoning about the future lie at the heart of many time-series questions. For example, goal-conditioned reinforcement learning can be viewed as learning representations to predict which states are likely to be visited in the future. While prior methods have used contrastive predictive coding to model time series data, learning representations that encode long-term dependencies usually requires large amounts of data. In this paper, we introduce a temporal difference version of contrastive predictive coding that stitches together pieces of different time series data to decrease the amount of data required to learn predictions of future events. We apply this representation learning method to derive an off-policy algorithm for goal-conditioned RL. Experiments demonstrate that, compared with prior RL methods, ours achieves $2 \times$ median improvement in success rates and can better cope with stochastic environments. In tabular settings, we show that our method is about $20 \times$ more sample efficient than the successor representation and $1500 \times$ more sample efficient than the standard (Monte Carlo) version of contrastive predictive coding.
Chongyi Zheng, Ruslan Salakhutdinov, Benjamin Eysenbach
2023-10-31T03:16:32
http://arxiv.org/abs/2310.20141v2
# Contrastive Difference Predictive Coding ###### Abstract Predicting and reasoning about the future lie at the heart of many time-series questions. For example, goal-conditioned reinforcement learning can be viewed as learning representations to predict which states are likely to be visited in the future. While prior methods have used contrastive predictive coding to model time series data, learning representations that encode long-term dependencies usually requires large amounts of data. In this paper, we introduce a temporal difference version of contrastive predictive coding that stitches together pieces of different time series data to decrease the amount of data required to learn predictions of future events. We apply this representation learning method to derive an off-policy algorithm for goal-conditioned RL. Experiments demonstrate that, compared with prior RL methods, ours achieves \(2\times\) median improvement in success rates and can better cope with stochastic environments. In tabular settings, we show that our method is about \(20\times\) more sample efficient than the successor representation and \(1500\times\) more sample efficient than the standard (Monte Carlo) version of contrastive predictive coding. **Code**: [https://github.com/chongyi-zheng/td_infonce](https://github.com/chongyi-zheng/td_infonce) **Website**: [https://chongyi-zheng.github.io/td_infonce](https://chongyi-zheng.github.io/td_infonce) ## 1 Introduction Learning representations is important for modeling high-dimensional time series data. Many applications of time-series modeling require representations that not only contain information about the contents of a particular observation, but also about how one observation relates to others that co-occur in time. Acquiring representations that encode temporal information is challenging, especially when attempting to capture long-term temporal dynamics: the frequency of long-term events may decrease with the time scale, meaning that learning longer-horizon dependencies requires larger quantities of data. In this paper, we study contrastive representation learning on time series data - positive examples co-occur nearby in time, so the distances between learned representations should encode the likelihood of transiting from one representation to another. Building on prior work that uses the InfoNCE [79, 67] loss to learn representations of time-series data effectively, we will aim to build a temporal difference version of this loss. Doing so may allow us to optimize this objective with fewer samples, may enable us to stitch together pieces of different time series data, and may enable us to perform counterfactual reasoning - we should be able to estimate which representations we would have learned, if we had collected data in a different way. Figure 1: **TD InfoNCE** is a nonparametric version of the successor representation. _(Top)_ The distances between learned representations indicate the probability of transitioning to a set of randomly-sampled states. _(Bottom)_ We update these representations so they assign high likelihood to _(a)_ the next state and _(b)_ states likely to be visited after the next state. See Sec. 3 for details. After a careful derivation, our resulting method can be interpreted as a non-parametric form of the successor representation [15], as shown in Fig. 1. The main contribution of this paper is a temporal difference estimator for InfoNCE. We then apply this estimator to develop a new algorithm for goal-conditioned RL.
Experiments on both state-based and image-based benchmarks show that our algorithm outperforms prior methods, especially on the most challenging tasks. Additional experiments demonstrate that our method can handle stochasticity in the environment more effectively than prior methods. We also demonstrate that our algorithm can be effectively applied in the offline setting. Additional tabular experiments demonstrate that TD InfoNCE is up to \(1500\times\) more sample efficient than the standard Monte Carlo version of the loss and that it can effectively stitch together pieces of data. ## 2 Related Work This paper will study the problem of self-supervised RL, building upon prior methods on goal-conditioned RL, contrastive representation learning, and methods for predicting future state visitations. Our analysis will draw a connection between these prior methods, a connection which will ultimately result in a new algorithm for goal-conditioned RL. We discuss connections with unsupervised skill learning and mutual information in Appendix B. Goal-conditioned reinforcement learning. Prior work has proposed many frameworks for learning goal-conditioned policies, including conditional supervised learning [16; 32; 36; 19; 54; 65; 81], actor-critic methods [2; 59; 10], semi-parametric planning [68; 25; 26; 22; 62; 36], and distance metric learning [89; 63; 18]. These methods have demonstrated impressive results on a range of tasks, including real-world robotic tasks [55; 78; 95]. While some methods require manually-specified reward functions or distance functions, our work builds upon a self-supervised interpretation of goal-conditioned RL that casts this problem as predicting which states are likely to be visited in the future [23; 24; 7]. Contrastive representation learning. Contrastive learning methods have become a key tool for learning representations in computer vision and NLP [14; 76; 79; 66; 88; 67; 87; 92; 40; 71; 12; 84; 30]. These methods assign similar representations to positive examples and dissimilar representations to negative examples or outdated embeddings [35]. The two main contrastive losses are based on binary classification ("NCE") and on ranking ("InfoNCE") [56]. Modern contrastive learning methods typically employ the ranking-based objective to learn representations of images [12; 84; 41; 93], text [53; 44; 71] and sequential data [64; 77]. Prior works have also provided theoretical analysis for these methods from the perspective of mutual information maximization [52; 70], noise contrastive estimation [37; 56; 86; 3], and the geometry of the learned representations [88]. In the realm of RL, prior works have demonstrated that contrastive methods can provide effective reward functions and auxiliary learning objectives [49; 50; 39; 13; 60; 61], and can also be used to formulate the goal-reaching problem in an entirely self-supervised manner [55; 18; 23; 24]. Our method will extend these results by building a temporal difference version of the "ranking"-based contrastive loss; this loss will enable us to use data from one policy to estimate which states a different policy will visit. Temporal difference learning and successor representation. Another line of work studies using temporal difference learning to predict states visited in the future, building upon successor representations and successor features [15; 4; 5; 7]. 
While learning the successor representation using temporal differences bears a similarity to the typical Q-Learning algorithm [91; 27; 58] in the tabular setting, directly estimating this quantity is difficult with continuous states and actions [43; 4; 85; 7]. To lift this limitation, we will follow prior work [24; 23; 85] in predicting the successor representation indirectly: rather than learning a representation whose coordinates correspond to visitation probabilities, we will learn state representations such that their inner product corresponds to a visitation probability. Unlike prior methods, we will show how the common InfoNCE objective can be estimated in a temporal difference fashion, opening the door to off-policy reasoning and enabling our method to reuse historical data to improve data efficiency. ## 3 Method We start by introducing notation and prior approaches to contrastive representation learning and goal-conditioned RL. We then propose a new self-supervised actor-critic algorithm that we will use in our analysis. ### Preliminaries We first review prior work in contrastive representation learning and goal-conditioned RL. Our method (Sec. 3) will use ideas from both. Contrastive representation via InfoNCE. Contrastive representation learning aims to learn a representation space, pulling representations of positive examples together and pushing representations of negative examples away. InfoNCE (also known as contrastive predictive coding) [79, 45, 67, 41] is a widely used contrastive loss, which builds upon noise contrastive estimation (NCE) [37, 56]. Given data distributions \(p_{\mathcal{X}}(x)\), \(p_{\mathcal{Y}}(y)\) over \(x\in\mathcal{X},y\in\mathcal{Y}\) and the conditional distribution of positive pairs \(p_{\mathcal{Y}|\mathcal{X}}(y|x)\) over \(\mathcal{X}\times\mathcal{Y}\), the InfoNCE loss is defined as \[\mathcal{L}_{\text{InfoNCE}}(f)\triangleq\mathbb{E}_{\begin{subarray}{c}x\sim p_{\mathcal{X}}(x),y^{(1)}\sim p_{\mathcal{Y}|\mathcal{X}}(y|x)\\ y^{(2:N)}\sim p_{\mathcal{Y}}(y)\end{subarray}}\left[\log\frac{e^{f(x,y^{(1)})}}{\sum_{i=1}^{N}e^{f(x,y^{(i)})}}\right], \tag{1}\] where \(f:\mathcal{X}\times\mathcal{Y}\mapsto\mathbb{R}\) is a parametric function. Following prior work [24, 88, 85], we choose to parameterize \(f(\cdot,\cdot)\) via the inner product of representations of data \(f(x,y)=\phi(x)^{\top}\psi(y)\), where \(\phi(\cdot)\) and \(\psi(\cdot)\) map data to \(\ell_{2}\) normalized vectors of dimension \(d\). We will call \(f\) the _critic function_ and \(\phi\) and \(\psi\) the _contrastive representations_. The Bayes-optimal critic for the InfoNCE loss satisfies [70, 56, 67] \[\exp\left(f^{\star}(x,y)\right)=\frac{p(y\mid x)}{p(y)c(x)},\] where \(c(\cdot)\) is an arbitrary function. We can estimate this arbitrary function using the optimal critic \(f^{\star}\) by sampling multiple negative pairs from the data distribution: \[\mathbb{E}_{p(y)}\left[\exp\left(f^{\star}(x,y)\right)\right]=\int p(y)\frac{p(y\mid x)}{p(y)c(x)}dy=\frac{1}{c(x)}\underbrace{\int p(y\mid x)dy}_{=1}=\frac{1}{c(x)}. \tag{2}\]
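To make Eq. 1 concrete, the following is a minimal PyTorch sketch of the InfoNCE loss with in-batch negatives; it is our illustration rather than the paper's implementation, and the encoders producing the \(\ell_2\)-normalized representations \(\phi(x)\) and \(\psi(y)\) are assumed to be defined elsewhere.

```python
import torch
import torch.nn.functional as F

def infonce_loss(phi_x: torch.Tensor, psi_y: torch.Tensor) -> torch.Tensor:
    """Negative of the InfoNCE objective in Eq. 1 for a batch of N pairs.

    phi_x, psi_y: [N, d] l2-normalized representations; row i of each forms a
    positive pair, and the other N - 1 rows of psi_y act as negatives for it.
    """
    logits = phi_x @ psi_y.T                 # logits[i, j] = f(x_i, y_j)
    labels = torch.arange(len(logits))       # positives sit on the diagonal
    # cross_entropy = -log softmax of the positive logit, averaged over the batch
    return F.cross_entropy(logits, labels)
```

Minimizing this loss maximizes Eq. 1; at the optimum, \(N\cdot\text{softmax}\) along a row of the logits estimates the probability ratio appearing in Eq. 2, a fact the temporal difference construction below relies on.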
Reinforcement learning and goal-conditioned RL. We will consider a Markov decision process defined by states \(s\in\mathcal{S}\), actions \(a\in\mathcal{A}\), and rewards \(r:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\mapsto\mathbb{R}\). Using \(\Delta(\cdot)\) to denote the probability simplex, we define an initial state distribution \(p_{0}:\mathcal{S}\mapsto\Delta(\mathcal{S})\), discount factor \(\gamma\in(0,1]\), and dynamics \(p:\mathcal{S}\times\mathcal{A}\mapsto\Delta(\mathcal{S})\). Given a policy \(\pi:\mathcal{S}\mapsto\Delta(\mathcal{A})\), we will use \(p_{t}^{\pi}(s_{t+}\mid s,a)\) to denote the probability density of reaching state \(s_{t+}\) after exactly \(t\) steps, starting at state \(s\) and action \(a\) and then following the policy \(\pi(a\mid s)\). We can then define the discounted state occupancy measure [42, 94, 23, 24, 95] starting from state \(s\) and action \(a\) as \[p^{\pi}(s_{t+}\mid s,a)\triangleq(1-\gamma)\sum_{t=1}^{\infty}\gamma^{t-1}p_{t}^{\pi}(s_{t+}\mid s,a). \tag{3}\] Prior work [15] has shown that this discounted state occupancy measure follows a recursive relationship between the density at the current time step and the future time steps: \[p^{\pi}(s_{t+}\mid s,a)=(1-\gamma)p(s^{\prime}=s_{t+}\mid s,a)+\gamma\mathbb{E}_{\begin{subarray}{c}s^{\prime}\sim p(s^{\prime}|s,a)\\ a^{\prime}\sim\pi(a^{\prime}|s^{\prime})\end{subarray}}\left[p^{\pi}(s_{t+}\mid s^{\prime},a^{\prime})\right]. \tag{4}\] For goal-conditioned RL, we define goals \(g\in\mathcal{S}\) in the same space as states and consider a goal-conditioned policy \(\pi(a\mid s,g)\) and the corresponding goal-conditioned discounted state occupancy measure \(p^{\pi}(s_{t+}\mid s,a,g)\). For evaluation, we will sample goals from a distribution \(p_{g}:\mathcal{S}\mapsto\Delta(\mathcal{S})\). Following prior work [23, 74], we define the objective of the goal-reaching policy as maximizing the probability of reaching desired goals under its discounted state occupancy measure while commanding the same goals: \[\max_{\pi(\cdot|\cdot,\cdot)}\mathbb{E}_{p_{g}(g),p_{0}(s),\pi(a|s,g)}\left[p^{\pi}(s_{t+}=g\mid s,a,g)\right]. \tag{5}\] In tabular settings, this objective is the same as maximizing expected returns using a sparse reward function \(r(s,a,s^{\prime},g)=(1-\gamma)\delta(s^{\prime}=g)\) [24].
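In tabular settings, Eqs. 3 and 4 can be evaluated exactly, which is also how ground-truth occupancies are obtained later (Sec. 4.3). The following NumPy sketch (our illustration; the function and variable names are not from the paper) sums the geometric series in Eq. 3 in closed form and checks the recursion in Eq. 4.

```python
import numpy as np

def discounted_occupancy(P, pi, gamma=0.9):
    """Ground-truth p^pi(s_+ | s, a) from Eq. 3 for a tabular MDP.

    P:  [S, A, S] dynamics, P[s, a, s'] = p(s' | s, a).
    pi: [S, A] policy, pi[s, a] = pi(a | s).
    """
    S, A, _ = P.shape
    # Policy-averaged state-to-state kernel: P_pi[s, s'] = sum_a pi(a|s) p(s'|s, a).
    P_pi = np.einsum("sa,sab->sb", pi, P)
    # (1 - g) * sum_{t>=1} g^{t-1} P P_pi^{t-1}, summed in closed form.
    M = (1 - gamma) * P.reshape(S * A, S) @ np.linalg.inv(np.eye(S) - gamma * P_pi)
    return M.reshape(S, A, S)

# Sanity check on a small random MDP: Eq. 4 holds and each row is a distribution.
rng = np.random.default_rng(0)
S, A, gamma = 6, 2, 0.9
P = rng.random((S, A, S)); P /= P.sum(-1, keepdims=True)
pi = rng.random((S, A)); pi /= pi.sum(-1, keepdims=True)
M = discounted_occupancy(P, pi, gamma)
M_rec = (1 - gamma) * P + gamma * np.einsum("saq,qb,qbt->sat", P, pi, M)
assert np.allclose(M, M_rec) and np.allclose(M.sum(-1), 1.0)
```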
Below, we review two strategies for estimating the discounted state occupancy measure. Our proposed method (Sec. 3.2) will combine the strengths of these methods while lifting their respective limitations. Contrastive RL and C-Learning. Our focus will be on using contrastive representation learning to build a new goal-conditioned RL algorithm, following a template set in prior work [24, 23]. These _contrastive RL_ methods are closely related to the successor representation [15]: they aim to learn representations whose inner products correspond to the likelihoods of reaching future states. Like the successor representation, representations from these contrastive RL methods can then be used to represent the Q function for any reward function [57]. Prior work [24] has shown how both the NCE and InfoNCE losses can be used to derive Monte Carlo algorithms for estimating the discounted state occupancy measure. We review the Monte Carlo InfoNCE loss below. Given a policy \(\pi(a\mid s)\), consider learning contrastive representations for a state and action pair \(x=(s,a)\) and a potential future state \(y=s_{t+}\). We define the data distribution to be the joint distribution of state-action pairs \(p_{\mathcal{X}}(x)=p(s,a)\) and the marginal distribution of future states \(p_{\mathcal{Y}}(y)=p(s_{t+})\), representing either the distribution of a replay buffer (online) or the distribution of a dataset (offline). The conditional distribution of positive pairs is set to the discounted state occupancy measure for policy \(\pi\), \(p_{\mathcal{Y}|\mathcal{X}}(y\mid x)=p^{\pi}(s_{t+}\mid s,a)\), resulting in a Monte Carlo (MC) estimator \[\mathcal{L}_{\text{MC InfoNCE}}(f)=\mathbb{E}_{\begin{subarray}{c}(s,a)\sim p(s,a),s^{(1)}_{t+}\sim p^{\pi}(s_{t+}\mid s,a)\\ s^{(2:N)}_{t+}\sim p(s_{t+})\end{subarray}}\left[\log\frac{e^{f(s,a,s^{(1)}_{t+})}}{\sum_{i=1}^{N}e^{f(s,a,s^{(i)}_{t+})}}\right] \tag{6}\] and an optimal critic function satisfying \[\exp(f^{\star}(s,a,s_{t+}))=\frac{p^{\pi}(s_{t+}\mid s,a)}{p(s_{t+})c(s,a)}. \tag{7}\] This loss estimates the discounted state occupancy measure in a Monte Carlo manner. While conceptually simple, computing this estimator requires sampling future states from the discounted state occupancy measure of the policy \(\pi\), i.e., on-policy data. Such an estimate is potentially sample inefficient because collecting samples for different policies is expensive. That is, we cannot share experience collected by one policy with the learning of the discounted state occupancy measure of another policy. In the same way that temporal difference (TD) algorithms tend to be more sample efficient than Monte Carlo algorithms for reward maximization [82], we expect TD contrastive methods to be more sample efficient at estimating probability ratios than their Monte Carlo counterparts. Given that InfoNCE tends to outperform the NCE objective in other machine learning disciplines, we conjecture that our TD InfoNCE objective will outperform the TD NCE objective [23] (see experiments in Sec. 4). ### Temporal Difference InfoNCE In this section, we derive a new loss for estimating the discounted state occupancy measure of a fixed policy. This loss will be a temporal difference variant of the InfoNCE loss. We will use **temporal difference InfoNCE (TD InfoNCE)** to refer to our loss function. In the off-policy setting, we aim to estimate the discounted state occupancy measure of the policy \(\pi\) given a dataset of transitions \(\mathcal{D}=\{(s,a,s^{\prime})_{i}\}_{i=1}^{D}\) collected by another behavioral policy \(\beta(a\mid s)\). This setting is challenging because we do not obtain samples from the discounted state occupancy measure of the target policy \(\pi\). Addressing this challenge involves two steps: _(i)_ expanding the MC estimator (Eq. 6) via the recursive relationship of the discounted state occupancy measure (Eq. 4), and _(ii)_ estimating the expectation over the discounted state occupancy measure via importance sampling. We first use the identity from Eq. 4 to express the MC InfoNCE loss as the sum of a next-state term and a future-state term: \[\mathbb{E}_{\begin{subarray}{c}(s,a)\sim p(s,a)\\ s^{(2:N)}_{t+}\sim p(s_{t+})\end{subarray}}\Bigg{[}(1-\gamma)\underbrace{\mathbb{E}_{s^{(1)}_{t+}\sim p(s^{\prime}|s,a)}\left[\log\frac{e^{f(s,a,s^{(1)}_{t+})}}{\sum_{i=1}^{N}e^{f(s,a,s^{(i)}_{t+})}}\right]}_{\mathcal{L}_{1}(f)}+\gamma\underbrace{\mathbb{E}_{\begin{subarray}{c}s^{\prime}\sim p(s^{\prime}|s,a),a^{\prime}\sim\pi(a^{\prime}|s^{\prime})\\ s^{(1)}_{t+}\sim p^{\pi}(s_{t+}|s^{\prime},a^{\prime})\end{subarray}}\left[\log\frac{e^{f(s,a,s^{(1)}_{t+})}}{\sum_{i=1}^{N}e^{f(s,a,s^{(i)}_{t+})}}\right]}_{\mathcal{L}_{2}(f)}\Bigg{]}.\] While this expansion is similar to a TD target for Q-Learning [91, 27], the second term still requires sampling from the discounted state occupancy measure of policy \(\pi\). 
To avoid this sampling, we next replace the expectation over \(p^{\pi}(s_{t+}\mid s^{\prime},a^{\prime})\) in \(\mathcal{L}_{2}(f)\) by an importance weight, \[\mathcal{L}_{2}(f)=\mathbb{E}_{\begin{subarray}{c}s^{\prime}\sim p(s^{\prime}|s,a),a^{\prime}\sim\pi(a^{\prime}|s^{\prime})\\ s^{(1)}_{t+}\sim p(s_{t+})\end{subarray}}\left[\frac{p^{\pi}(s^{(1)}_{t+}\mid s^{\prime},a^{\prime})}{p(s^{(1)}_{t+})}\log\frac{e^{f(s,a,s^{(1)}_{t+})}}{\sum_{i=1}^{N}e^{f(s,a,s^{(i)}_{t+})}}\right].\] If we could estimate the importance weight, then we could easily estimate this term by sampling from \(p(s_{t+})\). We will estimate this importance weight by rearranging the expression for the optimal critic (Eq. 7) and substituting our estimate for the normalizing constant \(c(s,a)\) (Eq. 2): \[\frac{p^{\pi}(s^{(1)}_{t+}\mid s,a)}{p(s^{(1)}_{t+})}=c(s,a)\cdot\exp\left(f^{\star}(s,a,s^{(1)}_{t+})\right)=\frac{e^{f^{\star}(s,a,s^{(1)}_{t+})}}{\mathbb{E}_{p(s_{t+})}\left[e^{f^{\star}(s,a,s_{t+})}\right]}. \tag{8}\] We will use \(w(s,a,s^{(1:N)}_{t+})\) to denote our estimate of this, using \(f\) in place of \(f^{\star}\) and using a finite-sample estimate of the expectation in the denominator: \[w(s,a,s^{(1:N)}_{t+})\triangleq\frac{e^{f(s,a,s^{(1)}_{t+})}}{\frac{1}{N}\sum_{i=1}^{N}e^{f(s,a,s^{(i)}_{t+})}}. \tag{9}\] This weight accounts for the effect of the discounted state occupancy measure of the target policy. Additionally, it corresponds to the categorical classifier that InfoNCE produces (up to the constant \(N\)). Taken together, we can now substitute the importance weight in \(\mathcal{L}_{2}(f)\) with our estimate in Eq. 9, yielding the temporal difference (TD) InfoNCE estimator \[\mathcal{L}_{\text{TD InfoNCE}}(f)\triangleq\mathbb{E}_{\begin{subarray}{c}(s,a)\sim p(s,a)\\ s^{(2:N)}_{t+}\sim p(s_{t+})\end{subarray}}\left[(1-\gamma)\mathbb{E}_{s^{(1)}_{t+}\sim p(s^{\prime}|s,a)}\left[\log\frac{e^{f(s,a,s^{(1)}_{t+})}}{\sum_{i=1}^{N}e^{f(s,a,s^{(i)}_{t+})}}\right]\right.\] \[\left.+\gamma\mathbb{E}_{\begin{subarray}{c}s^{\prime}\sim p(s^{\prime}|s,a),a^{\prime}\sim\pi(a^{\prime}|s^{\prime})\\ s^{(1)}_{t+}\sim p(s_{t+})\end{subarray}}\left[\lfloor w(s^{\prime},a^{\prime},s^{(1:N)}_{t+})\rfloor_{\text{sg}}\log\frac{e^{f(s,a,s^{(1)}_{t+})}}{\sum_{i=1}^{N}e^{f(s,a,s^{(i)}_{t+})}}\right]\right], \tag{10}\] where \(\lfloor\cdot\rfloor_{\text{sg}}\) indicates that the gradient of the importance weight should not affect the gradient of the entire objective. As shown in Fig. 1, we can interpret the first term as pulling together the representations of the current state-action pair \(\phi(s,a)\) and the next state \(\psi(s^{\prime})\); the second term pulls the representation of the current step \(\phi(s,a)\) toward the (weighted) predictions from the future states \(\psi(s_{t+})\). Importantly, the TD InfoNCE estimator is equivalent to the MC InfoNCE estimator for the optimal critic function: \(\mathcal{L}_{\text{TD InfoNCE}}(f^{\star})=\mathcal{L}_{\text{MC InfoNCE}}(f^{\star})\). Convergence and connections. In Appendix A, we prove that optimizing a variant of the TD InfoNCE objective is equivalent to performing one-step policy evaluation with a new Bellman operator; thus, repeatedly optimizing this objective yields the correct discounted state occupancy measure. This analysis considers the tabular setting and assumes that the denominators of the softmax functions and \(w\) in Eq. 10 are computed using an exact expectation. We discuss the differences between TD InfoNCE and C-learning [23] (a temporal difference estimator of the NCE objective) in Appendix E.2. Appendix C discusses how TD InfoNCE corresponds to a nonparametric variant of the successor representation. 
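The following PyTorch sketch shows one way to implement the critic update in Eq. 10 with in-batch negatives; it mirrors lines 4-7 of Algorithm 1 below, but the tensor names, the helper itself, and the use of target networks for the weights are our illustrative assumptions rather than the paper's exact code (goal-conditioning is folded into \(\phi\) for brevity).

```python
import torch
import torch.nn.functional as F

def td_infonce_critic_loss(phi_sa, psi_next, psi_future,
                           phi_next_tgt, psi_future_tgt, gamma=0.99):
    """One TD InfoNCE critic step (Eq. 10), using the batch as negatives.

    phi_sa:         [N, d] phi(s, a) for a batch of transitions.
    psi_next:       [N, d] psi(s') for the corresponding next states.
    psi_future:     [N, d] psi(s_future) for states sampled from p(s_t+).
    phi_next_tgt:   [N, d] target-network phi(s', a') with a' ~ pi(.|s').
    psi_future_tgt: [N, d] target-network psi(s_future).
    """
    N = phi_sa.shape[0]
    labels = torch.arange(N)
    # (1 - gamma) term: classify the true next state among the batch.
    loss_next = F.cross_entropy(phi_sa @ psi_next.T, labels)
    # Importance weights (Eq. 9): rows sum to N; stop-gradient as in Eq. 10.
    with torch.no_grad():
        w = N * F.softmax(phi_next_tgt @ psi_future_tgt.T, dim=-1)
    # gamma term: softmax cross entropy against the soft labels w.
    log_probs = F.log_softmax(phi_sa @ psi_future.T, dim=-1)
    loss_future = -(w * log_probs).sum(dim=-1).mean()
    return (1 - gamma) * loss_next + gamma * loss_future
```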
``` 1:Input contrastive representations \(\phi_{\theta}\) and \(\psi_{\theta}\), target representations \(\phi_{\bar{\theta}}\) and \(\psi_{\bar{\theta}}\), and goal-conditioned policy \(\pi_{\omega}\). 2:for each iteration do 3: Sample \(\{(s_{t}^{(i)},a_{t}^{(i)},s_{t+1}^{(i)},g^{(i)},s_{t+}^{(i)})\}_{i=1}^{N}\)\(\sim\) replay buffer / dataset, \(a^{(i)}\sim\pi(a\mid s_{t}^{(i)},g^{(i)})\). 4: Compute \(F_{\text{next}}\), \(F_{\text{future}}\), \(F_{\text{goal}}\) using \(\phi_{\theta}\) and \(\psi_{\theta}\). 5: Compute \(\tilde{F}_{\text{w}}\) using \(\phi_{\bar{\theta}}\) and \(\psi_{\bar{\theta}}\). 6:\(W\gets N\cdot\textsc{stop\_grad}\left(\textsc{SoftMax}(\tilde{F}_{\text{w}})\right)\) 7:\(\mathcal{L}(\theta)\leftarrow(1-\gamma)\mathcal{CE}(\text{logits}=F_{\text{next}},\text{labels}=I_{N})+\gamma\mathcal{CE}(\text{logits}=F_{\text{future}},\text{labels}=W)\) 8:\(\mathcal{L}(\omega)\leftarrow\mathcal{CE}(\text{logits}=F_{\text{goal}},\text{labels}=I_{N})\) 9: Update \(\theta,\omega\) by taking gradients of \(\mathcal{L}(\theta),\mathcal{L}(\omega)\). 10: Update \(\bar{\theta}\) using an exponential moving average. 11:Return \(\phi_{\theta}\), \(\psi_{\theta}\), and \(\pi_{\omega}\). ``` **Algorithm 1** Temporal Difference InfoNCE ### Goal-conditioned Policy Learning The TD InfoNCE method provides a way of estimating the discounted state occupancy measure. This section shows how this estimator can be used to derive a new algorithm for goal-conditioned RL. This algorithm will alternate between _(1)_ estimating the occupancy measure using the TD InfoNCE objective and _(2)_ optimizing the policy to maximize the likelihood of the desired goal under the estimated occupancy measure. Pseudo-code is shown in Algorithm 1, additional details are in Appendix D.1, and code is available online.1 Footnote 1: [https://github.com/chongyi-zheng/td_infonce](https://github.com/chongyi-zheng/td_infonce) While our TD InfoNCE loss in Sec. 3.2 estimates the discounted state occupancy measure for policy \(\pi(a\mid s)\), we can extend it to the goal-conditioned setting by replacing \(\pi(a\mid s)\) with \(\pi(a\mid s,g)\) and \(f(s,a,s_{t+})\) with \(f(s,a,g,s_{t+})\), resulting in a goal-conditioned TD InfoNCE estimator. This goal-conditioned TD InfoNCE objective estimates the discounted state occupancy measure of _any_ future state for a goal-conditioned policy commanding _any_ goal. Recalling that the discounted state occupancy measure corresponds to the Q function [24], the policy objective is to select actions that maximize the likelihood of the commanded goal: \[\mathbb{E}_{\begin{subarray}{c}p_{g}(g),p_{0}(s)\\ \pi(a|s,g)\end{subarray}}\left[\log p^{\pi}(s_{t+}=g\mid s,a,g)\right]=\mathbb{E}_{\begin{subarray}{c}g\sim p_{g}(g),s\sim p_{0}(s)\\ a\sim\pi(a|s,g),s_{t+}^{(1:N)}\sim p(s_{t+})\end{subarray}}\left[\log\frac{e^{f^{*}(s,a,g,s_{t+}=g)}}{\sum_{i=1}^{N}e^{f^{*}(s,a,g,s_{t+}^{(i)})}}\right]. \tag{11}\] In practice, we optimize both the critic function and the policy for one gradient step iteratively, using our estimated \(f\) in place of \(f^{*}\).
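Continuing the sketch above (same hypothetical names and assumptions), the policy update of Eq. 11 is again a cross-entropy in which each commanded goal in the batch serves as its own positive:

```python
import torch
import torch.nn.functional as F

def actor_loss(phi_sag: torch.Tensor, psi_g: torch.Tensor) -> torch.Tensor:
    """Policy loss from Eq. 11 (line 8 of Algorithm 1).

    phi_sag: [N, d] phi(s, a, g) with a ~ pi(.|s, g); the sampled actions stay
             on the computation graph so the gradient reaches the policy.
    psi_g:   [N, d] psi(g) for the same batch of commanded goals.
    """
    logits = phi_sag @ psi_g.T               # f(s, a, g, s_t+ = g') for all g'
    labels = torch.arange(len(logits))       # the commanded goal is the positive
    return F.cross_entropy(logits, labels)
```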
## 4 Experiments Our experiments start by comparing goal-conditioned TD InfoNCE to prior goal-conditioned RL approaches on both online and offline goal-conditioned RL (GCRL) benchmarks. We then analyze the properties of the critic function and the policy learned by this method. Visualizing the representations learned by TD InfoNCE reveals that linear interpolation corresponds to a form of planning. Appendix E.2 ablates the difference between TD InfoNCE and a prior temporal difference method based on NCE. All experiments show means and standard deviations over three random seeds. ### Comparing to Prior Goal-conditioned RL methods We compare TD InfoNCE to four baselines on an online GCRL benchmark [69] containing four manipulation tasks for the Fetch robot. The observations and goals of these tasks are either the state of the robot and objects or a \(64\times 64\) RGB image; we evaluate both versions. The first baseline, Quasimetric Reinforcement Learning (QRL) [89], is a state-of-the-art approach that uses quasimetric models to learn the optimal goal-conditioned value functions and the corresponding policies. The second baseline is contrastive RL [24], which estimates the discounted state occupancy measure using \(\mathcal{L}_{\text{MC InfoNCE}}\) (Eq. 6). Our third baseline is goal-conditioned behavioral cloning (GCBC) [16; 19; 32; 54; 80; 81]. We also include a comparison with an off-the-shelf actor-critic algorithm augmented with hindsight relabeling [2, 51, 73, 75] to learn a goal-conditioned policy (DDPG + HER). We report results in Fig. 2 _(left)_, and defer the full learning curves to Appendix Fig. 7. These results show that TD InfoNCE matches or outperforms the other baselines on all tasks, for both state and image observations. On the more challenging tasks (pick & place (state / image) and slide (state / image)), TD InfoNCE achieves a \(2\times\) median improvement relative to the strongest baseline. On the most challenging tasks, image-based pick & place and slide, TD InfoNCE is the only method achieving non-negligible success rates. We speculate that this is because TD InfoNCE estimates the discounted state occupancy measure more accurately, a hypothesis we investigate in Sec. 4.3. Among these baselines, QRL is the strongest. Unlike TD InfoNCE, the derivation of QRL assumes the dynamics are deterministic. This difference motivates us to study whether TD InfoNCE continues to achieve high success rates in environments with stochastic noise. To study this, we compare TD InfoNCE to QRL on a variant of the Fetch benchmark where observations are corrupted with probability \(0.1\). As shown in Fig. 2 _(right)_, TD InfoNCE maintains high success rates while the performance of QRL decreases significantly, suggesting that TD InfoNCE can better cope with stochasticity in the environment. ### Evaluation on Offline Goal Reaching We next study whether the good performance of TD InfoNCE transfers to the setting without any interaction with the environment (i.e., offline RL). We evaluate on AntMaze tasks from the D4RL benchmark [28]. The results in Table 1 show that TD InfoNCE outperforms most baselines on most tasks. See Appendix D.3 for details. ### Accuracy of the estimated discounted state occupancy measure This section tests the hypothesis that our TD InfoNCE loss is more accurate and sample efficient than alternative Monte Carlo methods (namely, contrastive RL [24]) in predicting the discounted state occupancy measure. We use the tabular setting so that we can compute a ground truth. We compare TD InfoNCE to three baselines. Successor representations [15] can also be learned in a TD manner, though they can be challenging to apply beyond tabular settings. 
C-learning is similar to TD InfoNCE in that it uses a temporal difference method to optimize a contrastive loss, but differs in using a binary cross entropy loss instead of a softmax cross entropy loss. Contrastive RL is the MC counterpart of TD InfoNCE. \begin{table} \begin{tabular}{c|c c c c c c c} \hline \hline & TD InfoNCE & QRL & Contrastive RL & GCBC & DT & IQL & TD3 + BC \\ \hline umaze-v2 & **85.8 \(\pm\) 0.9** & \(77.2\pm 2.3\) & \(79.8\pm 1.4\) & \(65.4\) & \(65.6\) & **87.5** & \(78.6\) \\ umaze-diverse-v2 & **92.1 \(\pm\) 1.1** & \(79.4\pm 1.5\) & \(77.6\pm 2.8\) & \(60.9\) & \(51.2\) & \(62.2\) & \(71.4\) \\ medium-play-v2 & **87.5 \(\pm\) 1.2** & \(74.9\pm 1.9\) & \(72.6\pm 2.9\) & \(58.1\) & \(1.0\) & \(71.2\) & \(10.6\) \\ medium-diverse-v2 & **82.3 \(\pm\) 2.8** & \(73.1\pm 1.1\) & \(71.5\pm 1.3\) & \(67.3\) & \(0.6\) & \(70.0\) & \(3.0\) \\ large-play-v2 & \(47.3\pm 2.9\) & **52.3 \(\pm\) 3.2** & \(48.6\pm 4.4\) & \(32.4\) & \(0.0\) & \(39.6\) & \(0.2\) \\ large-diverse-v2 & **56.2 \(\pm\) 3.8** & \(50.9\pm 4.6\) & **54.1 \(\pm\) 5.5** & \(36.9\) & \(0.2\) & \(47.5\) & \(0.0\) \\ \hline \hline \end{tabular} \end{table} Table 1: Evaluation on offline D4RL AntMaze benchmarks. Figure 2: **Evaluation on online GCRL benchmarks.** _(Left)_ TD InfoNCE performs similarly to or outperforms all baselines on both state-based and image-based tasks. _(Right)_ On stochastic versions of the state-based tasks, TD InfoNCE outperforms the strongest baseline (QRL). Appendix Fig. 7 shows the learning curves. We design a \(5\times 5\) gridworld with 25 states and 5 actions (up, down, left, right, and no-op) and collect 100K transitions using a uniform random policy, \(\mu(a\mid s)=\textsc{Unif}(\mathcal{A})\). We evaluate each method by measuring the absolute error between the predicted probability \(\hat{p}\) and the ground truth probability \(p^{\mu}\), averaging over all pairs of \((s,a,s_{t+})\): \[\frac{1}{|\mathcal{S}||\mathcal{A}||\mathcal{S}|}\sum_{s,a,s_{t+}}|\hat{p}(s_{t+}\mid s,a)-p^{\mu}(s_{t+}\mid s,a)|.\] For the three TD methods, we compute the TD target in a SARSA manner [82]. For the methods estimating a probability ratio, we convert the prediction to a probability by multiplying by the empirical state marginal. Results in Fig. 3 show that the TD methods achieve lower errors than the Monte Carlo method, and that TD InfoNCE converges faster than C-Learning. Appendix E.1 discusses why all methods plateau above zero. Our next experiment studies sample efficiency. We hypothesize that the softmax in the TD InfoNCE loss may provide more learning signal than alternative methods, allowing it to achieve lower error on a fixed budget of data. To test this hypothesis, we run experiments with dataset sizes from 1K to 10M on the same gridworld, comparing TD InfoNCE to the same set of baselines. We report results in Fig. 3, with error bars showing one standard deviation after training for 50K gradient steps for each approach. These results suggest that methods based on temporal difference learning predict more accurately than the Monte Carlo method when provided with the same amount of data. Compared with its Monte Carlo counterpart, TD InfoNCE is \(1500\times\) more sample efficient (\(6.5\times 10^{3}\) vs \(10^{7}\) transitions). Compared with the only other TD method applicable in continuous settings (C-learning), TD InfoNCE can achieve a comparable loss with \(130\times\) less data (\(7.7\times 10^{4}\) vs \(10^{7}\) transitions). Even compared with the strongest baseline (successor representations), which makes assumptions (tabular MDPs) that our method avoids, TD InfoNCE can achieve a comparable error rate with almost \(20\times\) fewer samples (\(5.2\times 10^{5}\) vs \(10^{7}\) transitions). 
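For concreteness, the evaluation metric above can be computed in a few lines of NumPy; this sketch is our illustration (names hypothetical) and shows the ratio-to-probability conversion mentioned above.

```python
import numpy as np

def occupancy_error(ratio_pred, state_marginal, M_true):
    """Mean |p_hat - p^mu| over all (s, a, s_t+) triples (the metric above).

    ratio_pred:     [S, A, S] predicted ratios p^mu(s_t+ | s, a) / p(s_t+).
    state_marginal: [S] empirical marginal p(s_t+) of the dataset.
    M_true:         [S, A, S] ground-truth occupancy, e.g. computed with the
                    discounted_occupancy sketch from Sec. 3.1.
    """
    p_hat = ratio_pred * state_marginal[None, None, :]
    return np.abs(p_hat - M_true).mean()
```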
### Does TD InfoNCE enable off-policy reasoning? The explicit temporal difference update (Eq. 10) in TD InfoNCE is similar to the standard Bellman backup, motivating us to study whether the resulting goal-conditioned policy is capable of performing dynamic programming with offline data. To answer this question, we conduct two experiments on the same gridworld environment as in Sec. 4.3, comparing TD InfoNCE to contrastive RL (i.e., Monte Carlo InfoNCE). Fig. 4 shows that TD InfoNCE successfully stitches together pieces of different trajectories to find a route between unseen (state, goal) pairs. Fig. 5 shows that TD InfoNCE can perform off-policy reasoning, finding a path that is shorter than the average path demonstrated in the dataset. See Appendix D.4 for details. Figure 3: **Estimating the discounted state occupancy measure in a tabular setting.** _(Left)_ Temporal difference methods have lower errors than the Monte Carlo method. Also note that our TD InfoNCE converges as fast as the best baseline (successor representation). _(Right)_ TD InfoNCE is more data efficient than other methods. Using a dataset of size 10M, TD InfoNCE achieves an error rate \(25\%\) lower than the best baseline; TD InfoNCE also matches the performance of C-learning with \(130\times\) less data. ### Representation Interpolation Prior work has shown that representations from self-supervised learning can reflect the geometry of the underlying data [88, 3]. We study this property for the representations learned by TD InfoNCE, interpolating between the learned representations of 29-dimensional observations from the offline AntMaze medium-play-v2 task. We visualize this interpolation in Fig. 6, using nearest neighbors to retrieve the 29-dim observation with the most similar representation. These results suggest that the learned representations are structured so that linear interpolation corresponds to planning a path from one state to another. See Appendix E.3 for details. ## 5 Conclusion This paper introduced a temporal difference estimator for the InfoNCE loss. Our goal-conditioned RL algorithm based on this estimator outperforms prior methods in both online and offline settings, and is capable of handling stochasticity in the environment dynamics. While we focused on a specific type of RL problem (goal-conditioned RL), in principle the TD InfoNCE estimator can be used to drive policy evaluation for arbitrary reward functions. One area for future work is to determine how it compares to prior off-policy evaluation techniques. While we focused on evaluating the TD InfoNCE estimator on control tasks, it is worth noting that the MC InfoNCE objective has previously been applied in NLP, audio, and video settings; one intriguing and important question is whether the benefits of TD learning seen on these control tasks translate into better representations in those other domains. Limitations. One limitation of TD InfoNCE is complexity: compared with its Monte Carlo counterpart, ours is more complex and requires more hyperparameters. It is also worth noting that even TD InfoNCE struggles to solve the most challenging control tasks with image observations. 
On the theoretical front, our convergence proof uses a slightly modified version of our loss (replacing a sum with an expectation), which would be good to resolve in future work. Acknowledgments We thank Ravi Tej and Wenzhe Li for discussions and feedback on drafts of the paper. We thank Raj Ghugare for sharing code. We thank Tongzhou Wang for providing the performance of baselines in the online GCRL experiments. Figure 4: **Stitching trajectories in a dataset.** The behavioral policy collects “Z” style trajectories. Unlike the Monte Carlo method (contrastive RL), our TD InfoNCE successfully “stitches” these trajectories together, navigating between pairs of (start, goal) states unseen in the training trajectories. Appendix Fig. 8 shows additional examples. Figure 5: **Searching for shortcuts in skewed datasets.** _(Left)_ Conditioned on different initial states and goals, we collect datasets with \(95\%\) long paths (dark) and \(5\%\) short paths (light). _(Center)_ TD InfoNCE infers the shortest path, _(Right)_ while contrastive RL fails to find this path. Appendix Fig. 9 shows additional examples.
2302.00080
Rainbow Hamilton cycle in hypergraph system
In this paper, we develop a new rainbow Hamilton framework, which is of independent interest, settling the problem proposed by Gupta, Hamann, M\"{u}yesser, Parczyk, and Sgueglia when $k=3$, and draw the general conclusion for any $k\geq3$ as follows. A $k$-graph system $\textbf{H}=\{H_i\}_{i\in[n]}$ is a family of not necessarily distinct $k$-graphs on the same $n$-vertex set $V$, moreover, a $k$-graph $H$ on $V$ is rainbow if $E(H)\subseteq \bigcup_{i\in[n]}E(H_i)$ and $|E(H)\cap E(H_i)|\leq1$ for $i\in[n]$. We show that given $\gamma> 0$, sufficiently large $n$ and an $n$-vertex $k$-graph system $\textbf{H}=\{H_i\}_{i\in[n]}$ , if $\delta_{k-2}(H_i)\geq(5/9+\gamma)\binom{n}{2}$ for $i\in[n]$ where $k\geq3$, then there exists a rainbow tight Hamilton cycle. This result implies the conclusion in a single graph, which was proved by Lang and Sanhueza-Matamala [$J. Lond. Math. Soc., 2022$], Polcyn, Reiher, R\"{o}dl and Sch\"{u}lke [$J. Combin. Theory \ Ser. B, 2021$] independently.
Yucong Tang, Bin Wang, Guanghui Wang, Guiying Yan
2023-01-31T20:23:00
http://arxiv.org/abs/2302.00080v1
# Rainbow Hamilton cycle in hypergraph system ###### Abstract. In this paper, we develop a new rainbow Hamilton framework, which is of independent interest, settling the problem proposed by Gupta, Hamann, Müyesser, Parczyk, and Sgueglia when \(k=3\), and draw the general conclusion for any \(k\geq 3\) as follows. A \(k\)-graph system \(\boldsymbol{H}=\{H_{i}\}_{i\in[n]}\) is a family of not necessarily distinct \(k\)-graphs on the same \(n\)-vertex set \(V\); moreover, a \(k\)-graph \(H\) on \(V\) is rainbow if \(E(H)\subseteq\bigcup_{i\in[n]}E(H_{i})\) and \(|E(H)\cap E(H_{i})|\leq 1\) for \(i\in[n]\). We show that given \(\gamma>0\), sufficiently large \(n\) and an \(n\)-vertex \(k\)-graph system \(\boldsymbol{H}=\{H_{i}\}_{i\in[n]}\), if \(\delta_{k-2}(H_{i})\geq(5/9+\gamma)\binom{n}{2}\) for \(i\in[n]\) where \(k\geq 3\), then there exists a rainbow tight Hamilton cycle. This result implies the corresponding result for a single \(k\)-graph, which was proved independently by Lang and Sanhueza-Matamala [_J. Lond. Math. Soc., 2022_] and by Polcyn, Reiher, Rödl and Schülke [_J. Combin. Theory Ser. B, 2021_]. ## 1. Introduction Finding Hamilton cycles in graphs is one of the key areas in graph theory and extremal combinatorics, with a profound history. The classical Dirac's theorem [13] states that every \(n\)-vertex graph with minimum degree at least \(n/2\), \(n\geq 3\), contains a Hamilton cycle. There are also many extensions of Dirac's theorem to hypergraphs. ### Hamilton cycles in hypergraphs Let \([a,b]\), \(a,b\in\mathbb{Z}\), denote the set \(\{a,a+1,\ldots,b\}\); the set \([1,n]\) is abbreviated as \([n]\). Given a \(k\)-graph \(H\) and a set \(S\) of \(d\) vertices (\(d\in[k-1]\)), we define \(\deg_{H}(S)\) to be the number of edges containing \(S\) (the subscript \(H\) is omitted if it is clear from the context) and the relative degree \(\overline{\deg}(S)\) to be \(\deg(S)/\binom{n-d}{k-d}\). The _minimum relative \(d\)-degree_ of a \(k\)-graph \(H\), written \(\overline{\delta}_{d}(H)\), is the minimum of \(\overline{\deg}(S)\) over all sets \(S\) of \(d\) vertices. Katona and Kierstead [22] defined a type of cycle in hypergraphs which has been studied extensively. A \(k\)-graph is called an \(\ell\)-cycle if its vertices can be ordered cyclically such that each of its edges consists of \(k\) consecutive vertices and every two consecutive edges (in the natural order of the edges) share exactly \(\ell\) vertices. In \(k\)-graphs, a \((k-1)\)-cycle is often called a tight cycle. We say that a \(k\)-graph contains a Hamilton \(\ell\)-cycle if it contains an \(\ell\)-cycle as a spanning subhypergraph. Unless stated otherwise, we refer to a tight cycle simply as a cycle. Katona and Kierstead [22] gave a sufficient condition for finding a Hamilton cycle in a \(k\)-graph in terms of the minimum \((k-1)\)-degree: every \(n\)-vertex \(k\)-graph \(H\) with \(\delta_{k-1}(H)>(1-1/(2k))n+4-k-5/(2k)\) admits a Hamilton cycle. They conjectured that the bound on the minimum \((k-1)\)-degree can be reduced to roughly \(n/2\), which was confirmed asymptotically by Rödl, Ruciński and Szemerédi in [41, 42]. The same authors gave the exact version for \(k=3\) in [43]. **Theorem 1.1** ([42, 43]).: _Let \(k\geq 3,\gamma>0\) and \(H\) be an \(n\)-vertex \(k\)-graph, where \(n\) is sufficiently large. If \(\delta_{k-1}(H)\geq(1/2+\gamma)n\), then \(H\) contains a Hamilton cycle. 
Furthermore, when \(k=3\) it is enough to have \(\delta_{2}(H)\geq\lfloor n/2\rfloor\)._ More generally, Kühn and Osthus [25] and Zhao [46] noted that it is much more difficult to determine the minimum \(d\)-degree condition for tight Hamilton cycles when \(d\in[k-2]\). Based on the results of Cooley and Mycroft [11], Glebov, Person and Weps [16], Rödl and Ruciński [39] and Rödl, Ruciński, Schacht and Szemerédi [40], Reiher, Rödl, Ruciński, Schacht, and Szemerédi [37] gave the asymptotic version when \(d=k-2\) and \(k=3\), while Polcyn, Reiher, Rödl, Ruciński, Schacht, and Schülke [35] gave the asymptotic version when \(d=k-2\) and \(k=4\). Glebov, Person and Weps [16] proved that the minimum relative \(d\)-degree condition for a tight Hamilton cycle is a function of \(k\). The best general bound was given independently by Lang and Sanhueza-Matamala [27] and by Polcyn, Reiher, Rödl and Schülke [36], who proved the following theorem. **Theorem 1.2** ([27, 36]).: _Let \(k\geq 3\), \(\gamma>0\) and \(H\) be an \(n\)-vertex \(k\)-graph, where \(n\) is sufficiently large. If \(\delta_{k-2}(H)\geq(5/9+\gamma)\binom{n}{2}\), then \(H\) contains a Hamilton cycle._ A construction due to Han and Zhao [19] shows that the constant \(5/9\) appearing in the above theorem is optimal. For more background, we refer the reader to the recent surveys of Kühn and Osthus [25], Rödl and Ruciński [38], Simonovits and Szemerédi [45] and Zhao [46]. ### Rainbow settings in hypergraph systems A \(k\)-graph system \(\textbf{H}=\{H_{i}\}_{i\in[n]}\) is a family of not necessarily distinct \(k\)-graphs on the same \(n\)-vertex set \(V\); moreover, a \(k\)-graph \(H\) on \(V\) is rainbow if \(E(H)\subseteq\bigcup_{i\in[n]}E(H_{i})\) and \(|E(H)\cap E(H_{i})|\leq 1\) for \(i\in[n]\). Let \(|H|\) denote the size of the vertex set of \(H\). The study of rainbow structures in graph systems has attracted considerable attention. Aharoni, DeVos, Maza, Montejano, and Šámal [1] conjectured the following: for \(|V|=n\geq 3\) and an \(n\)-vertex graph system \(\textbf{G}=\{G_{i}\}_{i\in[n]}\) on \(V\), if \(\delta(G_{i})\geq n/2\) for each \(i\in[n]\), then there exists a rainbow Hamilton cycle with edge set \(\{e_{1},\ldots,e_{n}\}\) such that \(e_{i}\in E(G_{i})\) for \(i\in[n]\). This was recently verified asymptotically by Cheng, Wang and Zhao [9], and completely by Joos and Kim [21]. In [6], Bradshaw, Halasz, and Stacho strengthened the Joos-Kim result by showing that, given an \(n\)-vertex graph system \(\textbf{G}=\{G_{i}\}_{i\in[n]}\) with \(\delta(G_{i})\geq n/2\) for \(i\in[n]\), the system \(\textbf{G}\) has exponentially many rainbow Hamilton cycles. Similarly, a degree condition of Moon and Moser [34] for Hamiltonicity in bipartite graphs has been generalized to the rainbow setting by Bradshaw in [5]. Generally, for each graph \(F\), let \(\delta_{F}\) be the smallest real number \(\delta\geq 0\) such that for each \(\varepsilon>0\) there exists some \(n_{0}\) such that, for every \(n\geq n_{0}\) with \(|F|\) dividing \(n\), if an \(n\)-vertex graph \(G\) has minimum degree at least \((\delta+\varepsilon)n\), then \(G\) contains an \(F\)-factor. Cheng, Han, Wang, and Wang [7] proved that the minimum degree bound \(\delta_{K_{r}}\) is asymptotically sufficient for the existence of a rainbow \(K_{r}\)-factor in graph systems. Montgomery, Müyesser and Pehova [33] generalized this conclusion to graphs \(F\) satisfying \(\delta_{F}\geq 1/2\) or having a bridge. 
In hypergraph systems, Cheng, Han, Wang, Wang and Yang [8] proved that given \(k\geq 3,\gamma>0\), sufficiently large \(n\) and an \(n\)-vertex \(k\)-graph system \(\textbf{H}=\{H_{i}\}_{i\in[n]}\), if \(\delta_{k-1}(H_{i})\geq(1/2+\gamma)n\) for \(i\in[n]\), then there exists a rainbow tight Hamilton cycle. There are also some works on rainbow subgraphs, see [2, 14, 20, 23, 26, 29, 28, 30, 31, 7, 12]. Recently, Gupta, Hamann, Müyesser, Parczyk and Sgueglia [18] gave a unified approach to this problem. However, they mentioned that "there is a well-known (uncoloured) Dirac-type result whose rainbow version is missing" (given a \(3\)-graph system \(\textbf{H}=\{H_{i}\}_{i\in[n]}\) with a minimum vertex degree condition on each \(H_{i}\), does \(\textbf{H}\) admit a rainbow Hamilton cycle?) and that "it would be an interesting challenge to obtain this result". This problem hits a technical barrier. In this paper, we develop a new rainbow Hamilton framework, whose uncolored version was first established in [27], and obtain the following general result. **Theorem 1.3**.: _For every \(k\geq 3,\gamma>0\), there exists \(n_{0}\) such that the following holds for \(n\geq n_{0}\). Given a \(k\)-graph system \(\textbf{H}=\{H_{i}\}_{i\in[n]}\), if \(\delta_{k-2}(H_{i})\geq(5/9+\gamma)\binom{n}{2}\) for \(i\in[n]\), then \(\textbf{H}\) admits a rainbow Hamilton cycle._ ### Notation and preliminaries We call a hypergraph \(H\) a \((1,k)\)-graph if \(V(H)\) can be partitioned into \(V_{1}\) and \(V_{2}\) such that every edge contains exactly one vertex of \(V_{1}\) and \(k\) vertices of \(V_{2}\). Given a partition \(V(H)=V_{1}\cup V_{2}\), a \((1,d)\)-subset \(S\) of \(V(H)\) contains one vertex in \(V_{1}\) and \(d\) vertices in \(V_{2}\). Let \(\delta_{1,d}(H):=\min\{\deg_{H}(S):S\text{ is a }(1,d)\text{-subset of }V(H)\}\) for \(d\in[k-1]\). A \(k\)-partite graph is a graph whose vertices are (or can be) partitioned into \(k\) different independent sets. Let \(H\) be a \((k+1)\)-partite \((k+1)\)-graph with \(V(H)=V_{0}\cup V_{1}\cup\cdots\cup V_{k}\). A \((k+1)\)-uniform sequentially path \(P\) of _length_ \(t\) in \(H\) is a \((k+1)\)-graph with vertex set \(V(P)=C(P)\cup I(P)\), where \(C(P)=\{c_{1},\ldots,c_{t-k+1}\}\subseteq V_{0}\) and \(I(P)=\{v_{1},\ldots,v_{t}\}\subseteq V_{1}\cup\cdots\cup V_{k}\), and edge set \(\{e_{1},\ldots,e_{t-k+1}\}\) such that \(e_{i}=\{c_{i},v_{i},\ldots,v_{i+k-1}\}\) for \(i\in[t-k+1]\). Denote the length of \(P\) by \(\ell(P)\). We call \(c_{1},\ldots,c_{t-k+1}\) the _colors_ of \(P\) and \(v_{1},\ldots,v_{t}\) the _points_ of \(P\). For convenience, we use \((C(P),I(P))\) to denote the above sequentially path. Furthermore, if \((v_{1},\ldots,v_{t})\) is cyclically ordered, then we call this sequentially path a _sequentially cycle_. A \((k+1)\)-uniform sequentially walk is an ordered set of points together with an ordered set of colors such that the \(i\)-th set of \(k\) consecutive points, together with the \(i\)-th color, forms an edge. Note that the points, edges and colors in a sequentially walk are allowed to be repeated. The length of a sequentially walk is its number of points. Before giving the proof of Theorem 1.3, we introduce the following definitions, which are similar to those in [27]. 
**Definition 1.4** (Sequentially Hamilton cycle threshold).: _The minimum \((1,k-2)\)-degree threshold for sequentially Hamilton cycles, denoted by \(thc_{k-2}(k)\), is the smallest number \(\delta>0\) such that, for every \(\varepsilon>0\), there exists an \(n_{0}\in\mathbb{N}\) such that every \((1,k)\)-graph \(H\) on \([n]\cup V\) with \(|V|=n\geq n_{0}\) and minimum degree \(\delta_{1,k-2}(H)\geq(\delta+\varepsilon)\binom{n}{2}\) contains a sequentially Hamilton cycle._ **Definition 1.5** (Sequentially tight connectivity).: _A subgraph \(H^{\prime}\) of a \((1,k)\)-graph \(H\) is sequentially tightly connected if any two edges of \(H^{\prime}\) can be connected by a sequentially walk. A sequentially tight component of \(H\) is an edge-maximal sequentially tightly connected subgraph._ Given \(\mathbf{b}\): \(V(H)\rightarrow[0,1]\), we define a \(\mathbf{b}\)-_fractional matching_ to be a function \(\mathbf{w}\): \(E(H)\rightarrow[0,1]\) such that \(\sum_{e:v\in e}\mathbf{w}(e)\leq\mathbf{b}(v)\) for every vertex \(v\in V(H)\). Moreover, if equality holds for every vertex, then we call \(\mathbf{w}\) perfect. Denote the maximum size of a \(\mathbf{b}\)-fractional matching by \(\nu(H,\mathbf{b})=\max_{\mathbf{w}}\sum_{e\in E(H)}\mathbf{w}(e)\), where \(\mathbf{w}\) ranges over all \(\mathbf{b}\)-fractional matchings. It is well known that perfect matchings are closely related to their fractional counterparts. In particular, when \(\mathbf{b}\equiv 1\), a \(\mathbf{b}\)-_fractional matching_ is called a _fractional matching_. The _density_ of a \(\mathbf{b}\)-fractional matching is \(\sum_{e\in E(H)}\mathbf{w}(e)/|V(H)|\). Besides, we require the following characterization. Given a \(k\)-graph \(H\), we say that \(H\) is \(\gamma\)-_robustly matchable_ if the following holds: for every vertex weight \(\mathbf{b}\): \(V(H)\to[1-\gamma,1]\), there is an edge weight \(\mathbf{w}\): \(E(H)\to[0,1]\) with \(\sum_{e:v\in e}\mathbf{w}(e)=\mathbf{b}(v)/(k-1)\) for every vertex \(v\in V(H)\). Note that a \(\gamma\)-robustly matchable \(k\)-graph \(H\) admits a \(\mathbf{b}\)-fractional matching of size \(\sum_{v\in V(H)}\mathbf{b}(v)/k(k-1)\) for every vertex weighting \(\mathbf{b}\): \(V(H)\to[1-\gamma,1]\). The following definition plays an important role in our proof. **Definition 1.6** (Link graph).: _Consider a \((1,k)\)-graph \(H\) on \(V(H)=[n]\cup V\) where \(|V|=n\), and let \(S\) be a \((1,\ell)\)-subset of \(V(H)\). We define the link \((k-\ell)\)-graph of \(S\) in \(H\) as the graph \(L_{H}(S)\) with vertex set \(V\) and edge set \(\{X:X\cup S\in E(H)\}\) for \(\ell\in[0,k-1]\). If \(H\) is clear from the context, then we simply write \(L(S)\)._ Let \(H=(V,E)\) be a \(k\)-graph and \(V^{\prime}\subseteq V\); the _induced subgraph_ \(H[V^{\prime}]\) of \(H\) is the \(k\)-graph with vertex set \(V^{\prime}\) whose edges are precisely the edges of \(H\) consisting of \(k\) vertices in \(V^{\prime}\). We usually abbreviate \(H[V^{\prime}]\) as \(H^{\prime}\). **Definition 1.7** (Rainbow Hamilton framework).: _Let \(\alpha,\gamma,\delta\) be positive constants. Suppose \(R\) is a \((1,k)\)-graph on \([t]\cup V\) where \(|V|=t\). We call a subgraph \(H\) of \(R\) an \((\alpha,\gamma,\delta)\)-rainbow Hamilton framework if \(H\) has the following properties._ 1. \(H_{i}:=H[\{i\}\cup V]\) _is sequentially tightly connected for_ \(i\in[t]\)_,_ 2. \(H_{i}\) _contains a sequentially closed walk of length 1 mod_ \(k\) _for_ \(i\in[t]\)_,_ 3. 
\(H_{W_{i}}:=H[[t(i-1)/k+1,ti/k]\cup V]\) _is_ \(\gamma\)_-robustly matchable for_ \(i\in[k]\)_,_ 4. _For every color_ \(i\in[t]\)_, there are at least_ \((1-\alpha)t\) _points_ \(v\in V\) _such that_ \(\{i,v\}\) _has relative_ \((1,1)\)_-degree at least_ \(1-\delta+\gamma\)_,_ 5. \(L_{H}(\{i\})\) _and_ \(L_{H}(\{j\})\) _intersect in an edge for each_ \(i,j\in[t]\)_._ We write \(x\ll y\) to mean that for any \(y\in(0,1]\) there exists an \(x_{0}\in(0,1)\) such that for all \(x\leq x_{0}\) the subsequent statements hold. Hierarchies with more constants are defined similarly and are to be read from right to left. **Definition 1.8** (Rainbow Hamilton framework threshold).: _The minimum \((1,k-2)\)-degree threshold for \((1,k)\)-uniform rainbow Hamilton frameworks, denoted by \(rhf_{k-2}(k)\), is the smallest value of \(\delta\) such that the following holds._ _Suppose \(\varepsilon,\alpha,\gamma,\mu>0\) and \(t\in\mathbb{N}\) with \(1/t\ll\varepsilon\ll\alpha\ll\gamma\ll\mu\). If \(R\) is a \((1,k)\)-graph on \([t]\cup V\) where \(|V|=t\), with minimum relative \((1,k-2)\)-degree at least \(\delta+\mu\) and a set \(I\subseteq E(R)\) of at most \(\varepsilon t\binom{t}{k}\) perturbed edges, then \(R\) contains an \((\alpha,\gamma,\delta)\)-rainbow Hamilton framework \(H\) that avoids the edges of \(I\)._ We transform the problem of bounding the sequentially Hamilton cycle threshold into that of bounding the rainbow Hamilton framework threshold. **Theorem 1.9** (Framework Theorem).: _For \(k\geq 3\), we have \(thc_{k-2}(k)\leq rhf_{k-2}(k)\)._ Let the _shadow graph_ \(\partial_{j}(H)\) of a \((1,k)\)-graph \(H\) at level \(j\) be the \((1,j)\)-graph on \([n]\cup V\) whose edges are the \((1,j)\)-sets contained in the edges of \(H\), for \(j\in[k]\). **Definition 1.10** (Vicinity).: _Given a \((1,k)\)-graph \(R\) on \([t]\cup V\), we say that \(\mathcal{C}_{i}=\{C_{S}\subseteq L(S):S\in\partial_{k-2}(R)\text{ and }i\in S\}\) for each \(i\in[t]\) is a \((k-2)\)-vicinity. We define the \((1,k)\)-graph \(H\) generated by \(\mathcal{C}_{i}\) as the subgraph of \(R\) with vertex set \(V(H)=\{i\}\cup V\) and edge set_ \[E(H)=\bigcup_{i\in S,S\in\partial_{k-2}(R)}\{A\cup S:A\in C_{S}\}.\] Besides, we need the following structures. **Definition 1.11** (Switcher).: _A switcher in a graph \(G\) is an edge \(ab\) such that \(a\) and \(b\) share a common neighbor in \(G\)._ Note that a switcher together with a common neighbor of its endpoints forms a triangle. **Definition 1.12** (Arc).: _Let \(R_{i}\) be a \((1,k)\)-graph on \(\{i\}\cup V\) with \((k-2)\)-vicinity \(\mathcal{C}_{i}=\{C_{S}:S\in\partial_{k-2}(R_{i})\}\). We say that a \((1,k+1)\)-tuple \((i,v_{1},\dots,v_{k+1})\) is an arc for \(\mathcal{C}_{i}\) if the following holds._ * \(\{i,v_{1},\dots,v_{k-2}\}\in\partial_{k-2}(R_{i})\) _with_ \(\{v_{k-1},v_{k}\}\in C_{\{i,v_{1},\dots,v_{k-2}\}}\)_._ * \(\{i,v_{2},\dots,v_{k-1}\}\in\partial_{k-2}(R_{i})\) _with_ \(\{v_{k},v_{k+1}\}\in C_{\{i,v_{2},\dots,v_{k-1}\}}\)_._ **Definition 1.13** (Rainbow Hamilton vicinity).: _Let \(\gamma,\delta>0\). Suppose that \(R\) is a \((1,k)\)-graph on \([t]\cup V\), and let \(R_{i}:=R[\{i\}\cup V]\). We say that a family \(\mathcal{C}=\{\mathcal{C}_{i}:i\in[t]\}\) of \((k-2)\)-vicinities, where \(\mathcal{C}_{i}=\{C_{S}:S\in\partial_{k-2}(R_{i})\}\), is \((\gamma,\delta)\)-rainbow Hamilton if for any \(S,S^{\prime}\in\partial_{k-2}(R_{i})\) and \(T\in\partial_{k-2}(R_{j})\) where \(i\neq j\), the following hold._ 1. \(C_{S}\) _is tightly connected,_ 2. 
\(C_{S}\) _and_ \(C_{S^{\prime}}\) _intersect in an edge,_ 3. \(C_{S}\) _has a switcher and the vicinity_ \(\mathcal{C}_{i}\) _has an arc for_ \(i\in[t]\)_,_ 4. \(C_{S}\) _has a fractional matching of density_ \((1+1/k)(1/(k+1)+\gamma)\)_,_ 5. \(C_{S}\) _has edge density at least_ \(1-\delta+\gamma\)_,_ 6. \(C_{S}\) _and_ \(C_{T}\) _intersect in an edge._ **Definition 1.14** (Perturbed degree).: _Let \(\alpha,\delta>0\). We say that a \((1,k)\)-graph \(R\) has \(\alpha\)-perturbed minimum relative \((1,k-2)\)-degree at least \(\delta\) if the following hold for \(j\in[k-2]\)._ 1. _every edge of_ \(\partial_{j}(R)\) _has relative degree at least_ \(\delta\) _in_ \(R\)_,_ 2. \(\overline{\partial_{j}(R)}\) _has edge density at most_ \(\alpha\)_, where_ \(\overline{\partial_{j}(R)}\) _denotes the complement of_ \(\partial_{j}(R)\)_,_ 3. _each_ \((1,j-1)\)_-tuple of_ \(\partial_{j-1}(R)\) _has relative degree less than_ \(\alpha\) _in_ \(\overline{\partial_{j}(R)}\)_._ **Definition 1.15** (Rainbow Hamilton vicinity threshold).: _The minimum \((1,k-2)\)-degree threshold for \((1,k)\)-uniform rainbow Hamilton vicinities, denoted by \(rhv_{k-2}(k)\), is the smallest value \(\delta>0\) such that the following holds. Let \(\alpha,\gamma,\mu>0\), \(t\in\mathbb{N}\) with \(1/t\ll\alpha\ll\gamma\ll\mu\), and let \(R\) be a \((1,k)\)-graph on \([t]\cup V\). If each \(R_{i}:=R[\{i\}\cup V]\) has \(\alpha\)-perturbed minimum relative \((1,k-2)\)-degree at least \(\delta+\mu\) for \(i\in[t]\), then \(R\) admits a family of \((\gamma,\delta)\)-rainbow Hamilton \((k-2)\)-vicinities._ **Theorem 1.16** (Vicinity Theorem).: _For \(k\geq 3\), \(rhf_{k-2}(k)\leq rhv_{k-2}(k)\)._ Combining Theorem 1.16 with Theorem 1.9, it suffices to prove the following theorem in order to obtain Theorem 1.3. **Theorem 1.17**.: _For \(k\geq 3\), \(rhv_{k-2}(k)\leq 5/9\)._ We use the following concentration inequalities. **Proposition 1.18** (Chernoff's inequality [4]).: _Suppose that \(X\) has a binomial distribution and \(0<a<3/2\); then \(\Pr(|X-\mathbb{E}X|\geq a\mathbb{E}X)\leq 2e^{-a^{2}\mathbb{E}X/3}\)._ **Proposition 1.19** (McDiarmid's inequality [32]).: _Suppose \(X_{1},\ldots,X_{m}\) are independent Bernoulli random variables and \(b_{i}\in[0,B]\) for \(i\in[m]\). Suppose that \(X\) is a real-valued random variable determined by \(X_{1},\ldots,X_{m}\) such that altering the value of \(X_{i}\) changes \(X\) by at most \(b_{i}\) for \(i\in[m]\). For all \(\lambda>0\), we have_ \[\Pr(|X-\mathbb{E}X|>\lambda)\leq 2\exp\left(\frac{-2\lambda^{2}}{B\sum_{i=1}^{m}b_{i}}\right).\] ### Organisation of the paper The paper is organised as follows. In Section 2, we show how a rainbow Hamilton vicinity yields a rainbow Hamilton framework. In Section 3, we show that the minimum degree condition guarantees a rainbow Hamilton vicinity. We review the hypergraph regularity method in Section 4. In Section 5, the proof of Theorem 1.9 is obtained via the absorption method and an almost cover lemma, whose details are given in Sections 6 and 7 respectively. We conclude the paper with a discussion in Section 8. For the proofs of the absorption lemma and the almost cover lemma, we develop a new rainbow Hamilton framework; it would be of great interest to tackle the rainbow Hamilton cycle embedding problem under other conditions. In the proof of the absorption lemma, a method widely popularised by Rödl, Ruciński and Szemerédi [41], our main innovation is a scheme for absorbing a color set and a point set simultaneously. 
An absorber can be divided into two parts, one for the color set and the other for the point set. The almost cover lemma is obtained via the regularity method. However, connecting the end-pairs of paths arising in the proof requires more involved changes: the traditional connecting lemma asserts that for every pair of disjoint pairs of vertices there exists a relatively short tight path between them, but there might be pairs of vertices that are not contained in any hyperedge at all, as the following example shows. Consider a \(3\)-graph system \(\boldsymbol{H}=\{H_{i}\}_{i\in[n]}\) with \(V(H_{i})=V=X\cup Y\), where \(|X|<\frac{1}{3}n\), in which each \(H_{i}\) has edge set \(E=\{e\in V^{(3)}:|X\cap e|\neq 2\}\). It is easy to check that this \(3\)-graph system satisfies the degree condition in Theorem 1.3 when \(|X|\) is sufficiently smaller than \(n/3\) (a vertex in \(X\) has degree \(\binom{|X|-1}{2}+\binom{n-|X|}{2}\), and a vertex in \(Y\) has even larger degree), but every tight path starting with a pair of vertices in \(X\) is bound to stay in \(X\). We overcome this obstacle in Section 6.
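As a quick numerical sanity check of this construction (our own illustration; the function and parameter names are hypothetical), the snippet below evaluates its minimum vertex degree:

```python
from math import comb

def min_vertex_degree(n, x):
    """Minimum vertex degree of the 3-graph with edge set {e : |X ∩ e| != 2},
    where |X| = x. For v in X, the other two vertices must lie both inside or
    both outside X; for v outside X, only pairs lying entirely in X are forbidden."""
    deg_in_X = comb(x - 1, 2) + comb(n - x, 2)
    deg_outside_X = comb(n - 1, 2) - comb(x, 2)
    return min(deg_in_X, deg_outside_X)

n = 3000
x = n // 3 - 30  # |X| slightly below n/3
print(min_vertex_degree(n, x) / comb(n, 2))  # about 0.562 > 5/9 ≈ 0.5556
```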
By Lemma 2.3, we obtain a vertex spanning subgraph \(R^{\prime}_{i}\subseteq R_{i}-I\) of \(\alpha\)-perturbed minimum relative \((1,k-2)\)-degree at least \(\delta+\mu\). By Definition 1.15, \(R^{\prime}:=\bigcup_{i\in[t]}R^{\prime}_{i}\) has a family of \((2\gamma,\delta)\)-rainbow Hamilton \((k-2)\)-vicinities \(\mathcal{C}=\{\mathcal{C}_{i}:i\in[t]\}\) where \(\mathcal{C}_{i}=\{C_{S}:S\in\partial_{k-2}(R_{i})\}\). Each \(\mathcal{C}_{i}\) generates a \((1,k)\)-graph \(G_{i}\). Let \(H=\bigcup_{i\in[t]}G_{i}\). Note that \(G_{i}\) does not contain the edges of \(I\) and \(V(G_{i})=V(R^{\prime}_{i})\). By Lemmas 2.1 and 2.2, \(H\) also satisfies (F1)-(F3). For \(k\geq 4\), by repeatedly applying Definition 1.14, we deduce that all but at most \(\alpha t\) of the \((1,1)\)-sets of \(V(R^{\prime}_{i})\) are contained in at least \((1-2\alpha)^{k-3}\binom{|V^{\prime}|-1}{k-3}\geq(1-2(k-3)\alpha)\binom{|V^{\prime}|-1}{k-3}\) many \((1,k-2)\)-sets in \(\partial_{k-2}(R^{\prime}_{i})\). Note that \(\partial_{k-2}(R^{\prime}_{i})=\partial_{k-2}(G_{i})\). This implies that all but at most \(\alpha t\) of the \((1,1)\)-tuples of \(V(G_{i})\) have relative degree at least \(1-2(k-3)\alpha\) in \(\partial_{k-2}(G_{i})\). Moreover, every \((1,k-2)\)-set in \(\partial_{k-2}(G_{i})\) has relative degree at least \(1-\delta+2\gamma\) in \(G_{i}\), since \(G_{i}\) is generated from a \((2\gamma,\delta)\)-rainbow Hamilton \((k-2)\)-vicinity; see Definition 1.13. Thus, we obtain that for each color \(i\in[t]\), there are at least \((1-\alpha)t\) points \(v\in V\) such that \(\{i,v\}\) has relative \((1,1)\)-degree at least \(1-\delta+\gamma\), which implies (F4) for \(k\geq 4\). For \(k=3\), by Definition 1.13, every \((1,1)\)-set has relative degree at least \(1-\delta+2\gamma\) in \(G_{i}\), which implies (F4) for \(k=3\). Besides, it is obvious that (V6) implies (F5). Hence we obtain an \((\alpha,\gamma,\delta)\)-framework, as desired.

### The Proof of Lemma 2.1

We define a _directed edge_ in a \(k\)-graph to be a \(k\)-tuple whose vertices correspond to an underlying edge. Note that the directed edges \((a,b,c)\) and \((b,c,a)\) correspond to the same underlying edge \(\{a,b,c\}\). Given a \(k\)-graph system \(\textbf{H}=\{H_{i}\}_{i\in[n]}\) on vertex set \(V\), we consider the hypergraph \(H\) with vertex set \([n]\cup V\) and edge set \(\{\{i\}\cup e:e\in E(H_{i}),i\in[n]\}\). Define a directed edge to be a \((1,k)\)-tuple \((i,v_{1},\ldots,v_{k})\) with \(k\) points corresponding to an underlying edge \(\{v_{1},\ldots,v_{k}\}\) in \(H_{i}\). Given a \(k\)-tuple \(\overrightarrow{S}:=(v_{1},\ldots,v_{k})\), abbreviated as \(v_{1}\cdots v_{k}\), we use \(\overrightarrow{S}\subseteq V\) to mean that the corresponding \(k\)-set of \(\overrightarrow{S}\) is a subset of \(V\). Similarly, given a family \(F\) of \(k\)-sets and a \(k\)-tuple \(\overrightarrow{S}\), we use \(\overrightarrow{S}\in F\) to denote that the corresponding \(k\)-set of \(\overrightarrow{S}\) is an element of \(F\). For \(\overrightarrow{S}=(v_{1},\ldots,v_{k})\) and \(i\in[k]\), we write \(\overrightarrow{S}\setminus\{v_{i}\}\) for the \((k-1)\)-tuple \((v_{1},\ldots,v_{i-1},v_{i+1},\ldots,v_{k})\), and \(\{v^{\prime}_{i}\}\cup\overrightarrow{S}\setminus\{v_{i}\}\) for the \(k\)-tuple \((v_{1},\ldots,v_{i-1},v^{\prime}_{i},v_{i+1},\ldots,v_{k})\).
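To fix ideas, the following minimal Python sketch implements these tuple operations (the helper names are ours and purely illustrative; they are not part of the formal development):

```python
# A minimal sketch of the tuple operations just defined (illustration only).
# Tuples are 1-indexed, matching the text.

def remove_entry(S, i):
    """Return the (k-1)-tuple obtained from S by deleting its i-th entry."""
    return S[:i - 1] + S[i:]

def replace_entry(S, i, w):
    """Return the k-tuple obtained from S by replacing its i-th entry with w."""
    return S[:i - 1] + (w,) + S[i:]

S = ("v1", "v2", "v3")                        # a directed edge for k = 3
assert remove_entry(S, 2) == ("v1", "v3")     # realises S \ {v_2}
assert replace_entry(S, 2, "u") == ("v1", "u", "v3")
# Distinct directed edges such as (v1, v2, v3) and (v2, v3, v1) correspond
# to the same underlying edge:
assert set(S) == set(("v2", "v3", "v1"))
```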
**Definition 2.4** (Strong Connectivity).: _A hypergraph is called strongly connected if any two directed edges lie on a common sequentially walk._

**Claim 2.5**.: _If \(G\) is a tightly connected graph, then \(G\) is strongly connected._

Proof.: Let \(ab\) be a switcher in \(G\); by Definition 1.11, \(a\) and \(b\) share a neighbor \(c\). If we can prove that \((a,b)\) and \((b,a)\) lie on a common walk \(W\), then \(G\) is strongly connected: for any two directed edges \(D_{1}\) and \(D_{2}\) of \(G\), there are walks \(W_{1}\) and \(W_{2}\) starting from \(D_{1}\) and \(D_{2}\) respectively and ending with \(\{a,b\}\), and \(W_{1}WW_{2}\) is a tight walk starting from \(D_{1}\) and ending with \(D_{2}\). Meanwhile, it is easy to see that \(aba\) is a tight walk from \((a,b)\) to \((b,a)\), as desired.

Next, we want to show that switchers can control the length of sequentially walks.

**Proposition 2.6**.: _If \(G\) is a tightly connected graph containing a switcher, then \(G\) has a closed tight walk of odd length._

**Proposition 2.7**.: _Let \(R\) be a \((1,k)\)-graph with a subgraph \(H\) which is generated by \(\mathcal{C}_{i}\). Suppose that \(\mathcal{C}_{i}\) satisfies the conditions of Lemma 2.1. Then for any \((1,k-2)\)-tuple \(\overrightarrow{S}\in\partial_{k-2}(H)\) and any two directed edges \(D_{1},D_{2}\in C_{\overrightarrow{S}}\), there exists a sequentially walk \(W\) of length 0 mod \(k\) in \(H\) starting from \(\overrightarrow{S}D_{1}\) and ending with \(\overrightarrow{S}D_{2}\)._

Proof.: Let \(\mathcal{C}_{i}=\{C_{\overrightarrow{S}}:\overrightarrow{S}\in\partial_{k-2}(R)\text{ and }i\in\overrightarrow{S}\}\) and \(\overrightarrow{S}=\{i\}\cup\overrightarrow{S}^{\prime}\) where \(\overrightarrow{S}^{\prime}\) is a \((k-2)\)-tuple. By Proposition 2.6, there is a closed tight walk \(W_{1}\) of odd length in \(C_{\overrightarrow{S}}\). By Claim 2.5, there is a tight walk \(W_{2}\) starting from \(D_{1}\), ending with \(D_{2}\) and containing \(W_{1}\) as a subwalk. Let \(\ell(W_{2})=p\). We obtain \(W_{3}\) from \(W_{2}\) by replacing \(W_{1}\) with the concatenation of \(p+1\) mod 2 copies of \(W_{1}\). Hence, \(W_{3}\) is a tight walk of even length in \(C_{\overrightarrow{S}}\) starting from \(D_{1}\) and ending with \(D_{2}\). Suppose that \(W_{3}=(a_{1},a_{2},\ldots,a_{2m})\); then \(D_{1}=(a_{1},a_{2})\) and \(D_{2}=(a_{2m-1},a_{2m})\). Note that \((i\ldots i,\overrightarrow{S}^{\prime}a_{1}a_{2}\overrightarrow{S}^{\prime}a_{3}a_{4}\cdots\overrightarrow{S}^{\prime}a_{2m-1}a_{2m})\) is a sequentially walk in \(H\). Moreover, it has length 0 mod \(k\), as desired.

**Proposition 2.8**.: _Let \(R\) be a \((1,k)\)-graph with a subgraph \(H\) that is generated by \(\mathcal{C}_{i}\). Suppose that \(\mathcal{C}_{i}\) satisfies the conditions of Lemma 2.1, and consider \((1,k-2)\)-tuples \(\overrightarrow{S},\overrightarrow{T}\in\partial_{k-2}(H)\) and directed edges \(D_{1}\in C_{\overrightarrow{S}}\), \(D_{2}\in C_{\overrightarrow{T}}\). If \(\overrightarrow{S}\) and \(\overrightarrow{T}\) differ in exactly one coordinate, then there is a sequentially walk of length 0 mod \(k\) in \(H\) starting from \(\overrightarrow{S}D_{1}\) and ending with \(\overrightarrow{T}D_{2}\)._

Proof.: Let \(\overrightarrow{S}=(i,v_{1}\ldots v_{i}\ldots v_{k-2})\) and \(\overrightarrow{T}=(i,v_{1}\ldots u_{i}\ldots v_{k-2})\) where \(u_{i}\neq v_{i}\).
By Definition 1.13, there is a directed edge \(D_{3}\) in \(C_{\overrightarrow{S}}\cap C_{\overrightarrow{T}}\), and thus \((ii,\overrightarrow{S}\setminus\{i\}D_{3}\overrightarrow{T}\setminus\{i\})\) is a sequentially walk in \(H\). By Proposition 2.7, there is a sequentially walk \(W_{1}\) of length 0 mod \(k\) starting from \(\overrightarrow{S}D_{1}\) and ending with \(\overrightarrow{S}D_{3}\), and a sequentially walk \(W_{2}\) of length 0 mod \(k\) starting from \(\overrightarrow{T}D_{3}\) and ending with \(\overrightarrow{T}D_{2}\). Then \((C(W_{1})C(W_{2}),I(W_{1})I(W_{2}))\) is the desired walk.

**Proposition 2.9**.: _Let \(R\) be a \((1,k)\)-graph with a subgraph \(H\) that is generated by \(\mathcal{C}_{i}\). Suppose that \(\mathcal{C}_{i}\) satisfies the conditions of Lemma 2.1, and consider \((1,k-2)\)-tuples \(\overrightarrow{S},\overrightarrow{T}\in\partial_{k-2}(H)\) and directed edges \(D_{1}\in C_{\overrightarrow{S}}\), \(D_{2}\in C_{\overrightarrow{T}}\). Then there is a sequentially walk of length 0 mod \(k\) in \(H\) starting from \(\overrightarrow{S}D_{1}\) and ending with \(\overrightarrow{T}D_{2}\)._

Proof.: Let \(r\in[k-2]\) be the number of coordinates in which \(\overrightarrow{S}\) and \(\overrightarrow{T}\) differ. If \(r=1\), the result follows from Proposition 2.8. Suppose the result is known for \(r-1\). By Definition 1.13, there exists an edge \(pq\) in \(C_{\overrightarrow{S}}\cap C_{\overrightarrow{T}}\). Suppose that \(\overrightarrow{S}\) and \(\overrightarrow{T}\) differ in the \(i\)th coordinate; replacing this coordinate with \(p\) in each, we obtain \(\overrightarrow{S}^{\prime}\) and \(\overrightarrow{T}^{\prime}\). Note that \(\overrightarrow{S}^{\prime},\overrightarrow{T}^{\prime}\in\partial_{k-2}(H)\). Choose \(D_{1}^{\prime}\in C_{\overrightarrow{S}^{\prime}}\). By Proposition 2.8, there is a sequentially walk \(W_{1}\) of length 0 mod \(k\) from \(\overrightarrow{S}D_{1}\) to \(\overrightarrow{S}^{\prime}D_{1}^{\prime}\); similarly, there is a sequentially walk \(W_{3}\) of length 0 mod \(k\) from \(\overrightarrow{T}^{\prime}D_{2}^{\prime}\) to \(\overrightarrow{T}D_{2}\) where \(D_{2}^{\prime}\in C_{\overrightarrow{T}^{\prime}}\). By induction, there is a sequentially walk \(W_{2}\) from \(\overrightarrow{S}^{\prime}D_{1}^{\prime}\) to \(\overrightarrow{T}^{\prime}D_{2}^{\prime}\) of length 0 mod \(k\). Thus, \((C(W_{1})C(W_{2})C(W_{3}),I(W_{1})I(W_{2})I(W_{3}))\) is the desired walk.

_The proof of Lemma 2.1._ Consider any two edges \(X\) and \(Y\) of \(H\). Since \(H\) is generated by \(\mathcal{C}_{i}\), we may write \(X=S\cup A\) and \(Y=T\cup B\) where \(A\in C_{S}\) and \(B\in C_{T}\). The desired walk can be obtained from Proposition 2.9. Next, we need to show that \(H\) contains a closed walk of length 1 mod \(k\). Since \(\mathcal{C}_{i}\) admits an arc \(\{i,v_{1},\ldots,v_{k+1}\}\), by Proposition 2.9, there is a sequentially walk \(W\) of length 0 mod \(k\) from \(\{i,v_{2},\ldots,v_{k+1}\}\) to \(\{i,v_{1},\ldots,v_{k}\}\). Thus, \((C(W)i,I(W)v_{k+1})\) is a closed walk of length 1 mod \(k\).

### The proof of Lemma 2.2

In this subsection, we give the details of the proof of Lemma 2.2. The following claim appears in [27]; we use a corollary of it in this paper.

**Claim 2.10**.: [27] _Let \(H\) be a \(k\)-graph and \(\boldsymbol{b}:V(H)\to[0,1]\).
Suppose that there exists \(m\leq\sum_{v\in V(H)}\boldsymbol{b}(v)/k\) such that for every \(v\in V(H)\), the link graph \(L_{H}(\{v\})\) has a \(\boldsymbol{b}\)-fractional matching of size \(m\). Then \(H\) has a \(\boldsymbol{b}\)-fractional matching of size \(m\)._

**Corollary 2.1**.: _Let \(H\) be a \(k\)-graph, \(\alpha\in[0,1)\) and \(\boldsymbol{b}:V(H)\to[0,1]\). Suppose that \(H\) has at most \(\alpha|V(H)|\) isolated vertices and that there exists \(m\leq\sum_{v\in V(H)}\boldsymbol{b}(v)/k\) such that for every non-isolated vertex \(v\), the link graph \(L_{H}(\{v\})\) has a \(\boldsymbol{b}\)-fractional matching of size \(m\). Then \(H\) has a \(\boldsymbol{b}\)-fractional matching of size \(m\)._

Proof.: We first delete the isolated vertices of \(H\) and obtain a subgraph \(H^{\prime}\) of \(H\). Thus, \(L_{H^{\prime}}(\{v\})\) has a \(\boldsymbol{b}\)-fractional matching of size \(m\) for every vertex \(v\) of \(H^{\prime}\). By Claim 2.10, we obtain that \(H^{\prime}\) has a \(\boldsymbol{b}\)-fractional matching \(\boldsymbol{w}\) of size \(m\). Assign a weight \(\boldsymbol{b}^{\prime}(u)\in[0,1]\) to each isolated vertex \(u\) of \(H\) and set \(\boldsymbol{b}^{\prime}(v)=\boldsymbol{b}(v)\) for each non-isolated vertex \(v\) of \(H\); it is obvious that \(H\) has a \(\boldsymbol{b}^{\prime}\)-fractional matching \(\boldsymbol{w}\) of size \(m\), since \(\sum_{e\ni u}\boldsymbol{w}(e)=0\) for any isolated vertex \(u\) and \(E(H^{\prime})=E(H)\).

**Proposition 2.11**.: _Let \(R\) be a \((1,k)\)-graph on \([n/k]\cup V\) where \(|V|=n\), \(\gamma>0\), \(\alpha\in[0,1)\) and \(\boldsymbol{b}:[n/k]\cup V\to[1-\gamma,1]\). Suppose that there exists \(m\leq\sum_{v\in V(R)}\boldsymbol{b}(v)/(k+1)\) such that for each \(c\in[n/k]\) and all but at most \(\alpha n\) vertices \(v\in V\), the link graph \(L_{R}(\{c,v\})\) has a \(\boldsymbol{b}\)-fractional matching of size \(m\). Then \(R\) has a \(\boldsymbol{b}\)-fractional matching of size \(m/k\)._

Proof.: By Corollary 2.1 applied with \(H=L_{R}(\{c\})\) for \(c\in[n/k]\), we obtain that \(L_{R}(\{c\})\) has a \(\mathbf{b}\)-fractional matching of size \(m\) for each \(c\in[n/k]\). Next, we construct a \(\mathbf{b}\)-fractional matching of size \(m/k\) for \(R\). Let \(\mathbf{w}_{c}:E(L_{R}(\{c\}))\to[0,1]\) be such that \(\sum_{v\in e,e\in L_{R}(\{c\})}\mathbf{w}_{c}(e)\leq\mathbf{b}(v)\) and \(\sum_{e\in L_{R}(\{c\})}\mathbf{w}_{c}(e)=m\). Let \(\mathbf{w}(f)=\frac{1}{n}\mathbf{w}_{c}(e)\) for \(e\in L_{R}(\{c\})\) and \(f=e\cup\{c\}\), \(c\in[n/k]\). Thus, we have \(\sum_{f\in E(R)}\mathbf{w}(f)=\sum_{c\in[n/k]}\sum_{e\in L_{R}(\{c\})}\frac{1}{n}\mathbf{w}_{c}(e)=\frac{m}{k}\). It is easy to see that \(\sum_{c\in f}\mathbf{w}(f)=\sum_{e\in L_{R}(\{c\})}\frac{1}{n}\mathbf{w}_{c}(e)=\frac{m}{n}\leq\frac{1}{k}\leq\mathbf{b}(c)\). Moreover, \(\sum_{v\in f}\mathbf{w}(f)=\sum_{c\in[n/k]}\sum_{v\in e,e\in L_{R}(\{c\})}\frac{1}{n}\mathbf{w}_{c}(e)\leq\sum_{c\in[n/k]}\frac{1}{n}\mathbf{b}(v)=\mathbf{b}(v)/k\leq\mathbf{b}(v)\) for \(v\in V\), as desired.

We use the following results of [27] directly.

**Proposition 2.12**.: _[_27_]_ _Let \(H\) be a \(k\)-graph and \(m\leq v(H)/k\). If for every vertex \(v\) of \(V(H)\), \(L_{H}(\{v\})\) has a fractional matching of size \(m\), then \(H\) has a fractional matching of size \(m\)._

**Proposition 2.13**.: _Let \(d\in[k-2]\), \(k\geq 3\) and \(\alpha,\gamma,\delta>0\) be such that \(\alpha,\gamma\ll 1/k\). Let \(R\) be a \((1,k)\)-graph on \([t]\cup V\) with \(\alpha\)-perturbed minimum relative \((1,k-2)\)-degree at least \(\delta\) where \(|V|=t\).
If for every \(S\in\partial_{d}(R)\), the link graph \(L(S)\) contains a fractional matching of size at least \((1+1/k)(1/(k+1)+\gamma)t\), then for every edge \(S^{\prime}\in\partial_{1}(R)\), the link graph \(L(S^{\prime})\) contains a fractional matching of size at least \((1+1/k)(1/(k+1)+\gamma)t\)._

Proof.: We prove it by induction on \(d\). Note that the base case \(d=1\) is obvious. Suppose that, for a given \(d\in[2,k-2]\), the conclusion holds for all \(d^{\prime}<d\). Let \(S\subseteq V(R)\) be a \((1,d-1)\)-set in \(\partial_{d-1}(R)\). Consider any vertex \(s^{\prime}\) in \(\partial_{1}(L_{R}(S))\); then \(S\cup\{s^{\prime}\}\) is an edge in \(\partial_{d}(R)\). By assumption, \(L_{R}(S\cup\{s^{\prime}\})\) has a fractional matching of size at least \((1+1/k)(1/(k+1)+\gamma)t\); thus \(L_{R^{\prime}}(\{s^{\prime}\})\) contains a fractional matching of size at least \((1+1/k)(1/(k+1)+\gamma)t\) for any vertex \(s^{\prime}\) of \(V\), where \(R^{\prime}\) is the subgraph of \(L_{R}(S)\) induced on the non-isolated vertices of \(L_{R}(S)\). By Definition 1.14, \(S\) has at most \(\alpha t\) neighbors in \(\overline{\partial_{d}(R)}\). It follows that \(v(R^{\prime})=|\partial_{1}(L_{R}(S))|\geq(1-\alpha)t\) and \((1+1/k)(1/(k+1)+\gamma)t\leq v(R^{\prime})/(k-d+1)\) since \(\alpha,\gamma\ll 1/k\). By Proposition 2.12, applied with the condition that \(L_{R^{\prime}}(\{s^{\prime}\})\) contains a fractional matching of size at least \((1+1/k)(1/(k+1)+\gamma)t\) for any vertex \(s^{\prime}\) of \(V\), we obtain that \(R^{\prime}\) (and thus \(L_{R}(S)\)) contains a fractional matching of size \((1+1/k)(1/(k+1)+\gamma)t\). Since \(S\) was arbitrary, for any \(S\in\partial_{d-1}(R)\), \(L_{R}(S)\) contains a fractional matching of size \((1+1/k)(1/(k+1)+\gamma)t\). Hence, we are done by the induction hypothesis.

The proof of Lemma 2.2.: Suppose that \(V(H)=[t/k]\cup V^{\prime}\) where \(|V^{\prime}|=t\). By assumption, \(C_{S}\) contains a fractional matching of size \((1+1/k)(1/(k+1)+\gamma)t\) for every \(S\in\partial_{k-2}(H)\), and \(C_{S}\) is a subgraph of \(L_{H}(S)\). By Proposition 2.13, \(L_{H}(\{i,v\})\) contains a fractional matching of size \((1+1/k)(1/(k+1)+\gamma)t\) for every \(\{i,v\}\in\partial_{1}(H)\). We want to show that \(H\) is \(\gamma\)-robustly matchable. Given a vertex weight \(\mathbf{b}:[t/k]\cup V^{\prime}\to[1-\gamma,1]\), we have to find a \(\mathbf{b}\)-fractional matching \(\mathbf{w}\) such that \(\sum_{e\ni v}\mathbf{w}(e)=\mathbf{b}(v)/k\) for any vertex \(v\in V(H)\). That is, we need to find a \(\mathbf{b}\)-fractional matching of size \(\sum_{v\in V(H)}\mathbf{b}(v)/k(k+1)\). Given \(i\in[t/k]\), there are at most \(\alpha t\) isolated \((1,1)\)-tuples by Definition 1.14. For any non-isolated \((1,1)\)-tuple \((i,v)\) of \(V(H)\), let \(\mathbf{x}\) be a fractional matching in \(L_{H}(\{i,v\})\) of size at least \((1+1/k)(1/(k+1)+\gamma)t\) and let \(\mathbf{w}^{\prime}=(1-\gamma)\mathbf{x}\); since \(1-\gamma\leq\mathbf{b}(v)\) for any \(v\in V(H)\), \(\mathbf{w}^{\prime}\) is a \(\mathbf{b}\)-fractional matching in \(L_{H}(\{i,v\})\). Moreover, \(\mathbf{w}^{\prime}\) has size at least \((1-\gamma)(1+1/k)(1/(k+1)+\gamma)t\geq(1+1/k)t/(k+1)\geq\sum_{v\in V(H)}\mathbf{b}(v)/(k+1)\) since \(1/t\ll\gamma\ll 1/k\). We may assume that \(\mathbf{w}^{\prime}\) has size exactly \(\sum_{v\in V(H)}\mathbf{b}(v)/(k+1)\).
By Proposition 2.11, we obtain that \(H\) has a \(\mathbf{b}\)-fractional matching of size \(\sum_{v\in V(H)}\mathbf{b}(v)/k(k+1)\), as desired.

### The Proof of Lemma 2.3

We use the following claim directly, which can be found in [27].

**Claim 2.14**.: _[_27_]_ _Let \(t,d,k\) be integers with \(d\in[k-1]\) and \(\delta,\varepsilon,\alpha>0\) with \(1/t\ll\varepsilon\ll\alpha\leq\delta,1/k\). Let \(R\) be a \(k\)-graph on \(t\) vertices with minimum relative \(d\)-degree \(\overline{\delta}_{d}(R)\geq\delta\). Let \(I\) be a subgraph of \(R\) of edge density at most \(\varepsilon\). Then there exists a vertex spanning subgraph \(R^{\prime}\subseteq R-I\) of \(\alpha\)-perturbed minimum relative \(d\)-degree at least \(\delta-\alpha\)._

The \((1,k)\)-graph \(R_{i}\) on \(\{i\}\cup V\) with minimum relative \((1,k-2)\)-degree at least \(\delta\) is equivalent to a \(k\)-graph \(R^{\prime}_{i}\) on \(V\) with minimum relative \((k-2)\)-degree at least \(\delta\). Thus, by Claim 2.14, we obtain Lemma 2.3.

## 3. Obtaining vicinity

In this section, we determine the \((k-2)\)-vicinity threshold of \((1,k)\)-graphs. Lovasz's formulation of the Kruskal-Katona theorem states that, for any \(x>0\), if \(G\) is a \(k\)-graph with \(e(G)\geq\binom{x}{k}\) edges, then \(e_{j}(G)\geq\binom{x}{j}\) for every \(j\in[k]\) (Theorem 2.14 in [15]). By approximating the binomial coefficients, the authors of [27] deduce the following variant.

**Lemma 3.1** (Kruskal-Katona theorem).: _[_27_]_ _Let \(1/t\ll\varepsilon\ll 1/k\) and let \(G\) be a graph on \(t\) vertices with edge density \(\delta\). Then \(\partial(G)\) has at least \((\delta^{1/2}-\varepsilon)t\) vertices._

**Proposition 3.2**.: _Let \(t\in\mathbb{N}\) and \(\varepsilon,\mu,\delta>0\) with \(1/t\ll\varepsilon\ll\delta\) and \(\delta+\delta^{1/2}>1+\varepsilon\). Let \(R_{i}\) be a \((1,k)\)-graph on \(\{i\}\cup V\) where \(|V|=t\) with a subgraph that is generated by a \((k-2)\)-vicinity \(\mathcal{C}_{i}\). Suppose that each \(C_{S}\in\mathcal{C}_{i}\) has edge density at least \(\delta+\mu\). Then \(\mathcal{C}_{i}\) admits an arc._

Proof.: Consider an arbitrary set \(S=\{i,v_{1},\ldots,v_{k-2}\}\in\partial_{k-2}(R_{i})\). By averaging, there is a vertex \(v_{k-1}\) with relative vertex degree at least \(\delta\) in \(C_{S}\). Set \(S^{\prime}=\{i,v_{2},\ldots,v_{k-1}\}\); then \(S^{\prime}\in\partial_{k-2}(R_{i})\). Thus, \(C_{S^{\prime}}-\{v_{1}\}\) has edge density at least \(\delta+\mu/2\). By Lemma 3.1, \(\partial(C_{S^{\prime}}-\{v_{1}\})\) has at least \((\delta^{1/2}-\varepsilon)t\) vertices. By the choice of \(v_{k-1}\) and the pigeonhole principle, \(\partial(C_{S^{\prime}}-\{v_{1}\})\) and \(L(\{i,v_{1},\ldots,v_{k-1}\})\) must share a common vertex \(v_{k}\). Since \(v_{k}\in\partial(C_{S^{\prime}}-\{v_{1}\})\), there is another vertex \(v_{k+1}\) such that \(\{v_{k},v_{k+1}\}\in C_{S^{\prime}}-\{v_{1}\}\). Thus, \(\{i,v_{1},\ldots,v_{k+1}\}\) is an arc.

We use the following result of [27].

**Lemma 3.3**.: _[_27_]_ _Let \(1/t\ll\gamma\ll\mu\), and suppose that \(L_{1}\) and \(L_{2}\) are graphs on a common vertex set of size \(t\) such that each of \(L_{1}\) and \(L_{2}\) has edge density at least \(5/9+\mu\). For \(i\in[2]\), let \(C_{i}\) be a tight component of \(L_{i}\) with a maximum number of edges. We have_ 1. \(C_{1}\) _and_ \(C_{2}\) _have an edge in common,_ 2. \(C_{i}\) _has a switcher for_ \(i\in[2]\)_,_
3. \(C_{i}\) _has a fractional matching of density_ \(1/3+\gamma\) _for_ \(i\in[2]\)_,_ 4. \(C_{i}\) _has edge density at least_ \(4/9+\gamma\) _for_ \(i\in[2]\)_._

_The proof of Theorem 1.17._ Let \(\alpha,\gamma,\mu>0\) with

\[1/t\ll\alpha\ll\gamma\ll\mu\ll 5/9.\]

Consider a \((1,k)\)-graph \(R\) on \([t]\cup V\) where \(|V|=t\) and each \(R_{i}:=R[\{i\}\cup V]\) has \(\alpha\)-perturbed minimum relative \((1,k-2)\)-degree at least \(5/9+\mu\). For every \(S\in\partial_{k-2}(R)\), let \(C_{S}\) be a tight component of \(L(S)\) with a maximum number of edges and let \(\mathcal{C}_{i}=\{C_{S}:S\in\partial_{k-2}(R)\) and \(i\in S\}\). By the choice of \(C_{S}\), (V1) holds obviously. By Lemma 3.3, \(\mathcal{C}_{i}\) satisfies (V2), (V4), (V5) and (V6), and every \(C_{S}\in\mathcal{C}_{i}\) contains a switcher. By Proposition 3.2, \(\mathcal{C}_{i}\) contains an arc, since \(4/9+(4/9)^{1/2}=1+1/9>1\); thus \(\mathcal{C}=\{\mathcal{C}_{i}:i\in[t]\}\) satisfies (V3), as desired.

## 4. Tools

### Regular Complexes

A hypergraph \(H=(V,E)\) is a _complex_ if its edge set is down-closed, meaning that whenever \(e\in E\) and \(e^{\prime}\subseteq e\), we have \(e^{\prime}\in E\). A \(k\)-complex is a complex where all edges have size at most \(k\). Given a complex \(H\), we use \(H^{(i)}\) to denote the \(i\)-graph obtained by taking all vertices of \(H\) and all edges of size \(i\). Denote the number of edges of size \(i\) in \(H\) by \(e_{i}(H)\). Let \(\mathcal{P}\) partition a vertex set \(V\) into parts \(V_{1},\ldots,V_{s}\). Then we say that a subset \(S\subseteq V\) is \(\mathcal{P}\)_-partite_ if \(|S\cap V_{i}|\leq 1\) for every \(i\in[s]\). Similarly, we say that a hypergraph \(\mathcal{H}\) is \(\mathcal{P}\)_-partite_ if all of its edges are \(\mathcal{P}\)-partite. In this case we refer to the parts of \(\mathcal{P}\) as the _vertex classes_ of \(\mathcal{H}\). We say that a hypergraph \(\mathcal{H}\) is \(s\)_-partite_ if there is some partition \(\mathcal{P}\) of \(V(\mathcal{H})\) into \(s\) parts for which \(\mathcal{H}\) is \(\mathcal{P}\)-partite.

Let \(\mathcal{H}\) be a \(\mathcal{P}\)-partite complex. Then for any \(A\subseteq[s]\) we write \(V_{A}\) for \(\bigcup_{i\in A}V_{i}\). The _index_ of a \(\mathcal{P}\)-partite set \(S\subseteq V\) is \(i(S):=\{i\in[s]:|S\cap V_{i}|=1\}\). We write \(\mathcal{H}_{A}\) to denote the collection of edges in \(\mathcal{H}\) with index \(A\); that is, \(\mathcal{H}_{A}\) can be regarded as an \(|A|\)-partite \(|A|\)-graph on vertex set \(V_{A}\). Similarly, if \(X\) is a \(j\)-set of indices of vertex classes of \(\mathcal{H}\), we write \(\mathcal{H}_{X}\) for the \(j\)-partite \(j\)-uniform subgraph of \(\mathcal{H}^{(j)}\) induced by \(\bigcup_{i\in X}V_{i}\). We write \(\mathcal{H}_{X<}\) for the \(j\)-partite hypergraph with vertex set \(\bigcup_{i\in X}V_{i}\) and edge set \(\bigcup_{X^{\prime}\subset X}\mathcal{H}_{X^{\prime}}\).

Let \(H_{i}\) be any \(i\)-partite \(i\)-graph and \(H_{i-1}\) be any \(i\)-partite \((i-1)\)-graph on a common vertex set \(V\) partitioned into \(i\) common vertex classes. Denote by \(K_{i}(H_{i-1})\) the \(i\)-partite \(i\)-graph on \(V\) whose edges are all \(i\)-sets which are supported on \(H_{i-1}\) (i.e., which induce a copy of the complete \((i-1)\)-graph \(K_{i}^{i-1}\) on \(i\) vertices in \(H_{i-1}\)). The _density of_ \(H_{i}\) _with respect to_ \(H_{i-1}\) is defined to be

\[d(H_{i}|H_{i-1}):=\frac{|K_{i}(H_{i-1})\cap H_{i}|}{|K_{i}(H_{i-1})|}\]

if \(|K_{i}(H_{i-1})|>0\).
For convenience, we take \(d(H_{i}|H_{i-1}):=0\) if \(|K_{i}(H_{i-1})|=0\). When \(H_{i-1}\) is clear from the context, we simply refer to \(d(H_{i}|H_{i-1})\) as the _relative density of_ \(H_{i}\). More generally, if \(\mathbf{Q}:=(Q_{1},\ldots,Q_{r})\) is a collection of \(r\) not necessarily disjoint subgraphs of \(H_{i-1}\), we define

\[K_{i}(\mathbf{Q}):=\bigcup_{j=1}^{r}K_{i}(Q_{j})\]

and

\[d(H_{i}|\mathbf{Q}):=\frac{|K_{i}(\mathbf{Q})\cap H_{i}|}{|K_{i}(\mathbf{Q})|}\]

if \(|K_{i}(\mathbf{Q})|>0\). Similarly, we take \(d(H_{i}|\mathbf{Q}):=0\) if \(|K_{i}(\mathbf{Q})|=0\). We say that \(H_{i}\) is \((d_{i},\varepsilon,r)\)-_regular with respect to_ \(H_{i-1}\) if we have \(d(H_{i}|\mathbf{Q})=d_{i}\pm\varepsilon\) for every \(r\)-set \(\mathbf{Q}\) of subgraphs of \(H_{i-1}\) such that \(|K_{i}(\mathbf{Q})|>\varepsilon|K_{i}(H_{i-1})|\). We refer to \((d_{i},\varepsilon,1)\)-regularity simply as \((d_{i},\varepsilon)\)-_regularity_. We say that \(H_{i}\) is \((\varepsilon,r)\)-_regular with respect to_ \(H_{i-1}\) if there exists some \(d_{i}\) for which \(H_{i}\) is \((d_{i},\varepsilon,r)\)-regular with respect to \(H_{i-1}\). Given an \(i\)-graph \(G\) whose vertex set contains that of \(H_{i-1}\), we say that \(G\) is \((d_{i},\varepsilon,r)\)-_regular with respect to_ \(H_{i-1}\) if the \(i\)-partite subgraph of \(G\) induced by the vertex classes of \(H_{i-1}\) is \((d_{i},\varepsilon,r)\)-regular with respect to \(H_{i-1}\). Similarly, when \(H_{i-1}\) is clear from the context, we refer to the relative density of this \(i\)-partite subgraph of \(G\) with respect to \(H_{i-1}\) as the _relative density of_ \(G\).

Now let \(\mathcal{H}\) be an \(s\)-partite \(k\)-complex on vertex classes \(V_{1},\ldots,V_{s}\), where \(s\geq k\geq 3\). Since \(\mathcal{H}\) is a complex, if \(e\in\mathcal{H}^{(i)}\) for some \(i\in[2,k]\), then the vertices of \(e\) induce a copy of \(K_{i}^{i-1}\) in \(\mathcal{H}^{(i-1)}\). This means that for any index \(A\in{[s]\choose i}\), the density \(d(\mathcal{H}^{(i)}[V_{A}]|\mathcal{H}^{(i-1)}[V_{A}])\) can be regarded as the proportion of 'possible edges' of \(\mathcal{H}^{(i)}[V_{A}]\) which are indeed edges. We say that \(\mathcal{H}\) is \((d_{2},\ldots,d_{k},\varepsilon_{k},\varepsilon,r)\)-_regular_ if 1. for \(i\in[2,k-1]\) and \(A\in{[s]\choose i}\), the induced subgraph \(\mathcal{H}^{(i)}[V_{A}]\) is \((d_{i},\varepsilon)\)-regular with respect to \(\mathcal{H}^{(i-1)}[V_{A}]\), and 2. for any \(A\in{[s]\choose k}\), the induced subgraph \(\mathcal{H}^{(k)}[V_{A}]\) is \((d_{k},\varepsilon_{k},r)\)-regular with respect to \(\mathcal{H}^{(k-1)}[V_{A}]\).

### Regular Slices

The Regular Slice Lemma says that any \(k\)-graph \(G\) admits a regular slice. Informally speaking, a regular slice of \(G\) is a partite \((k-1)\)-complex \(\mathcal{J}\) whose vertex classes have equal size, whose subgraphs \(\mathcal{J}^{(2)},\ldots,\mathcal{J}^{(k-1)}\) satisfy certain regularity properties, and which moreover has the property that \(G\) is regular with respect to \(\mathcal{J}^{(k-1)}\). The first two of these conditions are formalised in the following definition: we say that a \((k-1)\)-complex \(\mathcal{J}\) is \((t_{0},t_{1},\varepsilon)\)-_equitable_ if it has the following properties. 1. \(\mathcal{J}\) is \(\mathcal{P}\)-partite for a \(\mathcal{P}\) which partitions \(V(\mathcal{J})\) into \(t\) parts of equal size, where \(t_{0}\leq t\leq t_{1}\).
We refer to \(\mathcal{P}\) as the _ground partition_ of \(\mathcal{J}\), and to the parts of \(\mathcal{P}\) as the _clusters_ of \(\mathcal{J}\). 2. There exists a _density vector_ \(\mathbf{d}=(d_{2},\ldots,d_{k-1})\) such that for \(i\in[2,k-1]\) we have \(d_{i}\geq 1/t_{1}\) and \(1/d_{i}\in\mathbb{N}\), and for each \(A\subseteq\mathcal{P}\) of size \(i\), the \(i\)-graph \(\mathcal{J}^{(i)}[V_{A}]\) induced on \(V_{A}\) is \((d_{i},\varepsilon)\)-regular with respect to \(\mathcal{J}^{(i-1)}[V_{A}]\). If \(\mathcal{J}\) has density vector \(\mathbf{d}=(d_{2},\ldots,d_{k-1})\), then we say that \(\mathcal{J}\) is \((d_{2},\ldots,d_{k-1},\varepsilon)\)-regular, or \((\mathbf{d},\varepsilon)\)-_regular_ for short. For any \(k\)-set \(X\) of clusters of \(\mathcal{J}\), we write \(\hat{\mathcal{J}}_{X}\) for the \(k\)-partite \((k-1)\)-graph \(\mathcal{J}_{X<}^{(k-1)}\). Given a \((t_{0},t_{1},\varepsilon)\)-equitable \((k-1)\)-complex \(\mathcal{J}\), a \(k\)-set \(X\) of clusters of \(\mathcal{J}\) and a \(k\)-graph \(G\) on \(V(\mathcal{J})\), we say that \(G\) is \((d,\varepsilon_{k},r)\)-_regular with respect to_ \(X\) if \(G\) is \((d,\varepsilon_{k},r)\)-regular with respect to \(\hat{\mathcal{J}}_{X}\). We will also say that \(G\) is \((\varepsilon_{k},r)\)-_regular with respect to_ \(X\) if there exists a \(d\) such that \(G\) is \((d,\varepsilon_{k},r)\)-regular with respect to \(X\). We write \(d_{\mathcal{J},G}^{*}(X)\) for the relative density of \(G\) with respect to \(\hat{\mathcal{J}}_{X}\), or simply \(d^{*}(X)\) if \(\mathcal{J}\) and \(G\) are clear from the context, which will always be the case in applications. We now give the key definition for the Regular Slice Lemma.

**Definition 4.1** (Regular Slice).: _Given \(\varepsilon,\varepsilon_{k}>0\), \(r,t_{0},t_{1}\in\mathbb{N}\), a \(k\)-graph \(G\) and a \((k-1)\)-complex \(\mathcal{J}\) on \(V(G)\), we call \(\mathcal{J}\) a \((t_{0},t_{1},\varepsilon,\varepsilon_{k},r)\)-regular slice for \(G\) if \(\mathcal{J}\) is \((t_{0},t_{1},\varepsilon)\)-equitable and \(G\) is \((\varepsilon_{k},r)\)-regular with respect to all but at most \(\varepsilon_{k}{t\choose k}\) of the \(k\)-sets of clusters of \(\mathcal{J}\), where \(t\) is the number of clusters of \(\mathcal{J}\)._

It will sometimes be convenient not to specify all parameters; we may write that \(\mathcal{J}\) is \((\cdot,\cdot,\varepsilon)\)-equitable or is a \((\cdot,\cdot,\varepsilon,\varepsilon_{k},r)\)-regular slice for \(G\) if we do not wish to specify \(t_{0}\) and \(t_{1}\). Given a regular slice \(\mathcal{J}\) for a \(k\)-graph \(G\), it will be important to know the relative densities \(d^{*}(X)\) for \(k\)-sets \(X\) of clusters of \(\mathcal{J}\). To keep track of these we make the following definition.

**Definition 4.2** (Weighted reduced \(k\)-graph).: _Let \(G\) be a \((1,k)\)-graph and let \(\mathcal{J}\) be a \((t_{0},t_{1},\varepsilon,\varepsilon_{k+1},r)\)-regular slice for \(G\).
We define the weighted reduced \((1,k)\)-graph of \(G\), denoted by \(R(G)\), to be the complete weighted \((1,k)\)-graph whose vertices are the clusters of \(\mathcal{J}\) and where each edge \(X\) is given weight \(d^{*}(X)\)._

_Similarly, for \(d_{k+1}>0\), we define the \(d_{k+1}\)-reduced \((1,k)\)-graph \(R_{d_{k+1}}(G)\) to be the (unweighted) \((1,k)\)-graph whose vertices are the clusters of \(\mathcal{J}\) and whose edges are all \((1,k)\)-sets \(X\) of clusters of \(\mathcal{J}\) such that \(G\) is \((\varepsilon_{k+1},r)\)-regular with respect to \(X\) and \(d^{*}(X)\geq d_{k+1}\)._

Given a \((1,k)\)-graph \(G\) on \([n]\cup V\), a vertex \(v\in V\) and a color \(c\in[n]\), recall that \(\deg_{G}(c,v)\) is the number of edges of \(G\) containing \(c\) and \(v\), and \(\overline{\deg}_{G}(c,v)=\deg_{G}(c,v)/{n-1\choose k-1}\) is the relative degree of the pair \((c,v)\) in \(G\). Given a \((t_{0},t_{1},\varepsilon)\)-equitable \((k-1)\)-complex \(\mathcal{J}\) with \(V(\mathcal{J})\subseteq V(G)\), the _rooted degree_ of \((c,v)\) _supported by_ \(\mathcal{J}\), written \(\deg_{G}((c,v);\mathcal{J})\), is defined as the number of \((k-1)\)-sets \(T\) in \(\mathcal{J}^{(k-1)}\) such that \(T\cup\{c,v\}\) forms an edge in \(G\). Then the relative degree \(\overline{\deg}_{G}((c,v);\mathcal{J})\) of \((c,v)\) in \(G\) supported by \(\mathcal{J}\) is defined as \(\overline{\deg}_{G}((c,v);\mathcal{J})=\deg_{G}((c,v);\mathcal{J})/e(\mathcal{J}^{(k-1)})\).

**Definition 4.3** (Representative rooted degree).: _Let \(\eta>0\), \(G\) be a \((1,k)\)-graph on \([n]\cup V\) and \(\mathcal{J}\) be a \((t_{0},t_{1},\varepsilon,\varepsilon_{k+1})\)-regular slice for \(G\). We say that \(\mathcal{J}\) is \(\eta\)-rooted-degree-representative if for any vertex \(v\in V\) and any color \(c\in[n]\), we have_

\[|\overline{\deg}_{G}((c,v);\mathcal{J})-\overline{\deg}_{G}(c,v)|<\eta.\]

**Definition 4.4** (Regular Setup).: _Let \(k,m,r,t\in\mathbb{N}\) and \(\varepsilon,\varepsilon_{k+1},d_{2},\ldots,d_{k+1}>0\). We say that \((G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},R)\) is a \((k,m,t,\varepsilon,\varepsilon_{k+1},r,d_{2},\ldots,d_{k+1})\)-regular setup, if_ 1. \(G\) _is a_ \((1,k)\)_-graph on_ \([n]\cup V\) _where_ \(|V|=n\) _and_ \(G_{\mathcal{J}}\subseteq G\)_,_ 2. \(\mathcal{J}\) _is a_ \((\cdot,\cdot,\varepsilon,\varepsilon_{k+1},r)\)_-regular slice for_ \(G\) _with density vector_ \(\textbf{d}=(d_{2},\ldots,d_{k})\)_,_ 3. \(\mathcal{P}\) _is the ground partition of_ \(\mathcal{J}\)_, partitioning_ \([n]\cup V\) _into_ \(2t\) _clusters, each of size_ \(m\)_,_ 4. \(R\) _is a subgraph of_ \(R_{d_{k+1}}(G)\)_,_ 5. _for each_ \(X\in E(R)\)_,_ \(G_{\mathcal{J}}\) _is_ \((d_{k+1},\varepsilon_{k+1},r)\)_-regular with respect to_ \(X\)_._

_We further say that \((G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},R)\) is representative if_ 6. \(\mathcal{J}\) _is_ \(\varepsilon_{k+1}\)_-rooted-degree-representative._

The Regular Slice Lemma of [3] ensures that every sufficiently large \(k\)-graph has a representative regular slice. Given the existence of a regular slice, it is easy to derive the existence of a regular setup. In [27], the lemma is stated directly in terms of regular setups, and this version is an easy corollary for sufficiently large \((1,k)\)-graphs.

**Lemma 4.5** (Regular Setup Lemma [3]).: _Let \(k,t_{0}\) be positive integers, let \(\delta,\mu,\alpha,\varepsilon_{k+1},d_{k+1}\) be positive and let \(r:\mathbb{N}\to\mathbb{N}\) and \(\varepsilon:\mathbb{N}\to(0,1]\) be functions.
Suppose that_

\[k\geq 3,\varepsilon_{k+1}\ll\alpha,d_{k+1}\ll\mu.\]

_Then there exist \(t_{1}\) and \(m_{0}\) such that the following holds for all \(n\geq 2t_{1}m_{0}\). Let \(G\) be a \((1,k)\)-graph on \([n]\cup V\) where \(|V|=n\) and suppose that \(G\) has minimum relative \((1,k-2)\)-degree \(\overline{\delta}_{1,k-2}(G)\geq\delta+\mu\). Then there exist \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and a representative \((k,m,2t,\varepsilon(t_{1}),\varepsilon_{k+1},r(t_{1}),\textbf{d})\)-regular setup \((G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},R_{d_{k+1}})\) with \(t\in[t_{0},t_{1}]\), \(m_{0}\leq m\) and \(n\leq(1+\alpha)mt\). Moreover, there is a \((1,k)\)-graph \(I\) on \(\mathcal{P}\) of edge density at most \(\varepsilon_{k+1}\) such that \(R=R_{d_{k+1}}\cup I\) has minimum relative \((1,k-2)\)-degree at least \(\delta+\mu/2\)._

### Tools for working with regularity

Let \(\mathcal{G}\) be a \(\mathcal{P}\)-partite \(k\)-complex and \(X_{1},\ldots,X_{s}\in\mathcal{P}\) (possibly with repetition), and let \(\mathcal{H}\) be a \(k\)-complex on vertices \([s]\). We say that an embedding of \(\mathcal{H}\) in \(\mathcal{G}\) is _partition-respecting_ if \(i\) is embedded in \(X_{i}\) for \(i\in[s]\). Note that this notion depends on the labeling of \(V(\mathcal{H})\) and the clusters \(X_{1},\ldots,X_{s}\), but these will always be clear from the context. Denote the set of labelled partition-respecting copies of \(\mathcal{H}\) in \(\mathcal{G}\) by \(\mathcal{H}_{\mathcal{G}}[\bigcup_{i\in[s]}X_{i}]\). When \(X_{1},\ldots,X_{s}\) are clear, we denote it by \(\mathcal{H}_{\mathcal{G}}\) for short. Recall that \(e_{i}(\mathcal{H})\) denotes the number of edges of size \(i\) in \(\mathcal{H}\). The following lemma states that the number of copies of a given small \(k\)-graph inside a regular slice is roughly what we would expect if the edges inside the regular slice were chosen randomly. There are many different versions in [3, 10, 17, 44]; we use the following version from [10].

**Lemma 4.6** (Counting Lemma [10]).: _Let \(k,s,r,m\) be positive integers and let \(\beta,d_{2},\ldots,d_{k},\varepsilon,\varepsilon_{k}\) be positive constants such that \(1/d_{i}\in\mathbb{N}\) for \(i\in[2,k-1]\) and such that_

\[1/m\ll 1/r,\varepsilon\ll\varepsilon_{k},d_{2},\ldots,d_{k-1},\]

\[\varepsilon_{k}\ll\beta,d_{k},1/s.\]

_Let \(H\) be a \(k\)-graph on \([s]\) and let \(\mathcal{H}\) be the \(k\)-complex generated by the down-closure of \(H\). Let \(\textbf{d}=(d_{2},\ldots,d_{k})\), let \((G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},R)\) be a \((k,m,\cdot,\varepsilon,\varepsilon_{k},r,\textbf{d})\)-regular setup and \(\mathcal{G}=\mathcal{J}\cup G_{\mathcal{J}}\). Suppose \(X_{1},\ldots,X_{s}\) are such that \(i\mapsto X_{i}\) is a homomorphism from \(H\) into \(R\). Then the number of labelled partition-respecting copies of \(\mathcal{H}\) in \(\mathcal{G}\) satisfies_

\[|\mathcal{H}_{\mathcal{G}}|=(1\pm\beta)\left(\prod_{i=2}^{k}d_{i}^{e_{i}(\mathcal{H})}\right)m^{s}.\]

The following tool allows us to extend small subgraphs into a regular slice. It was given by Cooley, Fountoulakis, Kuhn and Osthus [10].

**Lemma 4.7** (Extension Lemma [10]).: _Let \(k,s,s^{\prime},r,m\) be positive integers, where \(s^{\prime}<s\), and let \(\beta,d_{2},\ldots,d_{k},\varepsilon,\varepsilon_{k}\) be positive constants such that \(1/d_{i}\in\mathbb{N}\) for \(i\in[2,k-1]\) and such that_

\[1/m\ll 1/r,\varepsilon\ll\varepsilon_{k},d_{2},\ldots,d_{k-1},\]

\[\varepsilon_{k}\ll\beta,d_{k},1/s.\]

_Suppose \(H\) is a \(k\)-graph on \([s]\).
Let \(\mathcal{H}\) be the \(k\)-complex generated by the down-closure of \(H\) and \(\mathcal{H}^{\prime}\) be an induced subcomplex of \(\mathcal{H}\) on \(s^{\prime}\) vertices. Let \(\textbf{d}=(d_{2},\ldots,d_{k})\) and \((G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},R)\) be a \((k,m,\cdot,\varepsilon,\varepsilon_{k},r,\textbf{d})\)-regular setup and \(\mathcal{G}=\mathcal{J}\cup G_{\mathcal{J}}\). Suppose \(X_{1},\ldots,X_{s}\) are such that \(i\mapsto X_{i}\) is a homomorphism from \(H\) into \(R\). Then all but at most \(\beta|\mathcal{H}^{\prime}_{\mathcal{G}}|\) labelled partition-respecting copies of \(\mathcal{H}^{\prime}\) in \(\mathcal{G}\) extend to_

\[(1\pm\beta)\left(\prod_{i=2}^{k}d_{i}^{e_{i}(\mathcal{H})-e_{i}(\mathcal{H}^{\prime})}\right)m^{s-s^{\prime}}\]

_labelled partition-respecting copies of \(\mathcal{H}\) in \(\mathcal{G}\)._

In certain situations, we look for structures whose edges lie entirely in the \((k-1)\)-complex \(\mathcal{J}\) of a regular setup. We can then no longer use the above lemmas, whose input is a regular setup rather than an equitable complex. Also, the above lemmas require \(r\) to be large enough with respect to \(\varepsilon_{k}\), while the \((k-1)\)-th level of \(\mathcal{J}\) will only need to be \((d_{k-1},\varepsilon)\)-regular with respect to the lower level. Instead, we can use the Dense Counting Lemma proved by Kohayakawa, Rodl and Skokan [24]; we state the following version given by Cooley, Fountoulakis, Kuhn and Osthus [10].

**Lemma 4.8** (Dense Counting Lemma [10]).: _Let \(k,s,m\) be positive integers and \(\varepsilon,d_{2},\ldots,d_{k-1},\beta\) be positive constants such that_

\[1/m\ll\varepsilon\ll\beta\leq d_{2},\ldots,d_{k-1},1/s.\]

_Suppose \(H\) is a \((k-1)\)-graph on \([s]\) and \(\mathcal{H}\) is the \((k-1)\)-complex generated by the down-closure of \(H\). Let \(\textbf{d}=(d_{2},\ldots,d_{k-1})\) and \(\mathcal{J}\) be a \((\textbf{d},\varepsilon)\)-regular \((k-1)\)-complex with ground partition \(\mathcal{P}\), whose vertex classes each have size \(m\). If \(X_{1},\ldots,X_{s}\in\mathcal{P}\), then_

\[|\mathcal{H}_{\mathcal{J}}|=(1\pm\beta)\prod_{i=2}^{k-1}d_{i}^{e_{i}(\mathcal{H})}m^{s}.\]

The following lemma gives the number of edges in each layer of a regular slice.

**Lemma 4.9**.: [3] _Suppose that \(1/m\ll\varepsilon\ll\beta\ll d_{2},\ldots,d_{k-1},1/k\) and that \(\mathcal{J}\) is a \((\cdot,\cdot,\varepsilon)\)-equitable \((k-1)\)-complex with density vector \((d_{2},\ldots,d_{k-1})\) and clusters of size \(m\). Let \(X\) be a set of at most \(k-1\) clusters of \(\mathcal{J}\). Then_

\[|\mathcal{J}_{X}|=(1\pm\beta)\left(\prod_{i=2}^{|X|}d_{i}^{\binom{|X|}{i}}\right)m^{|X|}.\]

Analogously, we have a dense version of the Extension Lemma [10].

**Lemma 4.10** (Dense Extension Lemma [10]).: _Let \(k,s,s^{\prime},m\) be positive integers, where \(s^{\prime}<s\), and \(\varepsilon,\beta,d_{2},\ldots,d_{k-1}\) be positive constants such that \(1/m\ll\varepsilon\ll\beta\ll d_{2},\ldots,d_{k-1},1/s\). Let \(H\) be a \((k-1)\)-graph on \([s]\). Let \(\mathcal{H}\) be the \((k-1)\)-complex generated by the down-closure of \(H\) and \(\mathcal{H}^{\prime}\) be an induced subcomplex of \(\mathcal{H}\) on \(s^{\prime}\) vertices. Let \(\textbf{d}=(d_{2},\ldots,d_{k-1})\) and let \(\mathcal{J}\) be a \((\textbf{d},\varepsilon)\)-regular \((k-1)\)-complex with ground partition \(\mathcal{P}\) whose vertex classes each have size \(m\).
If \(X_{1},\ldots,X_{s}\in\mathcal{P}\), then all but at most \(\beta|\mathcal{H}^{\prime}_{\mathcal{J}}|\) labelled partition-respecting copies of \(\mathcal{H}^{\prime}\) in \(\mathcal{J}\) extend to_

\[(1\pm\beta)\left(\prod_{i=2}^{k-1}d_{i}^{e_{i}(\mathcal{H})-e_{i}(\mathcal{H}^{\prime})}\right)m^{s-s^{\prime}}\]

_labelled partition-respecting copies of \(\mathcal{H}\) in \(\mathcal{J}\)._

The restriction of a regular complex to a large subset of its vertex set is also a regular complex, with slightly altered constants.

**Lemma 4.11** (Regular Restriction Lemma [3]).: _Let \(k,r,m,s\) be integers and \(\alpha,\varepsilon,\varepsilon_{k},d_{2},\ldots,d_{k}\) be positive constants such that \(1/d_{i}\in\mathbb{N}\) for \(i\in[2,k]\) and_

\[1/m\ll\varepsilon\ll\varepsilon_{k},d_{2},\ldots,d_{k-1},\]

_and_

\[\varepsilon_{k}\ll\alpha.\]

_Let \(\mathcal{G}\) be an \(s\)-partite \(k\)-complex on vertex classes \(V_{1},\ldots,V_{s}\), each of size \(m\) and which is \((\textbf{d},\varepsilon_{k},\varepsilon,r)\)-regular where \(\textbf{d}=(d_{2},\ldots,d_{k})\). Choose any \(V_{i}^{\prime}\subseteq V_{i}\) with \(|V_{i}^{\prime}|\geq\alpha m\) for \(i\in[s]\). Then the induced subcomplex \(\mathcal{G}[V_{1}^{\prime}\cup\cdots\cup V_{s}^{\prime}]\) is \((\textbf{d},\sqrt{\varepsilon_{k}},\sqrt{\varepsilon},r)\)-regular._

## 5. Framework lemma

In this section, we use the following Absorption Lemma and Almost Cover Lemma to prove Theorem 1.9. The proofs of these two lemmas can be found in Sections 8 and 9. Before we give these two lemmas, we need some definitions.

**Definition 5.1** (Extensible paths).: _Let \((G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},R)\) be a \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup, \(G\) be a \((1,k)\)-graph on \([n]\cup V\) where \(|V|=n\) and \(c,\nu>0\). A \((k-1)\)-tuple \(A\) in \(V^{k-1}\) is said to be \((c,\nu)\)-extensible rightwards to an ordered edge \(Y=(Y_{0},Y_{1},\ldots,Y_{k})\) in \(R\) if there exists a connection set \(S\subseteq[n]\cup V\) and a target set \(T\subseteq\mathcal{J}_{(Y_{2},\ldots,Y_{k})}\) with the following properties._

* \(|T|\geq\nu|\mathcal{J}_{(Y_{2},\ldots,Y_{k})}|\)_,_
* _for every_ \((v_{2},\ldots,v_{k})\in T\)_, there are at least_ \(cm^{3k+1}\) _many_ \((3k+1)\)_-tuples_ \((c_{1},\ldots,c_{2k},w_{1},\ldots,w_{k},v_{1})\) _with_ \(v_{1}\in S\cap Y_{1}\)_,_ \(w_{i}\in S\cap Y_{i}\) _and_ \(c_{j}\in Y_{0}\) _for_ \(i\in[k]\) _and_ \(j\in[2k]\) _such that_ \((c_{1}\ldots c_{2k},Aw_{1}\ldots w_{k}v_{1}\ldots v_{k})\) _is a sequentially path in_ \(G\)_._

Given a sequentially path \(P\) in a \((1,k)\)-graph \(G\) and an ordered edge \(X\) in \(R\), we say that \(P\) is \((c,\nu)\)-_extensible rightwards_ to \(X\) if the \((k-1)\)-tuple corresponding to \(P\)'s last \(k-1\) vertices is \((c,\nu)\)-extensible rightwards to \(X\). We call \(X\) the right extension. We can define leftwards path extensions for \((k-1)\)-tuples and for tight paths in an analogous way (this time corresponding to the first \(k-1\) vertices of \(P\)). The _connection set_ of a sequentially path is the union of the connection sets of its initial and terminal \((k-1)\)-tuples. For \(X=(a,b,c)\) and \(Y=(a,c,b)\), there is no guarantee that \(H\) contains a walk from \(X\) to \(Y\). However, if \(Y\) is a cyclic shift of \(X\), that is, \((b,c,a)\) or \((c,a,b)\), then such a walk does exist.
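For concreteness, here is a minimal Python sketch of the cyclic-shift test just discussed (the function name is ours and purely illustrative):

```python
# A minimal check of the cyclic-shift notion (illustration only).

def is_cyclic_shift(X, Y):
    """Return True iff Y is a cyclic shift of the tuple X."""
    return len(X) == len(Y) and Y in [X[i:] + X[:i] for i in range(len(X))]

X = ("a", "b", "c")
assert is_cyclic_shift(X, ("b", "c", "a"))       # a walk from X to this exists
assert is_cyclic_shift(X, ("c", "a", "b"))
assert not is_cyclic_shift(X, ("a", "c", "b"))   # same underlying edge, not a shift
```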
More generally, a _cyclic shift_ of a tuple \((v_{1},\ldots,v_{k})\) is any \(k\)-tuple of the form \((v_{i},\ldots,v_{k},v_{1},\ldots,v_{i-1})\) for \(i\in[k]\). An orientation of a \((1,k)\)-graph \(G\) on \([n]\cup V\) is a family of ordered \((1,k)\)-tuples \(\{\overrightarrow{e}\in[n]\times V^{k}:e\in E(G)\}\). We say that a family \(\overrightarrow{G}\) of ordered \((1,k)\)-tuples is an _oriented \((1,k)\)-graph_ if there exists a \((1,k)\)-graph \(G\) such that \(\overrightarrow{G}=\{\overrightarrow{e}\in[n]\times V^{k}:e\in E(G)\}\). Given an oriented \((1,k)\)-graph \(\overrightarrow{R}\), we say that \((G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},\overrightarrow{R})\) is an _oriented \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\mathbf{d})\)-regular setup_ if \(\overrightarrow{R}\) is an orientation of \(R\) and \((G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},R)\) is a \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\mathbf{d})\)-regular setup. Consider a \((1,k)\)-graph \(G\) with an orientation \(\overrightarrow{G}\) and vertex set \([n]\cup V\). Given an ordered \(k\)-tuple \(Y\) of distinct vertices in \(V\) and \(c\in[n]\), we say that \(\{c\}\cup Y\) is _consistent with_ \(\overrightarrow{G}\) if there exists an oriented edge \(\{c\}\cup\overrightarrow{e}\in\overrightarrow{G}\) such that \(\overrightarrow{e}\) is a cyclic shift of \(Y\). We say that an extensible path is _consistent with_ \(\overrightarrow{G}\) if its left and right extensions are consistent with \(\overrightarrow{G}\). Finally, when considering multiple paths, we refer to the union of their connection sets as their _joint connection set_.

Let \(\overrightarrow{G}\) be an orientation of a \((1,k)\)-graph \(G\). A sequentially walk \(W\) in \(G\) is said to be _compatible_ with \(\overrightarrow{G}\) if each oriented edge of \(\overrightarrow{G}\) appears at least once in \(W\) as a sequence of \(k\) consecutive vertices. Let \(G\) be a \((1,k)\)-graph on \([n]\cup V\) where \(|V|=n\), let \(S\subseteq V\) and \(O\subseteq[n]\) with \(|O|=|S|=k\), and let \(P\) be a sequentially path. Recall that \((C(P),I(P))\) is used to denote a sequentially path where \(C(P)\) is the color set of \(P\) and \(I(P)\) is the point set of \(P\). We say that \(P\) is \((S,O)\)-_absorbing_ in \(G\) if there exists a sequentially path \(P^{\prime}\) in \(G\) with the same initial \((k-1)\)-tuple and the same terminal \((k-1)\)-tuple as \(P\), \(I(P^{\prime})=I(P)\cup S\) and \(C(P^{\prime})=C(P)\cup O\). We say that \(P\) is \(\eta\)-_absorbing_ in \(G\) if it is \((S,O)\)-absorbing in \(G\) for every \(S\subseteq V\) of size at most \(\eta n\) divisible by \(k\) and every \(O\subseteq[n]\) of size \(|S|\) with \(S\cap I(P)=\emptyset\) and \(O\cap C(P)=\emptyset\).

**Lemma 5.2** (Absorption Lemma).: _Let \(k,r,m,t\in\mathbb{N}\) and \(d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},\eta,\mu,\delta,\alpha,c,\nu,\lambda\) be such that_

\[1/m\ll 1/r,\varepsilon\ll 1/t,c,\varepsilon_{k+1},d_{2},\ldots,d_{k},\]

\[c\ll d_{2},\ldots,d_{k},\]

\[1/t\ll\varepsilon_{k+1}\ll d_{k+1},\nu\leq 1/k,\]

\[c\ll\varepsilon_{k+1}\ll\alpha\ll\eta\ll\lambda\ll\nu\ll\mu\ll\delta,1/k.\]

_Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \(\mathfrak{S}=(G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},\overrightarrow{H})\) be an oriented representative \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup. Let \(G\) be a \((1,k)\)-graph on \([n]\cup V\) with minimum relative \((1,1)\)-degree at least \(\delta+\mu\), where \(|V|=n\) and \(n\leq(1+\alpha)mt\).
Suppose that there exists a closed sequentially walk which is compatible with the orientation \(\overrightarrow{H}\) of \(H\) and_ 1. \(H_{i}\) _is sequentially tightly connected,_ 2. _for every color_ \(i\in[t]\)_, there are at least_ \((1-\alpha)t\) _points_ \(v\in V\) _such that_ \(\{i,v\}\) _has relative_ \((1,1)\)_-degree at least_ \(1-\delta+\gamma\)_._

_Then there exists a sequentially path \(P\) in \(G\) such that the following holds._ 1. \(P\) _is_ \((c,\nu)\)_-extensible and consistent with_ \(\overrightarrow{H}\)_,_ 2. \(V(P)\) _is_ \(\lambda\)_-sparse in_ \(\mathcal{P}\) _and_ \(V(P)\cap S=\emptyset\)_, where_ \(S\) _denotes the connection set of_ \(P\)_,_ 3. \(P\) _is_ \(\eta\)_-absorbing in_ \(G\)_._

**Lemma 5.3** (Almost Cover Lemma).: _Let \(k,r,m,t\in\mathbb{N}\), \(d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},\alpha,\gamma,c,\nu,\lambda\) be such that_

\[1/m\ll 1/r,\varepsilon\ll 1/t,c,\varepsilon_{k+1},d_{2},\ldots,d_{k},\]

\[c\ll d_{2},\ldots,d_{k},\]

\[1/t\ll\varepsilon_{k+1}\ll d_{k+1},\nu,\alpha\leq 1/k,\]

\[\alpha\ll\eta\ll\lambda\ll\nu\ll\gamma.\]

_Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \(\mathfrak{S}=(G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},\overrightarrow{H})\) be an oriented \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup. Suppose that \(G\) is a \((1,k)\)-graph on \([n]\cup V\) where \(|V|=n\) and \(n\leq(1+\alpha)mt\), \(H\) is a \((1,k)\)-graph on \([t]\cup V^{\prime}\) where \(|V^{\prime}|=t\) and_ 1. \(H_{i}\) _is sequentially tightly connected,_ 2. \(H_{i}\) _contains a sequentially closed walk_ \(W\) _compatible with_ \(\overrightarrow{H}\) _whose length is 1 mod_ \(k\)_,_ 3. \(H_{W_{i}}\) _is_ \(\gamma\)_-robustly matchable for_ \(i\in[k]\)_,_ 4. \(L_{H}(\{i\})\) _and_ \(L_{H}(\{j\})\) _intersect in an edge for each_ \(i,j\in[t]\)_._

_Suppose that \(P\) is a sequentially path in \(G\) such that_ 1. \(P\) _is_ \((c,\nu)\)_-extensible and consistent with_ \(\overrightarrow{H}\)_,_ 2. \(V(P)\) _is_ \(\lambda\)_-sparse in_ \(\mathcal{P}\) _and_ \(V(P)\cap S=\emptyset\) _where_ \(S\) _is the connection set of_ \(P\)_._

_Then there exists a sequentially cycle \(C\) of length at least \((1-\eta)n\) which contains \(P\) as a subpath. Moreover, the number of uncovered points of \(V\) is divisible by \(k\) and the number of uncovered colors of \([n]\) equals the number of uncovered points._

_The proof of Theorem 1.9._ Let \(\delta=rhf_{k-2}(k)\), \(\mu>0\) and

\[\varepsilon_{k+1}\ll\alpha\ll\eta\ll\lambda\ll\nu\ll\gamma\ll\mu,\]

\[1/t_{0}\ll\varepsilon_{k+1}\ll d_{k+1}\ll\mu.\]

We apply Lemma 4.5 with input \(\varepsilon_{k+1},1/t_{0},r,\varepsilon\) to obtain \(t_{1},m_{0}\). Choose \(c\ll 1/t_{1}\) and \(1/n_{0}\ll 1/t_{1},1/m_{0},c,1/r,\varepsilon\). Let \(G\) be a \((1,k)\)-graph on \([n]\cup V\), where \(|V|=n\) and \(2n\geq n_{0}\), with \(\overline{\delta}_{1,k-2}(G)\geq\delta+\mu\). Our goal is to prove that \(G\) contains a sequentially Hamilton cycle. By Lemma 4.5, there exists a representative \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,d_{2},\ldots,d_{k+1})\)-regular setup \((G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},R_{d_{k+1}})\) with \(t_{0}\leq t\leq t_{1}\) and \(n\leq(1+\alpha)mt\). Moreover, there is a \((1,k)\)-graph \(I\) of edge density at most \(\varepsilon_{k+1}\) such that \(R=R_{d_{k+1}}\cup I\) has minimum relative \((1,k-2)\)-degree at least \(\delta+\mu/2\).
By Definition 1.8 and \(\delta=rhf_{k-2}(k)\), we obtain that \(R\) contains an \((\alpha,\gamma,\delta)\)-rainbow Hamilton framework \(H\) that avoids the edges of \(I\). Thus, \(H\subseteq R_{d_{k+1}}\). Next, we fix an orientation \(\overrightarrow{H}\) and a compatible walk \(W\). Since \(H\) is an \((\alpha,\gamma,\delta)\)-rainbow Hamilton framework, \(H_{i}\) is sequentially tightly connected and has a sequentially closed walk of length 1 mod \(k\), and \(L_{H}(\{i\})\) and \(L_{H}(\{j\})\) intersect in an edge for each \(i,j\in[t]\). Hence we obtain a sequentially closed walk \(W\) of length 1 mod \(k\) visiting all edges of \(H\). Define an orientation \(\overrightarrow{H}=\{\overrightarrow{e}\in V(H)^{k}:e\in H\}\) by choosing for every edge \(e\) of \(H\) a \(k\)-tuple (or subpath) \(\overrightarrow{e}\) in \(W\) which contains the vertices of \(e\). Note that \(W\) is compatible with \(\overrightarrow{H}\).

Firstly, we select a sequentially absorbing path \(P\). Note that \(1/t_{1}\leq d_{2},\ldots,d_{k}\), since \(\mathcal{J}\) is a \((t_{0},t_{1})\)-equitable complex. Since \(H\) is an \((\alpha,\gamma,\delta)\)-rainbow Hamilton framework, it follows by Lemma 5.2 that there exists a sequentially path \(P\) in \(G\) such that 1. \(P\) is \((c,\nu)\)-extensible and consistent with \(\overrightarrow{H}\), 2. \(V(P)\) is \(\lambda\)-sparse in \(\mathcal{P}\) and \(V(P)\cap T=\emptyset\), where \(T\) denotes the connection set of \(P\), 3. \(P\) is \(\eta\)-absorbing in \(G\). Next, by Lemma 5.3, there is a sequentially cycle \(A\) of length at least \((1-\eta)n\) which contains \(P\) as a subpath. Moreover, the number of uncovered points \(|V\setminus I(A)|\) is divisible by \(k\) and the number of uncovered colors satisfies \(|[n]\setminus C(A)|=|V\setminus I(A)|\). Finally, we absorb the uncovered points and colors into \(A\). Note that \(|V\setminus I(A)|\leq\eta n\). Thus, there is a sequentially path \(P^{\prime}\) with point set \(I(P)\cup(V\setminus I(A))\) and color set \(C(P)\cup([n]\setminus C(A))\) which has the same endpoints as \(P\); replacing \(P\) by \(P^{\prime}\) in \(A\) then yields a sequentially Hamilton cycle, as desired.

## 6. Almost Covering

### Embedding sequentially paths

Given sequentially walks \(W\) and \(W^{\prime}\) with the property that the terminal \((k-1)\)-tuple of \(W\) is identical to the initial \((k-1)\)-tuple of \(W^{\prime}\), we may _concatenate_ \(W\) and \(W^{\prime}\) to form a new sequentially walk with color set \(C(W)+C(W^{\prime})\), which we denote \(W+W^{\prime}\). Note that a rainbow path in a \(k\)-graph system is a sequentially path in the auxiliary \((1,k)\)-graph \(G\).

**Lemma 6.1**.: _Let \(k,r,n_{0},t,B\) be positive integers and \(\psi,d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},\nu\) be positive constants such that \(1/d_{i}\in\mathbb{N}\) for \(i\in[2,k]\) and such that \(1/n_{0}\ll 1/t\),_

\[\frac{1}{n_{0}},\frac{1}{B}\ll\frac{1}{r},\varepsilon\ll\varepsilon_{k+1},d_{2},\ldots,d_{k},\]

\[\varepsilon_{k+1}\ll\psi,d_{k+1},\nu,\frac{1}{k}.\]

_Then the following holds for all integers \(n\geq n_{0}\)._

_Let \(G\) be a \((1,k)\)-graph on \([n]\cup V\) where \(|V|=n\), and let \(\mathcal{J}\) be a \((\cdot,\cdot,\varepsilon,\varepsilon_{k+1},r)\)-regular slice for \(G\) on \([t]\cup V^{\prime}\) where \(|V^{\prime}|=t\) with density vector \(\textbf{d}=(d_{2},\ldots,d_{k})\). Let \(\mathcal{J}_{W_{i}}\) be the induced subcomplex of \(\mathcal{J}\) on \([t(i-1)/k+1,ti/k]\cup V^{\prime}\) for \(i\in[k]\). We call \([t]\) the family of color clusters and \(V^{\prime}\) the family of point clusters.
Let \(R_{W_{i}}:=R\big{[}[t(i-1)/k+1,ti/k]\cup V^{\prime}\big{]}\) be the induced subgraph of \(R:=R_{d_{k+1}}(G)\). Suppose that \(R_{W_{i}}\) is sequentially tightly connected for \(i\in[k]\), and let \(\textbf{w}_{i}\) be a fractional matching of size \(\mu_{i}=\sum_{e\in E(R_{W_{i}})}\textbf{w}_{i}(e)\) such that \(\mu_{i}(Z)=\sum_{Z\in e,e\in E(R_{W_{i}})}\textbf{w}_{i}(e)\leq 1/k\) for each cluster \(Z\). Also, let \(X\) and \(Y\) be \((k-1)\)-tuples of point clusters, and let \(S_{X}\) and \(S_{Y}\) be subsets of \(\mathcal{J}_{X}\) and \(\mathcal{J}_{Y}\) of sizes at least \(\nu|\mathcal{J}_{X}|\) and \(\nu|\mathcal{J}_{Y}|\) respectively. Finally, let \(W\) be a sequentially walk from \(X\) to \(Y\) of length at most \(t^{2k+1}\) in \(R_{W_{i}}\) and denote \(\ell(W)\) by \(p\). For \(i\in[k]\), we have_ 1. _for any_ \(\ell\) _divisible by_ \(k\) _with_ \(4k\leq\ell\leq(1-\psi)\mu_{i}kn/t\)_, there is a sequentially path_ \(P\) _in_ \(G\) _of length_ \(\ell-1+\ell(W)(k+1)\) _whose initial_ \((k-1)\)_-tuple belongs to_ \(S_{X}\) _and whose terminal_ \((k-1)\)_-tuple belongs to_ \(S_{Y}\)_,_ 2. \(P\) _uses at most_ \(\mu_{i}(Z)n/t+B\) _vertices from any point cluster_ \(Z\in V^{\prime}\) _and at most_ \(k\mu_{i}(C)n/t+B\) _vertices from any color cluster_ \(C\in[t(i-1)/k+1,ti/k]\)_, where_ \(\mu_{i}(Z^{\prime})=\sum_{Z^{\prime}\in e,e\in R_{W_{i}}}\textbf{w}_{i}(e)\) _for any cluster_ \(Z^{\prime}\)_._

Proof.: Let \(\alpha=\psi/5\) and \(\beta=1/200\). When using Lemma 4.7, we require that \(\varepsilon\ll c^{2}\) and choose \(m_{0}\) to be large enough so that \(m\geq\alpha m_{0}\) is acceptable for all these applications. Given \(t\), let

\[n_{0}=t\cdot\max(m_{0},\frac{200k^{2}}{\varepsilon},\frac{8k^{2}}{\alpha\sqrt{\varepsilon}},\frac{10k(k+1)t^{2k+1}}{\alpha}). \tag{1}\]

We write \(\mathcal{G}\) for the \((k+1)\)-complex obtained from \(\mathcal{J}_{W_{i}}\) by adding all edges of \(G\) supported on \(\mathcal{J}_{W_{i}}^{(k)}\) as the '\((k+1)\)th level' of \(\mathcal{G}\). So for any edge \(X=(X_{0},X_{1},\ldots,X_{k})\in R_{W_{i}}\), \(\mathcal{G}[\bigcup_{j\in[0,k]}X_{j}]\) is a \((d_{2},\ldots,d_{k},d^{*}(X),\varepsilon,\varepsilon_{k+1},r)\)-regular \((k+1)\)-partite \((k+1)\)-complex with \(d^{*}(X)\geq d_{k+1}\). Since \(\mathcal{J}\) is a regular slice for \(G\), for any \((1,k)\)-set of clusters \(X=\{X_{0},X_{1},\ldots,X_{k}\}\) in \(\mathcal{J}_{W_{i}}\), the \((k+1)\)-partite \(k\)-complex \(\mathcal{J}_{W_{i}}[\bigcup_{j\in[0,k]}X_{j}]\) is \((\mathbf{d},\varepsilon)\)-regular. By adding all \((k+1)\)-sets supported on \(\hat{\mathcal{J}}_{W_{i}X}\) as the '\((k+1)\)th level', we may obtain a \((d_{2},\ldots,d_{k},1,\varepsilon,\varepsilon_{k+1},r)\)-regular \((k+1)\)-partite \((k+1)\)-complex, whose vertex clusters are subsets \(Y_{j}\subseteq X_{j}\) for \(j\in[0,k]\) of size \(|Y_{1}|=\cdots=|Y_{k}|=\alpha m/k\) and \(|Y_{0}|=\alpha m\). Here \(Y_{0}\) can be seen as \(\bigcup_{i\in[k]}Y_{0,i}\) where \(|Y_{0,i}|=\alpha m/k\) for \(i\in[k]\), and we obtain a \((d_{2},\ldots,d_{k},1,\sqrt{\varepsilon},\sqrt{\varepsilon_{k+1}},r)\)-regular complex by Lemma 4.11. We conclude by Lemma 4.9 that for any subsets \(Y_{i}\), \(i\in[k-1]\), of distinct clusters of \(\mathcal{J}\), each of size \(\alpha m\), we have

\[|\mathcal{G}(Y_{1},\ldots,Y_{k-1})|\geq\varepsilon m^{k-1}. \tag{2}\]

The following claim plays an important role in the proof of Lemma 6.1.
**Claim 6.2**.: _Let \(\{X_{0},X_{1},\ldots,X_{k}\}\) be an edge of \(R\) and choose any \(Y_{j}\subseteq X_{j}\) for each \(j\in[0,k]\) so that \(|Y_{0}|=k|Y_{1}|=\cdots=k|Y_{k}|=\alpha m\). Let \(\mathcal{P}\) be a collection of at least \(\frac{1}{2}|\mathcal{G}(Y_{1},\ldots,Y_{k-1})|\) sequentially paths in \(G\) (not necessarily contained in \(\bigcup_{j\in[k]}Y_{j}\)), each of length at most \(3k\) and whose terminal \((k-1)\)-tuples are distinct members of \(\mathcal{G}(Y_{1},\ldots,Y_{k-1})\). Then for each \(\sigma\in\{0,1\}\) there is a path \(P\in\mathcal{P}\) and a collection \(\mathcal{P}^{\prime}\) of \(\frac{9}{10}e(\mathcal{G}(Y_{\sigma+1},\ldots,Y_{\sigma+k-1}))\) sequentially paths in \(G\), each of length \(2k-1+\sigma\), all of whose initial \((k-1)\)-tuples are the same (the terminal \((k-1)\)-tuple of \(P\)). Furthermore, the terminal \((k-1)\)-tuples of paths in \(\mathcal{P}^{\prime}\) are distinct members of \(\mathcal{G}(Y_{\sigma+1},\ldots,Y_{\sigma+k-1})\). If \(j\leq k-1\), then the \(j\)th vertex \(x\) of each path in \(\mathcal{P}^{\prime}\) lies in \(Y_{j}\); if \(j\geq k\), then \(x\) is not contained in \(P\). Moreover, the \(k+\sigma\) new colors are not contained in \(P\)._

Proof.: Let \(\sigma\in\{0,1\}\) be fixed. We take \(\mathcal{H}\) to be the \((k+1)\)-complex generated by the down-closure of a sequentially path of length \(2k-1+\sigma\) with vertex set \(\{c_{1},\ldots,c_{k+\sigma}\}\cup\{v_{1},\ldots,v_{2k-1+\sigma}\}\) and consider its \((k+1)\)-partition \(V_{0}\cup V_{1}\cup\cdots\cup V_{k}\) where \(\{c_{1},\ldots,c_{k+\sigma}\}\subseteq V_{0}\) and the \(i\)th vertex of the path lies in the vertex class \(V_{j}\) with \(j=i\) mod \(k\). We take \(\mathcal{H}^{\prime}\) to be the subcomplex of \(\mathcal{H}\) induced by \(\{v_{1},\ldots,v_{k-1},v_{k+1+\sigma},\ldots,v_{2k-1+\sigma}\}\).

Consider the pairs \((e,f)\), where \(e\) is an ordered \((k-1)\)-tuple of \(\mathcal{G}(Y_{1},\ldots,Y_{k-1})\) and \(f\) is an ordered \((k-1)\)-tuple of \(\mathcal{G}(Y_{\sigma+1},\ldots,Y_{\sigma+k-1})\). For any such ordered \((k-1)\)-tuple \(e\), there are at most \(km^{k-2}\) ordered \((k-1)\)-tuples \(f\) which intersect \(e\); thus at most a \(1/200\)-proportion of the pairs \((e,f)\) are not disjoint. On the other hand, if \(e\) and \(f\) are disjoint, then the down-closure of the pair \((e,f)\) forms a labelled copy of \(\mathcal{H}^{\prime}\) in \(\mathcal{G}[\bigcup_{j\in[0,k]}Y_{j}]\), so by Lemma 4.7 with \(s=3k+2\sigma-1\) and \(s^{\prime}=2k-2\), for all but at most a \(1/200\)-proportion of the disjoint pairs \((e,f)\), there are at least \(c(\alpha m/k)^{k+2\sigma+1}\geq\sqrt{\varepsilon}(\alpha m/k)^{k+2\sigma+1}\) extensions to copies of \(\mathcal{H}\) in \(\mathcal{G}[\bigcup_{j\in[0,k]}Y_{j}]\). Each such copy of \(\mathcal{H}\) corresponds to a sequentially path in \(G\) of length \(2k-1+\sigma\) with all vertices in the desired clusters. We conclude that at least a \(99/100\)-proportion of all pairs \((e,f)\) of ordered \((k-1)\)-tuples are disjoint and are linked by at least \(\sqrt{\varepsilon}(\alpha m/k)^{k+2\sigma+1}\) sequentially paths in \(G\) of length \(2k-1+\sigma\), where \(c_{i}\in V_{0}\) for \(i\in[k+\sigma]\) and \(v_{\ell}\in V_{j}\) with \(j=\ell\) mod \(k\). We call these pairs _extensible_. We call an ordered \((k-1)\)-tuple \(e\in\mathcal{G}(Y_{1},\ldots,Y_{k-1})\) _good_ if at most \(1/20\) of the ordered edges \(f\in\mathcal{G}(Y_{\sigma+1},\ldots,Y_{\sigma+k-1})\) do not make an extensible pair with \(e\).
Then at most \(1/5\) of the ordered \((k-1)\)-tuples in \(\mathcal{G}(Y_{1},\ldots,Y_{k-1})\) are not good: otherwise more than \((1/5)\cdot(1/20)=1/100\) of all pairs \((e,f)\) would fail to be extensible, contradicting the bound above. Thus, there exists a path \(P\in\mathcal{P}\) whose terminal \((k-1)\)-tuple is a good ordered \((k-1)\)-tuple \(e\). Fix such a \(P\) and \(e\), and consider any ordered \((k-1)\)-tuple \(f\) in \(\mathcal{G}(Y_{\sigma+1},\ldots,Y_{\sigma+k-1})\) which is disjoint from \(P\) and such that \((e,f)\) is an extensible pair; then there are at least \(\sqrt{\varepsilon}(\alpha m/k)^{k+2\sigma+1}\) sequentially paths in \(G\) from \(e\) to \(f\). We claim that at least one of these paths has the further property that if \(j\geq k\), then the \(j\)th vertex is not contained in \(P\) and the \(k+\sigma\) new colors are not contained in \(P\); we can therefore put it in \(\mathcal{P}^{\prime}\). Indeed, as \(f\) is disjoint from \(P\), if \(\sigma=0\), then it suffices to show that one of these paths has the property that \(v_{k}\in Y_{k}\setminus V(P)\) and \(c_{i}\in Y_{0}\setminus C(P)\) for \(i\in[k]\). This is true because there are only at most \((2k+1)(\alpha m)^{k}+k(2k+1)(\alpha m/k)^{k}<\sqrt{\varepsilon}(\alpha m/k)^{k+1}\) paths which do not have this property, by (1). If \(\sigma=1\), then we need a path whose \(k\)th and \((k+1)\)st vertices are not in \(V(P)\) and \(c_{i}\in Y_{0}\setminus C(P)\) for \(i\in[k+1]\), which is possible since \(2(2k+1)(\alpha m/k)^{k+2}+(k+1)(2k+1)(\alpha m/k)^{k+2}<\sqrt{\varepsilon}(\alpha m/k)^{k+3}\) by (1). Finally, consider the ordered \((k-1)\)-tuples \(f\in\mathcal{G}(Y_{\sigma+1},\ldots,Y_{\sigma+k-1})\): since \(20|V(P)|(k-1)(\alpha m/k)^{k-2}\leq\varepsilon m^{k-1}\leq e(\mathcal{G}(Y_{\sigma+1},\ldots,Y_{\sigma+k-1}))\) by (1) and (2), at most \(1/20\) of these \((k-1)\)-tuples \(f\) intersect \(P\), and by the choice of \(e\), at most \(1/20\) of these \((k-1)\)-tuples \(f\) are such that \((e,f)\) is not extensible. This leaves at least \(9/10\) of the \((k-1)\)-tuples \(f\) remaining, and choosing a sequentially path for each such \(f\) as described above gives the desired set \(\mathcal{P}^{\prime}\).

Let \(X=(X_{1},\ldots,X_{k-1})\), \(Y=(Y_{1},\ldots,Y_{k-1})\), \(X_{k}\) be the cluster following \(X\) in \(W\) and \(Y_{k}\) be the cluster preceding \(Y\) in \(W\). Without loss of generality, we may assume that \(\{X_{0},X_{1},\ldots,X_{k}\}\) is an edge of \(R_{W_{1}}\) and \(\{Y_{0},Y_{1},\ldots,Y_{k}\}\) is an edge of \(R_{W_{k}}\). By assumption, \(S_{X}\) constitutes at least a \(\nu\)-proportion of \(\mathcal{G}(X_{1},\ldots,X_{k-1})\) and \(S_{Y}\) constitutes at least a \(\nu\)-proportion of \(\mathcal{G}(Y_{1},\ldots,Y_{k-1})\). Given any subsets \(X^{\prime}_{j}\subseteq X_{j}\) of size \(\alpha m/k\) for \(j\in[k]\) and \(X^{\prime}_{0}\subseteq X_{0}\) of size \(\alpha m\), we say that a \((k-1)\)-tuple \(e\in\mathcal{G}(X_{1},\ldots,X_{k-1})\) is _well-connected to \((X^{\prime}_{1},\ldots,X^{\prime}_{k-1})\)_ via \(X^{\prime}_{k}\) and \(X^{\prime}_{0}\) if for at least \(9/10\) of the \((k-1)\)-tuples \(f\) in \(\mathcal{G}(X^{\prime}_{1},\ldots,X^{\prime}_{k-1})\), there exist distinct \(k\)-subsets \(\{c_{1},\ldots,c_{k}\}\), \(\{f_{1},\ldots,f_{k}\}\) of \(X^{\prime}_{0}\) and distinct \(u,v\in X^{\prime}_{k}\) such that \((c_{1}\cdots c_{k},e(u)f)\) and \((f_{1}\cdots f_{k},e(v)f)\) are sequentially paths in \(G\) of length \(2k-1\).
**Claim 6.3**.: _For any subsets \(X^{\prime}_{j}\subseteq X_{j}\) of size \(\alpha m/k\), \(Z_{j}\subseteq X_{j}\) of size \(\alpha m/k\) for \(j\in[k]\) and \(X^{\prime}_{0}\subseteq X_{0}\), \(Z_{0}\subseteq X_{0}\) of size \(\alpha m\) such that each \(X^{\prime}_{j}\) is disjoint from \(Z_{j}\), the following statements hold._

1. _At least_ \(9/10\) _of the_ \((k-1)\)_-tuples_ \(e\) _in_ \(\mathcal{G}(Z_{1},\ldots,Z_{k-1})\) _are well-connected to_ \((Z_{1},\ldots,Z_{k-1})\) _via_ \(Z_{k}\) _and_ \(Z_{0}\)_._
2. _At least_ \(9/10\) _of the_ \((k-1)\)_-tuples_ \(e\) _in_ \(\mathcal{G}(Z_{1},\ldots,Z_{k-1})\) _are well-connected to_ \((X^{\prime}_{1},\ldots,X^{\prime}_{k-1})\) _via_ \(X^{\prime}_{k}\) _and_ \(X^{\prime}_{0}\)_._
3. _At least_ \(9/10\) _of the_ \((k-1)\)_-tuples_ \(e\) _in_ \(\mathcal{G}(X^{\prime}_{1},\ldots,X^{\prime}_{k-1})\) _are well-connected to_ \((Z_{1},\ldots,Z_{k-1})\) _via_ \(X^{\prime}_{k}\) _and_ \(X^{\prime}_{0}\)_._

Proof.: From the proof of Claim 6.2, we know that all but at most a \(1/100\)-proportion of pairs \((e,f)\), where \(e,f\in\mathcal{G}(Z_{1},\ldots,Z_{k-1})\), are disjoint and are linked by at least \(\sqrt{\varepsilon}(\alpha m/k)^{k+1}\) sequentially paths in \(G\) of length \(2k-1\). It follows that at least a \(9/10\)-proportion of the \((k-1)\)-tuples of \(\mathcal{G}(Z_{1},\ldots,Z_{k-1})\) can be extended to at least a \(9/10\)-proportion of the \((k-1)\)-tuples of \(\mathcal{G}(Z_{1},\ldots,Z_{k-1})\) by at least \(\sqrt{\varepsilon}(\alpha m/k)^{k+1}\) sequentially paths, which proves (1).

To prove (2), we apply Lemma 4.10 with \(\mathcal{H}\) being the \((k+1)\)-complex generated by the down-closure of a sequentially path of length \(2k-1\) and \(\mathcal{H}^{\prime}\) being the subcomplex induced by its initial and terminal \((k-1)\)-tuples. We regard \(\mathcal{H}\) as a \((2k)\)-partite \((k+1)\)-complex with \(k\) colors in the color cluster and one vertex in each point cluster. The role of \(\mathcal{G}\) in Lemma 4.7 is played by the \((2k)\)-partite subcomplex of \(\mathcal{G}\) with vertex classes \(X^{\prime}_{0},Z_{1},\ldots,Z_{k-1},X^{\prime}_{k},X^{\prime}_{1},\ldots,X^{\prime}_{k-1}\); the colors of \(\mathcal{H}\) are embedded in \(X^{\prime}_{0}\), the first vertex of \(\mathcal{H}\) is to be embedded in \(Z_{1}\), the second one in \(Z_{2}\), and so forth. By Lemmas 4.11 and 4.7, the proportion of pairs \((e,f)\) for which there is no path as in (2) is at most \(1/200\), and the remainder of the argument follows as in (1). Statement (3) can be proved similarly.

We are ready to construct our path. Arbitrarily choose subsets \(X^{(0)}_{0}\subseteq X_{0}\), \(Z_{0}\subseteq Y_{0}\) of size \(\alpha m\) and \(X^{(0)}_{j}\subseteq X_{j}\), \(Z_{j}\subseteq Y_{j}\) of size \(\alpha m/k\) for \(j\in[k]\). By Theorems 4.6, 4.7 and 4.9, there are at least \(|S_{X}||\mathcal{G}(X^{(0)}_{1},\ldots,X^{(0)}_{k-1})|/2\) pairs \((e,f)\), where \(e\in S_{X}\) and \(f\in\mathcal{G}(X^{(0)}_{1},\ldots,X^{(0)}_{k-1})\), which can be extended to \(\sqrt{\varepsilon}(\alpha m/k)^{k+1}\) sequentially paths whose remaining point lies in \(X^{(0)}_{k}\) and whose colors lie in \(X^{(0)}_{0}\).
Thus, we choose a \((k-1)\)-tuple \(P^{(0)}\) of \(S_{X}\) such that the following holds: there is a set \(\mathcal{P}^{(0)}\) of sequentially paths of the form \((c_{1}\cdots c_{k},P^{(0)}(v)f)\) for \(v\in X^{(0)}_{k}\), \(c_{1},\ldots,c_{k}\in X^{(0)}_{0}\) and \(f\in\mathcal{G}(X^{(0)}_{1},\ldots,X^{(0)}_{k-1})\), for which the terminal \((k-1)\)-tuples of paths in \(\mathcal{P}^{(0)}\) are all distinct and constitute at least half of the ordered \((k-1)\)-tuples of \(\mathcal{G}(X^{(0)}_{1},\ldots,X^{(0)}_{k-1})\). Similarly, we can choose \(e\in S_{Y}\) such that for at least half the members \(e^{\prime}\) of \(\mathcal{G}(Z_{1},\ldots,Z_{k-1})\), there is a sequentially path of length \(2k-1\) in \(G\) from \(e^{\prime}\) to \(e\) whose remaining point lies in \(Z_{k}\) and whose colors lie in \(Z_{0}\).

We now construct the desired path. Since \(H_{W_{i}}\) is sequentially tightly connected, we can obtain a walk \(W=e_{1}\cdots e_{s}\) passing through all edges of \(H_{W_{i}}\). For each \(i\in[s]\), let \(n_{i}\) be any integer with \(0\leq n_{i}\leq(1-3\alpha)\mathbf{w}(e_{i})m\). Set the initial state to be 'filling the edge \(e_{1}\)'; we proceed for \(j\geq 1\) as follows, maintaining the invariant

\(\bigstar\)**:**: The terminal \((k-1)\)-tuples of the path family \(\mathcal{P}^{(j)}\) constitute at least half of the ordered \((k-1)\)-tuples of \(\mathcal{G}(X^{(j)}_{1},\ldots,X^{(j)}_{k-1})\).

Suppose that our current state is 'filling the edge \(e_{i}\)' for some \(i\). If we have previously completed \(n_{i}\) steps in this state, then we do nothing and change the state to 'position \(1\) in traversing the walk \(W\)'. Otherwise, since \(\bigstar\) holds for \(j-1\), we apply Claim 6.2 with \(\sigma=0\) to obtain a path \(P\in\mathcal{P}^{(j-1)}\) and a collection \(\mathcal{P}^{(j)}\) of \(\frac{9}{10}e(\mathcal{G}(X^{(j-1)}_{1},\ldots,X^{(j-1)}_{k-1}))\) sequentially paths of length \(2k-1\), all of whose initial \((k-1)\)-tuples are the same (the terminal \((k-1)\)-tuple of \(P\)) and whose terminal \((k-1)\)-tuples are distinct members of \(\mathcal{G}(X^{(j-1)}_{1},\ldots,X^{(j-1)}_{k-1})\) and are disjoint from \(V(P)\), whose colors lie in \(X^{(j-1)}_{0}\setminus C(P)\), and whose remaining vertex lies in \(X^{(j-1)}_{k}\setminus V(P)\). We define \(P^{(j)}\) to be the concatenation \(P^{(j-1)}+P\) with color classes \(C(P^{(j-1)})\cup C(P)\). For \(p\in[0,k]\), we generate \(X^{(j)}_{p}\) from \(X^{(j-1)}_{p}\) by removing the vertices of \(P^{(j)}\) in \(X^{(j-1)}_{p}\) and replacing them by vertices from the same cluster which do not lie in \(Z\) or in \(P^{(j)}\). We will prove that this is possible in Claim 6.4.

Now suppose that our current state is 'position \(q\) in traversing the walk \(W\)'. Since \(\bigstar\) holds for \(j-1\), we apply Claim 6.2 with \(\sigma=1\) to obtain a path \(P\in\mathcal{P}^{(j-1)}\) and a collection \(\mathcal{P}^{(j)}\) of \(\frac{9}{10}e(\mathcal{G}(X_{1}^{(j-1)},\ldots,X_{k-1}^{(j-1)}))\) sequentially paths of length \(2k\), all of whose initial \((k-1)\)-tuples are the same (the terminal \((k-1)\)-tuple of \(P\)) and whose terminal \((k-1)\)-tuples are distinct members of \(\mathcal{G}(X_{2}^{(j-1)},\ldots,X_{k}^{(j-1)})\) and are disjoint from \(V(P)\), and whose two remaining vertices lie in \(X_{k}^{(j-1)}\setminus V(P)\) and \(X_{1}^{(j-1)}\setminus V(P)\) respectively, with colors in \(X_{0}^{(j-1)}\setminus C(P)\). Exactly as before, we define \(P^{(j)}\) to be the concatenation \(P^{(j-1)}+P\).
We generate \(X_{p}^{(j)}\) from \(X_{p+1}^{(j-1)}\) for \(p\in[0,k-1]\) by removing the vertices of \(P^{(j-1)}\) in \(X_{p+1}^{(j-1)}\) and replacing them by vertices from the same cluster which do not lie in \(Z\) or \(P^{(j)}\). If we have not reached the end of \(W\), we choose \(X_{k}^{(j)}\) to be a subset of the cluster at position \(q+k\) in the sequence of \(W\) such that \(X_{k}^{(j)}\) is disjoint from \(P^{(j)}\cup Z\). In this case, we change our state to 'position \(q+1\) in traversing \(W\)'.

Alternatively, if we have reached the end of \(W\), meaning that the \((k-1)\)-tuple of clusters containing \(X_{1}^{(j)},\ldots,X_{k-1}^{(j)}\) is \((Y_{1},\ldots,Y_{k-1})\), then we choose \(X_{k}^{(j)}\) to be a subset of \(Y_{k}\) which has size \(\alpha m/k\) and is disjoint from \(P^{(j)}\cup Z\). We may choose a path \(P\in\mathcal{P}^{(j-1)}\) such that the terminal \((k-1)\)-tuple \(f\in\mathcal{G}(X_{1}^{(j)},\ldots,X_{k-1}^{(j)})\) of \(P\) is well-connected to \((Z_{1},\ldots,Z_{k-1})\) via \(Z_{k}\) and \(Z_{0}\). This implies that we may choose a \((k-1)\)-tuple \(e^{\prime}\) in \(\mathcal{G}(Z_{1},\ldots,Z_{k-1})\), \(v,v^{\prime}\) in \(Z_{k}\) and new colors \(C^{*},C^{**}\) in \(Z_{0}\) with \(|C^{*}|=|C^{**}|=k\) such that \((C^{*},f(v^{\prime})e^{\prime})\) is a sequentially path \(Q^{\prime}\) and \((C^{**},e^{\prime}(v)e)\) is a sequentially path \(Q\). Return \(P^{(j)}+Q^{\prime}+Q\) as the output sequentially path in \(G\). Note that an edge may appear multiple times in the walk. When it first appears, the process executes 'filling the edge'; when it appears later, 'filling the edge' is no longer needed. In Claim 6.4 below we prove that these choices are all possible.

**Claim 6.4**.: _The algorithm described above is well-defined (that is, it is always possible to construct the sets \(X_{p}^{(j)}\)), maintains \(\bigstar\) and returns a sequentially path of length_

\[4k-1+\left(\sum_{i\in[s]}n_{i}\right)\cdot k+\ell(W)\cdot(k+1).\]

Proof.: We first prove that \(\bigstar\) is maintained. Recall that \(e(\mathcal{G}(X_{1}^{(j)},\ldots,X_{k-1}^{(j)}))\geq\varepsilon m^{k-1}\) for each \(j\). Fixing some \(j\), for either \(A_{p}:=X_{p}^{(j-1)}\) or \(A_{p}:=X_{p+1}^{(j-1)}\), we obtain sets \(A_{1},\ldots,A_{k-1}\), each of size \(\alpha m\), such that the terminal \((k-1)\)-tuples of \(\mathcal{P}^{(j)}\) constitute at least \(9/10\) of the ordered edges of \(\mathcal{G}(A_{1},\ldots,A_{k-1})\) and, for each \(i\in[k-1]\), \(X_{i}^{(j)}\) is formed from \(A_{i}\) by removing at most two vertices and replacing them with the same number of vertices. Since each vertex is in at most \(m^{k-2}\) ordered \((k-1)\)-tuples of either \(\mathcal{G}(A_{1},\ldots,A_{k-1})\) or \(\mathcal{G}(X_{1}^{(j)},\ldots,X_{k-1}^{(j)})\), we conclude that the fraction of ordered \((k-1)\)-tuples of \(\mathcal{G}(X_{1}^{(j)},\ldots,X_{k-1}^{(j)})\) which are the terminal \((k-1)\)-tuples of paths in \(\mathcal{P}^{(j)}\) is at least

\[\begin{split}&\frac{\frac{9}{10}e(\mathcal{G}(A_{1},\ldots,A_{k-1}))-2(k-1)m^{k-2}}{e(\mathcal{G}(X_{1}^{(j)},\ldots,X_{k-1}^{(j)}))}\\ &\geq\frac{\frac{9}{10}(e(\mathcal{G}(X_{1}^{(j)},\ldots,X_{k-1}^{(j)}))-2(k-1)m^{k-2})-2(k-1)m^{k-2}}{e(\mathcal{G}(X_{1}^{(j)},\ldots,X_{k-1}^{(j)}))}\\ &\geq\frac{9}{10}-\frac{4(k-1)m^{k-2}}{\varepsilon m^{k-1}}\geq\frac{1}{2},\end{split} \tag{3}\]

where the last inequality holds since \(m\geq m_{0}\geq 16(k-1)/\varepsilon\). Thus, we obtain \(\bigstar\).
To prove that we can always construct the set \(X_{p}^{(j)}\), observe that it is enough to check that at termination every cluster still has at least \(2\alpha m\) vertices not in \(P^{(j)}\), as then there are at least \(\alpha m\) vertices outside \(Z\). In each walk-traversing step, each path in \(\mathcal{P}^{(j)}\) contains precisely \(k+1\) new vertices and \(k+1\) new colors, and the total number of walk-traversing steps is precisely \(\ell(W)\). Recall that this number is at most \(t^{2k+1}\); we have \((k+1)t^{2k+1}<\frac{\alpha m}{2k}\) and \((k+1)^{2}t^{2k+1}<\frac{\alpha m}{2}\) by (1). When we are in the state 'filling the edge \(e_{i}\)', we have \(n_{i}\) steps, and in each step, each path in \(\mathcal{P}^{(j)}\) contains \(k\) new vertices, one from each cluster of \(e_{i}\setminus C(e_{i})\), and \(k\) new colors from \(C(e_{i})\). So for any color cluster \(C\), the number of its vertices added to \(P^{(j)}\) is at most \(\sum_{i:C\in e_{i}}kn_{i}\leq\sum_{i:C\in e_{i}}(1-3\alpha)k\mathbf{w}(e_{i})m\leq(1-3\alpha)m\). And for any point cluster \(X\), the number of its vertices added to \(P^{(j)}\) is at most \(\sum_{i:X\in e_{i}}n_{i}\leq\sum_{i:X\in e_{i}}(1-3\alpha)\mathbf{w}(e_{i})m\leq(1-3\alpha)m/k\). Together with \(e\) and the \(k\) vertices of the chosen path in \(\mathcal{P}^{(0)}\), we conclude that there are at most \((1-2\alpha)m\) vertices of any color cluster and at most \((1-2\alpha)m/k\) vertices of any point cluster contained in \(P^{(j)}\) at termination.

Finally, the length of the path is equal to the number of vertices. Recall that \(P^{(0)}\) contains \(k-1\) vertices. Next, \(k\) vertices and \(k\) colors are added to \(P^{(0)}\) to form \(P^{(1)}\). Each of the \(\sum_{i\in[s]}n_{i}\) edge-filling steps results in \(k\) new vertices and \(k\) new colors being added to \(P^{(j)}\), and each of the \(\ell(W)\) walk-traversing steps results in \(k+1\) new vertices and \(k+1\) new colors being added to \(P^{(j)}\). When completing the path, we need \(2k\) vertices which are not in the final path \(P^{(j)}\) (\(v,v^{\prime},e\) and \(e^{\prime}\)). Thus, the final path has length

\[(k-1)+k+\left(\sum_{i\in[s]}n_{i}\right)\cdot k+\ell(W)\cdot(k+1)+2k.\]

We obtain the shortest sequentially path by never entering the state 'filling an edge', in which case we obtain a sequentially path of length \(4k-1+\ell(W)(k+1)\). On the other hand, by extending \(W\) to include all edges of \(R_{W_{i}}\) and taking \(n_{i}\) to be \((1-\psi)\mathbf{w}(e_{i})m\) for each \(i\in[s]\), we obtain a sequentially path of length at least \((1-\psi)\mu_{i}kn/t\), using at most \(k\mu_{i}(C)n/t+B\) vertices from any color cluster \(C\) in \(R_{W_{i}}\) and at most \(\mu_{i}(X)n/t+B\) vertices from any point cluster \(X\), where \(\mu_{i}(Z)=\sum_{Z\in e,e\in R_{W_{i}}}\mathbf{w}_{i}(e)\) for \(i\in[k]\) and \(B=B(t,k)\). By choosing the \(n_{i}\) appropriately, we can obtain sequentially paths of any admissible length between these two extremes.
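The bookkeeping behind the length formula in Claim 6.4 can be checked mechanically. The following is a minimal sketch of that count (an illustration only; the function and its inputs are not part of the formal argument):

```python
# Hedged sketch: replay the vertex count from the proof of Claim 6.4 and
# check it against the stated length 4k - 1 + (sum n_i) * k + l(W) * (k + 1).
def path_length(k, ns, walk_len):
    length = k - 1                 # the initial (k-1)-tuple P^(0)
    length += k                    # the first extension forming P^(1)
    length += sum(ns) * k          # each edge-filling step adds k new vertices
    length += walk_len * (k + 1)   # each walk-traversing step adds k+1 vertices
    length += 2 * k                # completing the path (v, v', e and e')
    return length

k, ns, walk_len = 3, [2, 0, 5], 4
assert path_length(k, ns, walk_len) == 4 * k - 1 + sum(ns) * k + walk_len * (k + 1)
```

Similarly to Lemma 6.1, we obtain the following lemma.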
**Lemma 6.5**.: _Let \(k,r,n_{0},t,B\) be positive integers and \(\psi,d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},\nu\) be positive constants such that \(1/d_{i}\in\mathbb{N}\) for \(i\in[2,k]\) and_

\[\frac{1}{n_{0}}\ll\frac{1}{t}\ll\frac{1}{B}\ll\frac{1}{r},\varepsilon\ll\varepsilon_{k+1},d_{2},\ldots,d_{k},\]
\[\varepsilon_{k+1}\ll\psi,d_{k+1},\nu,\frac{1}{k}.\]

_Then the following holds for all integers \(n\geq n_{0}\)._

_Let \(G\) be a \((1,k)\)-graph on \([n]\cup V\) where \(|V|=n\), and let \(\mathcal{J}\) be a \((\cdot,\cdot,\varepsilon,\varepsilon_{k+1},r)\)-regular slice for \(G\) on \([t]\cup V^{\prime}\) where \(|V^{\prime}|=t\) with density vector \(\textbf{d}=(d_{2},\ldots,d_{k})\). Let \(\mathcal{J}_{W_{i}}\) be the induced subcomplex of \(\mathcal{J}\) on \([t(i-1)/k+1,ti/k]\cup V^{\prime}\) for \(i\in[k]\), and let \(R_{W_{i}}:=R\left[[t(i-1)/k+1,ti/k]\cup V^{\prime}\right]\) be the induced subgraph of \(R:=R_{d_{k+1}}(G)\). Suppose that \(R_{W_{i}}\) is sequentially tightly connected for each \(i\in[k]\), and let \(\textbf{w}_{i}\) be a fractional matching of size \(\mu_{i}=\sum_{e\in E(R_{W_{i}})}\textbf{w}_{i}(e)\) with \(\mu_{i}(Z)\leq 1/k\) for each cluster \(Z\) and each \(i\in[k]\). Also, let \(X\) and \(Y\) be \((k-1)\)-tuples of point clusters, and let \(S_{X}\) and \(S_{Y}\) be subsets of \(\mathcal{J}_{X}\) and \(\mathcal{J}_{Y}\) of sizes at least \(\nu|\mathcal{J}_{X}|\) and \(\nu|\mathcal{J}_{Y}|\) respectively. Finally, let \(W\) be a sequentially walk traversing all edges of each \(H_{W_{i}}\) from \(X\) to \(Y\) of length at most \(t^{2k+1}\) and denote \(\ell(W)\) by \(p\). For \(i\in[k]\), we have_

1. _for any_ \(\ell\) _divisible by_ \(k\) _with_ \(4k\leq\ell\leq(1-\psi)\sum_{i\in[k]}\mu_{i}kn/t\)_, there is a sequentially path_ \(P\) _in_ \(G\) _of length_ \(\ell-1+\ell(W)(k+1)\) _whose initial_ \((k-1)\)_-tuple belongs to_ \(S_{X}\) _and whose terminal_ \((k-1)\)_-tuple belongs to_ \(S_{Y}\)_,_
2. \(P\) _uses at most_ \(\sum_{i\in[k]}\mu_{i}(Z)n/t+B\) _vertices from any point cluster_ \(Z\in V^{\prime}\) _and at most_ \(k\mu_{i}(C)n/t+B\) _vertices from any color cluster_ \(C\in[t]\)_, where_ \(\mu_{i}(Z^{\prime})=\sum_{Z^{\prime}\in e,e\in R_{W_{i}}}\textbf{w}_{i}(e)\) _for any cluster_ \(Z^{\prime}\)_._

### Connecting

Let us begin with the existence of extensible paths. The following proposition states that most tuples in the complex induced by an edge of the reduced graph of a regular slice also extend to that edge.

**Proposition 6.6**.: _Let \(k,m,t,r\in\mathbb{N}\) and \(\varepsilon,\varepsilon_{k+1},d_{2},\ldots,d_{k+1},\beta,c,\nu\) be such that_

\[1/m\ll 1/r,\varepsilon\ll c\ll\varepsilon_{k+1},d_{2},\ldots,d_{k},\]
\[\varepsilon_{k+1}\ll\beta\ll d_{k+1},\nu.\]

_Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \((G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},R)\) be a \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup. Let \(Y=(Y_{0},Y_{1},\ldots,Y_{k})\) be an ordered edge in \(R\); then all but at most \(\beta|\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}|\) many tuples \((v_{1},\ldots,v_{k-1})\in\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}\) are \((c,\nu)\)-extensible both left- and rightwards to \(Y\)._

Proof.: Let \(P=(c_{1},\ldots,c_{2k},v_{1},\ldots,v_{3k-1})\) be a sequentially path. Partition its vertex set into \(k+1\) clusters \(X_{0},X_{1},\ldots,X_{k}\) such that \(X_{0}=\{c_{1},\ldots,c_{2k}\}\) and \(X_{i}=\{v_{j}:j=i\text{ mod }k\}\) for \(i\in[k]\). Thus, \(P\) is a \((k+1)\)-partite \((k+1)\)-graph.
Let \(\mathcal{H}\) be the down-closure of the path \(P\), which is a \((k+1)\)-partite \((k+1)\)-complex. Let \(V_{1}=\{v_{1},\ldots,v_{k-1}\}\) and \(V_{2}=\{v_{2k+1},\ldots,v_{3k-1}\}\). Let \(\mathcal{H}^{\prime}\) be the induced subcomplex of \(\mathcal{H}\) on \(V_{1}\cup V_{2}\). Thus, \(\mathcal{H}^{\prime}\) is a \(k\)-partite \((k-1)\)-complex on \(2k-2\) vertices. Let \(\mathcal{G}=\mathcal{J}\cup G_{\mathcal{J}}\). Let \(\mathcal{H}^{\prime}_{\mathcal{G}}\) be the set of labelled partition-respecting copies of \(\mathcal{H}^{\prime}\) in \(\mathcal{G}\). It follows that

\[|\mathcal{H}^{\prime}_{\mathcal{G}}|=(1\pm\varepsilon_{k+1})|\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}|^{2}, \tag{4}\]

where the error term accounts for the fact that we do not count the intersecting pairs of \((k-1)\)-tuples in \(\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}\). Since \(Y\) is an edge of \(R\), any function \(\phi:V(P)\to V(R)\) such that \(\phi(X_{i})\subseteq Y_{i}\) is a homomorphism. By Lemma 4.7 with \(\beta^{2}\) playing the role of \(\beta\), we deduce that all but at most \(\beta^{2}|\mathcal{H}^{\prime}_{\mathcal{G}}|\) of the labelled partition-respecting copies of \(\mathcal{H}^{\prime}\) in \(\mathcal{G}\) extend to at least \(cm^{3k+1}\) labelled partition-respecting copies of \(\mathcal{H}\) in \(\mathcal{G}\), since \(c\ll d_{2},\ldots,d_{k-1}\). For each \(e\in\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}\), let \(T(e)\) be the number of tuples \(e^{\prime}\) in \(\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}\) such that \(e\cup e^{\prime}\) can be extended to at least \(cm^{3k+1}\) copies of \(\mathcal{H}\) in \(\mathcal{G}\). We have

\[\sum_{e\in\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}}T(e)\geq(1-2\beta^{2})|\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}|^{2}. \tag{5}\]

Let \(S\subseteq\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}\) be the set of \((k-1)\)-tuples \(e\) which are not \((c,\nu)\)-extensible leftwards to \(Y\), that is, \(T(e)<\nu|\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}|\). By the definition of \(S\), we have

\[\sum_{e\in\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}}T(e)\leq|S|\cdot\nu|\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}|+(|\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}|-|S|)|\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}|;\]

combining this with (5) and \(\beta\ll\nu\), we obtain

\[|S|\leq\frac{2\beta^{2}}{1-\nu}|\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}|\leq\frac{\beta}{2}|\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}|.\]

A symmetric argument shows that at most \(\frac{\beta}{2}|\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}|\) of the \((k-1)\)-tuples in \(\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}\) are not \((c,\nu)\)-extensible rightwards to \(Y\). Thus, all but at most \(\beta|\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}|\) tuples in \(\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}\) are \((c,\nu)\)-extensible both left- and rightwards to \(Y\).
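For the reader's convenience, the rearrangement behind the bound on \(|S|\) above is the following routine step (writing \(J=|\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}|\)):

\[(1-2\beta^{2})J^{2}\leq\sum_{e}T(e)\leq J^{2}-(1-\nu)|S|J\quad\Longrightarrow\quad(1-\nu)|S|\leq 2\beta^{2}J.\]

The following lemma allows us to connect up two extensible paths using either very few or quite a lot of vertices.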
**Lemma 6.7**.: _Let \(k,r,m,t\in\mathbb{N}\), and \(d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},c,\nu,\lambda\) be such that_

\[1/m\ll 1/r,\varepsilon\ll c\ll\varepsilon_{k+1},d_{2},\ldots,d_{k},\]
\[\lambda\ll\nu\ll 1/k,\]
\[\varepsilon_{k+1}\ll d_{k+1}.\]

_Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \(\mathfrak{S}=(G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},H)\) be a \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup where \(\mathcal{P}\) has an initial partition of \([n]\cup V\) and \(H\) is a \((1,k)\)-graph on \([t]\cup V^{\prime}\). Suppose that \(H_{W_{i}}=H[[t(i-1)/k+1,ti/k]\cup V^{\prime}]\) is sequentially tightly connected for \(i\in[k]\). Let \(P_{1}\), \(P_{2}\subseteq G\) be \((c,\nu)\)-extensible paths such that \(P_{1}\) extends rightwards to \(X\) and \(P_{2}\) extends leftwards to \(Y\). Suppose that \(P_{1}\) and \(P_{2}\) are either identical or disjoint, and let \(W\) be a sequentially walk traversing each \(H_{W_{i}}\) of length at most \(t^{2k+1}\) that starts from \(X\) and ends with \(Y\). Let \(T\) be the joint connection set of \(P_{1}\) and \(P_{2}\). Suppose that \(T\) and \(S\subseteq V(G)\) are \(\lambda\)-sparse in \(\mathcal{P}\), \(V(P_{1})\cup V(P_{2})\subseteq S\) and \(T\cap S=\emptyset\). Then_

_(1) there is a sequentially path \(Q\) of length \(4k-1+(\ell(W)+2)(k+1)\) in \(G[V(\mathcal{P})]\) such that \(P_{1}QP_{2}\) is a sequentially path, containing no vertices of \(S\) and exactly \(6k+2\) vertices of \(T\),_

_(2) consider \(\psi\) with \(\varepsilon_{k+1}\ll\psi\) and let **w** be a fractional matching of size \(\mu=\sum_{i\in[k]}\sum_{e\in E(H_{W_{i}})}\textbf{w}_{i}(e)\geq 5/m\) such that \(\sum_{Z\in e,e\in H_{W_{i}}}\textbf{w}_{i}(e)\leq(1-2\lambda)/k\) for each \(Z\in\mathcal{P}\). There is a sequentially path \(Q\) of length \(\ell(W)+1\) mod \(k\) in \(G[V(\mathcal{P})]\) such that \(P_{1}QP_{2}\) is a sequentially path, containing no vertices of \(S\) and exactly \(6k+2\) vertices of \(T\). Moreover, there is a set \(U\subseteq V(\mathcal{P})\) of size at most \(\psi mt\) such that \(U\cup V(Q)\) has exactly \(\lceil\sum_{i\in[k]}\sum_{Z\in e,e\in H_{W_{i}}}\textbf{w}_{i}(e)m\rceil+B\) vertices in each point cluster \(Z\)._

Proof.: Let \(X=(X_{0},X_{1},\ldots,X_{k})\). Since \(P_{1}\) extends rightwards to \(X\), there exists a target set \(T_{1}\subseteq\mathcal{J}_{(X_{2},\ldots,X_{k})}\) of size \(|T_{1}|\geq\nu|\mathcal{J}_{(X_{2},\ldots,X_{k})}|\) such that for every \((v_{2},\ldots,v_{k})\in T_{1}\), there are at least \(cm^{3k+1}\) many \((3k+1)\)-tuples \((c_{1},\ldots,c_{2k},w_{1},\ldots,w_{k},v_{1})\) with \(c_{i}\in T\cap X_{0}\) for \(i\in[2k]\), \(w_{i}\in T\cap X_{i}\) for \(i\in[k]\) and \(v_{1}\in T\cap X_{1}\) such that \(((c_{1},\ldots,c_{2k}),P_{1}(w_{1},\ldots,w_{k},v_{1},\ldots,v_{k}))\) is a sequentially path. Similarly, let \(Y=(Y_{0},Y_{1},\ldots,Y_{k})\); then \(P_{2}\) extends leftwards to \(Y\) with target set \(T_{2}\subseteq\mathcal{J}_{(Y_{2},\ldots,Y_{k})}\). For each \(Z\in\mathcal{P}\), let \(Z^{\prime}\subseteq Z\setminus(S\cup T)\) be a subset of size \(m^{\prime}=(1-2\lambda)m\); this is possible since \(S\) and \(T\) are \(\lambda\)-sparse. Let \(\mathcal{P}^{\prime}=\{Z^{\prime}\}_{Z\in\mathcal{P}}\), \(G^{\prime}=G[V(\mathcal{P}^{\prime})]\) and \(\mathcal{J}^{\prime}=\mathcal{J}[V(\mathcal{P}^{\prime})]\).
By Lemma 4.11, \(\mathfrak{S}^{\prime}:=(G^{\prime},G^{\prime}_{\mathcal{J}},\mathcal{J}^{\prime},\mathcal{P}^{\prime},H)\) is a \((k,m^{\prime},2t,\sqrt{\varepsilon},\sqrt{\varepsilon_{k+1}},r,\mathbf{d})\)-regular setup.

For (2), let \(\mu^{\prime}=\mu/(1-2\lambda)\) be the scaled size of \(\mathbf{w}\) and let \(B\in\mathbb{N}\) be such that \(1/B\ll 1/r,\varepsilon\). Let \(\ell\) be the largest integer divisible by \(k\) with \(4k\leq\ell\leq(1-\psi/4)\mu^{\prime}m^{\prime}k\). Note that such an \(\ell\) exists since \((1-\psi/4)\mu^{\prime}m^{\prime}\geq 4\), where the latter inequality follows from \(\mu\geq 5/m\). Applying Lemma 6.5 with \(G^{\prime},\mathcal{J}^{\prime},W,\ell,\mathbf{w},\mu^{\prime}\) and \(T_{1},T_{2}\), we obtain a sequentially path \(Q^{\prime}\) whose initial \((k-1)\)-tuple belongs to \(T_{1}\) and whose terminal \((k-1)\)-tuple belongs to \(T_{2}\). Furthermore, \(Q^{\prime}\) has length \(\ell-1+\ell(W)(k+1)\) and uses at most \(\sum_{i\in[k]}\mu_{i}(Z)m+B\) vertices from any point cluster \(Z\), where \(\mu_{i}(Z)=\sum_{Z\in e,e\in H_{W_{i}}}\mathbf{w}_{i}(e)\) and \(B\ll\psi\mu mk\). Note that \(\ell\geq(1-\psi/4)\mu km-k\); it follows that

\[\begin{split}&\sum_{Z\in V^{\prime}}\sum_{i\in[k]}\mu_{i}(Z)m-\sum_{Z\in V^{\prime}}|V(Q^{\prime})\cap Z|\\ &\leq\mu km-(1-\frac{\psi}{4})\mu km+k+1-\ell(W)(k+1)\\ &\leq\frac{\psi}{4}\mu km+k+1\\ &\leq\frac{\psi}{4}(1-2\lambda)tm+k+1\leq\frac{\psi}{2}mt.\end{split}\]

Hence, there is a set \(U\subseteq V(\mathcal{P})\) of size at most \(\psi mt\) such that \(U\cup V(Q^{\prime})\) has \(\lceil\sum_{i\in[k]}\mu_{i}(Z)m\rceil+B\) vertices in each point cluster \(Z\in V^{\prime}\).

For (1), we can choose a path \(Q^{\prime}\) in the same way. The only difference is that in this case \(\mathbf{w}\) is a single edge of weight \(1\) and \(\ell=4k\). Hence, \(Q^{\prime}\) is a path of length \(4k-1+\ell(W)(k+1)\).

Finally, we use the above extensible paths to choose \(c_{1},\ldots,c_{k+1},w_{1},\ldots,w_{k},v_{1}\) and \(f_{1},\ldots,f_{k+1}\), \(v^{\prime}_{k},w^{\prime}_{1},\ldots,w^{\prime}_{k}\) in \(T\) such that for

\[Q=((c_{1},\ldots,c_{k+1})C(Q^{\prime})(f_{1},\ldots,f_{k+1}),(w_{1},\ldots,w_{k},v_{1})Q^{\prime}(v^{\prime}_{k},w^{\prime}_{1},\ldots,w^{\prime}_{k})),\]

the concatenation \(P_{1}QP_{2}\) is a sequentially path and \(Q\) is disjoint from \(S\); this is possible since \(S\cap T=\emptyset\) and \(T\cap V(Q^{\prime})=\emptyset\). It is obvious that the length of \(Q\) in (1) is \(4k-1+(\ell(W)+2)(k+1)\) and the length of \(Q\) in (2) is \(\ell(W)+1\) mod \(k\).

**Proposition 6.8**.: _Let \(W\) be a sequentially walk in a \((1,k)\)-graph \(H\) on \([t]\cup V^{\prime}\) which starts from a \((1,k)\)-tuple \(X\) and ends with a \((1,k)\)-tuple \(Y\), where \(|V^{\prime}|=t\). There exists a sequentially walk \(W^{\prime}\) of length at most \(kt^{k+1}\) which starts from \(X\) and ends with \(Y\). Moreover, \(\ell(W^{\prime})=\ell(W)\) mod \(k\)._

Proof.: Suppose that \(\ell(W)=j\) mod \(k\) for some \(j\in[0,k-1]\). Let \(W^{\prime}\) be a vertex-minimal sequentially walk from \(X\) to \(Y\) of length \(j\) mod \(k\). Our goal is to show that every \((1,k)\)-tuple repeats at most \(k\) times in \(W^{\prime}\). Assume that \(W^{\prime}\) contains \(k+1\) copies of the same \((1,k)\)-tuple \(Z\) and denote by \(n_{p}\) the position in \(W^{\prime}\) where the \(p\)th occurrence of \(Z\) begins. It is obvious that \(n_{p}-n_{1}\not\equiv 0\) mod \(k\) for \(p\geq 2\); otherwise this contradicts the minimality of \(W^{\prime}\).
By the pigeonhole principle, there exist two indices \(p,p^{\prime}\) with \(1\leq p<p^{\prime}\leq k+1\) such that \(n_{p}-n_{1}\equiv n_{p^{\prime}}-n_{1}\) mod \(k\), that is, \(n_{p^{\prime}}-n_{p}\equiv 0\) mod \(k\). We can then reduce the length of \(W^{\prime}\) by deleting the vertices between positions \(n_{p}\) and \(n_{p^{\prime}}-1\), a contradiction.

**Proposition 6.9**.: _Let \(j,k,t\in\mathbb{N}\) with \(j\in[k]\). Let \(W\) be a sequentially closed walk that is compatible with respect to an orientation \(\overrightarrow{H}\) of a \((1,k)\)-graph \(H\) on \([t]\cup V^{\prime}\) where \(|V^{\prime}|=t\). Let \(X_{1}\) and \(X_{2}\) be consistent with \(\overrightarrow{H}\). There exists a sequentially walk \(W^{\prime}\) of length at most \(kt^{k+1}\) which starts from \(X_{1}\) and ends with \(X_{2}\). Moreover, if \(W\) has length 1 \(\mathrm{mod}\ k\), then \(W^{\prime}\) can be chosen to have length \(j\) \(\mathrm{mod}\ k\)._

Proof.: For the first part, by Proposition 6.8, it suffices to show that there is a sequentially walk starting from \(X_{1}\) and ending with \(X_{2}\). Since \(X_{1}\) is consistent with \(\overrightarrow{H}\), there is a sequentially path \(W_{X_{1}}\) of length at most \(k-1\) from \(X_{1}\) to \(X_{1}^{\prime}\) in \(H\), where \(X_{1}^{\prime}\) is an oriented edge in \(\overrightarrow{H}\) which is a cyclic shift of \(X_{1}\). Similarly, there is a sequentially path \(W_{X_{2}}\) of length at most \(k-1\) from \(X_{2}\) to \(X_{2}^{\prime}\) in \(H\), where \(X_{2}^{\prime}\) is an oriented edge in \(\overrightarrow{H}\) which is a cyclic shift of \(X_{2}\). Since \(W\) is compatible with the orientation \(\overrightarrow{H}\), there is a subwalk \(W_{X_{1}^{\prime}X_{2}^{\prime}}\subseteq W\) starting from \(X_{1}^{\prime}\) and ending with \(X_{2}^{\prime}\); hence \((C(X_{1})C(W_{X_{1}})C(W_{X_{1}^{\prime}X_{2}^{\prime}})C(W_{X_{2}})C(X_{2}),I(X_{1})I(W_{X_{1}})I(W_{X_{1}^{\prime}X_{2}^{\prime}})I(W_{X_{2}})I(X_{2}))\) is the desired \(W^{\prime}\). For the second part, we choose \(W_{X_{1}^{\prime}X_{2}^{\prime}}\) such that \(W^{\prime}\) has length \(j\) mod \(k\) by extending \(W_{X_{1}^{\prime}X_{2}^{\prime}}\) along the same \((1,k)\)-tuple with copies of \(W\) an appropriate number of times. This is possible since any number coprime to \(k\) is a generator of the finite cyclic group \(\mathbb{Z}/k\mathbb{Z}\).

**Lemma 6.10** (Connecting Lemma).: _Let \(k,m,r,t\in\mathbb{N}\), \(d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},p,\nu,\lambda,\zeta\) be such that_

\[1/m\ll 1/r,\varepsilon\ll 1/t,\zeta,\varepsilon_{k+1},d_{2},\ldots,d_{k},\]
\[\zeta\ll p\ll d_{2},\ldots,d_{k},\]
\[1/t\ll\varepsilon_{k+1}\ll d_{k+1},\nu\leq 1/k,\]
\[\lambda\ll\nu\ll 1/k.\]

_Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \((G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},H)\) be a \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup with \(H\) being sequentially tightly connected. Let \(\overrightarrow{H}\) be an orientation of \(H\) with a compatible closed walk \(W\). Suppose that \(\mathcal{C}\) is a collection of pairwise disjoint \((p,\nu)\)-extensible paths consistent with \(\overrightarrow{H}\) and with joint connection set \(T\). Assume that_

1. \(|\mathcal{C}|\leq\zeta m\)_,_
2. \(V(\mathcal{C})\) _is_ \(\lambda\)_-sparse in_ \(\mathcal{P}\)_,_
3. \(V(\mathcal{C})\cap T=\emptyset\)_._

_For any two elements \(P_{1},P_{2}\) of \(\mathcal{C}\), there is a sequentially path \(P\) in \(G\) such that_

1. \(P\) _connects every path of_ \(\mathcal{C}\)_,_
2. \(P\) _starts from_ \(P_{1}\) _and ends with_ \(P_{2}\)_,_
3. \(V(P)\setminus V(\mathcal{C})\subseteq V(\mathcal{P})\)_,_
4. \(V(P)\setminus V(\mathcal{C})\) _intersects in at most_ \(10k^{2}\mathcal{C}_{Z}+t^{2t+3k+2}\) _vertices with each cluster_ \(Z\in\mathcal{P}\)_, where_ \(\mathcal{C}_{Z}\) _denotes the number of paths of_ \(\mathcal{C}\) _intersecting_ \(Z\)_._

Proof.: Choose a set \(T^{\prime}\) from \(V(G)\) by including each vertex of \(V(\mathcal{P})\) independently at random with probability \(p\). By Proposition 1.18 and the union bound, we obtain that the set \(T^{\prime}\) is \((2p)\)-sparse with probability \(1-2t\exp(-\Omega(m))\). By Proposition 1.19, we obtain that the set \(T^{\prime}\) is a connection set of a fixed \((p^{3k+2}/2,\nu)\)-extensible path in \(\mathcal{C}\) with probability \(1-2m^{k-1}\exp(-\Omega(m))\). Since \(|\mathcal{C}|\leq\zeta m\), with positive probability we get a set \(T^{\prime}\) satisfying all these properties.

Initialize \(S=V(\mathcal{C})\). While there are two paths \(Q_{1},Q_{2}\in\mathcal{C}\) such that the right extension of \(Q_{1}\) equals the left extension of \(Q_{2}\), apply Lemma 6.7 (1) with \(\ell(W)=kp^{k+4}/2\) to obtain a path \(Q\) of length \(10k^{2}\) which avoids \(S\) and has exactly \(6k+2\) vertices in \(T^{\prime}\). Add \(V(Q)\) to \(S\), replace \(Q_{1},Q_{2}\) with \(Q\) in \(\mathcal{C}\) and delete the \(6k+2\) vertices used by \(Q\) from \(T^{\prime}\). Denote the set of paths after this procedure by \(\mathcal{C}^{\prime}\). Note that the size of \(S\) grows by at most \(10k^{2}|\mathcal{C}|\leq 10k^{2}\zeta m\leq\lambda m\), and we delete at most \((6k+2)|\mathcal{C}|\leq(6k+2)\zeta m\leq p^{3k+2}m/4\) vertices from \(T^{\prime}\) throughout this process, since \(\zeta\ll p\). This implies that every path of \(\mathcal{C}\) remains \((p^{3k+2}/4,\nu)\)-extensible with connection set \(T^{\prime}\). Hence the conditions of Lemma 6.7 (1) are satisfied in every step and \(\mathcal{C}^{\prime}\) is well-defined. Note that when the procedure ends, \(\mathcal{C}^{\prime}\) has size at most \(t^{2t}\). Moreover, the paths of \(\mathcal{C}^{\prime}\) inherit the property of being consistent with \(\overrightarrow{H}\).

We continue by connecting up the paths of \(\mathcal{C}^{\prime}\) to the desired path \(P\) along the orientation. As the paths of \(\mathcal{C}^{\prime}\) are consistent with \(\overrightarrow{H}\), the left and right extensions of each path in \(\mathcal{C}^{\prime}\) are contained in the walk \(W\). Since \(W\) is compatible with \(\overrightarrow{H}\), we can apply Proposition 6.9 to obtain a sequentially walk in \(H\) of length at most \(t^{2k+1}\) between the left and right ends of each path in \(\mathcal{C}^{\prime}\). Using Lemmas 4.11 and 6.7 (1), we can connect up the paths of \(\mathcal{C}^{\prime}\) using at most \(t^{2t+3k+2}\) further vertices of \(V(\mathcal{P})\). Thus, \(P\) contains every path in \(\mathcal{C}\) as a subpath and \(V(P)\setminus V(\mathcal{C})\subseteq V(\mathcal{P})\). Moreover, note that \(V(\mathcal{C}^{\prime})\setminus V(\mathcal{C})\) intersects in at most \(10k^{2}\mathcal{C}_{Z}\) vertices with each cluster \(Z\in\mathcal{P}\), where \(\mathcal{C}_{Z}\) denotes the number of paths of \(\mathcal{C}\) that intersect \(Z\). It is obvious that \(P\) can start and end with any two paths of \(\mathcal{C}\).
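The first stage of this proof is a simple merging loop. The following is a minimal sketch of that loop under an illustrative data model (the `Path` fields and the `connect` routine are assumptions of the sketch; `connect` stands in for the splicing provided by Lemma 6.7 (1)):

```python
# Hedged sketch of the merging loop in the proof of Lemma 6.10: while two
# paths have matching right/left extensions, splice them into a single path.
from dataclasses import dataclass

@dataclass(frozen=True)
class Path:
    left: tuple   # left extension of the path
    right: tuple  # right extension of the path

def merge(paths, connect):
    paths = list(paths)
    while True:
        pair = next(((p, q) for p in paths for q in paths
                     if p is not q and p.right == q.left), None)
        if pair is None:
            return paths
        p, q = pair
        paths.remove(p)
        paths.remove(q)
        paths.append(connect(p, q))  # new path with left=p.left, right=q.right
```

The loop terminates since each step decreases the number of paths by one.

Proof of Lemma 5.3.: Let \(P_{1}=P\).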
Suppose that \(P_{1}\) extends rightwards to \(X\) and leftwards to \(Y\). By Proposition 6.6, there exists a path \(P_{2}\) of length \(k-1\) which \((c,\nu)\)-extends both leftwards and rightwards to \(Y\). Moreover, we can assume that \(V(P_{1})\) is disjoint from \(V(P_{2})\) and \(T_{2}\), where \(T_{2}\) is the connection set of \(P_{2}\). By Propositions 1.18 and 1.19, we can choose a \(\lambda\)-sparse vertex set \(T^{\prime}\) such that \(P_{1}\), \(P_{2}\) are \((c^{3k+2}/2,\nu)\)-extensible paths with connection set \(T^{\prime}\).

First, let \(S_{1}=V(P_{1})\cup V(P_{2})\) and choose \(\kappa\) such that \(\lambda\ll\kappa\ll\gamma\). For each \(Z\in\mathcal{P}\), we can select a subset \(Z^{\prime}\) of \(Z\) of size \(m^{\prime}=\kappa m\) such that \(Z\cap S_{1}\subseteq Z^{\prime}\), since \(S_{1}\) is \(2\lambda\)-sparse, \(1/m\ll 1/t\ll\alpha\ll\lambda\) and \(2\lambda\ll\kappa\). Let \(\mathcal{P}^{\prime}=\{Z^{\prime}\}_{Z\in\mathcal{P}}\), \(V(\mathcal{P}^{\prime})=\bigcup_{Z\in\mathcal{P}}Z^{\prime}\), let \(G^{\prime}=G[V(\mathcal{P}^{\prime})]\), \(G^{\prime}_{\mathcal{J}^{\prime}}=G_{\mathcal{J}}[V(\mathcal{P}^{\prime})]\) be the corresponding induced subgraphs and \(\mathcal{J}^{\prime}=\mathcal{J}[V(\mathcal{P}^{\prime})]\) be the induced subcomplex. By Lemma 4.11, \(\mathfrak{S}^{\prime}=(G^{\prime},G^{\prime}_{\mathcal{J}^{\prime}},\mathcal{J}^{\prime},\mathcal{P}^{\prime},H)\) is a \((k,m^{\prime},2t,\sqrt{\varepsilon},\sqrt{\varepsilon_{k+1}},r,d_{2},\ldots,d_{k+1})\)-regular setup.

Now we define a fractional matching that complements the discrepancy of \(S_{1}\) in the clusters of \(\mathcal{P}\). Define \(\mathbf{b}_{i}\in\mathbb{R}^{V(H_{W_{i}})}\) by setting \(\mathbf{b}_{i}(Z^{\prime})=|Z^{\prime}\setminus S_{1}|/|Z^{\prime}|\) for every \(Z\in V(H_{W_{i}})\). Recall that \(|S_{1}\cap Z|\leq 2\lambda m\), \(|Z^{\prime}|=\kappa m\) and \(\lambda\ll\kappa,\gamma\). It follows that

\[1-\gamma\leq 1-\frac{2\lambda}{\kappa}\leq 1-\frac{|S_{1}\cap Z^{\prime}|}{|Z^{\prime}|}\leq\mathbf{b}_{i}(Z^{\prime})\leq 1.\]

Since \(H_{W_{i}}\) is \(\gamma\)-robustly matchable, there is a fractional matching \(\mathbf{w}_{i}\) such that \(\sum_{Z\in e,e\in H_{W_{i}}}\mathbf{w}_{i}(e)=\mathbf{b}_{i}(Z^{\prime})/k\) for every cluster \(Z^{\prime}\in\mathcal{P}^{\prime}\) of \(H_{W_{i}}\), where \(i\in[k]\). Consider \(\psi>0\) with \(\varepsilon_{k+1}\ll\psi\ll\alpha\). By Lemma 6.7, there exists a sequentially path \(Q_{1}\) in \(G^{\prime}\) such that \(P_{2}Q_{1}P_{1}\) is a sequentially path in \(G\) which contains no vertices of \(S_{1}\) and \(4k+2\) vertices of \(T^{\prime}\). Moreover, there is a set \(U\subseteq V(\mathcal{P})\) of size at most \(\psi mt\) such that \(U\cup V(Q_{1})\) has \(\lceil\sum_{i\in[k]}\sum_{Z\in e,e\in H_{W_{i}}}\mathbf{w}_{i}(e)\kappa m\rceil+B\) vertices in each point cluster \(Z\). In other words, \(V(P_{2}Q_{1}P_{1})\cup U\) has \(\kappa m+B\) vertices in each point cluster of \(V(H)\) and uses at least \((\kappa m+B)(1-\alpha)t\) vertices of \(V\), since \(|V(L_{H}(i))|\geq(1-\alpha)t\) for \(i\in[t]\).

We now choose the second path \(Q_{2}\). Note that \(P_{2}Q_{1}P_{1}\) has right extension \(X\) and left extension \(Y\), which are consistent with \(\overrightarrow{H}\). Since \(W\) is compatible with \(\overrightarrow{H}\), we can apply Proposition 6.9 to obtain a sequentially walk \(W^{\prime}\) in \(H\) of length \(p\leq t^{2k+1}\) starting from \(X\) and ending with \(Y\).
Moreover, since \(W\) has length coprime to \(k\), we can choose \(W^{\prime}\) such that

\[p+1\equiv|V(G)\setminus V(P_{2}Q_{1}P_{1})|\ \text{mod}\ k.\]

Let \(S_{2}=V(P_{2}Q_{1}P_{1})\) and \(T^{\prime\prime}=T^{\prime}\setminus S_{2}\). Define \(\mathbf{c}_{i}\in\mathbb{R}^{V(H_{W_{i}})}\) by setting \(\mathbf{c}_{i}(Z)=(m-|Z\cap S_{2}|)/m\) for every \(Z\in V(H_{W_{i}})\). Note that \(1-\gamma\leq 1-\kappa-\psi\leq\mathbf{c}_{i}\leq 1\). Since \(H_{W_{i}}\) is \(\gamma\)-robustly matchable, there is a fractional matching \(\mathbf{z}_{i}\) such that \(\sum_{Z\in e,e\in H_{W_{i}}}\mathbf{z}_{i}(e)=\mathbf{c}_{i}(Z)/k\) for every \(Z\in\mathcal{P}\) of \(H_{W_{i}}\). By Lemma 6.7, there exists a sequentially path \(Q_{2}\) in \(G\) of length \(p+1\) mod \(k\) which contains no vertices of \(S_{2}\) and \(4k+2\) vertices of \(T^{\prime\prime}\) such that \(P_{2}Q_{1}P_{1}Q_{2}\) is a sequentially cycle. Besides, there is a set \(U^{\prime}\subseteq V(\mathcal{P})\) of size at most \(\psi mt\) such that \(U^{\prime}\cup V(Q_{2})\) has \(\lceil\sum_{i\in[k]}\sum_{Z\in e,e\in H_{W_{i}}}\mathbf{z}_{i}(e)m\rceil+B\) vertices in each point cluster \(Z\). Thus, \(U^{\prime}\cup V(Q_{2})\) uses at least \(((1-\kappa)m-B+B)\,(1-\alpha)t=(1-\kappa)m(1-\alpha)t\) vertices of \(V\).

Denote the set of uncovered vertices in all clusters of \(\mathcal{P}\) by \(M\). Note that \(P_{2}Q_{1}P_{1}Q_{2}\) contains all vertices of \(V(G)\) except those in \(M\), \(U\) and \(U^{\prime}\). We know that \(|M|\leq\alpha mt\) and \(|U|\leq\psi mt,|U^{\prime}|\leq\psi mt\). Thus \(P_{2}Q_{1}P_{1}Q_{2}\) covers all but at most \(\alpha mt+2\psi mt\leq 3\alpha n\leq\eta n\) vertices. Since the length of \(Q_{2}\) is \(p+1\) mod \(k\), it follows that \(|V\setminus V(P_{2}Q_{1}P_{1}Q_{2})|\) is divisible by \(k\).

## 7. Absorption

We will give the proof of Lemma 5.2 in this section. The method can be sketched as follows. We define an absorbing gadget to absorb a set \(T\) of \(k\) vertices and a set \(O\) of \(k\) colors. For each \((T,O)\), the absorbing gadgets are numerous. Based on this, we can choose a small family of vertex-disjoint gadgets such that for every \((T,O)\) there are many absorbing gadgets; such a family is obtained by a probabilistic argument. Connecting all these gadgets yields the desired absorbing path.

This section is organised as follows. In Subsection 7.1, we attach vertices to regular complexes, since the gadgets we need should be well-integrated in regular setups. In Subsection 7.2, we count the number of absorbing gadgets for each \((T,O)\). In Subsection 7.3, we select a well-behaved family of absorbing gadgets, which is used to absorb a small number of arbitrary sets of \(k\) vertices and \(k\) colors.

### Technical Tools

In this part, we will obtain some results to help us attach vertices to regular complexes. Let \(H\) be a \((1,k)\)-graph with vertex set \([n]\cup V\), and let \(\mathcal{J}\) be a regular slice with cluster set \(\mathcal{P}\). Given a \((0,k-1)\)-subset \(X\subseteq\mathcal{P}\), \(\mathcal{J}_{X}\) is an \(|X|\)-partite \(|X|\)-graph containing all edges of the \(|X|\)-th level of \(\mathcal{J}\).
For any \(v\in V\), \(\delta>0\) and any color cluster \(C\), let

\[N_{\mathcal{J}}((v,C),\delta)=\{X\subseteq\mathcal{P}:|X|=k-1\text{ and }|N_{H}((v,c);\mathcal{J}_{X})|>\delta|\mathcal{J}_{X}|\text{ for any }c\in C\}.\]

**Lemma 7.1**.: _Let \(k,r,m,t\in\mathbb{N}\) and \(d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},\mu,\delta\) be such that_

\[1/m\ll 1/r,\varepsilon\ll\varepsilon_{k+1},d_{2},\ldots,d_{k},\]
\[\varepsilon_{k+1}\ll d_{k+1}\leq 1/k,\]

_and_

\[\varepsilon_{k+1}\ll\mu\ll\delta.\]

_Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \((H,H_{\mathcal{J}},\mathcal{J},\mathcal{P},R)\) be a representative \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup. Suppose that \(H\) has minimum relative \((1,1)\)-degree at least \(\delta+\mu\) with vertex set \([n]\cup V\). Then for any \(v\in V\) and any color cluster \(C\), we have_

\[|N_{\mathcal{J}}((v,C),\frac{\mu}{3})|\geq(\delta+\frac{\mu}{4})\binom{t}{k-1}.\]

_For any \(c\in[n]\) and any point cluster \(Z\), we have_

\[|N_{\mathcal{J}}((c,Z),\frac{\mu}{3})|\geq(\delta+\frac{\mu}{4})\binom{t}{k-1}.\]

Proof.: Let \(v\in V\) and \(c\in C\) be arbitrary. The minimum relative degree condition implies that \(\overline{\deg}_{H}(v,c)\geq\delta+\mu\). Since the regular setup is representative and \(\varepsilon_{k+1}\ll\mu\), we have \(|\overline{\deg}_{H}(v,c)-\overline{\deg}_{H}((v,c);\mathcal{J})|<\varepsilon_{k+1}\) and

\[\deg_{H}((v,c),\mathcal{J}^{(k-1)})\geq(\delta+\mu-\varepsilon_{k+1})|\mathcal{J}^{(k-1)}|\geq(\delta+\frac{2}{3}\mu)|\mathcal{J}^{(k-1)}|.\]

For any \((0,k-1)\)-subset \(X\) of \(\mathcal{P}\), \(\mathcal{J}_{X}\) corresponds to the \((k-1)\)-edges of \(\mathcal{J}^{(k-1)}\) which are \(X\)-partite. Define \(d_{X}=\prod_{i=2}^{k-1}d_{i}^{\binom{k-1}{i}}\). By Lemma 4.9, we have \(|\mathcal{J}_{X}|=(1\pm\varepsilon_{k+1})d_{X}m^{k-1}\).
By summing over all the \((0,k-1)\)-subsets of \(\mathcal{P}\), we have

\[|\mathcal{J}^{(k-1)}|\geq(1-\varepsilon_{k+1})\binom{t}{k-1}d_{X}m^{k-1}.\]

Moreover, letting \(X\) range over all \((0,k-1)\)-subsets of \(\mathcal{P}\), we have

\[\sum_{X}|N_{H}((v,c);\mathcal{J}_{X})|=\deg_{H}((v,c);\mathcal{J}^{(k-1)})\geq(\delta+\frac{2}{3}\mu)|\mathcal{J}^{(k-1)}|.\]

Finally, we obtain

\[\begin{split}&(\delta+\frac{2}{3}\mu)|\mathcal{J}^{(k-1)}|\\ &\leq\sum_{X}|N_{H}((v,c);\mathcal{J}_{X})|\leq\sum_{X\in N_{\mathcal{J}}((v,c),\mu/3)}|\mathcal{J}_{X}|+\sum_{X\notin N_{\mathcal{J}}((v,c),\mu/3)}\frac{\mu}{3}|\mathcal{J}_{X}|\\ &\leq\left(|N_{\mathcal{J}}((v,c),\mu/3)|+\frac{\mu}{3}\left(\binom{t}{k-1}-|N_{\mathcal{J}}((v,c),\mu/3)|\right)\right)(1+\varepsilon_{k+1})d_{X}m^{k-1}\\ &\leq\left((1-\frac{\mu}{3})|N_{\mathcal{J}}((v,c),\mu/3)|+\frac{\mu}{3}\binom{t}{k-1}\right)\frac{1+\varepsilon_{k+1}}{1-\varepsilon_{k+1}}\frac{|\mathcal{J}^{(k-1)}|}{\binom{t}{k-1}}\\ &\leq\left(|N_{\mathcal{J}}((v,c),\mu/3)|+\frac{\mu}{3}\binom{t}{k-1}\right)(1+2\varepsilon_{k+1})\frac{|\mathcal{J}^{(k-1)}|}{\binom{t}{k-1}}.\end{split}\]

Thus, for any \(v\in V\) and \(c\in C\), we have

\[|N_{\mathcal{J}}((v,c),\mu/3)|\geq(\delta+\frac{\mu}{4})\binom{t}{k-1},\]

and by definition, the following holds for any \(v\in V\) and color cluster \(C\):

\[|N_{\mathcal{J}}((v,C),\mu/3)|\geq(\delta+\frac{\mu}{4})\binom{t}{k-1}.\]

Similarly, the following holds for any \(c\in[n]\) and any point cluster \(Z\):

\[|N_{\mathcal{J}}((c,Z),\mu/3)|\geq(\delta+\frac{\mu}{4})\binom{t}{k-1}.\]

**Lemma 7.2**.: _Let \(k,r,m,t\in\mathbb{N}\) and \(d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},\mu,\lambda\) be such that_

\[1/m\ll 1/r,\varepsilon\ll\varepsilon_{k+1},d_{2},\ldots,d_{k},\]
\[\varepsilon_{k+1}\ll d_{k+1}\leq 1/k,\]

_and_

\[\varepsilon_{k+1}\ll\lambda\ll\mu.\]

_Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \((H,H_{\mathcal{J}},\mathcal{J},\mathcal{P},R)\) be a \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup. Let \(T\subseteq V(H)\) be such that \(|Z_{1}\cap T|=|Z_{2}\cap T|\leq\lambda m\) for every \(Z_{1},Z_{2}\in\mathcal{P}\). Let \(Z^{\prime}=Z\setminus T\) for each \(Z\in\mathcal{P}\), and let \(\mathcal{J}^{\prime}=\mathcal{J}[\bigcup Z^{\prime}]\) be the induced subcomplex. For every \(v\in V\) and color cluster \(C\), we have_

\[|N_{\mathcal{J}}((v,C),2\mu)|\leq|N_{\mathcal{J}^{\prime}}((v,C),\mu)|,\]

_and for every \(c\in[n]\) and point cluster \(Z\), we have_

\[|N_{\mathcal{J}}((c,Z),2\mu)|\leq|N_{\mathcal{J}^{\prime}}((c,Z),\mu)|.\]

Proof.: Fix any \(v\in V\), a color cluster \(C\) and a \((0,k-1)\)-set \(X\in N_{\mathcal{J}}((v,C),2\mu)\). By definition, we have \(|N_{H}((v,c);\mathcal{J}_{X})|>2\mu|\mathcal{J}_{X}|\) for any \(c\in C\). Let \(X=\{X_{1},\ldots,X_{k-1}\}\) and \(X^{\prime}=\{X^{\prime}_{1},\ldots,X^{\prime}_{k-1}\}\) be the corresponding clusters in the complex \(\mathcal{J}^{\prime}\). Our goal is to prove that \(X^{\prime}\in N_{\mathcal{J}^{\prime}}((v,C),\mu)\). Let \(\varepsilon\ll\beta\ll\varepsilon_{k+1}\) and \(d_{X}=\prod_{i=2}^{k-1}d_{i}^{\binom{k-1}{i}}\). By Lemma 4.9, we have

\[|\mathcal{J}_{X}|=(1\pm\beta)d_{X}m^{k-1}\]

and

\[|N_{H}((v,c);\mathcal{J}_{X})|>2\mu|\mathcal{J}_{X}|\geq 2\mu(1-\beta)d_{X}m^{k-1}.\]

Let \(m^{\prime}=|X_{1}\setminus T|\); then \(|Z^{\prime}|=m^{\prime}\) for each \(Z\in\mathcal{P}\), and note that \(m^{\prime}\geq(1-\lambda)m\).
By Lemma 4.11, \(\mathcal{J}^{\prime}\) is a \((\cdot,\cdot,\sqrt{\varepsilon},\sqrt{\varepsilon_{k+1}},r)\)-regular slice. By Lemma 4.9, we have

\[(1+\beta)d_{X}(m^{\prime})^{k-1}\geq|\mathcal{J}_{X^{\prime}}^{\prime}|\geq(1-\beta)d_{X}(m^{\prime})^{k-1}\geq(1-\beta)(1-\lambda)^{k-1}d_{X}m^{k-1}.\]

Since \(\beta\ll\varepsilon_{k+1}\ll\lambda\ll\mu\), we have

\[\begin{split}|N_{H}((v,c);\mathcal{J}_{X^{\prime}}^{\prime})|&\geq|N_{H}((v,c);\mathcal{J}_{X})|-(|\mathcal{J}_{X}|-|\mathcal{J}_{X^{\prime}}^{\prime}|)\\ &\geq(1-\beta)(2\mu-(1-(1-\lambda)^{k-1}))d_{X}m^{k-1}\\ &\geq\mu(1+\beta)d_{X}m^{k-1}\geq\mu|\mathcal{J}_{X^{\prime}}^{\prime}|.\end{split}\]

Thus, we obtain that \(X^{\prime}\in N_{\mathcal{J}^{\prime}}((v,C),\mu)\). Similarly, for every \(c\in[n]\) and point cluster \(Z\), we have

\[|N_{\mathcal{J}}((c,Z),2\mu)|\leq|N_{\mathcal{J}^{\prime}}((c,Z),\mu)|.\]

In a \((k+1)\)-uniform sequentially cycle, the link graph of a point corresponds to a \(k\)-uniform sequentially path. Thus, we will look for sequentially paths in the neighborhoods of vertices inside a regular complex. The following lemma states that by looking at a \(\mu\)-fraction of the \((1,k-1)\)-edges of a regular complex, we will find many sequentially paths.

**Lemma 7.3**.: _Let \(1/m\ll\varepsilon\ll d_{2},\ldots,d_{k},1/k,\mu\) and \(k\geq 3\). Suppose that \(\mathcal{J}\) is a \((\cdot,\cdot,\varepsilon)\)-equitable complex with density vector \(\textbf{d}=(d_{2},\ldots,d_{k})\) and ground partition \(\mathcal{P}\), where the size of each vertex class is \(m\). Let \(W=\{W_{0},W_{1},\ldots,W_{k-1}\}\subseteq\mathcal{P}\). Let \(S\subseteq\mathcal{J}_{W}\) have size at least \(\mu|\mathcal{J}_{W}|\) and let \(Q\) be a \(k\)-uniform sequentially path \(((c_{1},\ldots,c_{k}),(v_{1},\ldots,v_{2k-2}))\) with vertex classes \(\{X_{0},X_{1},\ldots,X_{k-1}\}\) such that \(v_{i},v_{i+k-1}\in X_{i}\) for \(i\in[k-1]\) and \(c_{j}\in X_{0}\) for \(j\in[k]\). Let \(\mathcal{Q}\) be the down-closed \(k\)-complex generated by \(Q\) and \(\mathcal{Q}_{S}\subseteq\mathcal{Q}_{\mathcal{J}}\) be the copies of \(\mathcal{Q}\) whose edges in the \(k\)-th level are in \(S\). We have_

\[|\mathcal{Q}_{S}|\geq\frac{1}{2}\left(\frac{\mu}{8k}\right)^{k+1}|\mathcal{Q}_{\mathcal{J}}|.\]

Proof.: The proof consists of three steps. First, we use the dense version of the counting and extension lemma to count the number of various hypergraphs in \(\mathcal{J}\). Second, we remove some \((1,k-1)\)-tuples without good properties. Finally, we use an iterative procedure to return sequentially paths using good \((1,k-1)\)-tuples, as desired.

First, let \(\beta\) be such that \(\varepsilon\ll\beta\ll d_{2},\ldots,d_{k},1/k,\mu\). Define

\[d_{a}=\prod_{i=2}^{k-2}d_{i}^{\binom{k-2}{i}},\quad d_{b}=\prod_{i=2}^{k-2}d_{i}^{\binom{k}{i}-\binom{k-2}{i}}\cdot\prod_{i=k-1}^{k}d_{i}^{\binom{k}{i}}.\]

Let \(W^{\prime}=W\setminus\{W_{0},W_{k-1}\}\). By Lemmas 4.8 and 4.9, we have

\[|\mathcal{J}_{W}|=(1\pm\beta)d_{a}d_{b}m^{k},\]
\[|\mathcal{J}_{W^{\prime}}|=(1\pm\beta)d_{a}m^{k-2},\]
\[|\mathcal{Q}_{\mathcal{J}}|=(1\pm\beta)d_{a}d_{b}^{k}m^{3k-2}. \tag{6}\]

Since \(S\subseteq\mathcal{J}_{W}\) with \(|S|\geq\mu|\mathcal{J}_{W}|\), together with (6), we have

\[|S|\geq(1-\beta)\mu d_{a}d_{b}m^{k}.\]

Let \(B_{W^{\prime}}\subseteq\mathcal{J}_{W^{\prime}}\) be the set of \((k-2)\)-edges which are not extensible to \((1\pm\beta)d_{b}m^{2}\) copies of a \(k\)-edge in \(\mathcal{J}_{W}\).
By Lemma 4.10, we have

\[|B_{W^{\prime}}|\leq\beta|\mathcal{J}_{W^{\prime}}|.\]

Second, we delete from \(S\) the edges which contain a \((k-2)\)-set from \(B_{W^{\prime}}\) to obtain \(S^{\prime}\); the number of edges deleted is at most

\[|B_{W^{\prime}}|m^{2}\leq\beta|\mathcal{J}_{W^{\prime}}|m^{2}\leq\beta(1+\beta)d_{a}m^{k}\leq|S|/3,\]

since \(\beta\ll\mu,d_{2},\ldots,d_{k}\). Thus, we have \(|S^{\prime}|\geq 2|S|/3\). Furthermore, if there is any partite \((k-2)\)-set \(T\) in \(\mathcal{J}\) which lies in fewer than \(\mu d_{b}m^{2}/(4k)\) edges of \(S^{\prime}\), then we delete all edges in \(S^{\prime}\) containing \(T\) to obtain \(S^{\prime\prime}\), and iterate this until no further deletions are possible. Note that the number of partite \((k-2)\)-sets supported in the clusters of \(W\setminus\{W_{0}\}\) is \((k-1)(1\pm\beta)d_{a}m^{k-2}\). Thus the number of edges deleted is at most

\[(k-1)(1+\beta)d_{a}m^{k-2}\frac{\mu d_{b}m^{2}}{4k}\leq(1+\beta)\frac{\mu d_{a}d_{b}m^{k}}{4}\leq\frac{|S|}{3}.\]

Thus, \(|S^{\prime\prime}|\geq|S|/3\). Each partite \((k-2)\)-set in \(W_{1},\ldots,W_{k-1}\) is either contained in zero edges of \(S^{\prime\prime}\) or in at least \(\mu d_{b}m^{2}/(4k)\) edges of \(S^{\prime\prime}\). Finally, we use the properties of \(S^{\prime\prime}\) to construct many labelled partition-respecting paths in \(\mathcal{Q}_{S}\).

**Step 1.** Select \(T=\{x_{1},\ldots,x_{k-2}\}\in\mathcal{J}_{W^{\prime}}\) which is contained in at least \(\mu d_{b}m^{2}/4\) edges of \(S^{\prime\prime}\).

**Step 2.** Choose \((c_{1},x_{k-1})\) such that \(\{c_{1},x_{1},x_{2},\ldots,x_{k-1}\}\in S^{\prime\prime}\) and \(c_{1},x_{k-1}\) are not in \(T\).

**Step 3.** For \(i\in[k,2k-2]\), choose \((c_{i-k+2},x_{i})\) such that \(\{c_{i-k+2},x_{i-k+2},\ldots,x_{i}\}\in S^{\prime\prime}\) and \(c_{i-k+2},x_{i}\) are not used before.

This constructs a sequentially path in \(\mathcal{Q}_{S}\) on \(3k-2\) vertices such that each edge in the \(k\)-th level is in \(S^{\prime\prime}\), and thus in \(S\). Next, we count the size of \(\mathcal{Q}_{S}\). In Step 1, let \(G\subseteq\mathcal{J}_{W^{\prime}}\) be the set of \((k-2)\)-sets which are contained in fewer than \(\mu d_{b}m^{2}/4\) edges of \(S^{\prime\prime}\); we have

\[\frac{|S|}{3}\leq|S^{\prime\prime}|=\sum_{T\in\mathcal{J}_{W^{\prime}}}\deg_{S^{\prime\prime}}(T)\leq|G|\frac{\mu}{4}d_{b}m^{2}+(|\mathcal{J}_{W^{\prime}}|-|G|)d_{b}m^{2}(1+\beta),\]

which gives \(|G|\leq(1-\beta)(1-\mu/12)d_{a}m^{k-2}\); thus, the number of choices for \(T\) is at least \(|\mathcal{J}_{W^{\prime}}|-|G|\geq(\mu/13)d_{a}m^{k-2}\). In Step 2, we have at least \(\mu d_{b}m^{2}/4\) choices for \((c_{1},x_{k-1})\). In Step 3, \(\{x_{i-k+2},\ldots,x_{i-1}\}\) is a \((k-2)\)-set contained in an edge of \(S^{\prime\prime}\); by the construction of \(S^{\prime\prime}\), there are at least \(\mu d_{b}m^{2}/(4k)\) choices for \((c_{i-k+2},x_{i})\), and furthermore, at least \(\mu d_{b}m^{2}/(8k)\) of them are different from the previous choices. Thus, the number of paths in \(\mathcal{Q}_{S}\) is at least

\[\left(\frac{\mu}{13}d_{a}m^{k-2}\right)\left(\frac{\mu}{4}d_{b}m^{2}\right)\left(\frac{\mu}{8k}d_{b}m^{2}\right)^{k-1}\geq(\frac{\mu}{8k})^{k+1}d_{a}d_{b}^{k}m^{3k-2}\geq\frac{1}{2}(\frac{\mu}{8k})^{k+1}|\mathcal{Q}_{\mathcal{J}}|,\]

since \(\beta\ll\mu,1/k\).
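The construction of \(S^{\prime\prime}\) above (and the analogous one in the next proof) is an iterative cleaning procedure. The following is a minimal sketch of that procedure under an illustrative data model (edges as frozensets and a plain integer threshold; all names are assumptions of the sketch, not the paper's notation):

```python
# Hedged sketch of the cleaning step: repeatedly drop edges one of whose
# (k-2)-subsets has positive but sub-threshold degree, until every
# (k-2)-subset lies in either zero or at least `threshold` remaining edges.
from itertools import combinations
from collections import Counter

def clean(edges, k, threshold):
    edges = set(edges)
    while True:
        deg = Counter(t for e in edges for t in combinations(sorted(e), k - 2))
        bad = {t for t, d in deg.items() if d < threshold}
        drop = {e for e in edges
                if any(t in bad for t in combinations(sorted(e), k - 2))}
        if not drop:
            return edges
        edges -= drop
```

**Lemma 7.4**.: _Let \(1/m\ll\varepsilon\ll d_{2},\ldots,d_{k},1/k,\mu\) and \(k\geq 3\)._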
Suppose that \(\mathcal{J}\) is a \((\cdot,\cdot,\varepsilon)\)-equitable complex with density vector \(\textbf{d}=(d_{2},\ldots,d_{k})\) and ground partition \(\mathcal{P}\), where each vertex class has size \(m\). Let \(W=\{W_{1},\ldots,W_{k-1},W_{k}\}\subseteq\mathcal{P}\). Let \(S\subseteq\mathcal{J}_{W}\) have size at least \(\mu|\mathcal{J}_{W}|\) and let \(Q\) be a \(k\)-uniform tight path \(v_{1},\ldots,v_{k-1},b,v_{k},\ldots,v_{2k-2}\) with vertex classes \(\{X_{1},\ldots,X_{k-1},X_{k}\}\) such that \(v_{i},v_{i+k-1}\in X_{i}\) for \(i\in[k-1]\) and \(b\in X_{k}\). Let \(\mathcal{Q}\) be the down-closed \(k\)-complex generated by \(Q\) and let \(\mathcal{Q}_{S}\subseteq\mathcal{Q}_{\mathcal{J}}\) be the copies of \(\mathcal{Q}\) whose edges in the \(k\)-th level are in \(S\). We have_ \[|\mathcal{Q}_{S}|\geq\frac{1}{2}\left(\frac{\mu}{8k}\right)^{k+1}|\mathcal{Q}_{\mathcal{J}}|.\] Proof.: The proof consists of three steps. Firstly, we use the dense version of the counting and extension lemma to count the number of various hypergraphs in \(\mathcal{J}\). Secondly, we remove the \(k\)-tuples without good properties. Finally, we use an iterative procedure to build tight paths from the good \(k\)-tuples, as desired. Firstly, let \(\beta\) be such that \(\varepsilon\ll\beta\ll d_{2},\ldots,d_{k},1/k,\mu\). Define \[d_{a}=\prod_{i=2}^{k-1}d_{i}^{\binom{k-1}{i}},\qquad d_{b}=\prod_{i=2}^{k}d_{i}^{\binom{k-1}{i-1}}.\] Let \(W^{\prime}=W\setminus\{W_{k}\}\). By Lemmas 4.8 and 4.9, we have \[|\mathcal{J}_{W}| =(1\pm\beta)d_{a}d_{b}m^{k},\] \[|\mathcal{J}_{W^{\prime}}| =(1\pm\beta)d_{a}m^{k-1},\] \[|\mathcal{Q}_{\mathcal{J}}| =(1\pm\beta)d_{a}d_{b}^{k}m^{2k-1}. \tag{7}\] Since \(S\subseteq\mathcal{J}_{W}\) with \(|S|\geq\mu|\mathcal{J}_{W}|\), together with (7), we have \[|S|\geq(1-\beta)\mu d_{a}d_{b}m^{k}.\] Let \(B_{W^{\prime}}\subseteq\mathcal{J}_{W^{\prime}}\) be the set of \((k-1)\)-edges which are not extensible to \((1\pm\beta)d_{b}m\) copies of a \(k\)-edge in \(\mathcal{J}_{W}\). By Lemma 4.10, we have \[|B_{W^{\prime}}|\leq\beta|\mathcal{J}_{W^{\prime}}|.\] Secondly, we delete from \(S\) the edges which contain a \((k-1)\)-set from \(B_{W^{\prime}}\) to obtain \(S^{\prime}\); the number of edges deleted is at most \[|B_{W^{\prime}}|m\leq\beta|\mathcal{J}_{W^{\prime}}|m\leq\beta(1+\beta)d_{a}m^{k}\leq|S|/3,\] since \(\beta\ll\mu,d_{2},\ldots,d_{k}\). Thus, we have \(|S^{\prime}|\geq 2|S|/3\). Furthermore, if there is any partite \((k-1)\)-set \(T\) in \(\mathcal{J}\) which lies in fewer than \(\mu d_{b}m/(4k)\) edges of \(S^{\prime}\), then we delete all edges in \(S^{\prime}\) containing \(T\) to obtain \(S^{\prime\prime}\), and we iterate this until no further deletions are possible. Note that the number of partite \((k-1)\)-sets supported in the clusters of \(W\) is \(k(1\pm\beta)d_{a}m^{k-1}\). Thus the number of edges deleted is at most \[k(1+\beta)d_{a}m^{k-1}\frac{\mu d_{b}m}{4k}\leq(1+\beta)\frac{\mu d_{a}d_{b}m^{k}}{4}\leq\frac{|S|}{3}.\] Thus, \(|S^{\prime\prime}|\geq|S|/3\). Each partite \((k-1)\)-set in \(W_{1},\ldots,W_{k}\) is either contained in zero edges of \(S^{\prime\prime}\) or in at least \(\mu d_{b}m/(4k)\) edges of \(S^{\prime\prime}\). Finally, we use the properties of \(S^{\prime\prime}\) to construct many labelled partition-respecting paths in \(\mathcal{Q}_{S}\). **Step 1.** Select \(T=\{x_{1},\ldots,x_{k-1}\}\in\mathcal{J}_{W^{\prime}}\) which is contained in at least \(\mu d_{b}m/4\) edges of \(S^{\prime\prime}\). 
**Step 2.** Choose \(b\) such that \(\{x_{1},x_{2},\ldots,x_{k-1},b\}\in S^{\prime\prime}\) and \(b\notin T\). **Step 3.** For \(i\in[k,2k-2]\), choose \(x_{i}\) such that \(\{x_{i-k+2},\ldots,x_{k-1},b,x_{k},\ldots,x_{i}\}\in S^{\prime\prime}\) and \(x_{i}\) has not been used before. This constructs a tight path in \(\mathcal{Q}_{S}\) on \(2k-1\) vertices such that each edge in the \(k\)-th level is in \(S^{\prime\prime}\), and thus in \(S\). Next, we bound the size of \(\mathcal{Q}_{S}\). In Step 1, let \(G\subseteq\mathcal{J}_{W^{\prime}}\) be the set of \((k-1)\)-sets which are contained in fewer than \(\mu d_{b}m/4\) edges of \(S^{\prime\prime}\). We have \[\frac{|S|}{3}\leq|S^{\prime\prime}|=\sum_{T\in\mathcal{J}_{W^{\prime}}}\deg_{S^{\prime\prime}}(T)\leq|G|\frac{\mu}{4}d_{b}m+(|\mathcal{J}_{W^{\prime}}|-|G|)d_{b}m(1+\beta),\] which gives \(|G|\leq(1-\beta)(1-\mu/12)d_{a}m^{k-1}\); thus, the number of choices for \(T\) is at least \(|\mathcal{J}_{W^{\prime}}|-|G|\geq(\mu/13)d_{a}m^{k-1}\). In Step 2, we have at least \(\mu d_{b}m/4\) choices for \(b\). In Step 3, \(\{x_{i-k+2},\ldots,x_{k-1},b,x_{k},\ldots,x_{i-1}\}\) is a \((k-1)\)-set contained in an edge of \(S^{\prime\prime}\), so by the construction of \(S^{\prime\prime}\), there are at least \(\mu d_{b}m/(4k)\) choices for \(x_{i}\), of which at least \(\mu d_{b}m/(8k)\) differ from the previous choices. Thus, the number of paths in \(\mathcal{Q}_{S}\) is at least \[\left(\frac{\mu}{13}d_{a}m^{k-1}\right)\left(\frac{\mu}{4}d_{b}m\right)\left(\frac{\mu}{8k}d_{b}m\right)^{k-1}\geq\left(\frac{\mu}{8k}\right)^{k+1}d_{a}d_{b}^{k}m^{2k-1}\geq\frac{1}{2}\left(\frac{\mu}{8k}\right)^{k+1}|\mathcal{Q}_{\mathcal{J}}|,\] since \(\beta\ll\mu,1/k\). ### Absorbing Gadget Before we build the absorbing path, we need to define the absorbing gadget, which is used to absorb a particular set \(T\) of \(k\) vertices together with a particular set \(O\) of \(k\) colors. We will then show that for every \((T,O)\), there are numerous absorbing gadgets that absorb \((T,O)\). **Definition 7.5** (Absorbing gadget).: _Let \(T=\{t_{1},\ldots,t_{k}\}\) be a \(k\)-set of points of \(G\) and \(O=\{o_{1},\ldots,o_{k}\}\) be a \(k\)-set of colors of \(G\). We say that \(F\subseteq G\) is an absorbing gadget for \((T,O)\) if \(F=F_{1}\cup F_{2}\) where \(F_{1}=A\cup B\cup E\cup\bigcup_{i=1}^{k}(P_{i}\cup Q_{i})\cup C\cup\bigcup_{i=1}^{k}C_{i}\) and \(F_{2}=A^{\prime}\cup B^{\prime}\cup E^{\prime}\cup\bigcup_{i=1}^{k}(P^{\prime}_{i}\cup Q^{\prime}_{i})\cup C^{\prime}\cup\bigcup_{i=1}^{k}C^{\prime}_{i}\) such that_ 1. \(A,B,E\)_,_ \(P_{1},Q_{1},\ldots,P_{k},Q_{k}\)_,_ \(A^{\prime},B^{\prime},E^{\prime}\)_,_ \(P^{\prime}_{1},Q^{\prime}_{1},\ldots,P^{\prime}_{k},Q^{\prime}_{k}\) _are pairwise disjoint and also disjoint from_ \(T\)_._ \(C,C_{1},\ldots,C_{k},C^{\prime},C^{\prime}_{1},\ldots,C^{\prime}_{k}\) _are pairwise disjoint and also disjoint from_ \(O\)_,_ 2. \(C_{i}=(c_{i,1},\ldots,c_{i,k})\) _and_ \(C^{\prime}_{i}=(c^{\prime}_{i,1},\ldots,c^{\prime}_{i,k})\) _for_ \(i\in[k]\)_,_ 3. \(A,B,E,A^{\prime},B^{\prime},E^{\prime}\) _are_ \(k\)_-tuples of points of_ \(G\)_,_ \(C\) _and_ \(C^{\prime}\) _are_ \((k+1)\)_-tuples of colors of_ \(G\)_,_ \((C,AE)\)_,_ \((C^{\prime},A^{\prime}E^{\prime})\) _and_ \((C^{\prime}(c_{1,1},\ldots,c_{k,1}),A^{\prime}B^{\prime}E^{\prime})\) _are sequentially paths,_ 4. 
_for_ \(B=(b_{1},\ldots,b_{k})\)_, each of_ \(P_{i},Q_{i}\) _has_ \(k-1\) _vertices for_ \(i\in[k]\)_, and both_ \((C_{i},P_{i}b_{i}Q_{i})\) _and_ \((\{o_{i}\}\cup C_{i}\setminus\{c_{i,1}\},P_{i}b_{i}Q_{i})\) _are sequentially paths of length_ \(2k-1\) _for_ \(i\in[k]\)_,_ 5. _for_ \(B^{\prime}=(b^{\prime}_{1},\ldots,b^{\prime}_{k})\)_, each of_ \(P^{\prime}_{i},Q^{\prime}_{i}\) _has_ \(k-1\) _vertices for_ \(i\in[k]\)_, and both_ \((C^{\prime}_{i},P^{\prime}_{i}b^{\prime}_{i}Q^{\prime}_{i})\) _and_ \((C^{\prime}_{i},P^{\prime}_{i}t_{i}Q^{\prime}_{i})\) _are sequentially paths of length_ \(2k-1\) _for_ \(i\in[k]\)_._ Note that an absorbing gadget \(F\) spans \(4k^{2}+2k\) points together with \(2k^{2}+2k+2\) colors. **Definition 7.6** (\(\mathfrak{S}\)-gadget).: _Suppose \(F=F_{1}\cup F_{2}\) is an absorbing gadget where \(F_{1}=A\cup B\cup E\cup\bigcup_{i=1}^{k}(P_{i}\cup Q_{i})\cup C\cup\bigcup_{i=1}^{k}C_{i}\) and \(F_{2}=A^{\prime}\cup B^{\prime}\cup E^{\prime}\cup\bigcup_{i=1}^{k}(P^{\prime}_{i}\cup Q^{\prime}_{i})\cup C^{\prime}\cup\bigcup_{i=1}^{k}C^{\prime}_{i}\) with \(A=(a_{1},\ldots,a_{k})\), \(B=(b_{1},\ldots,b_{k})\), \(E=(e_{1},\ldots,e_{k})\), \(C=(c_{1},\ldots,c_{k+1})\), \(C_{i}=(c_{i,1},\ldots,c_{i,k})\), \(P_{i}=(p_{i,1},\ldots,p_{i,k-1})\) and \(Q_{i}=(q_{i,1},\ldots,q_{i,k-1})\) for \(i\in[k]\), \(A^{\prime}=(a^{\prime}_{1},\ldots,a^{\prime}_{k})\), \(B^{\prime}=(b^{\prime}_{1},\ldots,b^{\prime}_{k})\), \(E^{\prime}=(e^{\prime}_{1},\ldots,e^{\prime}_{k})\), \(C^{\prime}=(c^{\prime}_{1},\ldots,c^{\prime}_{k+1})\), \(C^{\prime}_{i}=(c^{\prime}_{i,1},\ldots,c^{\prime}_{i,k})\), \(P^{\prime}_{i}=(p^{\prime}_{i,1},\ldots,p^{\prime}_{i,k-1})\) and \(Q^{\prime}_{i}=(q^{\prime}_{i,1},\ldots,q^{\prime}_{i,k-1})\) for \(i\in[k]\). Suppose that \(\varepsilon,\varepsilon_{k+1},d_{2},\ldots,d_{k+1},c,\nu>0\). Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and suppose that \(\mathfrak{S}=(G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},\overrightarrow{H})\) is an oriented \((k+1,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup. We say that \(F\) is an \(\mathfrak{S}\)-gadget if_ 1. _there exists an oriented edge_ \(Y^{\prime}=(Y_{0},Z_{1},\ldots,Z_{k})\in\overrightarrow{H}\) _and a color cluster_ \(Z_{0}\)_, such that_ \(C\cup C^{\prime}\cup\bigcup_{i\in[k]}C_{i}\subseteq Y_{0}\)_,_ \(\bigcup_{i\in[k]}C^{\prime}_{i}\subseteq Z_{0}\)_, and_ \(a_{i},b_{i},e_{i}\in Z_{i}\) _for_ \(i\in[k]\)_,_ 2. _there exists an oriented edge_ \(Y=(Y_{0},Y_{1},\ldots,Y_{k})\in\overrightarrow{H}\)_, such that_ \(a^{\prime}_{i},b^{\prime}_{i},e^{\prime}_{i}\in Y_{i}\) _for_ \(i\in[k]\)_,_ 3. _there exists an ordered_ \((k-1)\)_-tuple of clusters_ \(W_{i}=(W_{i,1},\ldots,W_{i,k-1})\) _such that_ \(W_{i}\cup\{Y_{0},Z_{i}\}\) _is an edge in_ \(H\) _and_ \((Y_{0},W_{i,1},\ldots,W_{i,k-1},Z_{i})\) _is consistent with_ \(\overrightarrow{H}\)_, with_ \(p_{i,j},q_{i,j}\in W_{i,j}\) _for_ \(i\in[k],j\in[k-1]\)_,_ 4. _there exists an ordered_ \((k-1)\)_-tuple of clusters_ \(W^{\prime}_{i}=(W^{\prime}_{i,1},\ldots,W^{\prime}_{i,k-1})\) _such that_ \(W^{\prime}_{i}\cup\{Z_{0},Y_{i}\}\) _is an edge in_ \(H\) _and_ \((Z_{0},W^{\prime}_{i,1},\ldots,W^{\prime}_{i,k-1},Y_{i})\) _is consistent with_ \(\overrightarrow{H}\)_, with_ \(p^{\prime}_{i,j},q^{\prime}_{i,j}\in W^{\prime}_{i,j}\) _for_ \(i\in[k],j\in[k-1]\)_,_ 5. \(F\subseteq G_{\mathcal{J}}\)_._ _We will further say that \(F\) is \((c,\nu)\)-extensible if the following also holds:_ 1. 
_The path_ \((C,AE)\) _is_ \((c,\nu)\)_-extensible both left- and rightwards to the ordered tuple_ \(Y^{\prime}=(Y_{0},Z_{1},\ldots,Z_{k})\)_, and the path_ \((C_{i},P_{i}b_{i}Q_{i})\) _is_ \((c,\nu)\)_-extensible leftwards to_ \((Y_{0},W_{i,1},\ldots,W_{i,k-1},Z_{i})\) _and rightwards to_ \((Y_{0},Z_{i},W_{i,1},\ldots,W_{i,k-1})\) _for_ \(i\in[k]\)_._ 2. _The path_ \((C^{\prime},A^{\prime}E^{\prime})\) _is_ \((c,\nu)\)_-extensible both left- and rightwards to the ordered tuple_ \(Y=(Y_{0},Y_{1},\ldots,Y_{k})\)_, and the path_ \((C^{\prime}_{i},P^{\prime}_{i}b^{\prime}_{i}Q^{\prime}_{i})\) _is_ \((c,\nu)\)_-extensible leftwards to_ \((Z_{0},W^{\prime}_{i,1},\ldots,W^{\prime}_{i,k-1},Y_{i})\) _and rightwards to_ \((Z_{0},Y_{i},W^{\prime}_{i,1},\ldots,W^{\prime}_{i,k-1})\) _for_ \(i\in[k]\)_._ **Definition 7.7** (Reduced gadget).: _A reduced gadget is a \((1,k)\)-graph \(L\) consisting of \(Y\cup W_{1}\cup\cdots\cup W_{k}\cup Z_{0}\cup Z_{1}\cup\ldots\cup Z_{k}\cup W^{\prime}_{1}\cup\cdots\cup W^{\prime}_{k}\) where \(Y=\{Y_{0},Y_{1},\ldots,Y_{k}\}\), \(W_{i}=\{W_{i,1},\ldots,W_{i,k-1}\}\) for \(i\in[k]\), \(W^{\prime}_{i}=\{W^{\prime}_{i,1},\ldots,W^{\prime}_{i,k-1}\}\) for \(i\in[k]\), and \(2(k+1)\) edges given by \(Y\), \(Y^{\prime}=\{Y_{0},Z_{1},\ldots,Z_{k}\}\), \(W_{i}\cup\{Y_{0},Z_{i}\}\) for \(i\in[k]\) and \(W^{\prime}_{i}\cup\{Z_{0},Y_{i}\}\) for \(i\in[k]\). We refer to \(Y\) and \(Y^{\prime}\) as the core edges of \(L\) and to \(W_{i},W^{\prime}_{i}\), \(i\in[k]\), as the peripheral sets of \(L\)._ Given an oriented \((1,k)\)-graph \(\overrightarrow{H}\), a reduced gadget in \(\overrightarrow{H}\) is a copy of \(L\) such that \(Y\) coincides with the orientation of that edge in \(\overrightarrow{H}\) and such that \((Y_{0},W_{i,1},\ldots,W_{i,k-1},Z_{i})\) and \((Z_{0},W^{\prime}_{i,1},\ldots,W^{\prime}_{i,k-1},Y_{i})\) are consistent with the corresponding edges in \(\overrightarrow{H}\). Let \(\mathfrak{S}=(G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},\overrightarrow{H})\) be an oriented regular setup. Let \(c,\nu>0\), let \(T=\{t_{1},\ldots,t_{k}\}\) be a \(k\)-set of \(V\) and \(O=\{o_{1},\ldots,o_{k}\}\) be a \(k\)-set of \([n]\), and let \(L\) be a reduced gadget in \(\overrightarrow{H}\). We define the following sets: 1. Denote the set of all reduced gadgets in \(\overrightarrow{H}\) by \(\mathfrak{L}_{\overrightarrow{H}}\), 2. Denote the set of \(\mathfrak{S}\)-gadgets which use precisely the clusters of \(L\) as in Definition 7.7 by \(\mathfrak{F}_{L}\), 3. Denote the set of \(\mathfrak{S}\)-gadgets in \(\mathfrak{F}_{L}\) which are \((c,\nu,V(G))\)-extensible by \(\mathfrak{F}_{L}^{\mathrm{ext}}\), 4. Denote the set of all \(\mathfrak{S}\)-gadgets by \(\mathfrak{F}\), 5. Denote the set of all \((c,\nu,V(G))\)-extensible \(\mathfrak{S}\)-gadgets by \(\mathfrak{F}^{\mathrm{ext}}\subseteq\mathfrak{F}\), 6. For any \(k\)-subset \(T\) of \(V\) and any \(k\)-subset \(O\) of \([n]\), let \(\mathfrak{F}_{(T,O)}\subseteq\mathfrak{F}\) be the set of absorbing \(\mathfrak{S}\)-gadgets for \((T,O)\), 7. Denote the set of \(\mathfrak{S}\)-gadgets absorbing \((T,O)\) which are \((c,\nu)\)-extensible by \(\mathfrak{F}_{(T,O)}^{\mathrm{ext}}=\mathfrak{F}_{(T,O)}\cap\mathfrak{F}^{\mathrm{ext}}\). 
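Before counting gadgets, it is worth recording where the exponent \(m^{6k^{2}+4k+2}\) in Lemma 7.8 below comes from. The following bookkeeping is our own, based on Definitions 7.5 and 7.6: the points contribute \[|A|+|B|+|E|+|A^{\prime}|+|B^{\prime}|+|E^{\prime}|+\sum_{i=1}^{k}\left(|P_{i}|+|Q_{i}|+|P^{\prime}_{i}|+|Q^{\prime}_{i}|\right)=6k+4k(k-1)=4k^{2}+2k,\] the colors contribute \[|C|+|C^{\prime}|+\sum_{i=1}^{k}\left(|C_{i}|+|C^{\prime}_{i}|\right)=2(k+1)+2k^{2}=2k^{2}+2k+2,\] and the two counts sum to \(6k^{2}+4k+2\), in agreement with the totals recorded after Definition 7.5 and with \(|V(F)|=6k^{2}+4k+2\) in the proof of Lemma 7.8. 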
**Lemma 7.8**.: _Let \(k,r,m,t\in\mathbb{N}\) and \(d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},c,\nu,\beta\) be such that_ \[1/m \ll 1/r,\varepsilon\ll 1/t,c,\varepsilon_{k+1},d_{2},\ldots,d_{k},\] \[c \ll d_{2},\ldots,d_{k},\] \[1/t \ll\varepsilon_{k+1}\ll\beta,d_{k+1}\leq 1/k,\] \[\varepsilon_{k+1} \ll\nu.\] _Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \(\mathfrak{S}=(G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},\overrightarrow{H})\) be an oriented \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup and \(L\in\mathfrak{L}_{\overrightarrow{H}}\) be a reduced gadget in \(\overrightarrow{H}\). Let \(\mathcal{F}\) be the \((k+1)\)-complex corresponding to the down-closure of the \((1,k)\)-graph \(F\) as in Definition 7.6 (see Figure 3: reduced gadget). Then_ \[|\mathfrak{F}_{L}|=(1\pm\beta)\left(\prod_{i=2}^{k+1}d_{i}^{e_{i}(\mathcal{F})}\right)m^{6k^{2}+4k+2},\] \[|\mathfrak{F}_{L}\setminus\mathfrak{F}_{L}^{\mathrm{ext}}|\leq\beta|\mathfrak{F}_{L}|. \tag{8}\] Proof.: Let \(Y=(Y_{0},Y_{1},\ldots,Y_{k}),Y^{\prime}=(Y_{0},Z_{1},\ldots,Z_{k})\in\overrightarrow{H}\) be the ordered core edges of \(L\) and let \(W_{i}=\{W_{i,1},\ldots,W_{i,k-1}\}\), \(W_{i}^{\prime}=\{W_{i,1}^{\prime},\ldots,W_{i,k-1}^{\prime}\}\) for \(i\in[k]\), be the peripheral sets, ordered such that \((Y_{0},W_{i,1},\ldots,W_{i,k-1},Z_{i})\) and \((Z_{0},W_{i,1}^{\prime},\ldots,W_{i,k-1}^{\prime},Y_{i})\) are consistent with \(\overrightarrow{H}\). Note that \(|V(F)|=6k^{2}+4k+2\). The bounds on \(|\mathfrak{F}_{L}|\) are given directly by Lemma 4.6. Let \(Y^{*}=(Y_{1},\ldots,Y_{k-1})\) and denote the ordered tuples in the \((k-1)\)-th level of \(\mathcal{J}\) in the clusters \(\{Y_{1},\ldots,Y_{k-1}\}\) by \(\mathcal{J}_{Y^{*}}\). Let \(d_{Y^{*}}=\prod_{i=2}^{k-1}d_{i}^{\binom{k-1}{i}}\). By Lemma 4.9 we have \[|\mathcal{J}_{Y^{*}}|=(1\pm\beta)d_{Y^{*}}m^{k-1}.\] Let \(\beta_{1}\) be such that \(\varepsilon_{k+1}\ll\beta_{1}\ll\beta,d_{k},d_{k+1},1/k\). Let \(B_{1}\subseteq\mathcal{J}_{Y^{*}}\) be the set of \((k-1)\)-tuples which are not \((c,\nu)\)-extensible leftwards to \((Y_{0},Y_{1},\ldots,Y_{k})\). By Proposition 6.6 with \(\beta_{1}\) playing the role of \(\beta\), we deduce that \[|B_{1}|\leq\beta_{1}|\mathcal{J}_{Y^{*}}|.\] Let \(\beta_{2}\) be such that \(\varepsilon\ll\beta_{2}\ll\varepsilon_{k+1},d_{2},\ldots,d_{k-1}\). Let \(\phi:V(F)\to V(L)\) be the homomorphism and let \(Z\subseteq V(F)\) correspond to the first \(k-1\) vertices \(\{a_{1},\ldots,a_{k-1}\}\) of the path \(AE\). Let \(\mathcal{F}^{-}\) be the \((k-1)\)-complex generated by removing the \((k+1)\)-st and \(k\)-th layers from the down-closure \(\mathcal{F}\) of \(F\). Let \(\mathcal{Z}=\mathcal{F}^{-}[Z]\) be the induced subcomplex of \(\mathcal{F}^{-}\) on \(Z\). Note that \(\phi(a_{i})=Y_{i}\) for \(i\in[k-1]\). Thus the labelled partition-respecting copies of \(\mathcal{Z}\) in \(\mathcal{J}\) correspond exactly to \(\mathcal{J}_{Y^{*}}\). Define \[d_{\mathcal{F}^{-}\setminus\mathcal{Z}}=\prod_{i=2}^{k-1}d_{i}^{e_{i}(\mathcal{F}^{-})-e_{i}(\mathcal{Z})}.\] Let \(B_{2}\subseteq\mathcal{J}_{Y^{*}}\) be the set of \((k-1)\)-tuples which are not extensible to \((1\pm\beta_{2})d_{\mathcal{F}^{-}\setminus\mathcal{Z}}m^{6k^{2}+3k+3}\) labelled partition-respecting copies of \(\mathcal{F}^{-}\) in \(\mathcal{J}\). 
By Lemma 4.10 with \(\beta_{2}\) playing the role of \(\beta\), we have \[|B_{2}|\leq\beta_{2}|\mathcal{J}_{Y^{*}}|.\] By (8), we have \[|\mathfrak{F}_{L}|=(1\pm\beta)d_{k+1}^{e_{k+1}(\mathcal{F})}d_{k}^{e_{k}(\mathcal{F})}d_{\mathcal{F}^{-}\setminus\mathcal{Z}}d_{Y^{*}}m^{6k^{2}+4k+2}.\] Let \(\mathcal{G}=\mathcal{J}\cup G_{\mathcal{J}}\). Say that a labelled partition-respecting copy of \(\mathcal{F}\) in \(\mathcal{G}\) is _nice_ if the vertices of \(\{a_{1},\ldots,a_{k-1}\}\) are not in \(B_{1}\cup B_{2}\). For every \(Z\in\mathcal{J}_{Y^{*}}\), let \(N^{*}(Z)\) be the number of labelled partition-respecting copies of \(\mathcal{F}\) in \(\mathcal{G}\) which extend \(Z\). Note that \(0\leq N^{*}(Z)\leq m^{6k^{2}+3k+3}\) and we have \[\sum_{Z\in B_{1}\cup B_{2}}N^{*}(Z) =\sum_{Z\in B_{1}\setminus B_{2}}N^{*}(Z)+\sum_{Z\in B_{2}}N^{*}(Z)\] \[\leq[|B_{1}|(1+\beta_{2})d_{\mathcal{F}^{-}\setminus\mathcal{Z}}+|B_{2}|]m^{6k^{2}+3k+3}\] \[\leq[\beta_{1}(1+\beta_{2})d_{\mathcal{F}^{-}\setminus\mathcal{Z}}+\beta_{2}]|\mathcal{J}_{Y^{*}}|m^{6k^{2}+3k+3}\] \[\leq 3\beta_{1}d_{\mathcal{F}^{-}\setminus\mathcal{Z}}|\mathcal{J}_{Y^{*}}|m^{6k^{2}+3k+3}\] \[\leq 3\beta_{1}(1+\beta)d_{\mathcal{F}^{-}\setminus\mathcal{Z}}d_{Y^{*}}m^{6k^{2}+4k+2}\] \[\leq\frac{3\beta_{1}(1+\beta)}{(1-\beta)d_{k+1}^{e_{k+1}(\mathcal{F})}d_{k}^{e_{k}(\mathcal{F})}}|\mathfrak{F}_{L}|\] \[\leq\frac{\beta}{4k+4}|\mathfrak{F}_{L}|,\] since \(\beta_{1}\ll\beta,d_{k},d_{k+1},1/k\) and \(\beta_{2}\ll d_{2},\ldots,d_{k-1},\varepsilon_{k+1}\). The same analysis shows that, if we define nice tuples with respect to any other \((k-1)\)-set of vertices of \(F\), the number of copies of \(F\) which are not nice with respect to that \((k-1)\)-set is at most \(\beta|\mathfrak{F}_{L}|/(4k+4)\). Note that \(F\in\mathfrak{F}_{L}\) is extensible if and only if the paths \((C,AE)\), \((C^{\prime},A^{\prime}E^{\prime})\), \((C_{i},P_{i}b_{i}Q_{i})\) and \((C^{\prime}_{i},P^{\prime}_{i}b^{\prime}_{i}Q^{\prime}_{i})\) for \(i\in[k]\) contained in \(F\) are extensible with respect to certain edges of the reduced graph. This means that \(4(k+1)\) \((k-1)\)-tuples must be extensible with respect to certain edges of the reduced graph. Thus, \(F\in\mathfrak{F}_{L}\setminus\mathfrak{F}_{L}^{\text{ext}}\) implies that \(F\) is not nice with respect to one of the \(4k+4\) \((k-1)\)-sets. Thus, \[|\mathfrak{F}_{L}\setminus\mathfrak{F}_{L}^{\text{ext}}|\leq(4k+4)\frac{\beta}{4k+4}|\mathfrak{F}_{L}|=\beta|\mathfrak{F}_{L}|.\] **Lemma 7.9**.: _Let \(k,r,m,t\in\mathbb{N}\) and \(d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},c,\nu,\beta,\mu,\alpha\) be such that_ \[1/m \ll 1/r,\varepsilon\ll 1/t,c,\varepsilon_{k+1},d_{2},\ldots,d_{k},\] \[c \ll d_{2},\ldots,d_{k},\] \[1/t \ll\varepsilon_{k+1}\ll\beta,d_{k+1}\leq 1/k,\] \[\varepsilon_{k+1} \ll\nu,\mu,\] \[\alpha \ll\mu.\] _Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \(\mathfrak{S}=(G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},\overrightarrow{H})\) be an oriented \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup. Suppose that for each color cluster \(C\), there are at least \((1-\alpha)t\) point clusters \(Z\) such that \(\{C,Z\}\) has relative \((1,1)\)-degree at least \(\mu\) in \(H\). Then_ \[\frac{\mu^{2k+2}}{8}{t\choose k}^{2}{t\choose k-1}^{2k}t(t-1)\leq|\mathfrak{L}_{\overrightarrow{H}}|\leq{t\choose k}^{2}{t\choose k-1}^{2k}t(t-1).\] _Let \(\mathcal{F}\) be the \((k+1)\)-complex corresponding to the down-closure of the \((1,k)\)-graph \(F\). 
For each reduced gadget \(L\in\mathfrak{L}_{\overrightarrow{H}}\) in \(\overrightarrow{H}\), we have_ \[|\mathfrak{F}_{L}^{\mathrm{ext}}|=(1\pm\beta)\left(\prod_{i=2}^{k+1}d_{i}^{e_{i}(\mathcal{F})}\right)m^{6k^{2}+4k+2}\] _and_ \[|\mathfrak{F}^{\mathrm{ext}}|=(1\pm\beta)\left(\prod_{i=2}^{k+1}d_{i}^{e_{i}(\mathcal{F})}\right)m^{6k^{2}+4k+2}|\mathfrak{L}_{\overrightarrow{H}}|.\] Proof.: The lower bound on \(|\mathfrak{L}_{\overrightarrow{H}}|\) can be obtained as follows. Let \(Y=(Y_{0},Y_{1},\ldots,Y_{k})\), \(Y^{\prime}=(Y_{0},Z_{1},\ldots,Z_{k})\in\overrightarrow{H}\) be the ordered core edges of \(L\) and let \(W_{i}=\{W_{i,1},\ldots,W_{i,k-1}\}\), \(W_{i}^{\prime}=\{W_{i,1}^{\prime},\ldots,W_{i,k-1}^{\prime}\}\) for \(i\in[k]\), be the peripheral sets, ordered such that \((Z_{0},W_{i,1}^{\prime},\ldots,W_{i,k-1}^{\prime},Y_{i})\) and \((Y_{0},W_{i,1},\ldots,W_{i,k-1},Z_{i})\) are consistent with \(\overrightarrow{H}\). We first choose \(Y_{0},Z_{0}\) arbitrarily; there are at least \(t(t-1)\) choices. For \((Y_{1},\ldots,Y_{k})\), there are at least \(\mu\binom{t}{k}-\alpha t\binom{t}{k-1}\geq\mu\binom{t}{k}/2\) choices. Similarly, for \((Z_{1},\ldots,Z_{k})\), there are at least \(\mu\binom{t}{k}/2\) choices. Furthermore, each \(W_{i}\) and \(W_{i}^{\prime}\), \(i\in[k]\), can be chosen in at least \(\mu\binom{t}{k-1}\) ways, but we need to delete the possible choices giving intersecting reduced gadgets, whose number is at most \(t(t-1)(2k^{2})^{2}t^{2k^{2}-2}\leq(2k^{2})^{2}t^{2k^{2}}\). We have \[|\mathfrak{L}_{\overrightarrow{H}}|\geq\frac{\mu^{2k+2}}{4}\binom{t}{k}^{2}\binom{t}{k-1}^{2k}t(t-1)-(2k^{2})^{2}t^{2k^{2}}\geq\frac{\mu^{2k+2}}{8}\binom{t}{k}^{2}\binom{t}{k-1}^{2k}t(t-1),\] since \(1/t\ll\mu,1/k\). The upper bound is obvious. We choose \(\beta^{\prime}\) such that \(\varepsilon_{k+1}\ll\beta^{\prime}\ll\beta,d_{k},d_{k+1},1/k\). By Lemma 7.8 (with \(\beta^{\prime}\) in place of \(\beta\)), we obtain that \[(1-\beta)\left(\prod_{i=2}^{k+1}d_{i}^{e_{i}(\mathcal{F})}\right)m^{6k^{2}+4k+2}\leq(1-\beta^{\prime})^{2}\left(\prod_{i=2}^{k+1}d_{i}^{e_{i}(\mathcal{F})}\right)m^{6k^{2}+4k+2}\leq(1-\beta^{\prime})|\mathfrak{F}_{L}|\leq|\mathfrak{F}_{L}^{\text{ext}}|,\] \[|\mathfrak{F}_{L}^{\text{ext}}|\leq|\mathfrak{F}_{L}|\leq(1+\beta^{\prime})\left(\prod_{i=2}^{k+1}d_{i}^{e_{i}(\mathcal{F})}\right)m^{6k^{2}+4k+2}\leq(1+\beta)\left(\prod_{i=2}^{k+1}d_{i}^{e_{i}(\mathcal{F})}\right)m^{6k^{2}+4k+2}.\] Note that \[\mathfrak{F}^{\text{ext}}=\bigcup_{L\in\mathfrak{L}_{\overrightarrow{H}}}\mathfrak{F}_{L}^{\text{ext}},\] where the union is disjoint; the bounds on \(|\mathfrak{F}^{\text{ext}}|\) follow. **Lemma 7.10**.: _Let \(k,r,m,t\in\mathbb{N}\) and \(d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},c,\nu,\theta,\mu,\alpha\) be such that_ \[1/m \ll 1/r,\varepsilon\ll 1/t,c,\varepsilon_{k+1},d_{2},\ldots,d_{k},\] \[c \ll d_{2},\ldots,d_{k},\] \[1/t \ll\varepsilon_{k+1}\ll d_{k+1}\leq 1/k,\] \[\varepsilon_{k+1} \ll\nu\ll\theta\ll\mu\ll 1/k,\] \[\alpha \ll\mu.\] _Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \(\mathfrak{S}=(G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},\overrightarrow{H})\) be an oriented \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup. Suppose that for each color cluster \(C\), there are at least \((1-\alpha)t\) point clusters \(Z\) such that \(\{C,Z\}\) has relative \((1,1)\)-degree at least \(\mu\) in \(H\). 
For any point \(v\) of \(G\) and any color cluster \(C\), there are at least \((1-\alpha)t\) point clusters \(Z\in\mathcal{P}\) such that \(|N_{\mathcal{J}}((v,C),\mu)\cap N_{H}(Z,C)|\geq\mu\binom{t}{k-1}\). And for every \(c\in[n]\) and every color cluster \(C\), there are at least \((1-\alpha)t\) point clusters \(Z\in\mathcal{P}\) such that \(|N_{\mathcal{J}}((c,Z),\mu)\cap N_{H}(C,Z)|\geq\mu\binom{t}{k-1}\). Let \(T\subseteq V\) and \(O\subseteq[n]\) be \(k\)-sets. Then we have_ \[|\mathfrak{F}_{(T,O)}^{\mathrm{ext}}|\geq\theta|\mathfrak{F}^{\mathrm{ext}}|.\] Given a \(k\)-subset \(T=\{t_{1},\ldots,t_{k}\}\) of \(V\), a \(k\)-subset \(O=\{o_{1},\ldots,o_{k}\}\) of \([n]\), the family \(\mathfrak{L}_{\overrightarrow{H}}\) and \(\mu>0\), we define the set \(\mathfrak{L}_{\overrightarrow{H},(T,O),\mu}\) of _reduced \(((T,O),\mu)\)-absorbers_ as the set of \((T,O)\)-absorbers \(Y\cup W_{1}\cup\cdots\cup W_{k}\cup Z_{0}\cup Z_{1}\cup\ldots\cup Z_{k}\cup W_{1}^{\prime}\cup\cdots\cup W_{k}^{\prime}\), where \(W_{i}\in N_{\mathcal{J}}((o_{i},Z_{i}),\mu)\) and \(W_{i}^{\prime}\in N_{\mathcal{J}}((t_{i},Z_{0}),\mu)\) for \(i\in[k]\). **Claim 7.11**.: _Let \(k,r,m,t\in\mathbb{N}\) and \(d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},c,\nu,\theta,\mu,\alpha\) be such that_ \[1/m \ll 1/r,\varepsilon\ll 1/t,c,\varepsilon_{k+1},d_{2},\ldots,d_{k},\] \[c \ll d_{2},\ldots,d_{k+1},\] \[1/t \ll\varepsilon_{k+1}\ll d_{k+1}\leq 1/k,\] \[\varepsilon_{k+1} \ll\nu\ll\theta\ll\mu\ll 1/k,\] \[\alpha \ll\mu.\] _Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \(\mathfrak{S}=(G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},\overrightarrow{H})\) be an oriented \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup. Suppose that for each color cluster \(C\), there are at least \((1-\alpha)t\) point clusters \(Z\) such that \(\{C,Z\}\) has relative \((1,1)\)-degree at least \(\mu\) in \(H\). For any point \(v\) of \(G\) and any color cluster \(C\), there are at least \((1-\alpha)t\) point clusters \(Z\in\mathcal{P}\) such that \(|N_{\mathcal{J}}((v,C),\mu)\cap N_{H}(Z,C)|\geq\mu\binom{t}{k-1}\). And for every \(c\in[n]\) and every color cluster \(C\), there are at least \((1-\alpha)t\) point clusters \(Z\in\mathcal{P}\) such that \(|N_{\mathcal{J}}((c,Z),\mu)\cap N_{H}(C,Z)|\geq\mu\binom{t}{k-1}\). Let \(T\subseteq V\) and \(O\subseteq[n]\) be \(k\)-sets. Then we have_ \[|\mathfrak{L}_{\overrightarrow{H},(T,O),\mu}|\geq\theta|\mathfrak{L}_{\overrightarrow{H}}|.\] Proof.: Let \(T=\{t_{1},\ldots,t_{k}\}\) and \(O=\{o_{1},\ldots,o_{k}\}\). Since \(H\) has minimum relative \((1,1)\)-degree at least \(\mu\), there are at least \(\mu t\binom{t}{k}-\alpha t^{2}\binom{t}{k-1}\geq\mu t\binom{t}{k}/2\) choices for \(Y\). Besides, there are at least \(t-1\) choices for \(Z_{0}\). For \((Z_{1},\ldots,Z_{k})\), there are at least \(\mu\binom{t}{k}/2-k^{2}\binom{t}{k-1}\geq\mu\binom{t}{k}/3\) choices. Each \(W_{i}\) is chosen from \(N_{\mathcal{J}}((o_{i},Z_{i}),\mu)\cap N_{H}(Y_{0},Z_{i})\) for \(i\in[k]\); thus, \(W_{i}\) can be chosen in at least \(\mu\binom{t}{k-1}-(k-1)((i-1)(k-1)+2k)\binom{t}{k-2}\geq\mu\binom{t}{k-1}/2\) ways for \(i\in[k]\), since there are at most \((k-1)((i-1)(k-1)+2k)\binom{t}{k-2}\) choices for \(W_{i}\) which intersect \(Y\setminus\{Y_{0}\},Z_{1},\ldots,Z_{k},W_{1},\ldots,W_{i-1}\). And each \(W_{i}^{\prime}\) is chosen from \(N_{\mathcal{J}}((t_{i},Z_{0}),\mu)\cap N_{H}(Y_{i},Z_{0})\) for \(i\in[k]\). 
Similarly, there are at least \((\mu/2)\binom{t}{k-1}\) possible choices for each \(W_{i}^{\prime}\) for \(i\in[k]\). Thus, the number of reduced \(((T,O),\mu)\)-absorbers is at least \[\frac{\mu t}{2}\binom{t}{k}(t-1)\frac{\mu}{3}\binom{t}{k}\left(\frac{\mu}{2}\binom{t}{k-1}\right)^{2k}\geq\theta\binom{t}{k}^{2}\binom{t}{k-1}^{2k}t(t-1)\geq\theta|\mathfrak{L}_{\overrightarrow{H}}|,\] since \(\theta\ll\mu\). **Claim 7.12**.: _Let \(k,r,m,t\in\mathbb{N}\) and \(d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},c,\nu,\theta,\mu\) be such that_ \[1/m \ll 1/r,\varepsilon\ll 1/t,c,\varepsilon_{k+1},d_{2},\ldots,d_{k},\] \[c \ll d_{2},\ldots,d_{k},\] \[1/t \ll\varepsilon_{k+1}\ll d_{k+1}\leq 1/k,\] \[\varepsilon_{k+1} \ll\nu\ll\theta\ll\mu\ll 1/k.\] _Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \(\mathfrak{S}=(G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},\overrightarrow{H})\) be an oriented \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup. Let \(T\subseteq V\) and \(O\subseteq[n]\) be \(k\)-sets and let \(L\in\mathfrak{L}_{\overrightarrow{H}}\) be a reduced \(((T,O),\mu)\)-gadget in \(\overrightarrow{H}\). We have_ \[|\mathfrak{F}_{L}\cap\mathfrak{F}_{(T,O)}|\geq\theta|\mathfrak{F}_{L}|.\] Proof.: Let \(T=\{t_{1},\ldots,t_{k}\}\), \(O=\{o_{1},\ldots,o_{k}\}\) and \(L=Y\cup W_{1}\cup\cdots\cup W_{k}\cup Z_{0}\cup Z_{1}\cup\ldots\cup Z_{k}\cup W_{1}^{\prime}\cup\cdots\cup W_{k}^{\prime}\), where \(W_{i}=\{W_{i,1},\ldots,W_{i,k-1}\}\) and \(W_{i}^{\prime}=\{W_{i,1}^{\prime},\ldots,W_{i,k-1}^{\prime}\}\). Choose \(P_{i},Q_{i}\) in \(W_{i}\) and \(P_{i}^{\prime},Q_{i}^{\prime}\) in \(W_{i}^{\prime}\), and let \(\mathcal{Q}_{Z_{i},W_{i}}\) be the set of \(k\)-uniform tight paths \((b_{i},v_{1},\ldots,v_{2k-2})\) such that \(b_{i}\in Z_{i}\), \(v_{\ell},v_{\ell+k-1}\in W_{i,\ell}\) for \(\ell\in[k-1]\), and whose down-closure is in \(\mathcal{J}\). Let \(\mathcal{Q}_{o_{i},(Z_{i},W_{i})}\subseteq\mathcal{Q}_{Z_{i},W_{i}}\) be the set of those paths whose edges in the \(k\)-th level are in \(N_{G}(o_{i})\). Recall that \(F\) denotes an absorbing gadget for \((T,O)\), and let \(\mathcal{F}\) be the down-closure of \(F\). Since \(L\) is a reduced \(((T,O),\mu)\)-gadget, we have \(W_{i}\in N_{H}(Y_{0},Z_{i})\cap N_{\mathcal{J}}((o_{i},Z_{i}),\mu)\), and thus \(|N_{G}((o_{i},Z_{i}),\mathcal{J}_{W_{i}})|\geq\mu|\mathcal{J}_{W_{i}}|\). By Lemma 7.4 with \(S\) being the set of \(k\)-sets where each \(k\)-set consists of \(k-1\) points from \(N_{G}((o_{i},Z_{i}),\mathcal{J}_{W_{i}})\) and one point from \(Z_{i}\), we have \[|\mathcal{Q}_{o_{i},(Z_{i},W_{i})}|\geq\frac{1}{2}\left(\frac{\mu}{8k}\right)^{k+1}|\mathcal{Q}_{Z_{i},W_{i}}|.\] Let \(\mathcal{Q}_{Z_{0},W_{i}^{\prime}}\) be the set of \(k\)-uniform sequentially paths \((c_{1}^{\prime},\ldots,c_{k}^{\prime},v_{1}^{\prime},\ldots,v_{2k-2}^{\prime})\) such that \(c_{j}^{\prime}\in Z_{0}\) for \(j\in[k]\), \(v_{\ell}^{\prime},v_{\ell+k-1}^{\prime}\in W_{i,\ell}^{\prime}\) for \(\ell\in[k-1]\), and whose down-closure is in \(\mathcal{J}\). Let \(\mathcal{Q}_{t_{i},(Z_{0},W_{i}^{\prime})}\subseteq\mathcal{Q}_{Z_{0},W_{i}^{\prime}}\) be the set of those paths whose edges in the \(k\)-th level are in \(N_{G}(t_{i})\). Since \(L\) is a reduced \(((T,O),\mu)\)-gadget, we have \(W_{i}^{\prime}\in N_{H}(Z_{0},Y_{i})\cap N_{\mathcal{J}}((t_{i},Z_{0}),\mu)\), and thus \(|N_{G}((t_{i},Z_{0}),\mathcal{J}_{W_{i}^{\prime}})|\geq\mu|\mathcal{J}_{W_{i}^{\prime}}|\). 
By Lemma 7.3 with \(S\) being the set of \(k\)-sets where each \(k\)-set consists of \(k-1\) points from \(N_{G}((t_{i},Z_{0}),\mathcal{J}_{W_{i}^{\prime}})\) and one color from \(Z_{0}\), we have \[|\mathcal{Q}_{t_{i},(Z_{0},W_{i}^{\prime})}|\geq\frac{1}{2}\left(\frac{\mu}{8k}\right)^{k+1}|\mathcal{Q}_{Z_{0},W_{i}^{\prime}}|.\] Let \(\phi:V(F)\to V(L)\) be the homomorphism which labels the copies of \(F\) in \(\mathfrak{F}_{L}\). Set \(Z=\{b_{1},\ldots,b_{k}\}\cup\bigcup_{i=1}^{k}(V(P_{i})\cup V(Q_{i}))\cup\bigcup_{i=1}^{k}(C_{i}^{\prime}\cup V(P_{i}^{\prime})\cup V(Q_{i}^{\prime}))\); thus, \(|Z|=5k^{2}-3k\). Let \(\mathcal{Z}=\mathcal{F}[Z]\) be the induced subcomplex of \(\mathcal{F}\) on \(Z\). Note that \(\mathcal{Z}\) consists of \(k\) vertex-disjoint \(k\)-uniform tight paths of length \(2k-1\), where the \(i\)-th path lies in \(\mathcal{Q}_{o_{i},(Z_{i},W_{i})}\), and \(k\) vertex-disjoint \(k\)-uniform sequentially paths of length \(2k-2\), where the \(i\)-th path lies in \(\mathcal{Q}_{t_{i},(Z_{0},W_{i}^{\prime})}\). Let \(\mathcal{G}=\mathcal{J}\cup G_{\mathcal{J}}\) and let \(\mathcal{Z}_{\mathcal{G}}\) be the set of labelled partition-respecting copies of \(\mathcal{Z}\) in \(\mathcal{G}\). Let \(\beta_{1}\) be such that \(\varepsilon\ll\beta_{1}\ll d_{2},\ldots,d_{k},\varepsilon_{k+1}\) and define \(d_{\mathcal{Z}}=\prod_{i=2}^{k}d_{i}^{e_{i}(\mathcal{Z})}\). By Lemma 4.8, we have \[|\mathcal{Z}_{\mathcal{G}}|=\prod_{i=1}^{k}|\mathcal{Q}_{Z_{i},W_{i}}|\cdot|\mathcal{Q}_{Z_{0},W_{i}^{\prime}}|=(1\pm\beta_{1})d_{\mathcal{Z}}m^{5k^{2}-3k}.\] Let \(\mathcal{Z}_{(T,O),\mathcal{G}}\subseteq\mathcal{Z}_{\mathcal{G}}\) be the labelled partition-respecting copies of \(\mathcal{Z}\) absorbing \((T,O)\); thus we have \[|\mathcal{Z}_{(T,O),\mathcal{G}}|\geq\prod_{i=1}^{k}|\mathcal{Q}_{o_{i},(Z_{i},W_{i})}||\mathcal{Q}_{t_{i},(Z_{0},W_{i}^{\prime})}|\geq\left(\frac{1}{2}\left(\frac{\mu}{8k}\right)^{k+1}\right)^{2k}\prod_{i=1}^{k}|\mathcal{Q}_{Z_{i},W_{i}}||\mathcal{Q}_{Z_{0},W_{i}^{\prime}}|\geq 3\theta|\mathcal{Z}_{\mathcal{G}}|,\] since \(\theta\ll\mu,1/k\). Let \(\beta_{2}\) be such that \(\varepsilon_{k+1}\ll\beta_{2}\ll\theta,d_{k+1},1/k\) and define \(d_{\mathcal{F}-\mathcal{Z}}=\prod_{i=2}^{k+1}d_{i}^{e_{i}(\mathcal{F})-e_{i}(\mathcal{Z})}\). Let \(I\subseteq\mathcal{Z}_{\mathcal{G}}\) be the set of labelled partition-respecting copies of \(\mathcal{Z}\) which are not extensible to \((1\pm\beta_{2})d_{\mathcal{F}-\mathcal{Z}}m^{k^{2}+7k+2}\) labelled partition-respecting copies of \(\mathcal{F}\) in \(\mathcal{G}\). By Lemma 4.7, we have \[|I|\leq\beta_{2}|\mathcal{Z}_{\mathcal{G}}|\leq\theta|\mathcal{Z}_{\mathcal{G}}|,\] since \(\beta_{2}\ll\theta\). By Lemma 7.8, we have \[|\mathfrak{F}_{L}|=(1\pm\beta_{2})d_{\mathcal{F}-\mathcal{Z}}d_{\mathcal{Z}}m^{6k^{2}+4k+2},\] since \(\varepsilon_{k+1}\ll\beta_{2}\ll\theta,d_{k+1},1/k\). 
Note that a labelled partition-respecting copy of \(\mathcal{F}\) in \(\mathcal{G}\) containing a \(Z\in\mathcal{Z}_{(T,O),\mathcal{G}}\) yields exactly one gadget in \(\mathfrak{F}_{L}\cap\mathfrak{F}_{(T,O)}\), so we have \[|\mathfrak{F}_{L}\cap\mathfrak{F}_{(T,O)}| \geq|\mathcal{Z}_{(T,O),\mathcal{G}}\setminus I|(1-\beta_{2})d_{\mathcal{F}-\mathcal{Z}}m^{k^{2}+7k+2}\] \[\geq(|\mathcal{Z}_{(T,O),\mathcal{G}}|-|I|)(1-\beta_{2})d_{\mathcal{F}-\mathcal{Z}}m^{k^{2}+7k+2}\] \[\geq 2\theta|\mathcal{Z}_{\mathcal{G}}|(1-\beta_{2})d_{\mathcal{F}-\mathcal{Z}}m^{k^{2}+7k+2}\] \[\geq 2\theta(1-\beta_{2})(1-\beta_{1})d_{\mathcal{Z}}m^{5k^{2}-3k}d_{\mathcal{F}-\mathcal{Z}}m^{k^{2}+7k+2}\] \[\geq 2\theta(1-2\beta_{2})d_{\mathcal{Z}}d_{\mathcal{F}-\mathcal{Z}}m^{6k^{2}+4k+2}\] \[\geq 2\theta\frac{1-2\beta_{2}}{1+\beta_{2}}|\mathfrak{F}_{L}|\] \[\geq\theta|\mathfrak{F}_{L}|,\] since \(\beta_{2}\ll\theta\). Proof of Lemma 7.10.: Let \(\theta\ll\theta^{\prime}\ll\mu\). By Claim 7.12 with \(\theta^{\prime}\), we have for every reduced \(((T,O),\mu)\)-gadget \(L\in\mathfrak{L}_{\overrightarrow{H}}\), \[|\mathfrak{F}_{L}\cap\mathfrak{F}_{(T,O)}|\geq\theta^{\prime}|\mathfrak{F}_{L}|.\] Let \(\beta\) be such that \(\varepsilon_{k+1}\ll\beta\ll d_{k+1},\theta^{\prime}\); by Lemma 7.8 with \(\theta^{\prime}\), we have \(|\mathfrak{F}_{L}\setminus\mathfrak{F}_{L}^{\text{ext}}|\leq\beta|\mathfrak{F}_{L}|\leq\theta^{\prime}|\mathfrak{F}_{L}|/2\). Thus, \[|\mathfrak{F}_{(T,O)}^{\text{ext}}\cap\mathfrak{F}_{L}|\geq|\mathfrak{F}_{L}\cap\mathfrak{F}_{(T,O)}|-|\mathfrak{F}_{L}\setminus\mathfrak{F}_{L}^{\text{ext}}|\geq\frac{\theta^{\prime}}{2}|\mathfrak{F}_{L}|.\] By Claim 7.11 with \(\theta^{\prime}\) and Lemma 7.9, we have \(|\mathfrak{L}_{\overrightarrow{H},(T,O),\mu}|\geq\theta^{\prime}|\mathfrak{L}_{\overrightarrow{H}}|\) and \[|\mathfrak{F}_{(T,O)}^{\text{ext}}|\geq\sum_{L\in\mathfrak{L}_{\overrightarrow{H},(T,O),\mu}}|\mathfrak{F}_{(T,O)}^{\text{ext}}\cap\mathfrak{F}_{L}|\geq\frac{\theta^{\prime}}{2}\sum_{L\in\mathfrak{L}_{\overrightarrow{H},(T,O),\mu}}|\mathfrak{F}_{L}|\geq\theta|\mathfrak{F}^{\text{ext}}|.\] ### Absorbing Lemma **Lemma 7.13**.: _Let \(k,r,m,t\in\mathbb{N}\) and \(d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},c,\nu,\theta,\mu,\alpha,\zeta\) be such that_ \[1/m \ll 1/r,\varepsilon\ll 1/t,\zeta,\varepsilon_{k+1},d_{2},\ldots,d_{k},\] \[\zeta \ll c\ll d_{2},\ldots,d_{k},\] \[1/t \ll\varepsilon_{k+1}\ll d_{k+1},\nu\leq 1/k,\] \[c \ll\varepsilon_{k+1}\ll\alpha\ll\theta\ll\mu\ll 1/k.\] _Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \(\mathfrak{S}=(G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},\overrightarrow{H})\) be an oriented \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup. Suppose that \(V(G)=[n]\cup V\) where \(|V|=n\leq(1+\alpha)mt\) and \(V(H)=[t]\cup V^{\prime}\) where \(|V^{\prime}|=t\). Suppose that for each color cluster \(C\), there are at least \((1-\alpha)t\) point clusters \(Z\) such that \(\{C,Z\}\) has relative \((1,1)\)-degree at least \(\mu\) in \(H\). For any point \(v\) of \(G\) and any color cluster \(C\), there are at least \((1-\alpha)t\) point clusters \(Z\in\mathcal{P}\) such that \(|N_{\mathcal{J}}((v,C),\mu)\cap N_{H}(Z,C)|\geq\mu\binom{t}{k-1}\). And for every \(c\in[n]\) and every color cluster \(C\), there are at least \((1-\alpha)t\) point clusters \(Z\in\mathcal{P}\) such that \(|N_{\mathcal{J}}((c,Z),\mu)\cap N_{H}(C,Z)|\geq\mu\binom{t}{k-1}\). 
Then there exists a family \(\mathfrak{F}^{\prime\prime}\) of pairwise disjoint \(\mathfrak{S}\)-gadgets which are \((c,\nu)\)-extensible with the following properties._ 1. \(|\mathfrak{F}^{\prime\prime}|\leq\zeta m\)_,_ 2. \(|\mathfrak{F}^{\prime\prime}\cap\mathfrak{F}^{\mathrm{ext}}_{(T,O)}|\geq\zeta\theta m\) _for any_ \(k\)_-subset_ \(T\) _of_ \(V\) _and_ \(k\)_-subset_ \(O\) _of_ \([n]\)_,_ 3. \(V(\mathfrak{F}^{\prime\prime})\) _is_ \((2(k^{2}+k+1)\zeta/t)\)_-sparse in_ \(\mathcal{P}\)_._ Proof.: Let \(\beta>0\) be such that \(\varepsilon_{k+1}\ll\beta\ll d_{k+1}\). Let \(F\) be the \((1,k)\)-graph as in Definition 7.6 and let \(\mathcal{F}\) be the \((k+1)\)-complex generated by its down-closure. Let \(d_{F}=\prod_{i=2}^{k+1}d_{i}^{e_{i}(\mathcal{F})}\). By Lemma 7.9, we have \[|\mathfrak{F}^{\mathrm{ext}}| \leq(1+\beta)d_{F}m^{6k^{2}+4k+2}\binom{t}{k}^{2}\binom{t}{k-1}^{2k}t(t-1)\leq d_{F}m^{6k^{2}+4k+2}t^{2k^{2}+2},\] \[|\mathfrak{F}^{\mathrm{ext}}| \geq\frac{\mu^{2k+2}}{8}(1-\beta)d_{F}m^{6k^{2}+4k+2}\binom{t}{k}^{2}\binom{t}{k-1}^{2k}t(t-1)\] \[\geq\frac{\mu^{2k+2}}{2^{5}k^{2k}(k-1)^{2k^{2}}}d_{F}m^{6k^{2}+4k+2}t^{2k^{2}+2}\] \[\geq 6\theta^{1/2}d_{F}m^{6k^{2}+4k+2}t^{2k^{2}+2},\] since \(1/t\ll\varepsilon_{k+1}\ll\beta\ll d_{k+1}\ll 1/k\) and \(\theta\ll\mu,1/k\). By Lemma 7.9, for each reduced gadget \(L\in\mathfrak{L}_{\overrightarrow{H}}\) in \(\overrightarrow{H}\), we have \[|\mathfrak{F}^{\mathrm{ext}}_{L}|\leq 2d_{F}m^{6k^{2}+4k+2}.\] By Lemma 7.10 with \(\theta^{1/2}\), for any \(k\)-set \(T\subseteq V\) and any \(k\)-set \(O\subseteq[n]\), we have \[|\mathfrak{F}^{\mathrm{ext}}_{(T,O)}|\geq\theta^{1/2}|\mathfrak{F}^{\mathrm{ext}}|\geq 6\theta d_{F}m^{6k^{2}+4k+2}t^{2k^{2}+2}.\] Choose a family \(\mathfrak{F}^{\prime}\) from \(\mathfrak{F}^{\mathrm{ext}}\) by including each \(\mathfrak{S}\)-gadget independently at random with probability \[p=\frac{\zeta m}{2d_{F}m^{6k^{2}+4k+2}t^{2k^{2}+2}}.\] Note that \(|\mathfrak{F}^{\prime}|\) and \(|\mathfrak{F}^{\prime}\cap\mathfrak{F}^{\text{ext}}_{(T,O)}|\) are binomial random variables. For any \(k\)-set \(T\subseteq V\) and any \(k\)-set \(O\subseteq[n]\), we have \[\mathbb{E}|\mathfrak{F}^{\prime}|=p|\mathfrak{F}^{\text{ext}}|\leq\frac{\zeta m}{2},\] \[\mathbb{E}|\mathfrak{F}^{\prime}\cap\mathfrak{F}^{\text{ext}}_{(T,O)}|=p|\mathfrak{F}^{\text{ext}}_{(T,O)}|\geq 3\theta\zeta m.\] For each \(Z\in\mathcal{P}\), note that \(Z\) appears in at most \(t^{2k^{2}+1}\) reduced gadgets; thus, there are at most \(2d_{F}m^{6k^{2}+4k+2}t^{2k^{2}+1}\) \(\mathfrak{S}\)-gadgets with vertices in \(Z\). Note that each \(\mathfrak{S}\)-gadget contains at most \(k^{2}+2k+2\) vertices in a cluster. Hence, for each cluster \(Z\in\mathcal{P}\), we have \[\mathbb{E}|V(\mathfrak{F}^{\prime})\cap Z|\leq 2(k^{2}+2k+2)d_{F}m^{6k^{2}+4k+2}t^{2k^{2}+1}p=\frac{(k^{2}+2k+2)\zeta m}{t}.\] By Proposition 1.18, with probability \(1-o(1)\), the family \(\mathfrak{F}^{\prime}\) satisfies the following properties: \[|\mathfrak{F}^{\prime}|\leq 2\mathbb{E}|\mathfrak{F}^{\prime}|\leq\zeta m,\] \[|\mathfrak{F}^{\prime}\cap\mathfrak{F}^{\text{ext}}_{(T,O)}|\geq 2\theta\zeta m,\] \[|V(\mathfrak{F}^{\prime})\cap Z|\leq\frac{2(k^{2}+k+1)\zeta m}{t}\] for any \(k\)-set \(T\subseteq V\), \(k\)-set \(O\subseteq[n]\) and cluster \(Z\in\mathcal{P}\). We say that two \(\mathfrak{S}\)-gadgets are _intersecting_ if they share at least one vertex. Note that there are at most \((2k^{2}+2)^{2}t^{4k^{2}+3}\) pairs of intersecting reduced gadgets. 
Hence, there are at most \((6k^{2}+4k+2)^{2}m^{12k^{2}+8k+3}(2k^{2}+2)^{2}t^{4k^{2}+3}\) pairs of intersecting \(\mathfrak{S}\)-gadgets. We can bound the expected number of pairs of intersecting \(\mathfrak{S}\)-gadgets by \[(6k^{2}+4k+2)^{2}m^{12k^{2}+8k+3}(2k^{2}+2)^{2}t^{4k^{2}+3}p^{2}=\frac{\zeta^{2}(6k^{2}+4k+2)^{2}(2k^{2}+2)^{2}m}{4d_{F}^{2}t}\leq\frac{\zeta\theta m}{2},\] since \(\zeta\ll d_{2},\ldots,d_{k+1},\theta,1/k\). Using Markov's inequality, we derive that with probability at least \(1/2\), \(\mathfrak{F}^{\prime}\) contains at most \(\zeta\theta m\) pairs of intersecting \(\mathfrak{S}\)-gadgets. Remove one gadget from each intersecting pair in such a family, and remove the gadgets that are not absorbing for any \((T,O)\) where \(T\subseteq V\), \(O\subseteq[n]\) and \(|T|=|O|\). We obtain a subfamily \(\mathfrak{F}^{\prime\prime}\) satisfying the following properties: 1. \(|\mathfrak{F}^{\prime\prime}|\leq\zeta m\), 2. \(|\mathfrak{F}^{\prime\prime}\cap\mathfrak{F}^{\text{ext}}_{(T,O)}|\geq\theta\zeta m\), 3. \(V(\mathfrak{F}^{\prime\prime})\) is \((2(k^{2}+k+1)\zeta/t)\)-sparse in \(\mathcal{P}\), as desired. Proof of Lemma 5.2.: Since \(G\) has minimum relative \((1,1)\)-degree at least \(\delta+\mu\) and \(\mathfrak{S}\) is a representative setup, by Lemma 7.1, for any \(v\in V\) and any color cluster \(C\), we have \[|N_{\mathcal{J}}((v,C),\frac{\mu}{3})|\geq(\delta+\frac{\mu}{4})\binom{t}{k-1},\] and for any \(c\in[n]\) and any point cluster \(Z\), we have \[|N_{\mathcal{J}}((c,Z),\frac{\mu}{3})|\geq(\delta+\frac{\mu}{4})\binom{t}{k-1}.\] Let \(\zeta>0\) with \(1/r,\varepsilon\ll\zeta\ll c\), let \(\theta>0\) with \(\eta\ll\theta\ll\mu,1/k\), and set \(M:=\lceil\eta t/(\theta\zeta)\rceil\). Firstly, we need the following claim. **Claim 7.14**.: _For each \(j\in[0,M]\), any \(S\subseteq V\) of size at most \(j\theta\zeta n/t\) divisible by \(k\) and any \(O\subseteq[n]\) of size \(|S|\), there is a sequentially path \(P_{j}\subseteq G\) such that the following holds._ (i) \(P_{j}\) _is_ \((S,O)\)_-absorbing in_ \(G\)_,_ (ii) \(P_{j}\) _is_ \((c,\nu)\)_-extensible and consistent with_ \(\overrightarrow{H}\)_,_ (iii) \(V(P_{j})\) _is_ \((100k^{3}j\zeta/t)\)_-sparse in_ \(\mathcal{P}\) _and_ \(V(P_{j})\cap T_{j}=\emptyset\)_, where_ \(T_{j}\) _denotes the connection set of_ \(P_{j}\)_._ _Proof of the claim._ Take \(P_{0}\) to be the empty path, and suppose that \(P_{j}\) satisfies the above conditions for some \(j\in[0,M)\). For each cluster \(Z\in\mathcal{P}\), select a subset \(Z^{\prime}\subseteq Z\setminus V(P_{j})\) of size \(m^{\prime}=(1-\lambda)m\); this is possible since \(100k^{3}j\zeta/t\leq(2\eta t/(\zeta\theta))(100k^{3}\zeta/t)\leq\lambda\), which follows from \(\zeta\ll c\ll\eta\ll\lambda,\theta\). Also, since \(n\leq(1+\alpha)mt\), we have \(m^{\prime}\geq n/(2t)\). Let \(\mathcal{P}^{\prime}=\{Z^{\prime}\}_{Z\in\mathcal{P}}\), \(\mathcal{J}^{\prime}=\mathcal{J}[V(\mathcal{P}^{\prime})]\) and \(G^{\prime}_{\mathcal{J}^{\prime}}=G_{\mathcal{J}}[V(\mathcal{P}^{\prime})]\). By Lemma 4.11, \(\mathfrak{S}^{\prime}:=(G^{\prime},G^{\prime}_{\mathcal{J}^{\prime}},\mathcal{J}^{\prime},\mathcal{P}^{\prime},\overrightarrow{H})\) is a \((k,m^{\prime},2t,\sqrt{\varepsilon},\sqrt{\varepsilon_{k+1}},r,\mathbf{d})\)-regular setup. 
By Lemma 7.2, for every \(v\in V\) and color cluster \(C\), we have \[|N_{\mathcal{J}^{\prime}}((v,C),\mu/6)|\geq|N_{\mathcal{J}}((v,C),\mu/3)|\geq(\delta+\mu/4)\binom{t}{k-1},\] and for every \(o\in[n]\) and point cluster \(Z\), we have \[|N_{\mathcal{J}^{\prime}}((o,Z),\mu/6)|\geq|N_{\mathcal{J}}((o,Z),\mu/3)|\geq(\delta+\mu/4)\binom{t}{k-1}.\] Thus, we obtain that for every \(v\in V\), every \(o\in[n]\) and every color cluster \(C\), there are at least \((1-\alpha)t\) point clusters \(Z\in\mathcal{P}\) such that \[|N_{\mathcal{J}}((v,C),\mu/6)\cap N_{H}(Z,C)|\geq\frac{\mu}{5}\binom{t}{k-1}\] and \[|N_{\mathcal{J}}((o,Z),\mu/6)\cap N_{H}(C,Z)|\geq\frac{\mu}{5}\binom{t}{k-1}.\] By Lemma 7.13 with \(4c\) instead of \(c\) and \(2\zeta\) instead of \(\zeta\), we obtain a set \(\mathcal{A}^{\prime}\) of pairwise-disjoint \(\mathfrak{S}^{\prime}\)-gadgets which are \((4c,\nu)\)-extensible and such that 1. \(|\mathcal{A}^{\prime}|\leq 2\zeta m^{\prime}\), 2. \(|\mathcal{A}^{\prime}\cap\mathfrak{F}_{(T,O)}|\geq 2\zeta\theta m^{\prime}\) for any \(k\)-subset \(T\) of \(V\) and any \(k\)-subset \(O\) of \([n]\), 3. \(V(\mathcal{A}^{\prime})\) is \((4(k^{2}+k+1)\zeta/t)\)-sparse in \(\mathcal{P}^{\prime}\). Next, we connect all paths of the absorbing gadgets in \(\mathcal{A}^{\prime}\) and \(P_{j}\) to obtain \(P_{j+1}\). By Definition 7.6, there are \(2(k+1)\) pairwise disjoint sequentially paths in each \(\mathfrak{S}^{\prime}\)-gadget in \(\mathcal{A}^{\prime}\) which are \((4c,\nu)\)-extensible in \(\mathfrak{S}^{\prime}\). Let \(\mathcal{A}\) be the union of all such sequentially paths of all gadgets of \(\mathcal{A}^{\prime}\) and \(P_{j}\). Set \(T_{j+1}=V(G)\setminus V(\mathcal{A})\); it is clear that \(\mathcal{A}\) is a set of pairwise disjoint sequentially paths in \(G\) such that (1') \(|\mathcal{A}|\leq 4(k+1)\zeta m^{\prime}+1\), (2') \(V(\mathcal{A})\) is \((100k^{3}j\zeta/t+4(k^{2}+k+1)\zeta/t)\)-sparse in \(\mathcal{P}\) and \(V(\mathcal{A})\cap T_{j+1}=\emptyset\), (3') every path in \(\mathcal{A}\setminus\{P_{j}\}\) is \((2c,\nu,T_{j+1})\)-extensible in \(\mathfrak{S}\) and consistent with \(\overrightarrow{H}\), and \(P_{j}\) is \((c,\nu,T_{j+1})\)-extensible in \(\mathfrak{S}\) and consistent with \(\overrightarrow{H}\). Note that (1') follows from (1) and the addition of \(P_{j}\); (2') follows from (iii), (3) and the definition of \(T_{j+1}\); (3') follows from (ii) and (3) since \(4(k^{2}+k+1)\zeta m/t\leq 2cm\). In particular, \(P_{j}\) is \((c,\nu)\)-extensible by (ii), while all other paths go from \((4c,\nu)\)-extensible in \(\mathfrak{S}^{\prime}\) to \((2c,\nu)\)-extensible in \(\mathfrak{S}\). The consistency with \(\overrightarrow{H}\) is given by the consistency of \(P_{j}\) and the definition of \(\mathfrak{S}^{\prime}\)-gadgets. By Lemma 6.10, we obtain a sequentially path \(P_{j+1}\) with the following properties: (A) \(P_{j+1}\) contains every path of \(\mathcal{A}\), (B) \(P_{j+1}\) starts and ends with two paths different from \(P_{j}\), (C) \(V(P_{j+1})\setminus V(\mathcal{A})\subseteq V(\mathcal{P}^{\prime})\), (D) \(V(P_{j+1})\setminus V(\mathcal{A})\) intersects in at most \(10k^{2}\mathcal{A}_{Z}+t^{2t+3k+2}\) vertices with each cluster \(Z\in\mathcal{P}\), where \(\mathcal{A}_{Z}\) denotes the number of paths of \(\mathcal{A}\) that intersect \(Z\). We claim that \(P_{j+1}\) satisfies (i)-(iii). First, we prove (iii). Note that for every cluster \(Z\in\mathcal{P}\), the number of paths of \(\mathcal{A}\) that intersect \(Z\) is bounded by \(4(k+1)\zeta m/t+1\). 
(D) implies that \(V(P_{j+1})\setminus V(\mathcal{A})\) intersects in at most \(100k^{3}\zeta m/t\) vertices with each cluster \(Z\in\mathcal{P}\). Together with (2'), it follows that \(V(P_{j+1})\) is \((100k^{3}(j+1)\zeta/t)\)-sparse in \(\mathcal{P}\). Next, we prove (ii). \(V(P_{j+1})\setminus V(\mathcal{A})\) intersects in at most \(100k^{3}\zeta m/t\leq cm/4\) vertices with each cluster \(Z\in\mathcal{P}\), since \(\zeta\ll c\). Also, we have \(V(\mathcal{A})\cap T_{j+1}=\emptyset\). Hence, we obtain (ii) after deleting the vertices of \(P_{j+1}\) from \(T_{j+1}\). After the deletion, we go from \((2c,\nu)\)-extensible in (3') to \((c,\nu)\)-extensible. It is crucial that \(P_{j+1}\) starts and ends with two paths different from \(P_{j}\), by (B). Finally, we claim that \(P_{j+1}\) is \((S,O)\)-absorbing in \(G\) for any \(S\subseteq V\) of size divisible by \(k\) and at most \((j+1)\zeta\theta n/t\) and any \(O\subseteq[n]\) of size \(|S|\). Partition \(S\) into two sets \(S_{1}\) and \(S_{2}\) such that both \(|S_{1}|,|S_{2}|\) are divisible by \(k\) and \(S_{1}\) is maximal such that \(|S_{1}|\leq j\zeta\theta n/t\). Partition \(O\) into two sets \(O_{1}\) and \(O_{2}\) such that \(|O_{1}|=|S_{1}|\) and \(|O_{2}|=|S_{2}|\). Since \(P_{j}\) is \((S^{\prime},O^{\prime})\)-absorbing in \(G\) for any set \(S^{\prime}\subseteq V\) of size at most \(j\zeta\theta n/t\) and any \(O^{\prime}\) with \(|O^{\prime}|=|S^{\prime}|\), there exists a path \(P^{\prime}_{j}\) with the same endpoints as \(P_{j}\) such that \(I(P^{\prime}_{j})=S_{1}\cup I(P_{j})\) and \(C(P^{\prime}_{j})=O_{1}\cup C(P_{j})\); moreover, \(P_{j}\) is a subpath of \(P_{j+1}\). So it remains to absorb \(S_{2}\) and \(O_{2}\). By the choice of \(S_{1}\), we have \(|S_{2}|\leq\zeta\theta n/t+k\leq 2\zeta\theta n/t\leq 2(1+\alpha)\zeta\theta m\leq 5\zeta\theta m/2\). Therefore, we can partition \(S_{2}\) and \(O_{2}\) into \(\ell\leq 5\zeta\theta m/(2k)\leq 2\zeta\theta m^{\prime}\) sets of size \(k\) each; let \(D_{1},\ldots,D_{\ell}\) and \(R_{1},\ldots,R_{\ell}\) be those sets. By (2), we have \(|\mathfrak{F}_{(D_{i},R_{i})}\cap\mathcal{A}^{\prime}|\geq\ell\). Thus, we can associate each \((D_{i},R_{i})\) with a different gadget \(F_{i}\in\mathcal{A}^{\prime}\) for each \(i\in[\ell]\). Each \(F_{i}\) yields a collection of \(2(k+1)\) sequentially paths \(P_{i,1},\ldots,P_{i,2(k+1)}\), and we can replace those paths with a collection of different paths with the same endpoints. Since \(P_{j}\) and each \(P_{i,u}\), \(i\in[\ell],u\in[2(k+1)]\), are subpaths of \(P_{j+1}\), the resulting sequentially path \(P^{\prime}_{j+1}\) has the same endpoints as \(P_{j+1}\). Also, \((C(P^{\prime}_{j+1}),I(P^{\prime}_{j+1}))\) is exactly \((C(P_{j+1})\cup O,I(P_{j+1})\cup S)\). To finish, note that \(P_{M}\) and \(C_{M}\) have the desired properties. By the choice of \(M=\lceil\eta t/(\zeta\theta)\rceil\), we have \(M\zeta\theta/t\geq\eta\), so \(P_{M}\) with \(C_{M}\) is \(\eta\)-absorbing in \(G\). Moreover, since \(M(100k^{3}\zeta/t)\leq 200k^{3}\eta/\theta\leq\lambda\) and \(\eta\ll\lambda\), \(V(P_{M})\) is \(\lambda\)-sparse in \(\mathcal{P}\). ## 8. Concluding Remarks Inspired by a series of very recent successes on rainbow matchings [29, 28, 30, 31], rainbow Hamilton cycles [8, 9, 21] and rainbow factors [7, 12, 33], we suspect that the threshold for a rainbow spanning subgraph in a (hyper)graph system is asymptotically the same as the threshold for the corresponding spanning subgraph in a (hyper)graph. Let \(1\leq d,\ell\leq k-1\). 
For \(n\in(k-\ell)\mathbb{N}\), define \(h_{d}^{\ell}(k,n)\) to be the smallest integer \(h\) such that every \(n\)-vertex \(k\)-graph \(H\) satisfying \(\delta_{d}(H)\geq h\) contains a Hamilton \(\ell\)-cycle. Han and Zhao [19] showed that \[h_{d}^{k-1}(k,n)\geq\left(1-\binom{t}{\lfloor t/2\rfloor}\frac{\lceil t/2\rceil^{\lceil t/2\rceil}(\lfloor t/2\rfloor+1)^{\lfloor t/2\rfloor}}{(t+1)^{t}}+o(1)\right)\binom{n}{t} \tag{9}\] where \(d\in[k-1]\) and \(t=k-d\). In particular, \(h_{d}^{k-1}(k,n)\geq(5/9+o(1))\binom{n}{2}\) and \(h_{d}^{k-1}(k,n)\geq(5/8+o(1))\binom{n}{3}\) for \(k-d=2,3\), respectively. Lang and Sanhueza-Matamala [27] conjectured that the minimum \(d\)-degree threshold for \(k\)-uniform tight Hamilton cycles coincides with the lower bounds in (9). This leads to the following conjecture. **Conjecture 8.1**.: _For every \(k\geq 4,\mu>0\), there exists \(n_{0}\) such that the following holds for \(n\geq n_{0}\). Given a \(k\)-graph system \(\textbf{G}=\{G_{i}\}_{i\in[n]}\), if \(\delta_{k-3}(G_{i})\geq(5/8+\mu)\binom{n}{3}\) for \(i\in[n]\), then **G** admits a rainbow Hamilton cycle._ Furthermore, we believe the following holds. **Conjecture 8.2**.: _For every \(k,d,\mu>0\), there exists \(n_{0}\) such that the following holds for \(n\geq n_{0}\). Given a \(k\)-graph system \(\textbf{G}=\{G_{i}\}_{i\in[n]}\), if \(\delta_{d}(G_{i})\geq h_{d}^{k-1}(k,n)+\mu\binom{n}{d}\) for \(i\in[n]\), then **G** admits a rainbow Hamilton cycle._ In fact, in view of the proof strategy of this paper, we believe it is interesting to study rainbow Hamilton vicinities or rainbow Hamilton frameworks in order to determine the thresholds for Hamilton cycles. ## 9. Acknowledgement This work was supported by the Natural Science Foundation of China (12231018, 11871311, 11901292) and the Youth Interdisciplinary Innovation Group of Shandong University.
In this paper, we develop a new rainbow Hamilton framework, which may be of independent interest, solve a problem proposed by Gupta, Hamann, M\"{u}yesser, Parczyk, and Sgueglia for $k=3$, and establish the following general conclusion for $k \ge 3$. A $k$-graph system $\textbf{H}=\{H_i\}_{i\in[n]}$ is a family of (not necessarily distinct) $k$-graphs on the same $n$-vertex set $V$, and a $k$-graph $H$ on $V$ is characterized by $E(H)\subseteq\bigcup_{i\in[n]}E(H_i)$ and $|E(H)\cap E(H_i)|\leq 1$ for $i\in[n]$. We can show that $\gamma
2309.08976
Data-driven Reachability using Christoffel Functions and Conformal Prediction
An important mathematical tool in the analysis of dynamical systems is the approximation of the reach set, i.e., the set of states reachable after a given time from a given initial state. This set is difficult to compute for complex systems even if the system dynamics are known and given by a system of ordinary differential equations with known coefficients. In practice, parameters are often unknown and mathematical models difficult to obtain. Data-based approaches promise to avoid these difficulties by estimating the reach set based on a sample of states. If a model is available, this training set can be obtained through numerical simulation. In the absence of a model, real-life observations can be used instead. A recently proposed approach for data-based reach set approximation uses Christoffel functions to approximate the reach set. Under certain assumptions, the approximation is guaranteed to converge to the true solution. In this paper, we improve upon these results by notably improving the sample efficiency and relaxing some of the assumptions by exploiting statistical guarantees from conformal prediction with training and calibration sets. In addition, we exploit an incremental way to compute the Christoffel function to avoid the calibration set while maintaining the statistical convergence guarantees. Furthermore, our approach is robust to outliers in the training and calibration set.
Abdelmouaiz Tebjou, Goran Frehse, Faïcel Chamroukhi
2023-09-16T12:21:57
http://arxiv.org/abs/2309.08976v1
# Data-driven Reachability using Christoffel Functions and Conformal Prediction ###### Abstract An important mathematical tool in the analysis of dynamical systems is the approximation of the reach set, i.e., the set of states reachable after a given time from a given initial state. This set is difficult to compute for complex systems even if the system dynamics are known and given by a system of ordinary differential equations with known coefficients. In practice, parameters are often unknown and mathematical models difficult to obtain. Data-based approaches promise to avoid these difficulties by estimating the reach set based on a sample of states. If a model is available, this training set can be obtained through numerical simulation. In the absence of a model, real-life observations can be used instead. A recently proposed approach for data-based reach set approximation uses Christoffel functions to approximate the reach set. Under certain assumptions, the approximation is guaranteed to converge to the true solution. In this paper, we improve upon these results by notably improving the sample efficiency and relaxing some of the assumptions by exploiting statistical guarantees from conformal prediction with training and calibration sets. In addition, we exploit an incremental way to compute the Christoffel function to avoid the calibration set while maintaining the statistical convergence guarantees. Furthermore, our approach is robust to outliers in the training and calibration set. Conformal and Probabilistic Prediction with Applications, 2023. Keywords: data-driven reachability, Christoffel functions, conformal prediction, probably approximately correct analysis, statistical learning. ## 1 Introduction The problem of reach set approximation arises in different branches of applied mathematics and computer science, and in particular in control theory. In mathematics, the study of initial value problems and their guaranteed solution raises the question of which states can be reached under different configurations; see, for instance, the work of Berz and Makino (1998). In computer science, the computation of reach sets is a fundamental operation in formal methods, which establish the correctness of a system with mathematical rigor. Initially, it was applied to program analysis, e.g., by Halbwachs et al. (1994). Later, the approach was extended to cyber-physical systems, which can involve interacting physical components, software, and communication channels; see Alur (2015). Reach set approximations may take different forms based on whether the focus is on scalability, tightness, or efficient computability. Examples include polyhedra, ellipsoids, polynomial zonotopes, and others; see the overview by Althoff et al. (2021). In this paper, we establish reach set approximations that are sublevel sets of polynomials, more precisely, sum-of-squares (SOS) polynomials, which are computationally advantageous. Once established, these can readily be used to investigate properties of regions of attraction, stability, and safety or to solve optimization problems. To achieve this, polynomial reach set approximations have been used as barrier certificates, inductive invariants, or Lyapunov functions; see the survey by Doyen et al. (2018). Traditionally, reach set approximations are established from first principles, starting from a mathematical model of the dynamics. 
This approach is limited to cases where sufficiently simple models are available and precise enough. More recently, data-based approaches have been used to deal with systems whose dynamics are too complex or where a model is not available and only observations are at hand. In the following, we provide a brief overview of such approaches. Related Work. The traditional approach to go from data to reach set approximations is to first identify a model of the system dynamics and then analyse the model. To give an example, a linear model can be identified efficiently by subspace identification as proposed by Van Overschee and De Moor (2012) and then one of the set-based techniques in the survey by Althoff et al. (2021) can be applied to approximate the reach set at a given time in the future. This can be extended to uncertain linear models and nonlinear systems based on linearization, as pursued by Alanwar et al. (2023). More recently, it has been proposed to derive reach set approximations more directly from data, e.g., the approach of Djeumou et al. (2021) uses Taylor series expansions and Lipschitz bounds to derive reach sets for nonlinear systems. These approaches can, in principle, bound the reach set over an arbitrary time horizon, but the approximation error may increase very rapidly with time. Furthermore, these approaches struggle with complex dynamics. Our goal in this paper is different and more modest: We establish an SOS polynomial whose sublevel set contains the reachable set in the sense of a _probably approximately correct_ (PAC) property. In particular, we consider the approximation of a single time step. This is sufficient for many of the applications considered above (as a first step in constructing barrier certificates, inductive invariants etc.), but in contrast to the approaches cited in the beginning of this section, it does not readily extend to extrapolating the reach set over longer time horizons (it would involve costly quantifier elimination). One of the earliest data-driven approaches involving SOS polynomials was the construction of barrier certificates by Prajna (2006), e.g., to show that obstacles are avoided by a control system. The scalability was later improved by Han et al. (2015), but the optimisation problem remains somewhat challenging. Approximating the reach set is related to approximating the support of a probability measure, as observed by Devonport et al. (2021). Recent work by Lasserre and Pauwels (2019); Lasserre (2022) suggests that Christoffel functions are particularly useful for approximating the support. Our work is heavily inspired by Devonport et al. (2021), who proposed to approximate the one-step reach set with an SOS polynomial that is the superlevel set of the Christoffel function. The PAC guarantees provided by Devonport et al. (2021) are derived from measure theory and are, in practice, somewhat conservative. Based on conformal prediction, we propose significant improvements that we outline below. Further work on conformal prediction will be cited in the text. Contributions. In this paper, we make the following contributions: * We use conformal prediction to provide stronger and more sample-efficient guarantees on reach set approximation than those given by Devonport et al. (2021). * We propose a version of reach set approximation that is robust to outliers, in contrast to the approach of Devonport et al. (2021).
* We exploit an incremental form of the Christoffel function for transductive conformal prediction, thanks to which we do not need to split the data set into training and calibration sets. * To the best of our knowledge, this is the first use of the Christoffel function in conformal prediction. The particular properties of the Christoffel function in set and density approximation make it an excellent candidate for a nonconformity function. Structure of the paper. The paper is organized as follows. Section 2 presents the data-driven framework for reachability analysis using Christoffel functions. It describes the theoretical developments related to the reach set approximation and to Christoffel functions. In Section 3, we introduce our proposed approach to the reach set approximation with conformal prediction, whose statistical guarantees are presented in Section 3.1. Section 3.2 presents a technique to avoid the calibration set by using transductive conformal prediction and an incremental version of the Christoffel function. In Section 4, we discuss the robustness of our methodology to outliers. Section 5 provides numerical experiments on simulated data to support our theoretical results, and to highlight the effectiveness and potential of the proposed approach. ## 2 Data-driven Reach Set Approximation with Christoffel Functions Reachability analysis aims to determine the possible future states of a dynamical system starting from a given initial state. For our purposes, we consider the system to be defined (explicitly or implicitly) by a transition function \(f:\mathbb{R}^{n}\to\mathbb{R}^{n}\) which maps a state \(\mathbf{x}\in\mathbb{R}^{n}\) to its successor state. We forego extending the notation to nondeterministic or stochastic systems, since our focus is on estimating the image of \(f\) applied to a set of initial states; in the case of a stochastic system we are interested in approximating the support of the image distribution. Beginning with a given initial set of states \(\mathcal{I}\), we are interested in computing the reachable set \[\mathcal{S}=\{f(\mathbf{x}):\mathbf{x}\in\mathcal{I}\}.\] When \(f\) is not precisely known or is complex, obtaining the exact solution may not be possible or economical. Instead, we compute an approximation \(\hat{\mathcal{S}}\) that covers most of \(\mathcal{S}\). Every set \(S\) can be represented by a probability measure \(\mu\) such that \(S\) is the support of \(\mu\). This motivated Devonport et al. (2021) to use the Christoffel function to approximate the set \(\mathcal{S}\). In the following subsection, we introduce the Christoffel function, its empirical counterpart, and discuss how to compute it. ### Preliminaries We start by introducing some mathematical notation. Given a vector \(\mathbf{x}\in\mathbb{R}^{n}\), we denote its elements as \(\mathbf{x}=(x_{1},...,x_{n})\). An integer exponent vector \(\mathbf{\alpha}=(\alpha_{1},...,\alpha_{n})\in\mathbb{N}^{n}\) defines the monomial \(\mathbf{x}^{\mathbf{\alpha}}=x_{1}^{\alpha_{1}}\times x_{2}^{\alpha_{2}}\times\ldots\times x_{n}^{\alpha_{n}}\). For \(d\in\mathbb{N}\), we consider \(\mathbb{R}[\mathbf{X}]_{d}^{n}\) to be the vector space of \(n\)-variate polynomials whose degree is less than or equal to \(d\). With each exponent vector \(\mathbf{\alpha}\in\mathbb{N}^{n}\), we associate the monomial \(\mathbf{x}^{\mathbf{\alpha}}\) whose degree is equal to \(\|\mathbf{\alpha}\|=\sum_{i=1}^{n}\mathbf{\alpha}_{i}\).
The monomials \(\mathbf{x}^{\mathbf{\alpha}}\) with \(\|\mathbf{\alpha}\|\leq d\) form a canonical basis of \(\mathbb{R}[\mathbf{X}]_{d}^{n}\). We denote the number of monomials of degree less than or equal to \(d\) with \[s(d)=\binom{n+d}{n}.\] Let \(\mathbf{v}_{d}(\mathbf{x})\in\mathbb{R}^{s(d)}\) be the vector of monomials of degree less than or equal to \(d\) evaluated at \(\mathbf{x}\). For example, if \(d=2\) and \(n=2\), then \(\mathbf{v}_{d}(\mathbf{x})=[1\ x_{1}\ x_{2}\ x_{1}x_{2}\ x_{1}^{2}\ x_{2}^{2}]\). ### Christoffel Functions Christoffel functions are a class of functions associated with a finite measure and a parameter degree \(d\in\mathbb{N}\). They have a strong connection to approximation theory, and in this section we briefly summarize some results by Lasserre and Pauwels (2019). For a finite measure \(\mu\) on \(\mathbb{R}^{n}\) and an integer degree \(d\), the Christoffel function \(\Lambda_{\mu,d}(\mathbf{x}):\mathbb{R}^{n}\mapsto\mathbb{R}\) is defined in terms of the moment matrix of the measure \(\mu\): \[\mathbf{M}_{d}=\int_{\mathbb{R}^{n}}\mathbf{v}_{d}(\mathbf{x})\mathbf{v}_{d}(\mathbf{x})^{ \top}d\mu(\mathbf{x}).\] The moment matrix is positive semi-definite for all \(d\in\mathbb{N}\). We furthermore assume that the matrix is positive definite, which ensures the invertibility of \(\mathbf{M}_{d}\).1 With the help of the moment matrix, the Christoffel function is defined as: Footnote 1: In fact, the moment matrix of any finite measure \(\mu\) is positive definite unless the support of \(\mu\) is contained in the zeros of a polynomial; for a closer look at the moment matrix, we refer the reader to Lasserre and Pauwels (2019). \[\Lambda_{\mu,d}(\mathbf{x})=\Big{(}\mathbf{v}_{d}(\mathbf{x})^{T}\mathbf{M}_{d}^{-1} \mathbf{v}_{d}(\mathbf{x})\Big{)}^{-1}. \tag{1}\] The following alternative formulation of the Christoffel function can be useful when the moment matrix is large. It can be computed by solving a convex quadratic programming problem, which can be done efficiently using numerical techniques, even for high degrees \(d\): \[\Lambda_{\mu,d}(\mathbf{x})=\inf_{P\in\mathbb{R}[\mathbf{X}]_{d}^{n}}\left\{\int_{ \mathbb{R}^{n}}P(\mathbf{\mathrm{z}})^{2}d\mu(\mathbf{\mathrm{z}}),\quad\text{s.t.} \quad P(\mathbf{\mathrm{x}})=1\right\}\] In a data-driven setting, the exact measure \(\mu\) is unknown. One way to obtain information about \(\mu\) is by sampling a set of points independently drawn from its distribution. Given \(N\) i.i.d. samples \(\{\mathbf{x}^{1},\ldots,\mathbf{x}^{N}\}\) from \(\mu\), we approximate \(\mu\) with the empirical measure \[\hat{\mu}=\tfrac{1}{N}{\sum}_{i=1}^{N}\delta_{\mathbf{x}^{i}},\] where \(\delta_{\mathbf{x}}\) is the Dirac measure. The moment matrix \(\widehat{\mathbf{M}}_{d}\) associated with the empirical measure \(\hat{\mu}\) is \[\widehat{\mathbf{M}}_{d}=\tfrac{1}{N}{\sum\nolimits_{i=1}^{N}}\mathbf{v}_{d}( \mathbf{x}^{i})\mathbf{v}_{d}(\mathbf{x}^{i})^{T} \tag{2}\] Therefore, the empirical measure \(\hat{\mu}\) defines an empirical Christoffel function.
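To make these objects concrete, the following minimal numpy sketch evaluates the monomial vector \(\mathbf{v}_{d}\) and the empirical moment matrix (2); the helper names (`monomial_vector`, `empirical_moment_matrix`) are ours and not part of any library.

```python
# A minimal sketch of v_d and the empirical moment matrix (2); assumes numpy only.
import itertools
import numpy as np

def monomial_vector(x, d):
    """All monomials x^alpha with |alpha| <= d evaluated at x; length s(d) = C(n+d, n)."""
    n = len(x)
    exponents = [a for a in itertools.product(range(d + 1), repeat=n) if sum(a) <= d]
    return np.array([np.prod(np.power(x, a)) for a in exponents])

def empirical_moment_matrix(samples, d):
    """M_hat_d = (1/N) * sum_i v_d(x^i) v_d(x^i)^T over the sample, cf. (2)."""
    V = np.array([monomial_vector(x, d) for x in samples])   # shape (N, s(d))
    return V.T @ V / len(samples)
```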
Since we are only interested in superlevel sets of the Christoffel function, we can forego the inversion and instead work with sublevel sets of what we call the empirical Christoffel polynomial: \[\Lambda_{\hat{\mu},d}^{-1}(\mathbf{x})=\mathbf{v}_{d}(\mathbf{x})^{T}\widehat{\mathbf{M}}_{d}^{-1}\mathbf{v}_{d}(\mathbf{x}) \tag{3}\] Note that the moment matrix \(\widehat{\mathbf{M}}_{d}\) is almost surely invertible if the number of samples \(N\geq s(d)\). The Christoffel polynomial is a sum-of-squares polynomial of degree \(2d\). Consequently, it is nonnegative, and if \(N>s(d)\), the empirical Christoffel polynomial is strictly positive. Note that, for increasing sample size \(N\), the empirical Christoffel function converges uniformly to the Christoffel function of the exact measure. ### Set Approximation with Christoffel Functions Lasserre and Pauwels (2019) proposed various thresholding schemes for approximating the support of a probability measure using the Christoffel function or, more precisely, its empirical counterpart. This idea was applied by Devonport et al. (2021) to approximate the reachable set \(\mathcal{S}\) with the superlevel sets of the Christoffel function. In this section, we will briefly summarize the approach. Let \(\mu\) be the probability measure of the reachable set \(\mathcal{S}\). For a given degree \(d\in\mathbb{N}\), the reachable set can be approximated with the sublevel set \[\hat{\mathcal{S}}=\{\mathbf{x}\in\mathbb{R}^{n}\mid\Lambda_{\mu,d}^{-1}(\mathbf{x}) \leq\alpha\} \tag{4}\] for some \(\alpha\in\mathbb{R}\). However, since the exact reachable set \(\mathcal{S}\) is unknown, \(\mu\) is unknown. Instead, the Christoffel function \(\Lambda_{\mu,d}\) is approximated by an empirical Christoffel function using i.i.d. samples \(\mathbf{x}^{i}\) from \(\mathcal{S}\). We can obtain a conservative threshold \(\alpha\) such that \(\mathbf{x}^{i}\in\hat{\mathcal{S}}\) for all \(i\) by letting \[\alpha=\max_{i}\Lambda_{\hat{\mu},d}^{-1}(\mathbf{x}^{i}). \tag{5}\] Using methods from statistical learning theory, Devonport et al. (2021) proposed the following PAC guarantees: **Conjecture 1** (Thm. 1 in Devonport et al. (2021)): _Given a training set of i.i.d. samples \(\mathcal{D}=\{\mathbf{x}^{1},\dots,\mathbf{x}^{N}\}\) from \(\mathcal{S}\), let_ \[\hat{\mathcal{S}}=\{\mathbf{x}\in\mathbb{R}^{n}\mid\Lambda_{\hat{\mu},d}^{-1}(\bm {x})\leq\max_{i}\Lambda_{\hat{\mu},d}^{-1}(\mathbf{x}^{i})\}. \tag{6}\] _If \(N\geq\tfrac{5}{\epsilon}\left(\log\tfrac{4}{\delta}+\binom{n+2d}{n}\log\tfrac{ 40}{\epsilon}\right)\), then \(\mathbb{P}\Big{(}\mu\big{(}\hat{\mathcal{S}}\big{)}\geq 1-\epsilon\Big{)} \geq 1-\delta\)._ In other words, if \(N,\delta,\epsilon\) satisfy the condition in Conjecture 1, then with probability greater than \(1-\delta\) we are sure that \(\hat{\mathcal{S}}\) contains more than \(1-\epsilon\) of the mass of \(\mathcal{S}\). However, we believe this result neglects the dependencies between the empirical Christoffel polynomial and the points used to construct the threshold \(\alpha\). As will be discussed in more detail in Section 3, different samples should be used for constructing the empirical Christoffel polynomial and for constructing the threshold \(\alpha\) to ensure independence.
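As a sketch of how (3), (5) and (6) combine, the following continues the listing above (it reuses `monomial_vector` and `empirical_moment_matrix`); note that the same samples build the polynomial and the threshold here, which is exactly the dependence just criticized and removed in Section 3.

```python
# Continues the previous sketch; reuses monomial_vector and empirical_moment_matrix.
import numpy as np

def fit_christoffel_polynomial(samples, d):
    """Return x -> v_d(x)^T M_hat_d^{-1} v_d(x), the empirical Christoffel polynomial (3)."""
    M_inv = np.linalg.inv(empirical_moment_matrix(samples, d))
    return lambda x: monomial_vector(x, d) @ M_inv @ monomial_vector(x, d)

def conjecture1_reach_set(samples, d):
    """Sublevel set (6) with the conservative threshold (5): every sample lies inside."""
    poly = fit_christoffel_polynomial(samples, d)
    alpha = max(poly(x) for x in samples)        # threshold (5)
    return lambda x: poly(x) <= alpha            # membership test for S_hat
```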
We informally note convergence results by Lasserre and Pauwels (2019), which hold for uniform probability measures (and some generalizations): * As \(d\to\infty\) and with an appropriately chosen threshold, the sublevel set of the (non-empirical) Christoffel polynomial converges to the support of the measure, i.e., to the exact reach set in the sense of the Hausdorff distance. * For fixed \(d\) and \(N\to\infty\), the empirical Christoffel function converges uniformly to the Christoffel function. * For fixed \(d\) and \(N\to\infty\), the border of the empirical Christoffel polynomial converges to the border of the Christoffel polynomial in the sense of the Hausdorff distance. In consequence, we can informally expect that for a large enough degree \(d\) and large enough sample size \(N\), the sublevel sets of the Christoffel polynomial are close enough to the reachable set. We will use the following running example throughout the paper to illustrate the different concepts. **Example 1** (Four squares): _Let the transition function \(f:\mathbb{R}^{2}\to\mathbb{R}^{2}\) be_ \[f(x,y)=(1+\mathrm{sign}(x)\cdot x^{2},1+\mathrm{sign}(y)\cdot y^{2})\] _and let the initial set be \(\mathcal{I}=[-1,1]^{2}\). The reachable set consists of four squares, i.e.,_ \[\mathcal{S}=[-3,-1]^{2}\cup[-3,-1]\times[1,3]\cup[1,3]\times[-3,-1]\cup[1,3]^{ 2}.\] _Figure 1 shows the reach set approximation given by (6), for a sample of size \(N=10\,000\) and different degrees \(d\). The caption includes the corresponding uncertainty bound \(\varepsilon\) for confidence \(1-\delta=0.99\) obtained by Conjecture 1._ _We observe that, as intended by construction, all samples are included in \(\hat{S}\). For increasing degrees, \(\hat{S}\) becomes more precise. However, the uncertainty \(\epsilon\) in the covered probability mass increases substantially. Indeed, the bound \(\epsilon\) seems rather conservative since, in all instances, \(\hat{S}\) covers nearly \(100\%\) of \(S\)._ ## 3 Reach Set Approximation with Conformal Prediction Following the reasoning of Section 2, we can expect a sublevel set of the Christoffel polynomial to converge to the support of the distribution. Intuitively, the Christoffel polynomial takes high values where the density is low and low values where the density is high, which makes it a good candidate for a nonconformity function. In this section, we briefly recall relevant results from conformal prediction and instantiate them to the special case of estimating the support of a distribution, which in our setting is equivalent to approximating the reach set \(\mathcal{S}\). Let \(r:\mathbb{R}^{n}\rightarrow\mathbb{R}\) be a nonconformity function. Given a sample \(\mathcal{D}=\left\{\mathbf{x}^{1},\ldots,\mathbf{x}^{N}\right\}\), the \(p\)-value at \(\mathbf{x}\) is \[p_{value}(\mathbf{x})=\tfrac{1}{N}\Big{|}\big{\{}\ i\ \big{|}\ r(\mathbf{x}^{i})\geq r(\mathbf{x}) \ \big{\}}\Big{|}\] For \(i\in\{0,...,N\}\), the conformal region is defined as \[C_{\mathcal{D}}^{\frac{i}{N}}=\left\{\mathbf{x}\in\mathbb{R}^{n}\ \Big{|}\ p_{value}(\mathbf{x})\geq\tfrac{i}{N}\right\}\] According to conformal prediction theory, see Shafer and Vovk (2008); Angelopoulos and Bates (2021), a new i.i.d. sample \(\mathbf{x}^{N+1}\) satisfies \[\mathbb{P}\left(\mathbf{x}^{N+1}\in C_{\mathcal{D}}^{\frac{i}{N}}\right)\geq 1- \tfrac{i+1}{N+1}. \tag{7}\] Note that in (7), the set \(\mathcal{D}\) is also subject to randomness. In other words, (7) holds on average only if the set \(\mathcal{D}\) is re-sampled for each \(\mathbf{x}^{N+1}\).
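In code, the p-value and the conformal region take the following form for an arbitrary nonconformity function \(r\); this is a generic sketch with names of our choosing, not a library API.

```python
# Generic p-value and conformal region, for any nonconformity function r.
import numpy as np

def p_value(x, samples, r):
    """p(x) = (1/N) * |{ i : r(x^i) >= r(x) }|."""
    scores = np.array([r(xi) for xi in samples])
    return float(np.mean(scores >= r(x)))

def conformal_region(samples, r, i):
    """Membership test for C_D^{i/N} = { x : p(x) >= i/N }, cf. (7)."""
    threshold = i / len(samples)
    return lambda x: p_value(x, samples, r) >= threshold
```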
However, in reachability analysis and data-driven applications more generally, we may be restricted to a single, fixed data set \(\mathcal{D}\). Therefore, we need to take into account the probability on the left-hand side of (7), conditioned on the sample \(\mathcal{D}\). ### Statistical Guarantees In this section, we ensure statistical independence between the nonconformity function \(r\) and the set \(\mathcal{D}\) by splitting it into a training set \(\mathcal{D}_{\text{train}}\) and a calibration set \(\mathcal{D}_{\text{cal}}\). The use of distinct sets of samples from the same measure (i.e., a training set and a calibration set) is essential to ensure the independence of the samples used for computing the p-values and conformal regions from the nonconformity function, which is computed based on the training set, see Angelopoulos and Bates (2021) and Bates et al. (2023). This is a special case of conformal prediction called split conformal prediction or inductive conformal prediction. The computational advantage of this method lies in its requirement to fit the model only once. However, this comes at the cost of statistical efficiency as the method necessitates the division of the data into separate, and therefore smaller, training and calibration data sets. Figure 1: Reach set approximation \(\hat{S}\) for Example 1, using the sublevel set of the empirical Christoffel polynomial in (6) (purple outline) on a sample of size \(N=10000\) (black dots), for different degrees \(d\) and corresponding uncertainty bound \(\varepsilon\), according to Conjecture 1. A smaller calibration set increases the coverage error, while a smaller training set reduces the tightness of the approximation. An alternative to this trade-off will be examined in Section 3.2. Here, we use the training set for computing the empirical Christoffel polynomial, while the calibration set is used to compute the conformal region. This will lead to bounds on the conditional probability \[\mathbb{P}\Big{(}\mathbf{x}^{N+1}\in C_{\mathcal{D}}^{\frac{i}{N}}\ \Big{|}\ \mathcal{D}_{\text{cal}} \Big{)}.\] Note that the theorems in this section apply to any choice of nonconformity function. The following theorem provides PAC guarantees for conformal regions that are defined with suitably chosen probability thresholds \(b_{1},\ldots,b_{N}\). We will afterwards propose values for \(b_{1},\ldots,b_{N}\) that correspond to the special case of set approximation. **Theorem 2** (Thm. 4 from Bates et al. (2023)): _Consider \(N\) uniform random samples \(U_{1},\ldots,U_{N}\ \overset{i.i.d.}{\sim}\ \operatorname{Unif}\bigl{(}[0,1] \bigr{)}\), with order statistics \(U_{(1)}\leq U_{(2)}\leq\ldots\leq U_{(N)}\), and fix any \(\delta\in(0,1)\). Suppose \(0\leq b_{1}\leq b_{2}\leq\ldots\leq b_{N}\leq 1\) are reals such that_ \[\mathbb{P}\left[U_{(1)}\leq b_{1},\ldots,U_{(N)}\leq b_{N}\right]\geq 1-\delta.\] _Let also \(b_{0}=0\). Then for any i.i.d. vector \(\mathbf{x}\) sampled from \(\mu\):_ \[\mathbb{P}\left[\mathbb{P}\Big{(}\mathbf{x}\in C_{\mathcal{D}_{\text{cal}}}^{ \frac{i}{N}}\ \Big{|}\ \mathcal{D}_{\text{cal}}\Big{)}\geq 1-b_{i}\right]\geq 1-\delta \tag{8}\] We propose an analogous theorem to bound the conditional probability from above. **Theorem 3**: _Under the assumptions of Thm.
2, suppose further that the nonconformity function \(r(\mathbf{x})\) is continuous, the measure \(\mu\) is continuous, and that \(\alpha\) is a real such that_ \[\mathbb{P}\bigl{(}U_{(N)}\leq\alpha\bigr{)}\geq 1-\delta.\] _Then for any i.i.d. vector \(\mathbf{x}\) sampled from \(\mu\):_ \[\mathbb{P}\left[\mathbb{P}\Big{(}\mathbf{x}\in C_{\mathcal{D}_{\text{cal}}}^{ \frac{1}{N}}\ \Big{|}\ \mathcal{D}_{\text{cal}}\Big{)}\leq\alpha\right]\geq 1-\delta \tag{9}\] Proof.: Under the assumptions, \(r(\mathbf{x})\) has a continuous distribution. Let \(F_{\mu}\) be the cumulative distribution function of \(r(\mathbf{x})\). Since \(r(\mathbf{x})\) has a continuous distribution, \(F_{\mu}(r(\mathbf{x}))\) follows \(\operatorname{Unif}([0,1])\) and \(F_{\mu}(r(\mathbf{x}^{1})),F_{\mu}(r(\mathbf{x}^{2})),...,F_{\mu}(r(\mathbf{x}^{N}))\) all follow \(\operatorname{Unif}([0,1])\). Without loss of generality, we assume \(r(\mathbf{x}^{1})\leq r(\mathbf{x}^{2})\leq...\leq r(\mathbf{x}^{N})\). Letting \(U_{(N)}=F_{\mu}(r(\mathbf{x}^{N}))\), we obtain \[\mathbb{P}\Big{[}F_{\mu}(r(\mathbf{x}^{N}))\leq\alpha\Big{]}\geq 1-\delta.\] Considering \(\mathbf{x}\) sampled i.i.d. from \(\mu\), we get \[\mathbb{P}\Big{(}\mathbf{x}\in C_{\mathcal{D}_{\text{cal}}}^{\frac{1}{N}}\ \Big{|}\ \mathcal{D}_{\text{cal}}\Big{)}=\mathbb{P}\Big{(}r(\mathbf{x})\leq r(\mathbf{x}^{N}) \ \Big{|}\ \mathcal{D}_{\text{cal}}\Big{)}=F_{\mu}\Big{(}r(\mathbf{x}^{N})\Big{)}\] Combining the latter two results, we obtain (9). We now use the results of Thm. 2 and Thm. 3 to provide a guarantee on the accuracy of the approximated reachable set \(\hat{\mathcal{S}}\) in Algorithm 1. Note that Thm. 3 requires the nonconformity function to be continuous, which is the case for the empirical Christoffel polynomial. **Theorem 4**.: _Suppose that the nonconformity function \(r(\mathbf{x})\) is continuous. \(\forall\delta\in(0,1),\)_ \[\mathbb{P}\left[\mu\left(C_{\mathcal{D}_{\mathrm{cal}}}^{\frac{1}{N}}\right) \geq\exp\left(\frac{\log(\delta)}{N}\right)\right]\geq 1-\delta, \tag{10}\] _If the measure \(\mu\) is continuous, then_ \[\mathbb{P}\left[\exp\left(\frac{\log(1-\delta)}{N}\right)\geq\mu\left(C_{ \mathcal{D}_{\mathrm{cal}}}^{\frac{1}{N}}\right)\right]\geq 1-\delta, \tag{11}\] _Combining these results, we obtain \(\forall\delta\in(0,\nicefrac{{1}}{{2}})\):_ \[\mathbb{P}\left[\exp\left(\frac{\log(1-\delta)}{N}\right)\geq\mu\left(C_{ \mathcal{D}_{\mathrm{cal}}}^{\frac{1}{N}}\right)\geq\exp\left(\frac{\log( \delta)}{N}\right)\right]\geq 1-2\delta. \tag{12}\] Proof.: We instantiate Theorem 2 for a particular choice of \(b_{1},\ldots,b_{N}\). Since we are interested in the support of the measure, we take \(b_{1}\) as the smallest possible value and set the other values \(b_{2},\ldots,b_{N}=1\). To satisfy the conditions of Theorem 2, we first show the following intermediate result: Let \(U_{1},\ldots,U_{N}\)\(\stackrel{{\mathrm{i.i.d.}}}{{\sim}}\)\(\mathrm{Unif}([0,1])\), with order statistics \(U_{(1)}\leq U_{(2)}\leq\ldots\leq U_{(N)}\).
Fixing \(b_{1}=1-\delta^{\frac{1}{N}}\) and \(b_{2}=\cdots=b_{N}=1\), it is straightforward that \[\mathbb{P}\left(U_{(1)}\leq b_{1},\ldots,U_{(N)}\leq b_{N}\right)=\mathbb{P} \left(U_{(1)}\leq b_{1}\right)=1-\mathbb{P}\left(U_{(1)}\geq b_{1}\right).\] Since \(U_{(1)}\) is the smallest of the random variables \(U_{1},\ldots,U_{N}\), \(\mathbb{P}\left(U_{(1)}\geq b_{1}\right)\) is equivalent to all of the \(U_{i}\) being greater or equal to \(b_{1}\): \[1-\mathbb{P}\left(U_{(1)}\geq b_{1}\right)=1-\prod_{i=1}^{N}\mathbb{P}\left(U_{ i}\geq b_{1}\right)=1-(1-b_{1})^{N}=1-\delta.\] Applying the above in Theorem 2, we obtain \[\mathbb{P}\left[\mathbb{P}\Big{(}\mathbf{x}\in C_{\mathcal{D}_{\mathrm{cal}}}^{ \frac{1}{N}}\Big{|}\ \mathcal{D}_{\mathrm{cal}}\Big{)}\geq\exp\left(\frac{\log(\delta)}{N} \right)\right]\geq 1-\delta.\] As \(\mu\left(C_{\mathcal{D}_{\mathrm{cal}}}^{\frac{1}{N}}\right)=\mathbb{P}\left[ \mathbf{x}\in C_{\mathcal{D}_{\mathrm{cal}}}^{\frac{1}{N}}\mid\mathcal{D}_{ \mathrm{cal}}\right]\) we obtain the result in (10). Fixing \(\alpha=\exp\left(\frac{\log(1-\delta)}{N}\right)\), we have \(\mathbb{P}\left[U_{(N)}\leq\alpha\right]=\alpha^{N}=1-\delta\), since \(U_{(N)}\leq\alpha\) means all \(U_{i}\) have to be at most \(\alpha\). Substituting the above value of \(\alpha\) in Theorem 3, we obtain the result in (11). Combining (10) and (11), we obtain the result in (12). **Example 2**.: _We illustrate Algorithm 1 on the running Example 1. We take \(M=10000\) i.i.d. samples from the reachable set \(\mathcal{S}\) by sampling \(M\) i.i.d. points uniformly in \(\mathcal{I}\) and mapping them through \(f\), which we then split into a calibration set of size \(N=2000\) and a training set of size \(M-N\). Figure 2 shows the approximated reachable set produced by Algorithm 1 for various degrees \(d\). Theorem 4 guarantees that with confidence \(1-\delta=99\%\), the coverage error satisfies \(\epsilon\leq 0.002\). Notably, in contrast to the algorithm presented in Devonport et al. (2021), this guarantee is independent of the dimension of the samples \(n\) and the degree of the empirical ``` Algorithm 1: Reach set approximation (without outliers). Input: an i.i.d. data sample \(\mathcal{D}=\{\mathbf{x}^{1},\ldots,\mathbf{x}^{M}\}\), drawn from the reach set \(\mathcal{S}=f(\mathcal{I})\), the degree \(d\), the size \(N\) of the calibration set with \(N<M\). Output: \(\epsilon\)-accurate approximation \(\hat{\mathcal{S}}\) of \(\mathcal{S}\) with confidence \(1-\delta\) and coverage error \(\epsilon=1-\delta^{\nicefrac{{1}}{{N}}}\). Step 1: Construct the training set of \(M-N\) samples and the calibration set of \(N\) samples: \(\mathcal{D}_{\text{train}}=\{\mathbf{x}^{N+1},\ldots,\mathbf{x}^{M}\}\) and \(\mathcal{D}_{\text{cal}}=\{\mathbf{x}^{1},\ldots,\mathbf{x}^{N}\}\). Step 2: Compute the empirical moment matrix \(\widehat{\mathbf{M}}_{d}=\frac{1}{M-N}\sum_{i=N+1}^{M}\mathbf{v}_{d}\left(\mathbf{ x}^{i}\right)\mathbf{v}_{d}\left(\mathbf{x}^{i}\right)^{\top}\), with \(\mathbf{x}^{i}\in\mathcal{D}_{\text{train}}\), and its inverse \(\widehat{\mathbf{M}}_{d}^{-1}\). Step 3: Calculate the threshold \(\alpha=\max_{i=1,\ldots,N}\mathbf{v}_{d}\left(\mathbf{x}^{i}\right)^{\top}\widehat {\mathbf{M}}_{d}^{-1}\mathbf{v}_{d}\left(\mathbf{x}^{i}\right)\), with \(\mathbf{x}^{i}\in\mathcal{D}_{\text{cal}}\). Step 4:
Given the returned \(\widehat{\mathbf{M}}_{d}^{-1}\) and \(\alpha\), record the conformal region: \[C_{\mathcal{D}_{\text{cal}}}^{\frac{1}{N}}=\hat{\mathcal{S}}=\left\{\mathbf{x}\in \mathbb{R}^{n}\;\middle|\;\mathbf{v}_{d}(\mathbf{x})^{\top}\widehat{\mathbf{M}}_{ d}^{-1}\mathbf{v}_{d}(\mathbf{x})\leq\alpha\right\}\] ``` Figure 2: Reach set approximations (outlined in purple) from Example 2, obtained with Algorithm 1, which uses the Christoffel polynomial as a nonconformity function, for \(M=10000\) samples, of which \(N=2000\) are the calibration set (red dots) and the remainder the training set (black dots). Higher degrees \(d\) lead to tighter approximations. Christoffel polynomial \(d\). It only depends on the confidence parameter \(\delta\) and the size \(N\) of the calibration set. Figure 3 shows the same result for \(M=1000\) samples, of which \(N=200\) samples were utilized as a calibration set. Here, the coverage error is bounded by \(\epsilon\leq 0.02\). To empirically verify the theoretical guarantees obtained in Theorem 4, we repeated this experiment 1000 times. The empirical error was computed by checking how many of 10000 fresh samples of the reach set were not contained in the approximated reachable set. In only \(6\) experiments, the coverage error exceeded \(\epsilon=0.02\), confirming that the confidence \(1-\delta\) is greater than \(99\%\)._ ### Avoiding the Calibration Set In this section, we circumvent the split between training and calibration sets by using _transductive conformal prediction_, see Vovk (2013). Transductive conformal prediction is a method used to construct prediction regions for a new data point without relying on a separate training set or calibration set. The calibration set is taken to be the entire training set plus the point at which the function is evaluated; in other words, the nonconformity function is modulated by the new data point. The statistical guarantees of the previous section, and in particular of Theorem 4, hold also for this choice of nonconformity function, with \(\mathcal{D}_{\text{cal}}:=\mathcal{D}\). This approach allows us to use all the available sample points from the measure \(\mu\) to train the Christoffel function and compute the conformal region, but at the price of higher computational cost, as will be discussed below. Let the training set \(\mathcal{D}=\{\mathbf{x}^{1},\mathbf{x}^{2},...,\mathbf{x}^{N}\}\) consist of \(N\) i.i.d. samples from the probability distribution \(\mu\). To compute the p-value at any point \(\mathbf{x}\in\mathbb{R}^{n}\), we add \(\mathbf{x}\) to the set \(\mathcal{D}\) before computing the empirical Christoffel polynomial. Let \(\mathcal{D}_{x}=\mathcal{D}\cup\{\mathbf{x}\}\), let the empirical measure for \(\mathcal{D}_{x}\) be \(\hat{\mu}_{x}\), and let \(\widehat{\mathbf{M}}_{x}\) be its moment matrix. Using \(\mathcal{D}_{x}\) in the empirical Christoffel polynomial, we get the nonconformity function \[r(\mathbf{x})=\Lambda_{\hat{\mu}_{x},d}^{-1}(\mathbf{x})=\mathbf{v}_{d}(\mathbf{x})^{T} \widehat{\mathbf{M}}_{x}^{-1}\mathbf{v}_{d}(\mathbf{x}).\] We now have to evaluate a different empirical Christoffel polynomial each time we evaluate the p-value \[p_{value}(\mathbf{x})=\tfrac{1}{N}\left|\{i\mid\Lambda_{\hat{\mu}_{x},d}^{-1}(\bm {x}^{i})\geq\Lambda_{\hat{\mu}_{x},d}^{-1}(\mathbf{x})\}\right|\] Figure 3: Reach set approximations (outlined in purple) from Example 2, with a reduced sample size of \(M=1000\), of which \(N=200\) are used as a calibration set.
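A naive rendering of this transductive p-value rebuilds and inverts the moment matrix for every query point; the sketch below reuses `monomial_vector` and `empirical_moment_matrix` from the earlier listing, and all names are ours. The incremental Sherman-Morrison evaluation discussed next removes the per-query inversion.

```python
# Naive transductive p-value: the moment matrix of D_x = D ∪ {x} is rebuilt
# (and inverted) for every query x. Reuses monomial_vector and
# empirical_moment_matrix from the earlier sketch.
import numpy as np

def transductive_p_value(x, samples, d):
    D_x = list(samples) + [x]
    M_inv = np.linalg.inv(empirical_moment_matrix(D_x, d))
    lam = lambda z: monomial_vector(z, d) @ M_inv @ monomial_vector(z, d)
    score_x = lam(x)
    # fraction of training points with nonconformity at least that of x
    return float(np.mean([lam(xi) >= score_x for xi in samples]))
```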
In particular, we need to compute a new moment matrix and invert it for each evaluation. This is computationally expensive, on the order of \(\mathcal{O}(s(d)^{3})\). To avoid this, we compute the inverse moment matrix of the set \(\mathcal{D}_{x}\) incrementally using the Sherman-Morrison formula, as proposed by Ducharlet et al. (2022). This allows us to replace the evaluation of \(\Lambda_{\hat{\mu}_{x},d}^{-1}(\mathbf{x})\), which depends on \(\mathbf{x}\), with evaluations of the original Christoffel polynomial \(\Lambda_{\hat{\mu},d}^{-1}(\mathbf{x})\), plus one additional product: \[\Lambda_{\hat{\mu}_{x},d}^{-1}(\mathbf{x})=\frac{\Lambda_{\hat{\mu},d}^{-1}(\mathbf{x} )}{1+\Lambda_{\hat{\mu},d}^{-1}(\mathbf{x})},\quad\Lambda_{\hat{\mu}_{x},d}^{-1}( \mathbf{x}^{i})=\Lambda_{\hat{\mu},d}^{-1}(\mathbf{x}^{i})-\frac{\left(\mathbf{v}_{d}( \mathbf{x})^{\intercal}\mathbf{y}^{i}\right)^{2}}{1+\Lambda_{\hat{\mu},d}^{-1}(\mathbf{x} )}, \tag{13}\] where \(\mathbf{y}^{i}=\widehat{\mathbf{M}}_{d}^{-1}\mathbf{v}_{d}(\mathbf{x}^{i})\) are vectors that can be precomputed. The cost of precomputing \(\Lambda_{\hat{\mu},d}^{-1}(\mathbf{x}^{i})\) and the vectors \(\mathbf{y}^{i}\) is \(\mathcal{O}(Ns(d)^{2})\), with storage requirements \(\mathcal{O}(Ns(d))\). This reduces the cost of evaluating \(\Lambda_{\hat{\mu}_{x},d}^{-1}(\mathbf{x}^{i})\) for a given \(\mathbf{x}\) to \(\mathcal{O}(s(d))\). The resulting cost of evaluating \(p_{value}(\mathbf{x})\) is \(\mathcal{O}(Ns(d)+s(d)^{2})\). **Example 3**: _Building on Example 1, Figure 4 shows the reachable set approximation obtained using transductive conformal prediction with a Christoffel function of degree 15. In this case, we use the same \(M=N=1000\) sample points to train the Christoffel function and compute the set approximation. The guarantees provided by Theorem 4 assert that, using a training set of 1000 samples, the coverage error is below \(0.45\%\) with confidence \(1-\delta=0.99\)._ ## 4 Robustness to Outliers In this section, we address the presence of outliers in the data set. As data may not be very abundant in real-life applications, one may have to work with a calibration set containing outliers without knowing which data point is an outlier and which one is not. The presence of outliers in the training set does not affect the theoretical guarantees obtained Figure 4: Reach set approximation of Example 1 using transductive conformal prediction and a Christoffel polynomial of degree \(d=15\), which avoids the split into training and calibration sets. using conformal prediction theory, though it will affect the tightness of the approximated reachable set. On the other hand, the presence of outliers in the calibration set will impact those guarantees. The following theorem provides PAC guarantees on the reach set approximation even with outliers in the calibration set. Under the assumption that no more than \(p\) outliers are in the calibration set \(\mathcal{D}\), the confidence in the result depends on \(\epsilon\), \(p\), and the size of the calibration set \(N\). **Theorem 5**: _Consider a set of points \(\mathcal{D}=\{\mathbf{x}^{1},\mathbf{x}^{2},...,\mathbf{x}^{N}\}\) containing no more than \(p\) outliers, with \(2p+1<N\), and where the rest of the samples are i.i.d. from a probability measure \(\mu\).
Then for any i.i.d. vector \(\mathbf{x}\) sampled from \(\mu\) and \(\epsilon\in(0,1)\),_ \[\mathbb{P}\bigg{(}\mu\Big{(}C_{\mathcal{D}}^{\frac{p+1}{N}}\Big{)}\geq 1- \epsilon\bigg{)}\geq\sum_{i=p+1}^{N-p}\tbinom{N-p}{i}\epsilon^{i}(1-\epsilon) ^{N-p-i} \tag{14}\] This bound is tight in the sense that for \(p=0\), (14) is identical to the case without outliers, i.e., we obtain (10). **Proof** Let \(\mathcal{D}=\mathcal{D}_{\mathrm{inlier}}\cup\mathcal{D}_{\mathrm{outlier}}\), with \(m\leq p\) being the unknown true size of \(\mathcal{D}_{\mathrm{outlier}}\). Let \(U_{1},\ldots,U_{N-m}\overset{\mathrm{i.i.d.}}{\sim}\operatorname{Unif}([0,1])\), with order statistics \(U_{(1)}\leq U_{(2)}\leq\ldots\leq U_{(N-m)}\). For \(\epsilon\in(0,1)\) let \(b_{1}=\cdots=b_{p+1}=\epsilon\) and \(b_{p+2}=\cdots=b_{N-m}=1\). Then \(\forall m\leq p\): \[\mathbb{P}\left[U_{(1)}\leq b_{1},\ldots,U_{(N-m)}\leq b_{N-m}\right] \geq\mathbb{P}\left[U_{(1)}\leq b_{1},\ldots,U_{(N-p)}\leq b_{N-p}\right]\geq\sum_{i=p+1}^{N-p}\tbinom{N-p}{i}\epsilon^{i}(1-\epsilon)^{N-p-i}.\] The above result is obtained by the following reasoning: let \(0<i\leq N-p\); if we have \(N-p\) random variables \(V_{1},\ldots,V_{N-p}\overset{\mathrm{i.i.d.}}{\sim}\operatorname{Unif}([0,1])\), the probability that exactly \(i\) of them lie below \(\epsilon\) is equal to \(\tbinom{N-p}{i}\epsilon^{i}(1-\epsilon)^{N-p-i}\); therefore, the probability that at least \(p+1\) of them lie below \(\epsilon\) is equal to \(\sum_{i=p+1}^{N-p}\tbinom{N-p}{i}\epsilon^{i}(1-\epsilon)^{N-p-i}\). Let \(\mathbf{x}\) be an i.i.d. vector sampled from \(\mu\). By definition, \[\mu\Big{(}C_{\mathcal{D}_{\mathrm{inlier}}}^{\frac{p+1}{N-m}}\Big{)}=\mathbb{P}\Big{(} \mathbf{x}\in C_{\mathcal{D}_{\mathrm{inlier}}}^{\frac{p+1}{N-m}}\Big{|}\ \mathcal{D}_{\mathrm{inlier}}\Big{)}.\] Using Theorem 2, we get: \[\mathbb{P}\bigg{(}\mu\Big{(}C_{\mathcal{D}_{\mathrm{inlier}}}^{\frac{p+1}{N-m}}\Big{)} \geq 1-\epsilon\bigg{)}\geq\sum_{i=p+1}^{N-p}\tbinom{N-p}{i}\epsilon^{i}(1- \epsilon)^{N-p-i}\] Since \(C_{\mathcal{D}_{\mathrm{inlier}}}^{\frac{p+1}{N-m}}\subseteq C_{\mathcal{D}}^{\frac{ p+1}{N}}\), we have \(\mu\Big{(}C_{\mathcal{D}}^{\frac{p+1}{N}}\Big{)}\geq\mu\Big{(}C_{\mathcal{D}_{ \mathrm{inlier}}}^{\frac{p+1}{N-m}}\Big{)}\), which leads us to (14). Table 1 shows the confidence bound of (14) for different values of the calibration set size and the approximation uncertainty \(\epsilon\) under the assumption that no more than \(5\%\) of the calibration set are outliers. We observe that the confidence rapidly approaches \(100\%\) when the admissible coverage error is above the ratio of outliers; it rapidly drops to \(0\%\) when it is below. **Example 4**: _To evaluate the performance of Algorithm 2 on Example 1, we construct a data set from \(M=1500\) samples of the reach set and substitute \(10\%\) with outliers, i.e., i.i.d. samples outside the reachable set. We use a calibration set of size \(N=500\), and the rest of the samples are used as a training set to compute the empirical Christoffel polynomial. Figure 5 shows the resulting approximation. By Theorem 5, the coverage error is \(\epsilon=0.15\) with confidence \(98.9\%\). To empirically confirm these bounds, as in Example 2, we repeat the experiment 1000 times with different samples. For each experiment, we take 10000 samples of the reach set in order to compute the empirical coverage error.
None of the experiments resulted in an empirical coverage error above \(15\%\), which is consistent with the theoretical guarantee of \(98.9\%\) confidence._ ## 5 Experiments We now turn our focus to the suitability of the empirical Christoffel polynomial as a nonconformity function. \begin{table} \begin{tabular}{r c c c c} \hline \hline & \multicolumn{4}{c}{confidence in \%} \\ \cline{2-5} size \(N\) & \(\epsilon=4\%\) & \(\epsilon=5\%\) & \(\epsilon=6\%\) & \(\epsilon=10\%\) \\ \hline 100 & 33 & 51 & 68 & 96 \\ 500 & 10 & 42 & 77 & 99.99 \\ 1000 & 3 & 37 & 84 & 99.99 \\ 2000 & 0.4 & 31 & 92 & 99.99 \\ \hline \hline \end{tabular} \end{table} Table 1: The confidence bound of (14) for different sizes \(N\) of the calibration set and the desired coverage error \(\epsilon\), for a calibration set with \(5\%\) outliers or less. Figure 5: An approximation of the reach set of Example 1 (purple outline) obtained with Algorithm 2 using a Christoffel polynomial of degree 15, on a data set with \(10\%\) outliers. The training set is shown in black, the calibration set in red. ### Empirical False Positive Rate We start by examining the tightness of the reachable set approximation in Example 2 through the empirical measurement of false positives. We compare the empirical Christoffel polynomial with other prevalent nonconformity functions: one-class SVM, Isolation Forest (Liu et al., 2008), and Local Outlier Factor (LOF), as shown in Figure 6. Only the approximation using LOF seems comparable to that of the Christoffel polynomial, while Isolation Forest exhibits significant variability depending on the random seed. To gauge the number of false positives and assess the accuracy of the reachable set approximation, we generated 10,000 uniformly distributed samples within the domain \([-4,4]^{2}\). The false-positive rate was empirically determined for various degrees \(d\), as shown in Table 2. As observed in earlier plots, a higher degree results in a more accurate fit of the reachable set. The false-positive rates for the other algorithms can also be observed in Table 2 for varying sizes of the training and calibration sets. Consistent with the findings from Figure 6, only the LOF provides results that are comparable in quality to those obtained using the Christoffel polynomial. To further demonstrate the effectiveness of the empirical Christoffel polynomial as a nonconformity function, we examine its robustness in the presence of outliers within the training set. Although the theoretical guarantees discussed in this article, and in conformal prediction in general, hold for any choice of nonconformity function, even with outliers in the training set, the presence of these outliers can impact the accuracy of the model. To compare the empirical Christoffel polynomial with LOF, we conducted two experiments. In the first experiment, we considered the region \([-1,1]^{2}\) as the reachable set to approximate. We focused on comparing the performance of the algorithms under the presence of outliers in the training set. We generated a training set of size 1,200 containing 200 outliers and a calibration set of size 200, all belonging to the reachable set. The second experiment was similar to the first one, with a star-shaped region as the reachable set. We generated a training set of size 900 containing 100 outliers and a calibration set of size 200.
Figure 9 illustrates how the empirical Christoffel polynomial and LOF approximate the true reachable set in the presence of outliers. \begin{table} \begin{tabular}{l r r r r r} \hline Nonconformity function & \(|\mathcal{D}|\) & \(|\mathcal{D}_{\text{train}}|\) & \(|\mathcal{D}_{\text{cal}}|\) & \(\epsilon\) in \% & FP\% \\ \hline Christoffel with \(d=6\) & 10000 & 8000 & 2000 & 0.2 & 49.5 \\ Christoffel with \(d=10\) & & & & 0.2 & 39.5 \\ Christoffel with \(d=15\) & & & & 0.2 & 11.7 \\ Christoffel with \(d=18\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & 0.2 & 7.2 \\ LOF score & \(\vdots\) & \(\vdots\) & \(\vdots\) & 0.2 & 3.4 \\ IsolationForest score & & & & 0.2 & 92.9 \\ Oneclass SVM score & & & & 0.2 & 65.7 \\ Christoffel with \(d=6\) & 1000 & 800 & 200 & 2.2 & 44.6 \\ Christoffel with \(d=10\) & & & & 2.2 & 20 \\ Christoffel with \(d=15\) & & & & 2.2 & 12.7 \\ Christoffel with \(d=18\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & 2.2 & 12.4 \\ LOF score & \(\vdots\) & \(\vdots\) & \(\vdots\) & 2.2 & 10.6 \\ IsolationForest score & & & & 2.2 & 86.8 \\ Oneclass SVM score & & & & 2.2 & 60.7 \\ Transduct. Christ. with \(d=15\) & 1000 & 1000 & 1000 & 0.5 & 46.6 \\ \hline \end{tabular} \(\epsilon\) = Coverage error, at least \(1-\epsilon\) of the measure is covered; FP\% = False positives in \%, measured by uniform sampling of a sufficiently large bounding box and counting samples in \(\hat{S}\setminus S\) \end{table} Table 2: Experimentally estimated false-positive rates for different algorithms applied to the reach set approximation of Example 1, with confidence \(1-\delta=99\%\) Figure 6: Reach set approximations (purple outline) of Example 1 using one-class SVM, Isolation Forest, and Local Outlier Factor (LOF) as nonconformity functions, for a common training set of size 800 (black dots) and calibration set of size 200 (red dots). Figures 7 and 8 display the performance of both the empirical Christoffel polynomial and LOF in handling outliers within the training set across distinct and complex geometric situations. Figure 8: A comparison of reach set approximation (purple outline) using the Christoffel polynomial with degree 15 and LOF for the second experiment, which targets a star-shaped region. Training set samples are in black and calibration set in red. This plot highlights the performance and robustness of both methods when encountering outliers in a complex geometric scenario, illustrating the effectiveness of the empirical Christoffel polynomial under the presence of outliers. Figure 7: Comparison of reachable set approximations for the empirical Christoffel polynomial (degree 10) and LOF in the first experiment, with the region \([-1,1]^{2}\) as the target. The training set, containing outliers, is represented by black dots, while the calibration set is shown in red. The plot highlights the performance differences and robustness of both methods in the presence of outliers, demonstrating how the empirical Christoffel polynomial is far more robust. When employed as a nonconformity function, the empirical Christoffel polynomial demonstrated greater robustness in the presence of outliers across both experiments. ### Duffing Oscillator The Duffing oscillator is a nonlinear mathematical model that captures the behavior of a system that oscillates when subject to an external force. It has been used to model a variety of physical systems, from mechanical vibrations to biological dynamics.
The Duffing oscillator is described by the following nonlinear second-order differential equation: \[\ddot{x}=-\delta\dot{x}+\alpha x-\beta x^{3}+\gamma\cos(\omega t)\] Similar to Devonport et al. (2021), we take \(\alpha=1\), \(\beta=1,\delta=0.05,\gamma=0.4\) and \(\omega=1.3\). We choose the initial set to be \(\mathcal{I}=[-0.95,1.05]\times[-0.05,0.05]\). Figure 9 shows an approximation of the reach set, computed with the Christoffel function as nonconformity function for different degrees. We observe that for increasing degrees, the approximation is more precise and is able to recover holes. The results are comparable to those reported by Devonport et al. (2021), where no split into training and calibration sets was carried out. ## 6 Conclusion In this paper, we studied reach set approximation for the analysis of dynamical systems, based on conformal prediction. We considered for the first time the use of the Christoffel function as a nonconformity function, thanks to its attractive properties in set and density approximation. Our conformal prediction approach provides stronger and more sample-efficient guarantees on reach set approximation, and we proposed a version of reach set approximation that is robust to outliers, compared to the most relevant approaches in the literature. We exploited an incremental form of the Christoffel function for transductive conformal prediction that avoids splitting the data into training and calibration sets. Extensive illustrative numerical experiments show the effectiveness and the performance of our proposed approach and its associated algorithms. Figure 9: Reach set approximation (purple outline) of the Duffing oscillator using the Christoffel polynomial with the data set split into training (black) and calibration set (red), for different degrees \(d\) of the Christoffel function, with corresponding coverage error \(\varepsilon\) for confidence \(1-\delta=0.99\). The theoretical results that we presented here in the context of reach set approximation are equally valid for approximating compact sets, or the support of probability distributions, in other application domains. Naturally, the computation of the Christoffel function is subject to numerical errors. The impact of such numerical issues will be studied in future work. This work has been supported by the French government under the "France 2030" program as part of the SystemX Technological Research Institute. This work was conducted as part of the Confiance.AI program, which aims to develop innovative solutions for enhancing the reliability and trustworthiness of AI-based systems.
2309.16590
Vertex-primitive digraphs with large fixity
The relative fixity of a digraph $\Gamma$ is defined as the ratio between the largest number of vertices fixed by a nontrivial automorphism of $\Gamma$ and the number of vertices of $\Gamma$. We characterize the vertex-primitive digraphs whose relative fixity is at least $1/3$, and we show that there are only finitely many vertex-primitive digraphs of bounded out-valency and relative fixity exceeding a positive constant.
Marco Barbieri, Primož Potočnik
2023-09-28T16:50:44
http://arxiv.org/abs/2309.16590v1
# Vertex-primitive digraphs with large fixity ###### Abstract. The relative fixity of a digraph \(\Gamma\) is defined as the ratio between the largest number of vertices fixed by a nontrivial automorphism of \(\Gamma\) and the number of vertices of \(\Gamma\). We characterize the vertex-primitive digraphs whose relative fixity is at least \(\frac{1}{3}\), and we show that there are only finitely many vertex-primitive digraphs of bounded out-valency and relative fixity exceeding a positive constant. Key words and phrases:Vertex-primitive, fixity, product action, digraph, graph 2010 Mathematics Subject Classification: 05C25, 20B25 ## 1. Introduction Throughout this paper, we use the word _digraph_ to denote a combinatorial structure \(\Gamma\) determined by a finite nonempty set of _vertices_\(V\Gamma\) and a set of _arcs_\(A\Gamma\subseteq V\Gamma\times V\Gamma\), sometimes also viewed as a binary relation on \(V\Gamma\). If the set \(A\Gamma\) is symmetric (when viewed as a binary relation on \(V\Gamma\)), then the digraph \(\Gamma\) is called a _graph_ and unordered pairs \(\{u,v\}\) such that \((u,v)\) and \((v,u)\) are arcs are called _edges_ of \(\Gamma\). The _fixity_ of a finite digraph \(\Gamma\), denoted by \(\operatorname{Fix}(\Gamma)\), is defined as the largest number of vertices that are left fixed by a nontrivial automorphism of \(\Gamma\), while the _relative fixity of \(\Gamma\)_ is defined as the ratio \[\operatorname{RelFix}(\Gamma)=\frac{\operatorname{Fix}(\Gamma)}{|V\Gamma|}\,.\] The notion of fixity of (di)graphs was introduced in a 2014 paper of L. Babai [2] (see also [4]), where several deep results regarding the fixity of strongly regular graphs were proved (these results were later used in his work on the graph isomorphism problem [3]). To convey the flavour of his work, let us mention [4, Theorem 1.6], which states that the relative fixity of a strongly regular graph (other than a complete bipartite graph or the line graph of a complete graph) is at most \(\frac{7}{8}\). The study of the fixity of graphs continued in a series of papers [5, 19, 25] by P. Spiga and coauthors (including the authors of the present paper), where the problem was studied in the context of vertex-transitive graphs of fixed valency. Let us mention that fixity is a well studied parameter in the slightly more general context of permutation groups, where, instead of fixity, it is more common to consider the dual notion of _minimal degree_ of a permutation group \(G\), defined by \[\mu(G)=\min_{g\in G\setminus\{1_{G}\}}\left|\operatorname{supp}(g)\right|,\] where \(\operatorname{supp}(g)\) denotes the set of all non-fixed points of \(g\in G\). Note that the fixity of a digraph \(\Gamma\) and the minimal degree of its automorphism group \(\operatorname{Aut}(\Gamma)\) are related via the equality \[\operatorname{Fix}(\Gamma)=|V(\Gamma)|-\mu(\operatorname{Aut}(\Gamma))\,.\] A vast majority of papers on the topic of minimal degree of permutation groups (including the original work of Jordan on primitive permutation groups of minimal degree \(c\) for a fixed constant \(c\)) concentrates on _primitive permutation groups_ (see, for example, [1, 8, 13, 20, 23, 24]). It is thus natural to ask the following question: **Question 1**.: What can be said about a digraph with large relative fixity whose automorphism group acts primitively on the vertex-set? In this paper, we answer this question in the setting where the relative fixity is more than \(\frac{1}{3}\).
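To make the definitions of fixity and minimal degree concrete, a brute-force computation for a small digraph can be sketched as follows; the example digraph and all names are ours, and the approach is of course only feasible for very few vertices.

```python
# Brute-force Fix(Γ) for a small digraph; note μ(Aut(Γ)) = |VΓ| - Fix(Γ).
from itertools import permutations

def automorphisms(vertices, arcs):
    """Yield all bijections g of the vertex set with (u^g, v^g) an arc iff (u, v) is."""
    arcs = set(arcs)
    for image in permutations(vertices):
        g = dict(zip(vertices, image))
        if {(g[u], g[v]) for (u, v) in arcs} == arcs:
            yield g

def fixity(vertices, arcs):
    """Largest number of fixed vertices over all nontrivial automorphisms."""
    counts = [sum(1 for v in vertices if g[v] == v)
              for g in automorphisms(vertices, arcs)
              if any(g[v] != v for v in vertices)]
    return max(counts, default=0)   # default also covers digraphs with trivial Aut

# The directed 4-cycle admits only the four rotations, so Fix = 0
# and the relative fixity is 0.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(fixity(list(range(4)), cycle))   # 0
```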
In our analysis, we rely heavily on the recent classification of primitive permutation groups of minimal degree at most \(\frac{2}{3}\) of the degree of the permutation group from [8]. The essence of our work thus consists of determining the digraphs upon which the permutation groups from this classification act. Before stating our main result, let us first introduce a few graph theoretical concepts and constructions. First, recall that the _direct product of the family of digraphs_\(\Gamma_{1},\ldots,\Gamma_{r}\) (sometimes also called the _tensor product_ or the _categorical product_) is the digraph \(\Gamma_{1}\times\ldots\times\Gamma_{r}\) whose vertex-set is the cartesian product \(V\Gamma_{1}\times\ldots\times V\Gamma_{r}\) and whose arc-set is \[A(\Gamma_{1}\times\ldots\times\Gamma_{r})=\left\{\left((u_{1},\ldots,u_{r}),\, (v_{1},\ldots,v_{r})\right)\big{|}\,(u_{i},v_{i})\in A\Gamma_{i}\text{ for all }i\in\{1,\ldots,r\}\right\}\,.\] Recall also that a _union of digraphs_\(\Gamma_{1}\) and \(\Gamma_{2}\) is the digraph whose vertex-set and arc-set are the sets \(V\Gamma_{1}\cup V\Gamma_{2}\) and \(A\Gamma_{1}\cup A\Gamma_{2}\), respectively. Note that when \(\Gamma_{1}\) and \(\Gamma_{2}\) share the same vertex-set, their union is then obtained simply by taking the union of their arc-sets. Further, for a positive integer \(m\), let \(\mathbf{L}_{m}\) and \(\mathbf{K}_{m}\) denote the _loop graph_ and the _complete graph_ on a vertex-set \(V\) of cardinality \(m\) and with arc-sets \(\{(v,v):v\in V\}\) and \(\{(u,v):u,v\in V,u\neq v\}\), respectively. We now have all the ingredients needed to present a construction yielding the digraph appearing in our main result. **Construction 2**.: Let \(\mathcal{G}=\{\Gamma_{0},\Gamma_{1},\ldots,\Gamma_{k}\}\) be a list of \(k+1\) pairwise distinct digraphs sharing the same vertex-set \(\Delta\). Without loss of generality, we shall always assume that \(\Gamma_{0}=\mathbf{L}_{m}\) with \(m=|\Delta|\). Further, let \(r\) be a positive integer, and let \(\mathcal{J}\) be a subset of the \(r\)-fold cartesian power \(X^{r}\), where \(X=\{0,1,\ldots,k\}\). Given this input, construct the digraph \[\mathcal{P}(r,\mathcal{G},\mathcal{J})=\bigcup_{(j_{1},j_{2},\ldots,j_{r}) \in\mathcal{J}}\Gamma_{j_{1}}\times\Gamma_{j_{2}}\times\ldots\times\Gamma_{j _{r}}\] and call it the _merged product action digraph_. **Remark 3**.: We give some examples to convey a flavour of what can be obtained using Construction 2. If \(r=1\), then \(\mathcal{P}(1,\mathcal{G},\mathcal{J})\) is simply the union of some digraphs from the set \(\mathcal{G}\). If \(r=2\) and \(\mathcal{J}=\{(1,0),(0,1)\}\), then \(\mathcal{P}(2,\mathcal{G},\mathcal{J})=\mathbf{L}_{m}\times\Gamma_{1}\cup \Gamma_{1}\times\mathbf{L}_{m}\), which is, in fact, the _Cartesian product_\(\Gamma_{1}\square\Gamma_{1}\). (This product is sometimes called the _box product_, and we refer to [14] for the definition of the Cartesian product.) More generally, if \(\mathcal{J}=\{e_{i}\mid i\in\{1,\ldots,r\}\}\), where \(e_{i}=(0,\ldots,0,1,0,\ldots,0)\) is the \(r\)-tuple with \(1\) in the \(i\)-th component and zeroes elsewhere, then \(\mathcal{P}(r,\mathcal{G},\mathcal{J})=(\Gamma_{1})^{\square r}\), the \(r\)-th Cartesian power of the graph \(\Gamma_{1}\in\mathcal{G}\). More specifically, if \(\Gamma_{1}=\mathbf{K}_{m}\) and \(\mathcal{J}\) is as above, then \(\mathcal{P}(r,\mathcal{G},\mathcal{J})\) is the _Hamming graph_\(\mathbf{H}(r,m)=\mathbf{K}_{m}^{\square r}\).
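A short sketch of Construction 2 on arc sets may also help; digraphs are represented simply by their arc sets over a common vertex set, and all names are ours. The example reproduces the second case of Remark 3 with \(m=2\): \(\mathcal{J}=\{(1,0),(0,1)\}\) yields \(\mathbf{K}_{2}\,\square\,\mathbf{K}_{2}\), the 4-cycle.

```python
# Construction 2: merged product action digraph P(r, G, J), with digraphs given
# as arc sets over a common vertex set.
from itertools import product

def direct_product(arc_sets):
    """Arcs of Γ_{j1} x ... x Γ_{jr}: an arc in every coordinate simultaneously."""
    return {(tuple(a[0] for a in arcs), tuple(a[1] for a in arcs))
            for arcs in product(*arc_sets)}

def merged_product_action_digraph(G, J):
    """Union over (j_1, ..., j_r) in J of the direct products G[j_1] x ... x G[j_r]."""
    result = set()
    for j in J:
        result |= direct_product([G[ji] for ji in j])
    return result

# G = [L_2, K_2]; J = {(1, 0), (0, 1)} gives K_2 x L_2 ∪ L_2 x K_2 = K_2 □ K_2.
L2 = {(v, v) for v in range(2)}
K2 = {(u, v) for u in range(2) for v in range(2) if u != v}
print(sorted(merged_product_action_digraph([L2, K2], [(1, 0), (0, 1)])))
```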
While \(\mathcal{J}\) can be an arbitrary set of \(r\)-tuples in \(X^{r}\), we will be mostly interested in the case where \(\mathcal{J}\subseteq X^{r}\) is invariant under the induced action of some permutation group \(H\leq\operatorname{Sym}(r)\) on the set \(X^{r}\) given by the rule \[(j_{1},j_{2},\ldots,j_{r})^{h}=(j_{1h^{-1}},j_{2h^{-1}},\ldots,j_{rh^{-1}})\,.\] (Throughout this paper, in the indices, we choose to write \(ih^{-1}\) instead of \(i^{h^{-1}}\) for improved legibility.) We shall say that \(\mathcal{J}\) is an \(H\)_-invariant subset of \(X^{r}\)_ in this case. A subset \(\mathcal{J}\subseteq X^{r}\) which is \(H\)-invariant for some _transitive_ subgroup of \(\operatorname{Sym}(r)\) will be called _homogeneous_. The last example of Remark 3 justifies the introduction of the following new family of graphs. **Definition 4**.: Let \(r,m\) be two positive integers, and let \(\mathcal{J}\subseteq\{0,1\}^{r}\) be a homogeneous set. The graph \(\mathcal{P}\left(r,\{\mathbf{L}_{m},\mathbf{K}_{m}\},\mathcal{J}\right)\) is called a _generalised Hamming graph_ and is denoted by \(\mathbf{H}(r,m,\mathcal{J})\). **Remark 5**.: The generalised Hamming graphs \(\mathbf{H}(r,m,\mathcal{J})\), where \(\mathcal{J}\) is \(H\)-invariant, are precisely the unions of orbital graphs for the group \(\operatorname{Sym}(m)\operatorname{wr}H\) endowed with the product action (see Lemma 18 for further details). Furthermore, a homogeneous set \(\mathcal{J}\) is said to be _Hamming_ if \[\mathcal{J}=\bigcup_{h\in H}\left((X\backslash\{0\})^{a}\times X^{b}\times\{0 \}^{r-a-b}\right)^{h}\,,\] for some nonnegative integers \(a,b\) such that \(a+b\leq r\) and a transitive group \(H\leq\operatorname{Sym}(r)\). It is said to be _non-Hamming_ otherwise. **Remark 6**.: Let \(\mathcal{P}(r,\mathcal{G},\mathcal{J})\) be a merged product action digraph, where the digraphs in \(\mathcal{G}\) have \(m\) vertices, and where \(\mathcal{J}\) is a Hamming set. Build \(\mathcal{J}^{\prime}\subseteq\{0,1\}^{r}\) from \(\mathcal{J}\) by substituting any nonzero entry of a sequence in \(\mathcal{J}\) with \(1\). Then \[\mathcal{P}\left(r,\mathcal{G},\mathcal{J}\right)=\mathcal{P}\left(r,\{ \mathbf{L}_{m},\mathbf{K}_{m}\},\mathcal{J}^{\prime}\right)\,.\] In particular, a generalised Hamming graph arises from Construction 2 if and only if \(\mathcal{J}\) is a Hamming set. **Remark 7**.: The ordering of the Cartesian components in the definition of a Hamming set does not matter: indeed, a permutation of the components corresponds to a conjugation of the group \(H\) in \(\operatorname{Sym}(r)\), thus defining isomorphic digraphs in Construction 2. We are ready to state our main result. **Theorem 8**.: _Let \(\Gamma\) be a finite vertex-primitive digraph with at least one arc. Then_ \[\operatorname{RelFix}(\Gamma)>\frac{1}{3}\] _if and only if one of the following occurs:_ 1. \(\Gamma\) _is a generalised Hamming graph_ \(\mathbf{H}(r,m,\mathcal{J})\)_, with_ \(m\geq 4\)_, and_ \[\operatorname{RelFix}(\Gamma)=1-\frac{2}{m}\,;\] 2. \(\Gamma\) _is a merged product action graph_ \(\mathcal{P}(r,\mathcal{G},\mathcal{J})\)_, where_ \(r\geq 1\)_, where_ \(\mathcal{J}\) _is a non-Hamming subset of_ \(X^{r}\) _with_ \(X=\{0,1,\ldots,|\mathcal{G}|-1\}\)_, and where_ \(\mathcal{G}\) _is as in one of the following:_ 1.
\(\mathcal{G}=\{\mathbf{J}(m,k,i)\mid i\in\{0,1,\ldots,k\}\}\) _is the family of distance-_\(i\) _Johnson graphs, where_ \(k,m\) _are fixed integers such that_ \(k\geq 2\) _and_ \(m\geq 2k+2\) _(see Section_ 4.2 _for details), and_ \[\operatorname{RelFix}(\Gamma)=1-\frac{2k(m-k)}{m(m-1)}\,;\]
(b) \(\mathcal{G}=\{\mathbf{QJ}(2m,m,i)\mid i\in\{0,1,\ldots,\lfloor m/2\rfloor\}\}\) _is the family of squashed distance-_\(i\) _Johnson graphs, where_ \(m\) _is a fixed integer with_ \(m\geq 4\) _(see Section_ 4.3 _for details), and_ \[\operatorname{RelFix}(\Gamma)=\frac{1}{2}\left(1-\frac{1}{2m-1}\right)\,;\]
(c) \(\mathcal{G}=\{\mathbf{L}_{m},\Gamma_{1},\Gamma_{2}\}\)_, where_ \(\Gamma_{1}\) _is a strongly regular graph listed in Section_ 4.4_,_ \(\Gamma_{2}\) _is its complement, and_ \[\operatorname{RelFix}(\Gamma)=\operatorname{RelFix}(\Gamma_{1})\] _(the relative fixities are collected in Table_ 1_)._

**Remark 9**.: Although we do not assume that a vertex-primitive digraph \(\Gamma\) in Theorem 8 is a graph, the assumption of large relative fixity forces it to be such. In other words, every vertex-primitive digraph of relative fixity larger than \(\frac{1}{3}\) is a graph.

**Remark 10**.: The relative fixity can be arbitrarily close to \(1\). Indeed, this can be achieved by choosing a generalised Hamming graph \(\mathbf{H}(r,m,\mathcal{J})\) with \(m\) arbitrarily large.

By analysing the vertex-primitive graphs of relative fixity more than \(\frac{1}{3}\), one can notice that the out-valency of these graphs must grow as the number of vertices grows. More explicitly, a careful inspection of the families in Theorem 8 leads to the following result, the proof of which we leave out.

**Remark 11**.: There exists a constant \(C\) such that every finite connected vertex-primitive digraph \(\Gamma\) with \[\operatorname{RelFix}(\Gamma)>\frac{1}{3}\] satisfies \[\operatorname{val}(\Gamma)\geq C\log\left(|V\Gamma|\right)\,.\]

Observe that, for the Hamming graphs \(\mathbf{H}(r,m)\) with \(m\geq 4\), we have that \[\operatorname{val}\left(\mathbf{H}(r,m)\right)=r(m-1)\geq r\log(m)=\log\left(|V\mathbf{H}(r,m)|\right)\,.\] In particular, as both expressions are linear in \(r\), the logarithmic bound in Remark 11 is the best that can be achieved. One of the consequences of Remark 11 is that for every positive integer \(d\) there exist only finitely many connected vertex-primitive digraphs of out-valency at most \(d\) and relative fixity exceeding \(\frac{1}{3}\). As Theorem 12 and Corollary 13 show, this remains true if \(\frac{1}{3}\) is substituted by an arbitrary positive constant. We thank P. Spiga for providing us with the main ideas used in the proof.

**Theorem 12**.: _Let \(\alpha\) and \(\beta\) be two positive constants, and let \(\mathcal{F}\) be a family of quasiprimitive permutation groups \(G\) on \(\Omega\) satisfying:_ 1. \(\mu(G)\leq(1-\alpha)|\Omega|\)_; and_ 2. \(|G_{\omega}|\leq\beta\) _for every_ \(\omega\in\Omega\)_._ _Then \(\mathcal{F}\) is a finite family._

**Corollary 13**.: _Let \(\alpha\) be a positive constant, and let \(d\) be a positive integer. There are only finitely many vertex-primitive digraphs of out-valency at most \(d\) and relative fixity exceeding \(\alpha\)._

The proof of Theorem 8 can be found in Section 5, while Theorem 12 and Corollary 13 are proved in Section 6.

## 2. Basic concepts and notations

### Product action

We start by recalling the definition of a wreath product and its product action. By doing so, we also settle the notation for the rest of the paper.
We refer to [12, Section 2.6 and 2.7] for further details. Let \(H\) be a permutation group on a finite set \(\Omega\). Suppose that \(r=|\Omega|\), and, without loss of generality, identify \(\Omega\) with the set \(\{1,2,\ldots,r\}\). For an arbitrary set \(X\), we may define a _permutation action of \(H\) of rank \(r\) over \(X\)_ as the action of \(H\) on the set \(X^{r}\) given by the rule \[(x_{1},x_{2},\ldots,x_{r})^{h}=(x_{1h^{-1}},x_{2h^{-1}},\ldots,x_{rh^{-1}})\,.\] Let \(K\) be a permutation group on a set \(\Delta\). We can consider the permutation action of \(H\) of rank \(r\) over \(K\) by letting \[(k_{1},k_{2},\ldots,k_{r})^{h}=(k_{1h^{-1}},k_{2h^{-1}},\ldots,k_{rh^{-1}})\quad\text{for all }(k_{1},k_{2},\ldots,k_{r})\in K^{r},\ h\in H\,.\] If we denote by \(\vartheta\) the homomorphism \(H\to\operatorname{Aut}(K^{r})\) corresponding to this action, then the _wreath product of \(K\) by \(H\)_, in symbols \(K\operatorname{wr}H\), is the semidirect product \(K^{r}\rtimes_{\vartheta}H\). We call \(K^{r}\) the _base group_, and \(H\) the _top group_ of this wreath product. Note that the base and the top group are both embedded into \(K\operatorname{wr}H\) via the monomorphisms \[(k_{1},k_{2},\ldots,k_{r})\mapsto((k_{1},k_{2},\ldots,k_{r}),1_{H})\] and \[h\mapsto((1_{K},1_{K},\ldots,1_{K}),h)\,.\] In this way, we may view the base and the top group as subgroups of the wreath product and identify an element \(((k_{1},k_{2},\ldots,k_{r}),h)\in K\operatorname{wr}H\) with the product \((k_{1},k_{2},\ldots,k_{r})h\) of \((k_{1},k_{2},\ldots,k_{r})\in K^{r}\) and \(h\in H\) (both viewed as elements of the group \(K\operatorname{wr}H\)). The wreath product \(K\operatorname{wr}H\) can be endowed with an action on \(\Delta^{r}\) by letting \[(\delta_{1},\delta_{2},\ldots,\delta_{r})^{(k_{1},k_{2},\ldots,k_{r})h}=\left(\delta_{1}^{k_{1}},\delta_{2}^{k_{2}},\ldots,\delta_{r}^{k_{r}}\right)^{h}=\left(\delta_{1h^{-1}}^{k_{1h^{-1}}},\delta_{2h^{-1}}^{k_{2h^{-1}}},\ldots,\delta_{rh^{-1}}^{k_{rh^{-1}}}\right)\,,\] for all \((\delta_{1},\delta_{2},\ldots,\delta_{r})\in\Delta^{r},(k_{1},k_{2},\ldots,k_{r})\in K^{r}\), and \(h\in H\). We call this action the _product action of the wreath product \(K\operatorname{wr}H\) on \(\Delta^{r}\)_. We recall the condition for a wreath product endowed with product action to be primitive.

**Lemma 14** ([12, Lemma 2.7A]).: _Let \(K\) be a permutation group on \(\Delta\) and let \(H\) be a permutation group on \(\Omega\). The wreath product \(K\operatorname{wr}H\) endowed with the product action on \(\Delta^{r}\) is primitive if and only if \(H\) is transitive and \(K\) is primitive but not regular._

We now introduce some notation to deal with any subgroup \(G\) of \(\operatorname{Sym}(\Delta)\operatorname{wr}\operatorname{Sym}(\Omega)\) endowed with product action on \(\Delta^{r}\). By abuse of notation, we identify the set \(\Delta\) with \[\left\{\left\{\delta\right\}\times\Delta^{r-1}\,\big{|}\,\delta\in\Delta\right\}\] via the mapping \(\delta\mapsto\left\{\delta\right\}\times\Delta^{r-1}\). We denote by \(G_{\Delta}^{\Delta}\) the permutation group that \(G_{\Delta}\) induces on \(\Delta\), that is, \[G_{\Delta}^{\Delta}\cong G_{\Delta}/G_{(\Delta)}\,.\] (Recall that \(G_{(\Delta)}\) denotes the pointwise stabilizer of \(\Delta\).)
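As a quick sanity check of the product action formula (a worked instance added here for illustration), let \(\Delta=\{1,2,3\}\), \(r=2\), \(k_{1}=(1\,2)\), \(k_{2}=\operatorname{id}\), and let \(h\) be the transposition of the two coordinates. Then \[(1,3)^{(k_{1},k_{2})h}=\left(1^{k_{1}},3^{k_{2}}\right)^{h}=(2,3)^{h}=(3,2)\,,\] in agreement with the right-hand side \(\left(\delta_{1h^{-1}}^{k_{1h^{-1}}},\delta_{2h^{-1}}^{k_{2h^{-1}}}\right)=\left(3^{k_{2}},1^{k_{1}}\right)=(3,2)\).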
Moreover, recalling that every element of \(G\) can be written uniquely as \(gh\), for some \(g\in\operatorname{Sym}(\Delta)^{r}\) and some \(h\in\operatorname{Sym}(\Omega)\), we can define the group homomorphism \[\psi:G\to\operatorname{Sym}(\Omega),\quad gh\mapsto h\,.\] This map defines a new permutational representation of \(G\) acting on \(\Omega\). We denote by \(G^{\Omega}\) the permutation group induced by this action of \(G\) on \(\Omega\), that is, \[G^{\Omega}\cong G/\ker(\psi)\,.\] Recall that a primitive group \(G\), according to the O'Nan-Scott classification (see, for instance, [22, III\((b)(i)\)]), is said to be of _product action type_ if there exists a transitive group \(H\leqslant\operatorname{Sym}(\Omega)\) and a primitive almost simple group \(K\leqslant\operatorname{Sym}(\Delta)\) with socle \(T\) such that, for some integer \(r\geqslant 2\), \[T^{r}\leqslant G\leqslant K\operatorname{wr}H\,,\] where \(T^{r}\) is the socle of \(G\), thus contained in the base group \(K^{r}\). A detailed description of primitive groups of product action type was given by L. G. Kovács in [18].

**Remark 15**.: By [26, Theorem 1.1 \((b)\)], a group \(G\) of product action type is permutationally isomorphic to a subgroup of \(G_{\Delta}^{\Delta}\operatorname{wr}G^{\Omega}\). Therefore, up to a conjugation in \(\operatorname{Sym}(\Delta^{r})\), the group \(K\) can always be chosen as \(G_{\Delta}^{\Delta}\), and \(H\) as \(G^{\Omega}\).

### Groups acting on digraphs

We give a short summary of standard notations for digraphs and graphs. If a subgroup \(G\leqslant\operatorname{Aut}(\Gamma)\) is primitive on \(V\Gamma\), we say that \(\Gamma\) is \(G\)_-vertex-primitive_. In a similar way, if \(G\) is transitive on \(A\Gamma\), we say that \(\Gamma\) is \(G\)_-arc-transitive_. Analogous notions can be defined for graphs, and when \(G=\operatorname{Aut}(\Gamma)\) we drop the prefix \(G\). For any vertex \(v\in V\Gamma\), we denote by \(\Gamma(v)\) its _out-neighbourhood_, that is, the set of vertices \(u\in V\Gamma\) such that \((v,u)\in A\Gamma\). The size of the out-neighbourhood of a vertex \(v\), \(|\Gamma(v)|\), is called the _out-valency of \(v\)_. If \(\Gamma\) is \(G\)-vertex-primitive, for some group \(G\), then the out-valency is independent of the choice of the vertex \(v\), thus we will refer to it as the _out-valency of \(\Gamma\)_, in symbols \(\operatorname{val}(\Gamma)\). Whenever \(\Gamma\) is a graph, _neighbourhood_ and _valency_ can be defined in the same way. An _orbital for \(G\)_ is an orbit of \(G\) in its induced action on \(\Omega\times\Omega\). An _orbital digraph for \(G\)_ is a digraph whose vertex-set is \(\Omega\), and whose arc-set is an orbital for \(G\). An example of an orbital for \(G\) is the _diagonal orbital_ \((\omega,\omega)^{G}\), whose corresponding (disconnected) orbital digraph is called the _diagonal orbital digraph_. We refer to [12, Section 3.2] for further details. Note that an orbital digraph for \(G\) is always \(G\)-arc-transitive, and, conversely, every \(G\)-arc-transitive digraph is an orbital digraph for \(G\). Furthermore, if \(G\leq\operatorname{Aut}(\Gamma)\) is a group of automorphisms of a given digraph \(\Gamma\), then \(\Gamma\) is a union of orbital digraphs for \(G\) acting on \(V\Gamma\). The number of distinct orbital digraphs for \(G\) is called the _permutational rank of \(G\)_. In particular, \(2\)-transitive permutation groups are precisely those of permutational rank \(2\).
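For example (an illustration added here), the group \(G=\operatorname{Sym}(5)\) acting on the ten \(2\)-subsets of \(\{1,\ldots,5\}\) has exactly three orbitals: the diagonal orbital, the pairs \((X,Y)\) with \(|X\cap Y|=1\), and the pairs with \(X\cap Y=\emptyset\). The two nondiagonal orbital graphs are the Johnson graph \(\mathbf{J}(5,2,1)\) (the triangular graph) and its complement, the Petersen graph; in particular, \(G\) has permutational rank \(3\) in this action.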
If \(A\subseteq\Omega\times\Omega\) is an orbital for \(G\), then so is the set \(A^{\ast}=\{(\beta,\alpha)\mid(\alpha,\beta)\in A\}\). If \(A=A^{\ast}\), then the orbital \(A\) is called _self-paired_. Similarly, an orbital digraph is _self-paired_ if its arc-set is a self-paired orbital. Note that any \(G\)-arc-transitive graph is obtained from a self-paired orbital digraph for \(G\).

## 3. Orbital digraphs for wreath products in product action

We are interested in reconstructing the orbital digraphs of a wreath product \(K\operatorname{wr}H\) endowed with product action once the orbital digraphs of \(K\) are known.

**Lemma 16**.: _Let \(K\operatorname{wr}H\) be a wreath product endowed with the product action on \(\Delta^{r}\), and let_ \[\mathcal{G}=\{\Gamma_{0},\Gamma_{1},\ldots,\Gamma_{k}\}\] _be the complete list of the orbital digraphs for \(K\). Then any orbital digraph for \(K\operatorname{wr}H\) is a merged product action digraph of the form_ \[\mathcal{P}\left(r,\mathcal{G},(j_{1},j_{2},\ldots,j_{r})^{H}\right)\,,\] _for a sequence of indices \((j_{1},j_{2},\ldots,j_{r})\in X^{r}\), where \(X=\{0,1,\ldots,k\}\)._

Proof.: Let \(\Gamma\) be an orbital digraph for \(K\operatorname{wr}H\). Suppose that \((u,v)\in A\Gamma\), where \(u=(u_{1},u_{2},\ldots,u_{r})\) and \(v=(v_{1},v_{2},\ldots,v_{r})\). We aim to compute the \(K\operatorname{wr}H\)-orbit of \((u,v)\), and, in doing so, prove that there is a sequence of indices \((j_{1},j_{2},\ldots,j_{r})\in X^{r}\) such that \[A\Gamma=A\mathcal{P}\left(r,\mathcal{G},(j_{1},j_{2},\ldots,j_{r})^{H}\right)\,.\] We start by computing the \(K^{r}\)-orbit of \((u,v)\) (where by \(K^{r}\) we refer to the base group of \(K\operatorname{wr}H\)). Since this action is componentwise, we obtain that \[(u,v)^{K^{r}}=A\left(\Gamma_{j_{1}}\times\Gamma_{j_{2}}\times\ldots\times\Gamma_{j_{r}}\right)\,,\] where \((u_{i},v_{i})\) is an arc of \(\Gamma_{j_{i}}\) for all \(i=1,2,\ldots,r\). The top group \(H\) acts by permuting the components, so that \[(u,v)^{K\operatorname{wr}H}=\bigcup_{(j_{1}^{\prime},j_{2}^{\prime},\ldots,j_{r}^{\prime})\in(j_{1},j_{2},\ldots,j_{r})^{H}}A\left(\Gamma_{j_{1}^{\prime}}\times\Gamma_{j_{2}^{\prime}}\times\ldots\times\Gamma_{j_{r}^{\prime}}\right)\,.\] Therefore, the arc-sets of \(\Gamma\) and \(\mathcal{P}\left(r,\mathcal{G},(j_{1},j_{2},\ldots,j_{r})^{H}\right)\) coincide. As their vertex-sets are both \(\Delta^{r}\), the proof is complete.

Now that we know how to build the orbital digraphs for a permutation group in product action, we ask what we can say about the orbital digraphs of its subgroups.

**Theorem 17**.: _Let \(G\leq\operatorname{Sym}(\Delta)\operatorname{wr}\operatorname{Sym}(\Omega)\) be a primitive group of product action type, and let \(T\) be the socle of \(G_{\Delta}^{\Delta}\). Suppose that \(T\) and \(G_{\Delta}^{\Delta}\) share the same orbital digraphs. Then the orbital digraphs for \(G\) coincide with the orbital digraphs for \(G_{\Delta}^{\Delta}\operatorname{wr}G^{\Omega}\), or, equivalently, for \(T\operatorname{wr}G^{\Omega}\)._

Proof.: Since \(G\) is a primitive group of product action type, we can suppose that \(G\) is a subgroup of \(G_{\Delta}^{\Delta}\operatorname{wr}G^{\Omega}\) with socle \(T^{r}\), where \(r=|\Omega|\). Further, we set \(K=G_{\Delta}^{\Delta}\), \(H=G^{\Omega}\). As \(G\leq K\operatorname{wr}H\), the partition of \(\Delta^{r}\times\Delta^{r}\) into arc-sets of orbital digraphs for \(K\operatorname{wr}H\) is coarser than the one for \(G\).
Hence, our aim is to show that a generic orbital digraph for \(K\operatorname{wr}H\) is also an orbital digraph for \(G\). Let \[\mathcal{G}=\{\Gamma_{0},\Gamma_{1},\ldots,\Gamma_{k}\}\] be the complete list of orbital digraphs for \(T\) acting on \(\Delta\), and let \(X=\{0,1,\ldots,k\}\). Observe that the set of orbital digraphs for \(T^{r}\) can be identified with the Cartesian product of \(r\) copies of \(\mathcal{G}\): indeed, we can bijectively map the generic orbital digraph for \(T^{r}\), say \(\Gamma_{j_{1}}\times\Gamma_{j_{2}}\times\ldots\times\Gamma_{j_{r}}\), for some \((j_{1},j_{2},\ldots,j_{r})\in X^{r}\), to the generic \(r\)-tuple of the Cartesian product \(\mathcal{G}^{r}\) of the form \((\Gamma_{j_{1}},\Gamma_{j_{2}},\ldots,\Gamma_{j_{r}})\). This point of view explains why \(H\) can act on the set of orbital digraphs for \(T^{r}\) with its action of rank \(r\). Observe that the set of orbital digraphs for \(T^{r}\) is equal to the set of orbital digraphs for \(K^{r}\). Moreover, \(T^{r}\) is a subgroup of \(G\), and \(K^{r}\) is a subgroup of \(K\operatorname{wr}H\). Thus the orbital digraphs for \(G\) and for \(K\operatorname{wr}H\) are obtained as suitable unions of the elements of \(\mathcal{G}^{r}\). By Lemma 16, the orbital digraphs for \(K\operatorname{wr}H\) are of the form \[\bigcup_{(j_{1}^{\prime},j_{2}^{\prime},\ldots,j_{r}^{\prime})\in(j_{1},j_{2},\ldots,j_{r})^{H}}\Gamma_{j_{1}^{\prime}}\times\Gamma_{j_{2}^{\prime}}\times\ldots\times\Gamma_{j_{r}^{\prime}}\,,\] for some \((j_{1},j_{2},\ldots,j_{r})\in X^{r}\). Aiming for a contradiction, suppose that \[\Gamma_{j_{1}}\times\Gamma_{j_{2}}\times\ldots\times\Gamma_{j_{r}}\quad\text{and}\quad\Gamma_{i_{1}}\times\Gamma_{i_{2}}\times\ldots\times\Gamma_{i_{r}}\] are two distinct orbital digraphs for \(T^{r}\) that are merged under the action of the top group \(H\), but not under the action of \(G\). The first portion of the assumption yields that there is an element \(h\in H\) such that \[(\Gamma_{j_{1}}\times\Gamma_{j_{2}}\times\ldots\times\Gamma_{j_{r}})^{h}=\Gamma_{i_{1}}\times\Gamma_{i_{2}}\times\ldots\times\Gamma_{i_{r}}\,.\] By definition of \(H=G^{\Omega}\), there is an element in \(G\) of the form \[(g_{1},g_{2},\ldots,g_{r})h\in G\,.\] Recalling that \(g_{i}\in K\) for any \(i=1,2,\ldots,r\), we get \[(\Gamma_{j_{1}}\times\Gamma_{j_{2}}\times\ldots\times\Gamma_{j_{r}})^{(g_{1},g_{2},\ldots,g_{r})h}=\Gamma_{i_{1}}\times\Gamma_{i_{2}}\times\ldots\times\Gamma_{i_{r}}\,.\] Therefore, the merging among these orbital digraphs is also realised under the action of \(G\), a contradiction. By the initial remark, the proof is complete.

## 4. Daily specials

The aim of this section is to give a description of the digraphs appearing in Theorem 8.

### Generalised Hamming graphs

In this section, we clarify Remark 5 and we compute the relative fixity of the generalised Hamming graphs.

**Lemma 18**.: _Let \(H\leqslant\operatorname{Sym}(r)\) be a transitive permutation group, let \(G=\operatorname{Alt}(\Delta)\operatorname{wr}H\) be endowed with the product action on \(\Delta^{r}\), and let \(\Gamma\) be a digraph with vertex-set \(V\Gamma=\Delta^{r}\). Then \(G\leqslant\operatorname{Aut}(\Gamma)\) if and only if \(\Gamma\) is a generalised Hamming graph \(\mathbf{H}(r,m,\mathcal{J})\), where \(|\Delta|=m\) and \(\mathcal{J}\subseteq\{0,1\}^{r}\) is \(H\)-invariant._

Proof.: By applying Lemma 16 and taking the union of the resulting orbital digraphs, we obtain the left-to-right direction of the equivalence.
Let us now deal with the converse implication. Let \(\Gamma=\mathbf{H}(r,m,\mathcal{J})\), where \(|\Delta|=m\) and \(\mathcal{J}\subseteq\{0,1\}^{r}\) is \(H\)-invariant. By Construction 2 and Definition 4, \[\mathbf{H}(r,m,\mathcal{J})=\bigcup_{h\in H}\left(\bigcup_{i=0}^{b}\mathbf{K}_{m}^{a+i}\times\mathbf{L}_{m}^{b+c-i}\right)^{h}\,,\] for some nonnegative integers \(a,b\) with \(a+b\leq r\), where \(c=r-a-b\). As each component of the graphs in parentheses is either \(\mathbf{K}_{m}\), \(\mathbf{L}_{m}\) or \(\mathbf{K}_{m}\cup\mathbf{L}_{m}\), we have that \[\operatorname{Alt}(m)^{r}\leq\operatorname{Aut}\left(\bigcup_{i=0}^{b}\mathbf{K}_{m}^{a+i}\times\mathbf{L}_{m}^{b+c-i}\right)\,.\] Moreover, as \(\mathcal{J}\) is \(H\)-invariant, the action of rank \(r\) that \(H\) induces on \(\Delta^{r}\) preserves the arc-set of \(\mathbf{H}(r,m,\mathcal{J})\). As \(G\) is generated by \(\operatorname{Alt}(m)^{r}\) and \(H\) in their actions on \(\Delta^{r}\), this implies that \(G\leq\operatorname{Aut}(\Gamma)\), as claimed.

Instead of directly computing the relative fixity of \(\mathbf{H}(r,m,\mathcal{J})\), we prove the following stronger result.

**Lemma 19**.: _Let \(K\operatorname{wr}H\) be a wreath product endowed with the product action on \(\Delta^{r}\), and let \(\Gamma\) be a digraph with vertex set \(\Delta^{r}\). Suppose that \(K\operatorname{wr}H\leq\operatorname{Aut}(\Gamma)\). Then_ \[\operatorname{RelFix}(\Gamma)=1-\frac{\mu\left(\operatorname{Aut}(\Gamma)\cap\operatorname{Sym}(\Delta)^{r}\right)}{|V\Gamma|}\,.\] _In particular, the relative fixity of a generalised Hamming graph is_ \[\operatorname{RelFix}\left(\mathbf{H}(r,m,\mathcal{J})\right)=1-\frac{2}{m}\,.\]

Proof.: Suppose that \(|\Delta|=m\); then, by hypothesis, \[K\operatorname{wr}H\leq\operatorname{Aut}(\Gamma)\leq\operatorname{Sym}(m)\operatorname{wr}\operatorname{Sym}(r)\,.\] We claim that an automorphism realizing the minimal degree must be contained in \(\operatorname{Aut}(\Gamma)\cap\operatorname{Sym}(m)^{r}\) (where \(\operatorname{Sym}(m)^{r}\) is the base group of \(\operatorname{Sym}(m)\operatorname{wr}\operatorname{Sym}(r)\)). Indeed, upon choosing an element of minimal degree in \(K\times\{\operatorname{id}\}\times\ldots\times\{\operatorname{id}\}\) and a transposition from the top group in \(\operatorname{Sym}(m)\operatorname{wr}\operatorname{Sym}(r)\), we obtain the inequalities \[\mu\left(\operatorname{Aut}(\Gamma)\cap\operatorname{Sym}(m)^{r}\right)\leq\mu(K)m^{r-1}\leq(m-1)m^{r-1}\leq\min\left\{|\operatorname{supp}(g)|\mid g\in\operatorname{Aut}(\Gamma)\backslash\operatorname{Sym}(m)^{r}\right\}\,.\] This is enough to prove the first portion of the statement. In particular, to compute the relative fixity of \(\mathbf{H}(r,m,\mathcal{J})\), it is enough to look at the action of \(\operatorname{Sym}(m)\) on a single component. Thus, upon choosing a transposition in \(\operatorname{Sym}(m)\times\{\operatorname{id}\}\times\ldots\times\{\operatorname{id}\}\), we obtain \[\operatorname{RelFix}\left(\mathbf{H}(r,m,\mathcal{J})\right)=1-\frac{2m^{r-1}}{m^{r}}=1-\frac{2}{m}\,.\qed\]

### Distance-\(i\) Johnson graphs

The nomenclature dealing with possible generalizations of the Johnson graph is as lush as it is confusing. In this paper, we adopt the one from [16]. Let \(m,k,i\) be integers such that \(m\geq 1\), \(1\leq k\leq m\) and \(0\leq i\leq k\).
A _distance-\(i\) Johnson graph_, denoted by \(\mathbf{J}(m,k,i)\), is the graph whose vertex-set is the family of \(k\)-subsets of \(\{1,2,\ldots,m\}\), and such that two \(k\)-subsets, say \(X\) and \(Y\), are adjacent whenever \(|X\cap Y|=k-i\). The usual Johnson graph is then \(\mathbf{J}(m,k,1)\), and two subsets \(X\) and \(Y\) are adjacent in \(\mathbf{J}(m,k,i)\) if and only if they are at distance \(i\) in \(\mathbf{J}(m,k,1)\).

**Lemma 20**.: _Let \(m,k\) be two positive integers such that \(m\geq 2k+2\). The orbital digraphs of \(\operatorname{Alt}(m)\) and of \(\operatorname{Sym}(m)\) in their action on \(k\)-subsets are the distance-\(i\) Johnson graphs \(\mathbf{J}(m,k,i)\), one for each choice of \(i\in\{0,1,\ldots,k\}\)._

Proof.: Suppose that two \(k\)-subsets \(X\) and \(Y\) are such that \((X,Y)\) is an arc of the considered orbital digraph and \(|X\cap Y|=k-i\), for a nonnegative integer \(i\leq k\). Since \(\operatorname{Alt}(m)\) is \((m-2)\)-transitive and \(2k\leq m-2\), the \(\operatorname{Alt}(m)\)-orbit of the arc \((X,Y)\) contains all the pairs \((Z,W)\), where \(Z\) and \(W\) are \(k\)-subsets with \(|Z\cap W|=k-i\). Therefore, the statement is true for the alternating group. The same proof can be repeated _verbatim_ for \(\operatorname{Sym}(m)\).

**Lemma 21**.: _Let \(m,k,i\) be three positive integers such that \(m\geq 2k+2\) and \(i\neq k\). Then the relative fixity of the distance-\(i\) Johnson graph \(\mathbf{J}(m,k,i)\) is_ \[\operatorname{RelFix}(\mathbf{J}(m,k,i))=1-\frac{2k(m-k)}{m(m-1)}\,.\]

Proof.: Under our assumption, by [15, Theorem 2 (\(a\))], the automorphism group of \(\mathbf{J}(m,k,i)\) is \(\operatorname{Sym}(m)\) in its action on \(k\)-subsets. Its minimal degree is achieved by any transposition (see [13, Section 1]), and it equals \[\mu\left(\operatorname{Sym}(m)\right)=2\binom{m-2}{k-1}\,.\] Hence, we find that \[\operatorname{RelFix}(\mathbf{J}(m,k,i))=1-\frac{2k(m-k)}{m(m-1)}\,.\qed\]

### Squashed distance-\(i\) Johnson graphs

A usual construction in the realm of distance-transitive graphs consists in obtaining smaller examples starting from a distance-transitive graph and identifying vertices at maximal distance. We need to apply this idea to a family of generalised Johnson graphs. Consider the distance-\(i\) Johnson graph \(\mathbf{J}(2m,m,i)\), for some integers \(m\) and \(i\), with \(m\) positive and \(0\leq i\leq m\). We say that two vertices of \(\mathbf{J}(2m,m,i)\) are _disjoint_ if they have empty intersection as \(m\)-subsets. Observe that being disjoint is an equivalence relation, and our definition coincides with the usual notion of being antipodal in \(\mathbf{J}(2m,m,1)\) seen as a metric space. We can build a new graph \(\mathbf{QJ}(2m,m,i)\) whose vertex-set is the set of equivalence classes of the disjointness relation, and such that, if \([X]\) and \([Y]\) are two generic vertices, then \(([X],[Y])\) is an arc in \(\mathbf{QJ}(2m,m,i)\) whenever there is a pair of representatives, say \(X^{\prime}\in[X]\) and \(Y^{\prime}\in[Y]\), such that \((X^{\prime},Y^{\prime})\) is an arc in \(\mathbf{J}(2m,m,i)\). We call \(\mathbf{QJ}(2m,m,i)\) a _squashed distance-\(i\) Johnson graph_. Observe that \(\mathbf{J}(2m,m,i)\) is a regular double cover of \(\mathbf{QJ}(2m,m,i)\). Furthermore, \(\mathbf{QJ}(2m,m,i)\) and \(\mathbf{QJ}(2m,m,m-i)\) are isomorphic graphs, thus we can restrict the range of \(i\) to \(\{0,1,\ldots,\lfloor m/2\rfloor\}\).

**Lemma 22**.: _Let \(m\geq 3\) be an integer.
The orbital digraphs of \(\operatorname{Alt}(2m)\) and of \(\operatorname{Sym}(2m)\) in their primitive actions with stabilizers \((\operatorname{Sym}(m)\operatorname{wr}C_{2})\cap\operatorname{Alt}(2m)\) and \(\operatorname{Sym}(m)\operatorname{wr}C_{2}\), respectively, are the squashed distance-\(i\) Johnson graphs \(\mathbf{QJ}(2m,m,i)\), one for each choice of \(i\in\{0,1,\ldots,\lfloor m/2\rfloor\}\)._

Proof.: To start, we note that the set \(\Omega\) on which the groups are acting can be identified with the set of partitions of the set \(\{1,2,\ldots,2m\}\) into two parts of equal size \(m\). Suppose that \(\{X_{1},X_{2}\}\) and \(\{Y_{1},Y_{2}\}\) are two such partitions and that \((\{X_{1},X_{2}\},\{Y_{1},Y_{2}\})\) is an arc of the orbital digraph we are building, with \[\min\{|X_{1}\cap Y_{1}|,\,|X_{1}\cap Y_{2}|\}=m-i\,,\] for a nonnegative integer \(i\leq\lfloor m/2\rfloor\). To determine the image of \((\{X_{1},X_{2}\},\{Y_{1},Y_{2}\})\) under the group action, it is enough to know the images of \(X_{1}\) and \(Y_{2}\), that is, of at most \(2m-\lceil m/2\rceil\leq 2m-2\) distinct points. By the \((2m-2)\)-transitivity of \(\operatorname{Alt}(2m)\), the \(\operatorname{Alt}(2m)\)-orbit of \((\{X_{1},X_{2}\},\{Y_{1},Y_{2}\})\) contains all the arcs of the form \((\{Z_{1},Z_{2}\},\{W_{1},W_{2}\})\), where \(\{Z_{1},Z_{2}\},\{W_{1},W_{2}\}\in\Omega\) and \[\min\{|Z_{1}\cap W_{1}|,\,|Z_{1}\cap W_{2}|\}=m-i\,.\] To conclude, observe that \(\Omega\) is the set of \(m\)-subsets of \(\{1,2,\ldots,2m\}\) in which two elements are identified if they are disjoint, and that \[\min\{|X_{1}\cap Y_{1}|,\,|X_{1}\cap Y_{2}|\}=m-i\] is the adjacency condition in a squashed distance-\(i\) Johnson graph. As in Lemma 20, the same reasoning can be extended to \(\operatorname{Sym}(2m)\). Therefore, the orbital digraphs of \(\operatorname{Alt}(2m)\) and of \(\operatorname{Sym}(2m)\) in these primitive actions are the squashed distance-\(i\) Johnson graphs \(\mathbf{QJ}(2m,m,i)\), for some \(i\in\{0,1,\ldots,\lfloor m/2\rfloor\}\).

**Lemma 23**.: _Let \(m,i\) be two positive integers such that \(m\geq 3\) and \(i\neq\lfloor m/2\rfloor\). Then the relative fixity of the squashed distance-\(i\) Johnson graph \(\mathbf{QJ}(2m,m,i)\) is_ \[\operatorname{RelFix}(\mathbf{QJ}(2m,m,i))=\frac{1}{2}\left(1-\frac{1}{2m-1}\right)\,.\]

Proof.: Consider \(\mathbf{J}(2m,m,i)\), the regular double cover of \(\mathbf{QJ}(2m,m,i)\). In view of [15, Theorem 2 (\(e\))], the automorphism group of \(\mathbf{J}(2m,m,i)\) is \(\operatorname{Sym}(2m)\times\operatorname{Sym}(2)\), where the central involution swaps pairs of disjoint vertices. It follows that the automorphism group of \(\mathbf{QJ}(2m,m,i)\) is \(\operatorname{Sym}(2m)\). Now, we can immediately verify that the stabilizer of the vertex \(\{\{1,2,\ldots,m\},\{m+1,m+2,\ldots,2m\}\}\) is \(\operatorname{Sym}(m)\operatorname{wr}C_{2}\). The minimal degree of the primitive action of \(\operatorname{Sym}(2m)\) with stabilizer \(\operatorname{Sym}(m)\operatorname{wr}C_{2}\) is \[\mu\left(\operatorname{Sym}(2m)\right)=\frac{1}{4}\left(1+\frac{1}{2m-1}\right)\frac{(2m)!}{m!^{2}}\] (see [8, Theorem 4]). Thus, we find that \[\operatorname{RelFix}(\mathbf{QJ}(2m,m,i))=\frac{1}{2}\left(1-\frac{1}{2m-1}\right)\,.\qed\]

### Strongly regular graphs

We list all the strongly regular graphs appearing as \(\Gamma_{1}\) in Theorem 8 \((ii)(c)\). We divide them according to the socle \(L\) of the almost simple group that acts on them.
Further, the present enumeration corresponds to the enumeration of the groups acting on these graphs listed in Theorem 24 \((e)\) below.

(i) \(L=U_{4}(q)\), \(q\in\{2,3\}\), acting on totally singular \(2\)-dimensional subspaces of the natural module; two vertices of \(\Gamma\) are adjacent if there is a third \(2\)-dimensional subspace that intersects both vertices in a \(1\)-dimensional subspace (see [7, Section 2.2.12]);
(ii) \(L=\Omega_{2m+1}(3)\), \(m\geq 2\), acting on the singular points of the natural module; two vertices of \(\Gamma\) are adjacent if they are orthogonal (see [7, Theorem 2.2.12]);
(iii) \(L=\Omega_{2m+1}(3)\), \(m\geq 2\), acting on the nonsingular points of the natural module; two vertices of \(\Gamma\) are adjacent if the line that connects them is tangent to the quadric where the quadratic form vanishes (see [7, Section 3.1.4]);
(iv) \(L=\operatorname{P\Omega}_{2m}^{\varepsilon}(2)\), \(\varepsilon\in\{+,-\}\), \(m\geq 3\), acting on the singular points of the natural module; two vertices of \(\Gamma\) are adjacent if they are orthogonal (see [7, Theorem 2.2.12]);
(v) \(L=\operatorname{P\Omega}_{2m}^{\varepsilon}(2)\), \(\varepsilon\in\{+,-\}\), \(m\geq 2\), acting on the nonsingular points of the natural module; two vertices of \(\Gamma\) are adjacent if they are orthogonal (see [7, Section 3.1.2]);
(vi) \(L=\operatorname{P\Omega}_{2m}^{+}(3)\), \(m\geq 2\), acting on the nonsingular points of the natural module; two vertices of \(\Gamma\) are adjacent if they are orthogonal (see [7, Section 3.1.3]);
(vii) \(L=\operatorname{P\Omega}_{2m}^{-}(3)\), \(m\geq 3\), acting on the singular points of the natural module; two vertices of \(\Gamma\) are adjacent if they are orthogonal (see [7, Section 3.1.3] and [7, Theorem 2.2.12]);
(viii) \(L=\operatorname{P\Omega}_{2m}^{-}(3)\), \(m\geq 2\), acting on the nonsingular points of the natural module; two vertices of \(\Gamma\) are adjacent if they are orthogonal (see [7, Section 3.1.3]).

Table 1 collects the usual parameters of a strongly regular graph, \((v,d,\lambda,\mu)\), and the relative fixities. Recall that \(v\) is the number of vertices, \(d\) is the valency of the graph, \(\lambda\) is the number of common neighbours of two adjacent vertices, and \(\mu\) is the number of common neighbours of two nonadjacent vertices. As \(\mu(G)\) can be found in [8, Theorem 4], the relative fixity is computed as \[\operatorname{RelFix}(\Gamma)=1-\frac{\mu(G)}{v}\,.\]

## 5. Proof of Theorem 8

The primitive permutation groups we are concerned with were classified by T. Burness and R. Guralnick in [8]. We report their result here. For the sake of our proof, we explicitly write the permutational rank of the almost simple groups of Lie type. This information can easily be obtained by combining the complete list of \(2\)-transitive finite permutation groups, first described by P. J. Cameron in [9, Section 5], and the complete list of classical finite permutation groups of permutational rank \(3\), compiled by W. M. Kantor and R. A. Liebler in [17, Theorem 1.1].

**Theorem 24** ([8, Theorem 4]).: _Let \(G\) be a permutation group of degree \(n\) with_ \[\mu(G)<\frac{2n}{3}\,.\] _Then one of the following holds:_
(a) \(\operatorname{Alt}(m)\leq G\leq\operatorname{Sym}(m)\)_, for some_ \(m\geq 3\)_, in its action on_ \(k\)_-subsets, for some_ \(k<m/2\)_;_
(b)
\(G=\operatorname{Sym}(2m)\)_, for some_ \(m\geq 2\)_, in its primitive action with stabilizer_ \(G_{\alpha}=\operatorname{Sym}(m)\operatorname{wr}C_{2}\)_;_
(c) \(G=M_{22}{:}2\) _in its primitive action of degree_ \(22\) _with stabilizer_ \(G_{\alpha}=\operatorname{L}_{3}(4).2_{2}\)_;_
(d) \(G\) _is an almost simple group of socle_ \(L\) _and permutational rank_ \(2\)_, and one of the following occurs:_
(i) \(L=\operatorname{L}_{m}(2)\)_,_ \(m\geq 3\)_, in its natural action;_
(ii) \(L=\operatorname{L}_{m}(3)\)_,_ \(m\geq 3\)_, in its natural action, and_ \(G\) _contains an element of the form_ \((-I_{m-1},I_{1})\)_;_
(iii) \(L=\operatorname{Sp}_{2m}(2)\)_,_ \(m\geq 3\)_, in its action on the singular points of the natural module;_
(iv) \(L=\operatorname{Sp}_{2m}(2)\)_,_ \(m\geq 3\)_, in its action on the right cosets of_ \(\operatorname{SO}_{2m}^{-}(2)\)_;_
(v) \(L=\operatorname{Sp}_{2m}(2)\)_,_ \(m\geq 3\)_, in its action on the right cosets of_ \(\operatorname{SO}_{2m}^{+}(2)\)_;_
(e) \(G\) _is an almost simple group of socle_ \(L\) _and permutational rank_ \(3\)_, and one of the following occurs:_
(i) \(L=U_{4}(q)\)_,_ \(q\in\{2,3\}\)_, in its primitive action on totally singular_ \(2\)_-dimensional subspaces, and_ \(G\) _contains the graph automorphism_ \(\tau\)_;_
(ii) \(L=\Omega_{2m+1}(3)\) _in its action on the singular points of the natural module, and_ \(G\) _contains an element of the form_ \((-I_{2m},I_{1})\) _with a_ \(+\)_-type_ \((-1)\)_-eigenspace;_
(iii) \(L=\Omega_{2m+1}(3)\) _in its action on the nonsingular points of the natural module whose orthogonal complement is an orthogonal space of_ \(-\)_-type, and_ \(G\) _contains an element of the form_ \((-I_{2m},I_{1})\) _with a_ \(-\)_-type_ \((-1)\)_-eigenspace;_
(iv) \(L=\operatorname{P\Omega}_{2m}^{\varepsilon}(2)\)_,_ \(\varepsilon\in\{+,-\}\)_, in its action on the singular points of the natural module, and_ \(G=\operatorname{SO}_{2m}^{\varepsilon}(2)\)_;_
(v) \(L=\operatorname{P\Omega}_{2m}^{\varepsilon}(2)\)_,_ \(\varepsilon\in\{+,-\}\)_, in its action on the nonsingular points of the natural module, and_ \(G=\operatorname{SO}_{2m}^{\varepsilon}(2)\)_;_
(vi) \(L=\operatorname{P\Omega}_{2m}^{+}(3)\) _in its action on the nonsingular points of the natural module, and_ \(G\) _contains an element of the form_ \((-I_{2m-1},I_{1})\) _such that the discriminant of the_ \(1\)_-dimensional_ \(1\)_-eigenspace is a nonsquare;_
(vii) \(L=\operatorname{P\Omega}_{2m}^{-}(3)\) _in its action on the singular points of the natural module, and_ \(G\) _contains an element of the form_ \((-I_{2m-1},I_{1})\)_;_
(viii) \(L=\operatorname{P\Omega}_{2m}^{-}(3)\) _in its action on the nonsingular points of the natural module, and_ \(G\) _contains an element of the form_ \((-I_{2m-1},I_{1})\) _such that the discriminant of the_ \(1\)_-dimensional_ \(1\)_-eigenspace is a square;_
(f) \(G\leq K\operatorname{wr}\operatorname{Sym}(r)\) _is a primitive group of product action type, where_ \(K\) _is a permutation group appearing in parts_ \((a)-(e)\)_, the wreath product is endowed with the product action, and_ \(r\geq 2\)_;_
(g) \(G\) _is an affine group with a regular normal socle_ \(N\)_, which is an elementary abelian_ \(2\)_-group._

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
 & Socle & \(v\) & \(d\) & \(\lambda\) & \(\mu\) & \(\operatorname{RelFix}\) & Comments \\
\hline
\((i)\) & \(U_{4}(2)\) & \(27\) & \(10\) & \(1\) & \(5\) & \(\frac{7}{27}\) & \\
 & \(U_{4}(3)\) & \(112\) & \(30\) & \(2\) & \(10\) & \(\frac{11}{56}\) & \\
\hline
\((ii)\) & \(\Omega_{2m+1}(3)\) & \(\frac{1}{2}(9a^{2}-1)\) & \(\frac{3}{2}(a^{2}-1)\) & \(\frac{1}{2}(a^{2}-9)+2\) & \(\frac{1}{2}(a^{2}-1)\) & \(\frac{a+1}{3a+1}\) & \(a=3^{m-1}\) \\
\((iii)\) & \(\Omega_{2m+1}(3)\) & \(\frac{3a}{2}(3a-1)\) & \((a-1)(3a+1)\) & \(2(a^{2}-a-1)\) & \(2a(a-1)\) & \(\frac{3a^{2}+a+1}{3a(3a-1)}\) & \\
\hline
\((iv)\) & \(\operatorname{P\Omega}_{2m}^{+}(2)\) & \((4b-1)(2b+1)\) & \(2(2b-1)(b+1)\) & \((2b-2)(b+2)+1\) & \((2b-1)(b+1)\) & & \(b=2^{m-2}\) \\
 & \(\operatorname{P\Omega}_{2m}^{-}(2)\) & \(4b^{2}-1\) & \(2(b^{2}-1)\) & \(b^{2}-3\) & \(b^{2}-1\) & \(\frac{2b+1}{4b+1}\) & \\
\hline
\((v)\) & \(\operatorname{P\Omega}_{2m}^{\varepsilon}(2)\) & \(2b(4b-\varepsilon)\) & \(4b^{2}-1\) & \(2(b^{2}-1)\) & \(b(2b+\varepsilon)\) & \(\frac{2b}{4b-\varepsilon}\) & \(\varepsilon=\pm 1\) \\
\((vi)\) & \(\operatorname{P\Omega}_{2m}^{+}(3)\) & \(\frac{3c}{2}(9c-1)\) & \(\frac{3c}{2}(3c-1)\) & \(\frac{c}{2}(3c-1)\) & \(\frac{3c}{2}(c-1)\) & \(3(c+1)\) & \\
\hline
\((vii)\) & \(\operatorname{P\Omega}_{2m}^{-}(3)\) & \(\frac{1}{2}(9c^{2}-1)\) & \(\frac{3}{2}(c^{2}-1)\) & \(\frac{1}{2}(c^{2}-9)+2\) & \(\frac{1}{2}(c^{2}-1)\) & \(\frac{3c+1}{9c+1}\) & \\
\hline
\((viii)\) & \(\operatorname{P\Omega}_{2m}^{-}(3)\) & \(\frac{3c}{2}(9c+1)\) & \(\frac{3c}{2}(3c+1)\) & \(\frac{c}{2}(3c-1)\) & \(\frac{3c}{2}(c+1)\) & \(\frac{9c^{2}+3c-2}{3c(9c+1)}\) & \\
\hline
\end{tabular}
\end{table}
Table 1. Parameters of strongly regular graphs with large fixity.

Proof of Theorem 8.: The proof is split into two independent parts. First, we prove that every vertex-primitive digraph of relative fixity exceeding \(\frac{1}{3}\) belongs to one of the families appearing in Theorem 8. Then, we tackle the problem of computing the relative fixities of the graphs appearing in Theorem 8, thus showing that they indeed all have relative fixity larger than \(\frac{1}{3}\). Assume that \(\Gamma\) is a digraph on \(n\) vertices with at least one arc and with \(\operatorname{RelFix}(\Gamma)>\frac{1}{3}\) such that \(G=\operatorname{Aut}(\Gamma)\) is primitive. If \(\Gamma\) is disconnected, then the primitivity of \(G\) implies that \(\Gamma\cong\mathbf{L}_{n}\). Hence we may assume that \(\Gamma\) is connected. Moreover, \(\operatorname{RelFix}(\Gamma)>\frac{1}{3}\) implies that \(\mu(G)<\frac{2n}{3}\). Hence \(G\) is one of the groups determined in [8] and described in Theorem 24. Suppose that \(G\) is an almost simple group. Then \(G\) is one of the groups appearing in parts \((a)-(e)\) of Theorem 24. Since any \(G\)-vertex-primitive digraph is a union of orbital digraphs for \(G\), the digraphs arising from these cases will be merged product action digraphs \(\mathcal{P}(1,\mathcal{G},\mathcal{J})\) (see Remark 3). Hence, our goal is to consider these almost simple groups in turn and compile their lists of orbital digraphs \(\mathcal{G}\). Let \(G\) be a group as described in Theorem 24\((a)\). Lemma 20 states that the orbital digraphs for \(G\) are the distance-\(i\) Johnson graphs \(\mathbf{J}(m,k,i)\). Assume that \(k=1\), that is, consider the natural action of either \(\operatorname{Alt}(m)\) or \(\operatorname{Sym}(m)\) of degree \(m\). Since this action is \(2\)-transitive, their set of orbital digraphs is \(\mathcal{G}=\{\mathbf{L}_{m},\mathbf{K}_{m}\}\).
In particular, \(\mathcal{P}(1,\mathcal{G},\mathcal{J})=\mathbf{H}(1,m,\mathcal{J})\). This case exhausts the generalised Hamming graphs with \(r=1\), which appear in Theorem 8\((i)\). Therefore, in view of Remark 6, for as long as we suppose \(r=1\), we can also assume that \(\mathcal{J}\) is a non-Hamming homogeneous set. Observe that \(m\geq 4\); otherwise, we would contradict our assumption on the relative fixity. Going back to distance-\(i\) Johnson graphs, to guarantee that \(\mathcal{J}\) is non-Hamming, we have to take \(k\geq 2\). Thus, \[\mathcal{G}=\{\mathbf{J}(m,k,i)\mid i\in\{0,1,\ldots,k\}\}\,,\] which corresponds to Theorem 8\((ii)(a)\). Let \(G=\operatorname{Sym}(2m)\) be a permutation group from Theorem 24\((b)\). If \(m=2\), the degree of \(G\) is \(3\), and the relative fixity of any action of degree \(3\) can either be \(0\) or \(\frac{1}{3}\). Hence, we must suppose that \(m\geq 3\): by Lemma 22, the orbital digraphs for \(G\) are the squashed distance-\(i\) Johnson graphs \(\mathbf{QJ}(2m,m,i)\). We obtain that \[\mathcal{G}=\{\mathbf{QJ}(2m,m,i)\mid i\in\{0,1,\ldots,\lfloor m/2\rfloor\}\}\,,\] as described in Theorem 8\((ii)(b)\). Let \(G=M_{22}{:}2\) in the action described in Theorem 24\((c)\). Consulting the list of all the primitive groups of degree \(22\) in Magma [6] (which is based on the list compiled in [11]), we realize that they are all \(2\)-transitive. Hence, the set of orbital digraphs is \(\mathcal{G}=\{\mathbf{K}_{22},\mathbf{L}_{22}\}\). In particular, all the graphs are generalised Hamming graphs. Let \(G\) be an almost simple group of Lie type appearing in Theorem 24\((d)\). Since all these groups are \(2\)-transitive with a \(2\)-transitive socle \(L\), their orbital digraphs are either \(\mathbf{K}_{m}\) or \(\mathbf{L}_{m}\), where \(m\geq 7\) is the degree of \(G\). Once again, we obtain only generalised Hamming graphs. Let \(G\) be an almost simple group of Lie type described in Theorem 24\((e)\). Any group of permutational rank \(3\) defines two nondiagonal orbital digraphs, and, as such digraphs are arc-transitive and one is the complement of the other, they are strongly regular graphs (see, for instance, [7, Section 1.1.5]). The set of orbital digraphs is of the form \(\mathcal{G}=\{\mathbf{L}_{m},\Gamma_{1},\Gamma_{2}\}\), where we listed the possible \(\Gamma_{1}\) in Section 4.4, and where \(m=|V\Gamma_{1}|\). The graphs described in this paragraph appear in Theorem 8\((ii)(c)\). We have exhausted the almost simple groups from Theorem 24. Hence, we pass to Theorem 24\((f)\). Suppose that \(G\leq K\operatorname{wr}\operatorname{Sym}(r)\) is a primitive group of product action type. We want to apply Theorem 17 to \(G\). The only hypothesis we are missing is that \(T\) and \(G_{\Delta}^{\Delta}\) share the same set of orbital digraphs. We claim that \(T\) and \(K\) induce the same set of orbital digraphs. If \(K\) is either alternating or symmetric, the claim follows from Lemmas 20 and 22. If \(K\) is \(2\)-transitive, then we can observe that its socle \(T\) is also \(2\)-transitive: the socle of \(M_{22}{:}2\) is \(T=M_{22}\) in its natural \(3\)-transitive action, while the socle \(T\) of the almost simple groups of Lie type of permutational rank \(2\) is \(2\)-transitive by [9, Section 5]. In particular, \(K\) and \(T\) both have \(\mathcal{G}=\{\mathbf{L}_{m},\mathbf{K}_{m}\}\) as their set of orbital graphs. Finally, suppose that \(K\) is an almost simple group of permutational rank \(3\). We have that its socle \(T\) is also of permutational rank \(3\) by [17, Theorem 1.1].
Observe that, since any orbital digraph for \(T\) is a subgraph of an orbital digraph for \(K\), the fact that \(K\) and \(T\) both have permutational rank \(3\) implies that they share the same set of orbital digraphs. Therefore, the claim is true. By our claim together with the double inclusion \[T\leq G_{\Delta}^{\Delta}\leq K\,,\] we obtain that \(T,G_{\Delta}^{\Delta}\) and \(K\) all induce the same set of orbital digraphs. Therefore, we can apply Theorem 17 to \(G\): we obtain that \(G\) shares its orbital digraphs with \(T\operatorname{wr}G^{\Omega}\). Therefore, all the \(G\)-vertex-primitive digraphs are unions of orbital digraphs for \(T\operatorname{wr}H\), where \(T\) is the socle type of \(G\) and \(H\) is a transitive permutation group isomorphic to \(G^{\Omega}\). In other words, we have found all the graphs \(\mathcal{P}(r,\mathcal{G},\mathcal{J})\) with \(r\geq 2\) described in Theorem 8. (Recall that, by Definition 4, among the graphs \(\mathcal{P}(r,\mathcal{G},\mathcal{J})\), we find all the generalised Hamming graphs.) Suppose that \(G\) is an affine group with a regular normal socle \(N\), which is an elementary abelian \(2\)-group. We have that \(G\) can be written as the split extension \(N{:}H\), where \(H\) is a group of matrices that acts irreducibly on \(N\). It follows that \(G\) is \(2\)-transitive on \(N\); hence, as \(|N|\geq 4\), the graphs arising in this scenario are \(\mathbf{L}_{|N|},\mathbf{K}_{|N|}\) and \(\mathbf{L}_{|N|}\cup\mathbf{K}_{|N|}\), which are generalised Hamming graphs. We have completed the first part of the proof, showing that the list of vertex-primitive digraphs appearing in Theorem 8 is exhaustive. As all the orbital digraphs in \(\mathcal{G}\) are actually graphs, the same property is true for the graphs in the list, as we have underlined in Remark 9. We can now pass to the second part of the proof, that is, the computation of the relative fixities. We already took care of the generalised Hamming graphs in Lemma 19. Thus, we can suppose that \(\Gamma\) is a merged product action graph \(\mathcal{P}(r,\mathcal{G},\mathcal{J})\) appearing in Theorem 8\((ii)\). Suppose that \(r=1\), that is, \(\Gamma\) is a union of orbital digraphs for some primitive almost simple group \(K\). (We are tacitly assuming that \(K\) is maximal among the groups appearing in a given part of Theorem 24.) In view of [21, Theorem], we have that \(K\) is a maximal subgroup of either \(\operatorname{Alt}(|V\Gamma|)\) or \(\operatorname{Sym}(|V\Gamma|)\). Therefore, there are just two options for \(\operatorname{Aut}(\Gamma)\): either it is isomorphic to \(K\) or it contains \(\operatorname{Alt}(|V\Gamma|)\). In the latter scenario, as \(\operatorname{Alt}(|V\Gamma|)\) is \(2\)-transitive on the vertices, we obtain that \(\Gamma\in\{\mathbf{L}_{m},\mathbf{K}_{m},\mathbf{L}_{m}\cup\mathbf{K}_{m}\}\). All those graphs are generalised Hamming graphs, against our assumption on \(\Gamma\). Therefore, we have \(K=\operatorname{Aut}(\Gamma)\). In particular, the relative fixity of \(\Gamma\) is computed in Lemma 21, Lemma 23 or Table 1 according as \(\mathcal{G}\) is described in Theorem 8\((ii)(a)\), \((ii)(b)\) or \((ii)(c)\), respectively. Suppose now that \(r\geqslant 2\).
The automorphism group of \(\Gamma\) either embeds into \(\operatorname{Sym}(m)\operatorname{wr}\operatorname{Sym}(r)\), where \(m=|V\Gamma_{i}|\) for any \(\Gamma_{i}\in\mathcal{G}\), or, by maximality of \(\operatorname{Sym}(m)\operatorname{wr}\operatorname{Sym}(r)\), \(\operatorname{Aut}(\Gamma)=\operatorname{Sym}(m^{r})\). In the latter scenario, \(\Gamma\in\{\mathbf{L}_{m^{r}},\mathbf{K}_{m^{r}},\mathbf{L}_{m^{r}}\cup\mathbf{K}_{m^{r}}\}\). All these graphs can be written as merged product action graphs where \(r=1\) and \(\mathcal{J}\) is a Hamming set. This goes against our assumption on \(\Gamma\), thus we must suppose \(\operatorname{Aut}(\Gamma)\neq\operatorname{Sym}(m^{r})\). As a consequence, we obtain that, for some almost simple group \(K\) listed in Theorem 24\((a)-(e)\), and for some transitive group \(H\leqslant\operatorname{Sym}(r)\), \(K\operatorname{wr}H\leqslant\operatorname{Aut}(\Gamma)\). Note that, as \(K\leqslant\operatorname{Aut}(\Gamma)_{\Delta}^{\Delta}\), by [21, Theorem], \(\operatorname{Aut}(\Gamma)_{\Delta}^{\Delta}\) is either \(K\) or it contains \(\operatorname{Alt}(m)\). If the latter case occurs, then \(\operatorname{Alt}(m)\operatorname{wr}H\leqslant\operatorname{Aut}(\Gamma)\). By Lemma 18, \(\Gamma\) is a generalised Hamming graph, which contradicts our choice of \(\Gamma\). Therefore, \(\operatorname{Aut}(\Gamma)\leqslant K\operatorname{wr}\operatorname{Sym}(r)\). Observe that we can apply Lemma 19. We obtain that \[\operatorname{RelFix}(\Gamma)=1-\frac{\mu(K)m^{r-1}}{m^{r}}=1-\frac{\mu(K)}{m}=\operatorname{RelFix}\left(\mathcal{P}(1,\mathcal{G},\mathcal{J}^{\prime})\right)\,,\] for some non-Hamming homogeneous set \(\mathcal{J}^{\prime}\). In particular, the relative fixities for \(r\geqslant 2\) coincide with those we have already computed for \(r=1\). This completes the proof.

## 6. Proof of Theorem 12

Recall that a permutation group \(G\) on \(\Omega\) is _quasiprimitive_ if all its nontrivial normal subgroups are transitive on \(\Omega\). Clearly, any primitive group is quasiprimitive. Moreover, recall that, by repeating the proof of the Cauchy-Frobenius lemma (see [12, Theorem 1.7A]) on the conjugacy class of a permutation \(x\in G\), we get \[\operatorname{fix}(x)\,|x^{G}|=|\Omega|\,|x^{G}\cap G_{\omega}|\,,\] where \(\operatorname{fix}(x)=|\Omega|-|\operatorname{supp}(x)|\) is the number of fixed points of \(x\).

Proof of Theorem 12.: (We would like to thank P. Spiga again for pointing out the key ingredients for this proof.) Let \(G\) be a quasiprimitive permutation group on a set \(\Omega\), and let \(x\in G\backslash\{1\}\) be an element achieving \(|\operatorname{supp}(x)|\leqslant(1-\alpha)|\Omega|\), so that \(\operatorname{fix}(x)\geqslant\alpha|\Omega|\). For any point \(\omega\in\Omega\), we obtain \[\alpha\leqslant\frac{|x^{G}\cap G_{\omega}|}{|x^{G}|}\leqslant\frac{|G_{\omega}|}{|x^{G}|}\leqslant\frac{\beta}{|x^{G}|}\,.\] It follows that \(|x^{G}|\leqslant\alpha^{-1}\beta\). Now consider the normal subgroup of \(G\) defined by \[N:=\bigcap_{g\in G}\mathbf{C}_{G}(x^{g})\,.\] Recall that \(|G:\mathbf{C}_{G}(x)|=|x^{G}|\). Observe that \(G\) acts by conjugation on the set \[\{\mathbf{C}_{G}(x^{g})\mid g\in G\}\,,\] that it defines a single orbit of size at most \(|x^{G}|\), and that \(N\) is the kernel of this action. Therefore \[|G:N|\leqslant|x^{G}|!\leqslant\left\lceil\frac{\beta}{\alpha}\right\rceil!\,,\] that is, \(N\) is a bounded index subgroup of \(G\). Since \(G\) is quasiprimitive, either \(N\) is trivial or \(N\) is transitive. Aiming for a contradiction, we suppose that \(N\) is transitive.
Since \([N,x]=1\), for any point \(\omega\in\Omega\) fixed by \(x\) (such a point exists because \(|\operatorname{supp}(x)|<|\Omega|\)) and for any \(n\in N\), \[\omega^{nx}=\omega^{xn}=\omega^{n}\,.\] The transitivity of \(N\) thus implies that \(x\) fixes every point of \(\Omega\), that is, \(x=1\), against our choice of \(x\). Therefore, \(N\) is trivial. It follows that \[|G|=|G:N|\leq\left\lceil\frac{\beta}{\alpha}\right\rceil!\,.\] Since there are only finitely many permutation groups of bounded order (up to permutation isomorphism), the proof is complete.

An equivalent formulation of Sims' Conjecture states that if \(G\) is a primitive permutation group and the minimal out-valency among its nondiagonal orbital digraphs is at most \(d\), then the size of a point stabilizer is bounded from above by a function \(\mathbf{f}(d)\) depending only on the positive integer \(d\). A positive answer to this conjecture was given in [10].

Proof of Corollary 13.: Let \(\Gamma\) be a vertex-primitive digraph of out-valency at most \(d\) and relative fixity exceeding \(\alpha\), and let \(G=\operatorname{Aut}(\Gamma)\). The hypothesis on the out-valency implies that, for any \(v\in V\Gamma\), \(|G_{v}|\leq\mathbf{f}(d)\), where \(\mathbf{f}(d)\) is the function that solves Sims' Conjecture. The result thus follows by choosing \(\beta=\mathbf{f}(d)\) in Theorem 12.

We conclude the paper by observing that, as \(\mathbf{f}(d)\geq(d-1)!\), from Corollary 13 we cannot obtain a bound as sharp as that in Remark 11.
The relative fixity of a digraph $\Gamma$ is the maximum number of vertices fixed by a nontrivial automorphism of $\Gamma$, divided by the number of vertices of $\Gamma$. We characterise the vertex-primitive digraphs whose relative fixity is larger than $1/3$, and we show that there are only finitely many vertex-primitive digraphs of bounded out-valency whose relative fixity exceeds a given positive constant.
2309.06783
Ungar – A C++ Framework for Real-Time Optimal Control Using Template Metaprogramming
We present Ungar, an open-source library to aid the implementation of high-dimensional optimal control problems (OCPs). We adopt modern template metaprogramming techniques to enable the compile-time modeling of complex systems while retaining maximum runtime efficiency. Our framework provides syntactic sugar to allow for expressive formulations of a rich set of structured dynamical systems. While the core modules depend only on the header-only Eigen and Boost.Hana libraries, we bundle our codebase with optional packages and custom wrappers for automatic differentiation, code generation, and nonlinear programming. Finally, we demonstrate the versatility of Ungar in various model predictive control applications, namely, four-legged locomotion and collaborative loco-manipulation with multiple one-armed quadruped robots. Ungar is available under the Apache License 2.0 at https://github.com/fdevinc/ungar.
Flavio De Vincenti, Stelian Coros
2023-09-13T08:16:24
http://arxiv.org/abs/2309.06783v2
# Ungar - A C++ Framework for Real-Time Optimal Control Using Template Metaprogramming ###### Abstract We present Ungar, an open-source library to aid the implementation of high-dimensional optimal control problems (OCPs). We adopt modern template metaprogramming techniques to enable the compile-time modeling of complex systems while retaining maximum runtime efficiency. Our framework provides syntactic sugar to allow for expressive formulations of a rich set of structured dynamical systems. While the core modules depend only on the header-only Eigen and Boost.Hana libraries, we bundle our codebase with optional packages and custom wrappers for automatic differentiation, code generation, and nonlinear programming. Finally, we demonstrate the versatility of Ungar in various model predictive control applications, namely, four-legged locomotion and collaborative loco-manipulation with multiple one-armed quadruped robots. Ungar is available under the Apache License 2.0 at [https://github.com/fdevinc/ungar](https://github.com/fdevinc/ungar). ## I Introduction The advancements in model predictive control (MPC) methods have endowed robots with exceptional athletic skills. Recent displays of humanoid [8] and quadruped robots [4, 11] have shown feats that were once the prerogative of science fiction. However, significant engineering efforts are still necessary to make such practical MPC implementations possible. The solution of large nonlinear programming (NLP) problems at real-time rates clashes with the inherent high dimensionality and fast dynamics of mechanical systems. These conflicting aspects translate to onerous computational costs to be met in fractions of seconds, thus calling for complex data structures and ingenious software designs. We seek to reduce the manual effort that goes into the development of MPC controllers. In our vision, ease of use and relevance to a broad range of applications are of paramount importance. These objectives are only achievable by carefully averting any runtime computational overhead. At the same time, user interfaces must provide an intuitive syntax that mirrors standardized, mathematical formulations of optimal control problems (OCPs). Achieving robust MPC performance is challenging in many ways. Efficient NLP solvers must be coupled with fast derivative computations. Given the large numbers and interrelationships of the state and control variables, manual implementation of first- and second-order derivatives would result in a tedious, error-prone process. Also, although mature automatic differentiation (AD) and NLP libraries exist, most implementations require all the variables to be stacked in a single vector, which necessitates some index-keeping logic. This fact raises the question of which data structures could store them while guaranteeing zero-cost access operations and adaptability to different system designs. With Ungar, we provide a metalanguage that addresses these modeling challenges. Our solution introduces constructs that significantly simplify the definition of the NLP problems typically arising in optimal control. We use template metaprogramming (TMP) to delegate the generation of the necessary implementations to the compiler, while users can focus exclusively on the architectural details of a desired MPC application. Our approach makes the transcription into code of structured variable sets seamless while encoding all hierarchical and indexing information at compile time.
Consequently, all read/write operations acting on correspondingly created objects incur no additional costs, just like ad hoc programming solutions. Since the core of Ungar is header-only, its integration in C++ projects is effortless. However, we also include an optional interface to CppADCodeGen [3, 15] for automatically generating derivatives and an optional sequential quadratic programming (SQP) solver using OSQP [20] as a backend; if enabled, all external dependencies are automatically downloaded through CMake. Finally, we illustrate the capabilities of Ungar by implementing MPC controllers for increasingly complex systems, including quadruped robots and teams of four-legged manipulators cooperatively carrying an object. ### Related Work There exist many open-source packages to assist in the creation of MPC controllers. A noncomprehensive list includes frameworks for modeling, simulation, and optimization-based control of mechanical systems, such as the Control Toolbox [10] and its successor OCS2 [9], Drake [21], Crocoddyl [18], TOWR [25], FROST [13], Quad-SDK [19], SymForce [17], etc. These libraries are geared toward robotic applications and the OCPs they solve require specific structures. In contrast, NLP-oriented frameworks address more general classes of nonlinear optimization problems; notable examples are IPOPT [24], ACADOS [14, 22], PSOPT [2], CasADi [1], etc. Ungar complements the above libraries by providing novel system modeling features. In particular, it allows for quickly setting up the data structures required by widely adopted AD implementations and OCP solvers. Ultimately, while MPC is its primary focus, the design of Ungar makes it suitable for any application requiring the solution of finite-dimensional NLP problems. ## II Data Structures In this section, we introduce the two main data structures at the heart of Ungar: variables and maps. We accompany their descriptions with motivating examples in the robotics domain. We remark that the classes we discuss build on only two external dependencies, namely, Eigen [12] and Boost.Hana [7]. The former is a linear algebra template library ubiquitous in robotics codebases due to its fast performance and versatility; the latter is a collection of utilities and containers that greatly simplify the implementation of TMP algorithms. Since both are header-only, we wrap them within Ungar to make its integration in C++ projects as straightforward as possible. ### _Variables_ At the core of Ungar lies the Variable template class. Variables describe the structure of quantities of interest such as states, inputs, or parameters. Each variable has a name and a kind\({}^{1}\), and it is related to all other variables through hierarchical relationships. When a Variable object is instantiated, all this information is encoded in its type using three template parameters: a compile-time string for the name, an integer number identifying its kind, and a compile-time map representing the variable hierarchy. We employ boost::hana::string and boost::hana::map types to designate the name and the variable hierarchy, respectively. In particular, the latter maps variable names to sub-Variable objects or arrays thereof. Footnote 1: In this paper, we assign distinct meanings to the terms _kind_ and _type_. To clarify, _kind_ denotes the mathematical group to which a variable belongs, while _type_ exclusively refers to C++ data types.
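To make this compile-time encoding concrete, here is a minimal Boost.Hana sketch of our own (an illustration of the underlying idea, not Ungar's actual implementation), showing how a hierarchy of names and sizes can live entirely in the type system:

```
#include <boost/hana.hpp>
#include <boost/hana/assert.hpp>
namespace hana = boost::hana;

int main() {
    // A toy hierarchy: compile-time names mapped to compile-time sizes.
    auto hierarchy = hana::make_map(
        hana::make_pair(BOOST_HANA_STRING("position"), hana::int_c<3>),
        hana::make_pair(BOOST_HANA_STRING("orientation"), hana::int_c<4>));

    // The lookup is resolved entirely at compile time: no runtime cost.
    BOOST_HANA_CONSTANT_CHECK(
        hierarchy[BOOST_HANA_STRING("position")] == hana::int_c<3>);
}
```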
To clarify and explain the above design choices, let us consider the decision variables of a finite horizon MPC for controlling a quadrotor. The state \(\mathbf{x}_{k}\) of the system at time step \(k\) consists of the robot's pose and velocity; the control inputs \(\mathbf{u}_{k}\) comprise four rotor speeds. Given a discrete time horizon \(N\in\mathbb{N}\) and assuming a direct multiple shooting formulation [6], the optimization variables amount to the stacked state and input vectors \(\mathbf{X}=[\mathbf{x}_{0}^{\top}\ \mathbf{x}_{1}^{\top}\ \ldots\mathbf{x}_{N}^{\top}]^{\top}\) and \(\mathbf{U}=[\mathbf{u}_{0}^{\top}\ \mathbf{u}_{1}^{\top}\ \ldots\mathbf{u}_{N-1}^{\top}]^{\top}\), respectively. Then, we can use the Ungar template object var_c to instantiate the relevant variables as shown in Listing 1. The var_c construct takes a fixed string denoting the name of the variable and an optional integer parameter representing its kind: if present, it instantiates a "leaf" variable, i.e., a variable that has no additional substructures; otherwise, it creates a "branch" variable, which is only defined in relation to its subvariables. We identify the kind of a leaf variable with \(1\) for scalars, an implementation-defined constant \(\mathbb{Q}\) for unit quaternions, and any positive integer for a correspondingly sized vector.

Fig. 1: Hierarchical relationships among state and input variables of an OCP for controlling a quadrotor and underlying memory representation. At every time step \(k\), we define the robot’s state \(\mathbf{x}_{k}\) as a stacked vector containing its position \(\mathbf{p}_{k}\in\mathbb{R}^{3}\), orientation \(\mathbf{q}_{k}\in\mathbb{S}^{3}\subset\mathbb{R}^{4}\), linear velocity \(\dot{\mathbf{p}}_{k}\in\mathbb{R}^{3}\), and angular velocity \(\boldsymbol{\omega}_{k}\in\mathbb{R}^{3}\). The inputs \(\mathbf{u}_{k}\) consist of the four rotor speeds \(\omega_{k}^{i}\in\mathbb{R},\forall i\in\{1,2,3,4\}\). Finally, \(\mathbf{X}\in\mathbb{R}^{13(N+1)}\) and \(\mathbf{U}\in\mathbb{R}^{4N}\) denote the stacked state and input vectors, where \(N\in\mathbb{N}\) is the discrete time horizon; in the diagram, \(N=2\). Using TMP techniques, Ungar supports the generation of data structures for efficiently manipulating raw data arrays.

As schematically represented in Fig. 1, Ungar allows mapping the structures encoded by variables to contiguous memory buffers (see Section II-B). In our example, lines 1-3 define a discrete time horizon \(N\) of \(30\) time steps and the number of rotors as integral constants [7] through the user-defined literal _c. Lines 5-10 define the leaf variables for the pose and velocity of a quadrotor, as well as its rotor speeds. Lines 12-17 introduce the branch variables for \(\mathbf{x}_{k}\), \(\mathbf{X}\), \(\mathbf{u}_{k}\), and \(\mathbf{U}\), respectively. We observe that \(\mathbf{x}_{k}\) consists of the stacked position, orientation, linear velocity, and angular velocity of the robot, while \(\mathbf{X}\) contains \(N+1\) stacked states. Then, we can immediately express the above structural information using the overloaded operators operator<<=, operator, (the comma operator), and operator* as shown at lines 13-14. Similarly, we define the branch variables for \(\mathbf{u}_{k}\) and \(\mathbf{U}\), and finally we stack \(\mathbf{X}\) and \(\mathbf{U}\) inside the object decision_variables at line 17.
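Since Listing 1 itself is not reproduced here, the following is a plausible reconstruction of its succinct macro form (what the paper later calls Listing 3), pieced together from the prose above and from the UNGAR_VARIABLE convention of Listings 4 and 5 below; treat names and exact syntax as assumptions.

```
// Plausible reconstruction of the quadrotor variables (cf. Listing 3).
// Integral constants; Fig. 2 later varies these two definitions.
constexpr auto N = 30_c;
constexpr auto NUM_ROTORS = 4_c;

// "Leaf" variables.
UNGAR_VARIABLE(position, 3);
UNGAR_VARIABLE(orientation, Q);
UNGAR_VARIABLE(linear_velocity, 3);
UNGAR_VARIABLE(angular_velocity, 3);
UNGAR_VARIABLE(rotor_speed, 1);

// "Branch" variables.
UNGAR_VARIABLE(x) <<= (position, orientation, linear_velocity, angular_velocity);
UNGAR_VARIABLE(X) <<= (N + 1_c) * x;
UNGAR_VARIABLE(u) <<= NUM_ROTORS * rotor_speed;
UNGAR_VARIABLE(U) <<= N * u;
UNGAR_VARIABLE(decision_variables) <<= (X, U);
```

With these definitions, the sizes quoted next (x.Size() == 13, X.Size() == 403, u.Size() == 4, U.Size() == 120, decision_variables.Size() == 523) are all consistent.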
All the variables defined in Listing 1 are constexpr and can be queried at compile-time for two main pieces of information: their sizes and their indices within the hierarchy. For instance, with reference to the diagram in Fig. 1 and for \(N=30\), we can write

```
static_assert(
    x.Size() == 13 &&
    X.Size() == 403 &&
    u.Size() == 4 &&
    U.Size() == 120 &&
    decision_variables.Size() == 523
);
```

and

```
static_assert(
    X(x, 0).Index() == 0 &&
    X(x, 1).Index() == x.Size() &&
    X(x, 1, linear_velocity).Index() == 20 &&
    decision_variables(U).Index() == X.Size() &&
    U(u, 0).Index() == 0 &&
    U(u, 1).Index() == 4 &&
    U(u, 1, rotor_speed, 0).Index() == 4 &&
    U(u, 1, rotor_speed, 1).Index() == 5
);
```

As shown in the above listings, we can access all subvariables of any branch variable using the function call operator operator(). If multiple copies of a subvariable exist, we must write the zero-based index of the subvariable we are interested in. Most importantly, if there is no ambiguity in the path from a branch variable to any of its subvariables, we can bypass all intermediate variables as shown in Listing 2: this feature is very convenient when the variables are organized in deep hierarchies. Finally, Ungar provides macros for defining variables according to the convention that their name is identical to the corresponding object name. Thus, Listing 1 can be rewritten in a more succinct way as shown in Listing 3.

### _Variable Maps_

The Variable framework provides the means to describe complex systems with a minimal yet expressive syntax without incurring runtime costs. To turn system descriptions into useful data structures, Ungar offers the template class VariableMap. A variable map associates a variable with an array of scalars and each subvariable with a subarray. To perform the various mappings, we adopt the Eigen::Map class [12], which allows interfacing raw buffers with dense matrix expressions seamlessly. All necessary Eigen maps are created during the execution of the VariableMap constructor; therefore, accessing any subvariable data has no runtime cost. This is only possible due to our adoption of TMP techniques and makes Ungar akin to a metalanguage. Given some user-defined scalar type scalar_t, we can create a variable map for the quadrotor decision variables introduced in Section II-A as:

```
auto vars = MakeVariableMap<scalar_t>(decision_variables);
```

Then, we can access all submaps by passing corresponding subvariables to the Get method. For example, we can initialize all unit quaternions to identity rotations and all remaining decision variables to zero [12] as:

```
1   vars.Get(X).setZero();
2   for (auto k = 0; k < N + 1; ++k) {
3       vars.Get(orientation, k).setIdentity();
4   }
5   vars.Get(U).setZero();
6   static_assert(
7       std::same_as<
8           decltype(vars.Get(U)), Eigen::Map<Eigen::VectorX<scalar_t>>&
9       >
10  );
```

We remark that all objects returned by the Get method have reference types, hence they do not perform copies and directly manipulate the underlying data. Also, the returned type depends on the kind of the corresponding variable, so it can be a reference to scalar_t, an Eigen map to a unit quaternion, or an Eigen map to a vector. Branch variables are always mapped to vectors spanning all the corresponding subvariables (line 6). We enable this flexibility by internally adopting compile-time maps associating subvariables to corresponding data subarrays. While this solution ensures the best possible runtime performance, it can be demanding in terms of compile time and memory consumption.
This can be undesirable if, for instance, such variable maps are employed only for intermediate code generation steps; indeed, most code generators optimize the input code, thus making any performance optimizations unnecessary at this stage. We address this need with a lazy version of VariableMap named VariableLazyMap. Lazy maps instantiate Eigen maps on demand, which is a cheap operation involving the copies of two integers. Their constructors require a data buffer with the correct size as:

```
VectorXr underlying(decision_variables.Size());
auto lvars = MakeVariableLazyMap(underlying, decision_variables);
```

In particular, we can rewrite our initialization example as:

```
lvars.Get(X).setZero();
for (auto k = 0; k < N + 1; ++k) {
    lvars.Get(orientation, k).setIdentity();
}
lvars.Get(U).setZero();
```

```
static_assert(X(x, 1, linear_velocity) == X(linear_velocity, 1));
static_assert(U(u, 1, rotor_speed, 0) == U(rotor_speed, 1, 0));
static_assert(U(u, 1, rotor_speed, 1) == U(rotor_speed, 1, 1));
static_assert(decision_variables(X, x, 1, linear_velocity) == decision_variables(linear_velocity, 1));
static_assert(
    decision_variables(U, u, 2, rotor_speed, 3) == decision_variables(u, 2, rotor_speed, 3) &&
    decision_variables(U, u, 2, rotor_speed, 3) == decision_variables(rotor_speed, 2, 3)
);
```
Listing 2. Equivalent expressions for unambiguous variable hierarchies.

### _Implementation Details_

We implement our controllers relying exclusively on the functionalities provided by Ungar. For our tests, we employ the core data structures alongside the optional CppADCodeGen wrapper [15] and SQP solver. We base our solver on the recent work by Grandia et al. [11] and refer the interested reader to [23] for more details. To generate the necessary derivatives, CppADCodeGen requires all functions to depend on a single array of data including both independent variables and parameters. We comply with this interface through Ungar maps as described in Section II-B. For each controller, we define the variable hierarchies decision_variables and parameters. The decision variables include states and control inputs, while the parameters contain inertial properties, reference trajectories, physical constants, etc. In the following sections, we will show the definitions of only the decision variables of the various MPC controllers to keep the exposition succinct. Nevertheless, we consistently adopt this subdivision in all our implementations, and we invite the reader to explore the examples in the library for a detailed walk-through.

Ungar requires compilers with C++20 support2. We implemented and thoroughly tested our controllers on Ubuntu 22.04.2 LTS with GCC 11, but Ungar's core modules do not depend on any OS-specific instructions. However, we note that our optional CppADCodeGen wrapper uses runtime compilation and dynamic linking features offered by Linux.

_Compile Times:_ Ungar map objects have no runtime overheads by construction. Thus, we only provide a benchmark of the compile times for systems with different sizes. For this purpose, we measure the time required to build a VariableMap and a VariableLazyMap object for the decision variables defined in Listing 3. We take this measurement for different values of N and NUM_ROTORS on a laptop computer with an i7-11800H, \(2.30\,\mathrm{GHz}\), 16-core CPU, and we plot the corresponding heatmaps in Fig. 2. We can see that the lazy map has more favorable compile times, requiring only \(14\,\mathrm{s}\) to build the data structures for an octocopter with \(N=390\).
In contrast, for the same setup, the compile time of VariableMap is \(39\,\mathrm{s}\). Although VariableMap scales less favorably than VariableLazyMap, we notice that the two have similar compile-time performance for time horizons shorter than \(100\) time steps. Nevertheless, we recommend using lazy maps for prototyping and switching to VariableMap for production code to get the fastest runtime performance.

### _Quadrupedal Locomotion_

We base our MPC formulation for quadrupedal locomotion on the controller of Bledt and Kim [5], but with three notable differences. Firstly, we use the nonlinear single rigid body dynamics (SRBD) equations without linearizations or simplifications. Secondly, we represent orientations with unit quaternions instead of Euler angles to prevent singularity issues. Lastly, we employ a Lie group time-stepping method to integrate the dynamics conserving quaternion unit-norm constraints [23]. We define our variable hierarchy as shown in Listing 4. In particular, we can see that fewer than \(19\) lines of code suffice for generating the data structures required to manipulate the robot's states and inputs. While an MPC locomotion controller requires additional components to be of practical use, such as gait planners, inverse kinematics solvers, and whole-body controllers, we can already appreciate the potential of Ungar in simplifying the formulation of NLP problems with elaborate structures.

### _Collaborative Loco-Manipulation_

We use Ungar to implement an MPC controller that simultaneously optimizes ground reaction forces, manipulation wrenches, stepping locations, and body trajectories for a team of one-armed quadruped robots collectively manipulating an object. The resulting optimal control problem is very high-dimensional and presents coupled dynamics and deep hierarchies among states and inputs. For instance, each robot has \(4\) legs, and each leg is associated with a ground reaction force and a stepping location at every time step; also, each robot has an arm that, in turn, corresponds to a manipulation force and torque. We formulate our MPC for collaborative loco-manipulation (CLM) as an extension of the SRBD to multi-agent systems and refer the interested reader to [23] for a detailed description of our model. As shown in Listing 5, the creation of Ungar variables for the seemingly involved CLM setting requires only minor changes to the locomotion control problem of Listing 4.

### _Limitations_

The optimal runtime efficiency of Ungar is due to a large number of compile-time computations. However, if the variable hierarchies become too deep or nested, then compile times may grow significantly. Also, we observed compiler crashes when instantiating VariableMap objects for very large OCPs, once again caused by the excessive amount of compile-time computations. In these rare cases, it is sufficient to adopt the VariableLazyMap type, which is considerably less computation-intensive and provides almost the same performance as its non-lazy version. For future work, we will optimize the design of Ungar to improve its compile times. We additionally plan to expand the library with more tools to facilitate the implementation of high-performance MPC controllers.

## IV Conclusion

In this paper, we introduced Ungar, a C++ template library for real-time MPC applications. Our framework uses TMP techniques to address modeling needs overlooked in existing NLP and optimal control software packages.
In particular, it provides a metalanguage to describe complex systems in terms of variable hierarchies. Then, it lets compilers produce highly efficient code for manipulating raw data buffers based on these hierarchies. As shown in our quadruped locomotion and collaborative loco-manipulation experiments, these features enable great flexibility in formulating NLP problems and simplify AD-compliant implementations.

Fig. 2: Compile times to generate the implementations of a VariableMap **(left)** and a VariableLazyMap **(right)** for the multirotor example of Listing 3. We benchmark Ungar against different time horizons and numbers of rotors by varying lines 2 and 3. The heatmaps manifest the more desirable compile times of the lazy map compared to its non-lazy alternative. For this reason, the VariableMap type should only be employed when seeking the best MPC performance possible.

```
// Define integral constants.
constexpr auto N = 30_c;
constexpr auto NUM_LEGS = 4_c;

// Define "leaf" variables.
UNGAR_VARIABLE(position, 3);
UNGAR_VARIABLE(orientation, Q);
UNGAR_VARIABLE(linear_velocity, 3);
UNGAR_VARIABLE(angular_velocity, 3);
UNGAR_VARIABLE(force, 3);
UNGAR_VARIABLE(relative_position, 3);

// Define "branch" variables.
UNGAR_VARIABLE(leg_input) <<= (force, relative_position);
UNGAR_VARIABLE(x) <<= (position, orientation, linear_velocity, angular_velocity);
UNGAR_VARIABLE(X) <<= (N + 1_c) * x;
UNGAR_VARIABLE(u) <<= NUM_LEGS * leg_input;
UNGAR_VARIABLE(U) <<= N * u;
UNGAR_VARIABLE(decision_variables) <<= (X, U);
```
Listing 4. Decision variables of an OCP for quadrupedal locomotion using the single rigid body model.

```
// Define integral constants.
constexpr auto N = 10_c;
constexpr auto NUM_ROBOTS = 2_c;
constexpr auto NUM_LEGS = 4_c;

// Define "leaf" variables.
UNGAR_VARIABLE(position, 3);
UNGAR_VARIABLE(orientation, Q);
UNGAR_VARIABLE(linear_velocity, 3);
UNGAR_VARIABLE(angular_velocity, 3);
UNGAR_VARIABLE(force, 3);
UNGAR_VARIABLE(relative_position, 3);
UNGAR_VARIABLE(torque, 3);

// Define "branch" variables.
UNGAR_VARIABLE(leg_input) <<= (force, relative_position);
UNGAR_VARIABLE(arm_input) <<= (force, torque);
UNGAR_VARIABLE(robot_input) <<= (NUM_LEGS * leg_input, arm_input);
// robot_state is assumed to be defined analogously to payload_state; its
// definition was lost in the extracted text.
UNGAR_VARIABLE(robot_state) <<= (position, orientation, linear_velocity, angular_velocity);
UNGAR_VARIABLE(payload_state) <<= (position, orientation, linear_velocity, angular_velocity);
UNGAR_VARIABLE(x) <<= (payload_state, NUM_ROBOTS * robot_state);
UNGAR_VARIABLE(X) <<= (N + 1_c) * x;
UNGAR_VARIABLE(u) <<= NUM_ROBOTS * robot_input;
UNGAR_VARIABLE(U) <<= N * u;
UNGAR_VARIABLE(decision_variables) <<= (X, U);
```
Listing 5. Decision variables of an OCP for collaborative loco-manipulation with two robots modeled as single rigid bodies. We highlight the differences from the locomotion controller formulated in Listing 4. In particular, we mark newly added variables in yellow and modified variables in light blue.
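As a closing illustration, here is a hedged sketch (ours, not from the paper) of how the Listing 4 hierarchy might be manipulated through a variable map, reusing only the Get/operator() conventions shown earlier; the exact access paths for nested copies are assumptions based on the shortcut rules of Listing 2.

```
// Hypothetical usage of the Listing 4 variables (assumed API conventions).
auto vars = MakeVariableMap<double>(decision_variables);

vars.Get(X).setZero();                       // all stacked states
for (auto k = 0; k < N + 1; ++k) {
    vars.Get(orientation, k).setIdentity();  // unit quaternions to identity
}
for (auto k = 0; k < N; ++k) {
    for (auto i = 0; i < NUM_LEGS; ++i) {
        vars.Get(force, k, i).setZero();     // ground reaction force of leg i
    }
}
```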
Ungar is an open-source library that aids the implementation of high-dimensional optimal control problems (OCPs). It adopts modern template metaprogramming techniques to enable the compile-time modeling of complex systems while retaining maximum runtime efficiency. The framework provides syntactic sugar that allows for expressive formulations of a rich set of structured dynamical systems. While the core modules depend only on the header-only Eigen and Boost.Hana libraries, the codebase is bundled with optional packages and custom wrappers for automatic differentiation, code generation, and nonlinear programming. Finally, the versatility of Ungar is demonstrated in various model predictive control applications, namely four-legged locomotion and collaborative loco-manipulation with multiple one-armed quadruped robots. Ungar is available under the Apache License 2.0 at https://github.com/fdevinc/ungar.
2305.19576
Periodic Vlasov-Stokes' system: Existence and Uniqueness of strong solutions
This paper deals with the Vlasov-Stokes' system in three dimensions with periodic boundary conditions in the spatial variable. We prove the existence of a unique strong solution to this two-phase model under the assumption that initial velocity moments of certain order are bounded. We use a fixed point argument to arrive at a global-in-time solution.
Harsha Hutridurga, Krishan Kumar, Amiya K. Pani
2023-05-31T05:53:35
http://arxiv.org/abs/2305.19576v1
# Periodic Vlasov-Stokes' system: existence and uniqueness of strong solutions

###### Abstract.

This paper deals with the Vlasov-Stokes' system in three dimensions with periodic boundary conditions in the spatial variable. We prove the existence of a unique strong solution to this two-phase model under the assumption that initial velocity moments of certain order are bounded. We use a fixed point argument to arrive at a global-in-time solution.

## 1. Introduction

This paper deals with a coupled system of partial differential equations arising in the study of thin sprays. From a modeling perspective, it is assumed that the spray particles (droplets) are a dispersed phase in a gas medium. Studying two-phase models comprising a kinetic equation for the dispersed phase and a fluid equation for the gas dates back to the works of O'Rourke [11] and Williams [12] (see also [13]). We choose to model the three dimensional background fluid by the linear unsteady Stokes' equation and the droplet distribution by the Vlasov equation while the coupling is via a drag term:

\[\begin{cases}\partial_{t}f+v\cdot\nabla_{x}f+\nabla_{v}\cdot\Big{(}\left(\boldsymbol{u}-v\right)f\Big{)}=0&\text{in }(0,T)\times\Omega_{x}\times\mathbb{R}^{3},\\ f(0,x,v)=f_{0}(x,v)&\text{in }\Omega_{x}\times\mathbb{R}^{3}.\end{cases} \tag{1.1}\]

\[\begin{cases}\partial_{t}\boldsymbol{u}-\Delta_{x}\boldsymbol{u}+\nabla_{x}p=\int_{\mathbb{R}^{3}}(v-\boldsymbol{u})\,f\,\mathrm{d}v&\text{in }(0,T)\times\Omega_{x},\\ \nabla_{x}\cdot\boldsymbol{u}=0&\text{in }\Omega_{x},\\ \boldsymbol{u}(0,x)=\boldsymbol{u}_{0}(x)&\text{in }\Omega_{x}.\end{cases} \tag{1.2}\]

Here \(\Omega_{x}\) denotes the three dimensional torus \(\mathbb{T}^{3}\). The unknowns in the above coupled system are the following: the fluid velocity \(\boldsymbol{u}(t,x)\), the fluid pressure \(p(t,x)\), the droplet distribution function \(f(t,x,v)\). We impose periodic boundary conditions in the \(x\) variable. The above model with homogeneous Dirichlet boundary condition for the fluid velocity and with specular reflection boundary condition for the droplet distribution was studied by Hamdache in [1], wherein he proved the existence of global-in-time weak solutions. Hofer studied the Vlasov-steady Stokes' system in [10] with compactly supported initial data in the phase space. Various other kinetic-fluid equations have been studied in the literature: Vlasov-Burgers' equations [11, 12, 13]; Vlasov-Euler equations [11, 12, 14, 15], to name a few.

In this paper, we make precise the notion of strong solutions to our system (1.1)-(1.2). Using (i) certain a priori bounds coming from the energy identity, (ii) the regularity theory for the Stokes' equation, (iii) the DiPerna-Lions' theory for the well-posedness of the transport equation with Sobolev vector fields and (iv) a fixed point argument, we prove the global-in-time well-posedness result for the fluid-kinetic system (1.1)-(1.2). The aforementioned a priori bounds have been known since the work of Hamdache [1]. In most of the works on existence and uniqueness of solutions mentioned above, a standard assumption on the initial droplet distribution is that its velocity moments up to certain order are bounded. More precisely, one assumes

\[\int_{\Omega_{x}}\int_{\mathbb{R}^{d}}\left|v\right|^{k}f_{0}(x,v)\,\mathrm{d}v\,\mathrm{d}x\leq C,\]

where the order \(k\) typically depends on the dimension \(d\) that one is considering. A conventional result is then to show that similar bounds hold for velocity moments of the droplet distribution at later times as well.
In this work, we additionally assume that the velocity moments associated with the first-order derivatives of the initial droplet distribution are bounded and that this property is propagated in time. More precisely, we assume that

\[\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{p}|\nabla_{x}f_{0}|^{2}\,\mathrm{d}v\,\mathrm{d}x+\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{p}|\nabla_{v}f_{0}|^{2}\,\mathrm{d}v\,\mathrm{d}x\leq C.\]

This particular assumption is inspired by the work of M. Chae et al. [1]. Our arguments leading to the application of the Banach fixed point theorem go in parallel to the arguments that can be found in the work of Yu [23] addressing the well-posedness of the Vlasov-Navier-Stokes' system in two dimensions. We would like to point out that there is a minor gap in one of the arguments of [23] which we highlight and fix in this article. We thank Cheng Yu for discussing this minor issue with us and for suggesting a way to fix that error as well (Cheng Yu, personal communication, August 17, 2021). It should, however, be noted that our proof requires the aforementioned velocity moment bounds associated with the first-order derivatives, which weren't used in [23]. We believe that, with only the assumptions made in [23], it may not be possible to close this line of argument via the contraction map (see Remark 2.12).

## 2. Well-posedness result

We set the local density \(\rho\) and the local macroscopic velocity \(V\) as

\[\rho(t,x)=\int_{\mathbb{R}^{3}}f(t,x,v)\,\mathrm{d}v\quad\text{and}\quad V(t,x)=\frac{1}{\rho}\int_{\mathbb{R}^{3}}f(t,x,v)v\,\mathrm{d}v.\]

In what follows, we denote the \(k^{th}\) order velocity moments by

\[m_{k}f(t,x)=\int_{\mathbb{R}^{3}}|v|^{k}f(t,x,v)\,\mathrm{d}v,\quad\text{for}\quad k\in\mathbb{N}\cup\{0\}.\]

Throughout this paper, we use standard notation for Sobolev spaces. We denote by \(W^{m,p}\) the \(L^{p}\)-Sobolev space of order \(m\geq 0\). We take \(\boldsymbol{W^{m,p}}=\left(W^{m,p}(\Omega_{x})\right)^{3},\;\forall\;m\geq 0,\;1\leq p\leq\infty\). We also use the standard notations \(H^{s}=W^{s,2}\) and \(\boldsymbol{H^{s}}=\boldsymbol{W^{s,2}}\). We further denote a special class of divergence-free (in the sense of distribution) vector fields by

\[\boldsymbol{J_{1}}=\left\{\boldsymbol{z}\in\boldsymbol{H^{1}}:\nabla_{x}\cdot\boldsymbol{z}=0,\,\boldsymbol{z}\text{ is periodic}\right\}.\]

Throughout this manuscript, any function defined on \(\Omega_{x}\) is assumed to be periodic in the \(x\)-variable.

### Notion of solution and main result

We say that \((f,\boldsymbol{u},p)\) is a **strong solution** to the Vlasov-Stokes' system (1.1)-(1.2) if

* \(f\in W^{1,1}(0,T;W^{1,1}(\Omega_{x}\times\mathbb{R}^{3}))\cap L^{\infty}(0,T;L^{1}(\Omega_{x}\times\mathbb{R}^{3})\cap L^{\infty}(\Omega_{x}\times\mathbb{R}^{3}))\)
* \(\boldsymbol{u}\in L^{\infty}(0,T;\boldsymbol{J_{1}})\cap L^{2}(0,T;\boldsymbol{H^{2}})\cap H^{1}(0,T;\boldsymbol{L^{2}})\)
* \(p\in L^{2}(0,T;H^{1}(\Omega_{x})/\mathbb{R})\)
* \((f,\boldsymbol{u},p)\) satisfies the equations (1.1) and (1.2) in the almost everywhere sense (in the phase space) for almost all time \(t\in(0,T]\).
**Theorem 2.1**.: _(Existence and Uniqueness of strong solution) Let the initial datum \(f_{0}\) be such that_

\[f_{0}\geq 0, \tag{2.1}\]
\[f_{0}\in L^{1}(\Omega_{x}\times\mathbb{R}^{3})\cap L^{\infty}(\Omega_{x}\times\mathbb{R}^{3})\cap H^{1}(\Omega_{x}\times\mathbb{R}^{3}), \tag{2.2}\]
\[\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{p}\left\{f_{0}+|\nabla_{x}f_{0}|^{2}+|\nabla_{v}f_{0}|^{2}\right\}\,\mathrm{d}v\,\mathrm{d}x\leq C, \tag{2.3}\]

_for \(0\leq p\leq 9+\delta\) with \(\delta>0\), and let the initial datum \(\boldsymbol{u_{0}}\in\boldsymbol{H^{2}}\cap\boldsymbol{J_{1}}\). Then, there exists a unique global-in-time strong solution \((f,\boldsymbol{u},p)\) to the Vlasov-Stokes' system (1.1)-(1.2). Furthermore,_

\[\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{p}\left\{f+|\nabla_{x}f|^{2}+|\nabla_{v}f|^{2}\right\}\,\mathrm{d}v\,\mathrm{d}x\leq C, \tag{2.4}\]

_for \(0\leq p\leq 9+\delta\) with \(\delta>0\) and for all \(t>0\)._

The proof of the above result goes via the following steps:

* A bound on the \((9+\delta)^{\text{th}}\) order velocity moment, with \(\delta>0\), of \(f\) helps to deduce \(\mathbf{u}\in L^{\infty}(0,T;\mathbf{W^{1,\infty}})\), thanks to Stokes' regularity [1, 1].
* Using \(\mathbf{u}\in L^{\infty}(0,T;\mathbf{W^{1,\infty}})\), we prove that the velocity moments of \(|\nabla_{x}f|^{2}\) and \(|\nabla_{v}f|^{2}\) stay bounded for all time if they are bounded initially. This is essentially the assertion (2.4) in the statement of Theorem 2.1. The essential ideas of this step are an adaptation of the calculations in [1, Theorem 5, p.2462] and [1, Lemma 3.2, p.11].
* Using the DiPerna-Lions theory [10] for the well-posedness of the transport equations and using a certain recurrence relation involving velocity moments, we conclude existence and uniqueness of solution to the Vlasov-Stokes' system by employing the Banach fixed-point theorem in the Banach space \(L^{\infty}(0,T;\mathbf{J_{1}})\cap L^{2}(0,T;\mathbf{H^{2}})\). This step is inspired by the work of Goudon [1] on the Vlasov-Burgers' equations and by the work of Yu [23] on the Vlasov-Navier-Stokes' equations.

**Remark 2.2**.: _Hofer in [11] proves the existence and uniqueness of the solution to the \(3D\) Vlasov-Stokes' equation while considering the steady Stokes' equation for the background fluid medium. He proves the existence of a unique solution \((f,\mathbf{u})\in W^{1,\infty}((0,T)\times\mathbb{R}^{3}\times\mathbb{R}^{3})\times\big{(}L^{\infty}(0,T;\mathbf{W^{2,\infty}})\cap\mathbf{W^{1,\infty}}((0,T)\times\mathbb{R}^{3})\big{)}\) for initial data \(f_{0}(x,v)\in W^{1,\infty}(\mathbb{R}^{3}\times\mathbb{R}^{3})\) with compact support. The proof in [11] goes via a fixed point argument in the Banach space \(W^{1,\infty}((0,T)\times\mathbb{R}^{3}\times\mathbb{R}^{3})\). The assumption of the \(W^{1,\infty}\) data having compact support implies that the velocity moments of any arbitrary order are bounded. Hence it is more restrictive than the present setting of this article._

### Qualitative and quantitative aspects of the model problem

Next, we recall a result that yields a bound on the \(L^{\infty}\)-norm of the local density. This estimate is important while addressing the well-posedness of the Stokes' system.

**Lemma 2.3**.: _Let \(\mathbf{u}\in L^{1}(0,T;\mathbf{L^{\infty}})\). Let \(f_{0}\) be such that \(\sup_{C^{r}_{t,v}}f_{0}\in L^{\infty}_{loc}\left(\mathbb{R}_{+};L^{1}(\mathbb{R}^{3})\right)\), where \(C^{r}_{t,v}:=\Omega_{x}\times B(e^{t}v,r),\,\forall\,r>0\).
Here \(B(e^{t}v,r)\) denotes the ball of radius \(r\) with center at \(e^{t}v\). Then, the following estimate holds:_ \[\|\rho(t,x)\|_{L^{\infty}((0,T]\times\Omega_{x})}\leq e^{3T}\sup_{t\in[0,T]} \|\sup_{C^{r}_{t,v}}f_{0}\|_{L^{1}(\mathbb{R}^{3})}. \tag{2.5}\] The proof of the above result can be found in [11, Proposition 4.6, p.44]. The following result gathers certain properties of solutions to the two-phase model (1.1)-(1.2), the proof of which can be found in [10]. Hence we skip its proof. **Lemma 2.4**.: _Any strong solution \((f,\mathbf{u},p)\) to the Vlasov-Stokes' system (1.1)-(1.2) has the following properties:_ 1. _Positivity preserving:_ _For any non-negative initial data_ \(f_{0}\)_, the solution_ \(f\) _is also non-negative._ 2. _Mass conservation:_ _The distribution function_ \(f\) _conserves the total mass in the following sense:_ \[\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\,f(t,x,v)\,\mathrm{d}x\,\mathrm{d}v= \int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\,f_{0}(x,v)\,\mathrm{d}x\,\mathrm{d}v, \quad t\in[0,T].\] 3. _Total momentum conservation:_ _The distribution function_ \(f\) _and the fluid velocity_ \(\mathbf{u}\) _together conserve total momentum in the following sense: for all_ \(t\in[0,T]\)_,_ \[\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}vf(t,x,v)\,\mathrm{d}x\,\mathrm{d}v+2 \int_{\Omega_{x}}\mathbf{u}(t,x)\,\mathrm{d}x=\int_{\mathbb{R}^{3}}\int_{\Omega_{ x}}vf_{0}(x,v)\,\mathrm{d}x\,\mathrm{d}v+2\int_{\Omega_{x}}\mathbf{u_{0}}(x)\, \mathrm{d}x.\] 4. _Energy dissipation:_ _For any non-negative initial data_ \(f_{0}\)_, the total energy of the Vlasov-Stokes' system (_1.1_)-(_1.2_) dissipates in time, i.e._ \[\frac{\mathrm{d}}{\mathrm{d}t}\left(\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}|v|^ {2}f(t,x,v)\,\mathrm{d}x\,\mathrm{d}v+\int_{\Omega_{x}}\mathbf{u}^{2}\,\mathrm{d}x \right)\leq 0.\] While proving the aforementioned energy dissipation property in [1], Hamdache derives the following identity: \[\begin{split}&\frac{1}{2}\left(\int_{\mathbb{R}^{3}}\int_{\Omega_{x}} |v|^{2}f(t,x,v)\,\mathrm{d}x\,\mathrm{d}v+\int_{\Omega_{x}}\mathbf{u}^{2}\,\mathrm{ d}x\right)+\int_{0}^{t}\int_{\Omega_{x}}|\nabla_{x}\mathbf{u}|^{2}\,\mathrm{d}x\, \mathrm{d}t\\ &\quad+\int_{0}^{t}\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}|\mathbf{u}- v|^{2}f\,\mathrm{d}x\,\mathrm{d}v\,\mathrm{d}t=\frac{1}{2}\int_{\mathbb{R}^{3}} \int_{\Omega_{x}}|v|^{2}\,f_{0}\,\mathrm{d}x\,\mathrm{d}v+\frac{1}{2}\int_{ \Omega_{x}}\mathbf{u}_{0}^{2}\,\mathrm{d}x.\end{split} \tag{2.6}\] This helps us to deduce that \[\mathbf{u}\in L^{\infty}(0,T;\mathbf{L^{2}})\quad\text{and}\quad\mathbf{u}\in L^{2}(0,T; \mathbf{J_{1}}) \tag{2.7}\] provided \(|v|^{2}f_{0}\in L^{1}(\Omega_{x}\times\mathbb{R}^{3})\) and \(\mathbf{u_{0}}\in\mathbf{L^{2}}\). Now, an application of the Sobolev imbedding yields \(H^{1}(\Omega_{x})\subset L^{p}(\Omega_{x}),2\leq p\leq 6\). Therefore, \[\mathbf{u}\in L^{2}(0,T;\mathbf{L^{p}})\quad\text{for}\quad 2\leq p\leq 6. \tag{2.8}\] The following result shows integrability estimates on the local density and the local momentum. As these appear as source terms in the Stokes' equation, these estimates are crucial in deducing the regularity of solutions to the Stokes' problem. The proof of the following result can be found in [1, Lemma 2.2, p.56]. **Lemma 2.5**.: _Let \(p\geq 1\). 
Let \(\mathbf{u}\in L^{2}(0,T;\mathbf{L^{p+3}})\), \(f_{0}\in L^{\infty}(\Omega_{x}\times\mathbb{R}^{3})\cap L^{1}(\Omega_{x}\times\mathbb{R}^{3})\) and let_

\[\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}|v|^{p}f_{0}\,\mathrm{d}x\,\mathrm{d}v<\infty.\]

_Then the local density \(\rho\) and the local momentum \(\rho V\) satisfy the following:_

\[\rho\in L^{\infty}\left(0,T;L^{\frac{p+3}{3}}(\Omega_{x})\right)\quad\text{and}\quad\rho V\in L^{\infty}\left(0,T;L^{\frac{p+3}{4}}(\Omega_{x})\right). \tag{2.9}\]

**Remark 2.6**.: _Setting \(p=3\) in Lemma 2.5 shows_

\[\rho\in L^{\infty}\left(0,T;L^{2}(\Omega_{x})\right)\quad\text{and}\quad\rho V\in L^{\infty}\left(0,T;L^{\frac{3}{2}}(\Omega_{x})\right). \tag{2.10}\]

_A use of the Stokes' regularity result yields_

\[\mathbf{u}\in L^{2}(0,T;\mathbf{W}^{2,\frac{3}{2}}). \tag{2.11}\]

_An application of the Sobolev inequality shows_

\[\mathbf{u}\in L^{2}(0,T;\mathbf{L^{p}})\quad\text{for}\quad\frac{3}{2}\leq p<\infty. \tag{2.12}\]

**Remark 2.7**.: _Choosing \(p=5\) in Lemma 2.5, we arrive at_

\[\rho\in L^{\infty}\left(0,T;L^{\frac{8}{3}}(\Omega_{x})\right)\quad\text{and}\quad\rho V\in L^{\infty}\left(0,T;L^{2}(\Omega_{x})\right). \tag{2.13}\]

_A use of the Stokes' regularity result shows_

\[\mathbf{u}\in H^{1}(0,T;\mathbf{L^{2}})\cap L^{2}(0,T;\mathbf{H^{2}})\cap L^{\infty}(0,T;\mathbf{H^{1}}). \tag{2.14}\]

**Remark 2.8**.: _Set \(p=9+\delta\) with \(\delta>0\) in Lemma 2.5 to obtain_

\[\rho\in L^{\infty}\left(0,T;L^{\frac{12+\delta}{3}}(\Omega_{x})\right)\quad\text{and}\quad\rho V\in L^{\infty}\left(0,T;L^{\frac{12+\delta}{4}}(\Omega_{x})\right). \tag{2.15}\]

_A use of the Stokes' regularity result yields_

\[\mathbf{u}\in L^{\infty}(0,T;\mathbf{W^{1,\infty}}). \tag{2.16}\]

The following lemma shows the propagation of velocity moments, which is crucial for the proof of Theorem 2.1. In particular, the proof of the assertion (2.4) made in Theorem 2.1 is contained in the following lemma.

**Lemma 2.9**.: _Let \(\mathbf{u}\in L^{\infty}(0,T;\mathbf{W^{1,\infty}})\) and let \(f_{0}\geq 0\) be such that_

\[\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{k}\{f_{0}+|\nabla_{x}f_{0}|^{2}+|\nabla_{v}f_{0}|^{2}\}\,\mathrm{d}v\,\mathrm{d}x\leq C,\]

_for \(0\leq k\leq 9+\delta\) with \(\delta>0\). Then, the solution \(f\) of the Vlasov equation satisfies_

\[\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{k}\{f+|\nabla_{x}f|^{2}+|\nabla_{v}f|^{2}\}\,\mathrm{d}v\,\mathrm{d}x\leq C\]

_for \(0\leq k\leq 9+\delta\) with \(\delta>0\) and for all \(t>0\). Furthermore, for \(k\geq 1\), there hold_

\[\sup_{t\in[0,T]}\int_{\Omega_{x}}m_{k}f\,\mathrm{d}x+k\int_{0}^{T}\int_{\Omega_{x}}m_{k}f\,\mathrm{d}x\,\mathrm{d}t\leq k\|\mathbf{u}\|_{L^{1}(0,T;\mathbf{L^{\infty}})}\sup_{t\in[0,T]}\int_{\Omega_{x}}m_{k-1}f\,\mathrm{d}x+\int_{\Omega_{x}}m_{k}f_{0}\,\mathrm{d}x, \tag{2.17}\]

\[\|m_{0}f\|_{L^{3}(\Omega_{x})}^{3}\leq C\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{6}f\,\mathrm{d}v\,\mathrm{d}x, \tag{2.18}\]

\[\|m_{1}f\|_{L^{2}(\Omega_{x})}^{2}\leq C\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{5}f\,\mathrm{d}v\,\mathrm{d}x. \tag{2.19}\]

Proof.: Consider the equation for \(\frac{\partial f}{\partial x_{i}}\):

\[\partial_{t}\frac{\partial f}{\partial x_{i}}+v\cdot\nabla_{x}\frac{\partial f}{\partial x_{i}}+\nabla_{v}\cdot\left(\frac{\partial\mathbf{u}}{\partial x_{i}}f\right)+\nabla_{v}\cdot\left(\left(\mathbf{u}-v\right)\frac{\partial f}{\partial x_{i}}\right)=0,\]

for \(i=1,2,3\).
Multiplying the above vector equation by \(\left(1+|v|^{k}\right)\nabla_{x}f\) and integrating with respect to \(x,v\) yields

\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}\left(1+|v|^{k}\right)|\nabla_{x}f|^{2}\,\mathrm{d}v\,\mathrm{d}x=I_{1}+I_{2}+I_{3}\]

where

\[I_{1}=-\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}\left(1+|v|^{k}\right)\nabla_{x}\mathbf{u}\nabla_{x}f\cdot\nabla_{v}f\,\mathrm{d}v\,\mathrm{d}x,\]
\[I_{2}=3\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}\left(1+|v|^{k}\right)|\nabla_{x}f|^{2}\,\mathrm{d}v\,\mathrm{d}x,\]
\[I_{3}=-\frac{1}{2}\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}\left(1+|v|^{k}\right)(\mathbf{u}-v)\cdot\nabla_{v}\left(|\nabla_{x}f|^{2}\right)\,\mathrm{d}v\,\mathrm{d}x.\]

After using Young's inequality in \(I_{1}\), we obtain

\[I_{1}\leq\|\nabla_{x}\mathbf{u}\|_{\mathbf{L^{\infty}}}\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}\left(1+|v|^{k}\right)\left(|\nabla_{x}f|^{2}+|\nabla_{v}f|^{2}\right)\,\mathrm{d}v\,\mathrm{d}x.\]

An integration by parts yields

\[I_{3}=-\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}\left(1+|v|^{k}\right)|\nabla_{x}f|^{2}\,\mathrm{d}v\,\mathrm{d}x+I_{4}\]

with

\[I_{4}=\frac{k}{2}\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{k-2}v\cdot(\mathbf{u}-v)\,|\nabla_{x}f|^{2}\,\mathrm{d}v\,\mathrm{d}x.\]

A use of Young's inequality shows

\[I_{4}\leq\frac{k}{2}\|\mathbf{u}\|_{\mathbf{L^{\infty}}}\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{k-1}|\nabla_{x}f|^{2}\,\mathrm{d}v\,\mathrm{d}x+\frac{k}{2}\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{k}|\nabla_{x}f|^{2}\,\mathrm{d}v\,\mathrm{d}x\leq\frac{k}{2}\|\mathbf{u}\|_{\mathbf{L^{\infty}}}\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}\left(\frac{k-1}{k}|v|^{k}+\frac{1}{k}\right)|\nabla_{x}f|^{2}\,\mathrm{d}v\,\mathrm{d}x+\frac{k}{2}\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{k}|\nabla_{x}f|^{2}\,\mathrm{d}v\,\mathrm{d}x\leq C\left(1+\|\mathbf{u}\|_{\mathbf{L^{\infty}}}\right)\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}\left(1+|v|^{k}\right)|\nabla_{x}f|^{2}\,\mathrm{d}v\,\mathrm{d}x.\]

A similar computation involving the equation for \(\nabla_{v}f\) yields

\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}\left(1+|v|^{k}\right)|\nabla_{v}f|^{2}\,\mathrm{d}v\,\mathrm{d}x\leq\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}\left(1+|v|^{k}\right)\left(|\nabla_{x}f|^{2}+|\nabla_{v}f|^{2}\right)\,\mathrm{d}v\,\mathrm{d}x+C\left(1+\|\mathbf{u}\|_{\mathbf{L^{\infty}}}\right)\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}\left(1+|v|^{k}\right)|\nabla_{v}f|^{2}\,\mathrm{d}v\,\mathrm{d}x.\]

Altogether, we obtain

\[\frac{\mathrm{d}}{\mathrm{d}t}\left(\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}\left(1+|v|^{k}\right)\left(|\nabla_{x}f|^{2}+|\nabla_{v}f|^{2}\right)\,\mathrm{d}v\,\mathrm{d}x\right)\leq C\left(1+\|\mathbf{u}\|_{\mathbf{W^{1,\infty}}}\right)\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}\left(1+|v|^{k}\right)\left(|\nabla_{x}f|^{2}+|\nabla_{v}f|^{2}\right)\,\mathrm{d}v\,\mathrm{d}x.\]

A use of Gronwall's inequality yields the desired result.

Our next task is to derive (2.17)-(2.19). Multiplying equation (1.1) by \(|v|^{k}\), for \(k\geq 1\), and integrating in the \(x,v\) variables yields

\[\partial_{t}\int_{\Omega_{x}}m_{k}f\,\mathrm{d}x+k\int_{\Omega_{x}}m_{k}f\,\mathrm{d}x=k\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{k-2}\mathbf{u}\cdot vf\,\mathrm{d}v\,\mathrm{d}x.\]

An integration of the above equation in time yields (2.17).
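For the reader's convenience, the drag term in the above computation is handled by a single integration by parts in \(v\) (our annotation; it uses \(\nabla_{v}|v|^{k}=k|v|^{k-2}v\) and the decay of \(f\) for large \(|v|\)):

\[\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{k}\,\nabla_{v}\cdot\big{(}(\mathbf{u}-v)f\big{)}\,\mathrm{d}v\,\mathrm{d}x=-k\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{k-2}\,v\cdot(\mathbf{u}-v)f\,\mathrm{d}v\,\mathrm{d}x=k\int_{\Omega_{x}}m_{k}f\,\mathrm{d}x-k\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{k-2}\,\mathbf{u}\cdot v\,f\,\mathrm{d}v\,\mathrm{d}x,\]

which accounts for both \(k\)-terms in the identity preceding (2.17).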
Note that

\[m_{0}f=\int_{|v|<R}f\,\mathrm{d}v+\int_{|v|\geq R}f\,\mathrm{d}v\leq\|f\|_{L^{\infty}(\mathbb{R}^{3})}R^{3}+\frac{1}{R^{6}}\int_{|v|\geq R}|v|^{6}f\,\mathrm{d}v.\]

After choosing \(R=\left(\int_{\mathbb{R}^{3}}|v|^{6}f\,\mathrm{d}v\right)^{\frac{1}{9}}\), we find that

\[|m_{0}f|\leq\left(\|f\|_{L^{\infty}(\mathbb{R}^{3})}+1\right)\left(\int_{\mathbb{R}^{3}}|v|^{6}f\,\mathrm{d}v\right)^{\frac{1}{3}}.\]

Now, for \(k=1\):

\[m_{1}f=\int_{|v|<R}vf\,\mathrm{d}v+\int_{|v|\geq R}vf\,\mathrm{d}v\leq\|f\|_{L^{\infty}(\mathbb{R}^{3})}R^{4}+\frac{1}{R^{4}}\int_{|v|\geq R}|v|^{5}f\,\mathrm{d}v.\]

Then, choosing \(R=\left(\int_{\mathbb{R}^{3}}|v|^{5}f\,\mathrm{d}v\right)^{\frac{1}{8}}\), we obtain

\[|m_{1}f|\leq\left(\|f\|_{L^{\infty}(\mathbb{R}^{3})}+1\right)\left(\int_{\mathbb{R}^{3}}|v|^{5}f\,\mathrm{d}v\right)^{\frac{1}{2}}.\]

Thus, we arrive at (2.18) and (2.19), and this concludes the proof.

### Proof of the main theorem

We shall now prove Theorem 2.1.

Proof of Theorem 2.1.: Let \(0<T<\infty\) and set \(X:=L^{\infty}(0,T;\mathbf{J_{1}})\cap L^{2}(0,T;\mathbf{H^{2}})\), with the norm

\[\|\mathbf{u}\|_{X}=\|\mathbf{u}\|_{L^{\infty}(0,T;\mathbf{J_{1}})}+\|\mathbf{u}\|_{L^{2}(0,T;\mathbf{H^{2}})}.\]

Let us arbitrarily fix an \(f_{0}\) satisfying (2.1)-(2.3) and let us fix a \(\mathbf{u_{0}}\in\mathbf{H^{2}}\cap\mathbf{J_{1}}\). We now consider the map

\[\mathcal{T}:X\to X,\qquad\mathbf{u}^{*}\longmapsto\mathbf{u}=\mathcal{T}(\mathbf{u}^{*})\]

defined by the following scheme:

* Solve the Vlasov equation: (2.20) \[\partial_{t}f+v\cdot\nabla_{x}f+\nabla_{v}\cdot\left((\mathbf{u}^{*}-v)\,f\right)=0,\] with initial data \(f_{0}\) and with periodic boundary conditions in the \(x\) variable.
* Solve the Stokes' equation: (2.21) \[\partial_{t}\mathbf{u}-\Delta_{x}\mathbf{u}+\nabla_{x}p=\rho V-\rho\mathbf{u},\] with initial data \(\mathbf{u_{0}}\) and with periodic boundary conditions in the \(x\) variable.

Here \(\rho\) and \(\rho V\) are the local density and the local momentum associated with the solution \(f\) of (2.20), respectively.

To begin with, we show that the above map \(\mathcal{T}\) is well-defined. For a given \(\mathbf{u}^{*}\in X\) and a given initial datum \(f_{0}\), the Vlasov equation (2.20) is uniquely solvable (see Lemma 2.10 below for details). Having solved (2.20) for \(f(\mathbf{u}^{*})\), one gathers that the corresponding local density \(\rho\in L^{\infty}\) (see Lemma 2.3) and the corresponding momentum \(\rho V\in L^{2}\) (see Lemma 2.5). Hence, the classical theory for the Stokes' problem [1] yields a unique solution \(\mathbf{u}\in X\) of the problem (2.21). Thus, the map \(\mathcal{T}:X\to X\) that takes \(\mathbf{u}^{*}\) to \(\mathcal{T}(\mathbf{u}^{*})=\mathbf{u}\) is well-defined.

Our next step in the proof is to show that \(\mathcal{T}\) is a contraction map, which is demonstrated in Lemma 2.11 below. Therefore, an application of the Banach fixed-point theorem ensures the existence of a unique solution \((f,\mathbf{u})\) in a short time interval \((0,T^{0})\). As the solution \((f,\mathbf{u})\) stays bounded at \(t=T^{0}\), thanks to the a priori estimates, we can employ a continuation argument to extend the interval of existence up to \((0,T]\). As \(T\) is arbitrary, we get global-in-time well-posedness of our system.

Next we deal with Lemmata 2.10 and 2.11, which play a crucial role in the above proof.

**Lemma 2.10**.: _Let \(\mathbf{u}^{*}\in X\) and let \(f_{0}\in L^{1}(\Omega_{x}\times\mathbb{R}^{3})\cap L^{\infty}(\Omega_{x}\times\mathbb{R}^{3})\).
Then, there exists a unique solution \(f\in L^{\infty}(0,T;L^{1}(\Omega_{x}\times\mathbb{R}^{3})\cap L^{\infty}(\Omega_{x}\times\mathbb{R}^{3}))\) to (2.20)._

Proof.: Note that (2.20) can be rewritten as

\[\partial_{t}f+b\cdot\nabla_{x,v}f-3f=0,\]

where \(b=(v,\mathbf{u}^{*}-v)\), which lies in

\[L^{1}(0,T;H^{1}(\Omega_{x}\times(-K,K)^{3})),\quad 0<K<\infty.\]

Note that \(\operatorname{div}_{x,v}b=-3\in L^{\infty}((0,T)\times\Omega_{x}\times\mathbb{R}^{3})\). Furthermore, \(|b|/(1+|v|)\) is bounded. This setting falls within the scope of the general results in [10]. In particular, we can apply [10, Corollaries II-1 and II-2, p.518] to arrive at the existence of the unique solution.

**Lemma 2.11**.: _The map \(\mathcal{T}\) defined by (2.20) and (2.21) is a contraction map._

Proof.: Take \(\mathbf{u}^{*}_{1},\mathbf{u}^{*}_{2}\in X\). Let \(f_{i}\) be the unique solution to (2.20) for a given \(\mathbf{u}^{*}_{i}\in X\). Define \(\bar{\mathbf{u}}=\mathbf{u}_{1}-\mathbf{u}_{2}\), \(\bar{\mathbf{u}}^{*}=\mathbf{u}^{*}_{1}-\mathbf{u}^{*}_{2}\) and \(\bar{f}=f_{1}-f_{2}\). Then, from (2.20)-(2.21), we find that

\[\bar{f}_{t}+v\cdot\nabla_{x}\bar{f}+\nabla_{v}\cdot\left(\bar{\mathbf{u}}^{*}f_{1}+\mathbf{u}^{*}_{2}\bar{f}-v\bar{f}\right)=0, \tag{2.22}\]

and

\[\begin{cases}\partial_{t}\bar{\mathbf{u}}-\Delta_{x}\bar{\mathbf{u}}+\nabla_{x}\bar{p}=\int_{\mathbb{R}^{3}}\left(v\bar{f}-\mathbf{u}_{2}\bar{f}-\bar{\mathbf{u}}f_{1}\right)\,\mathrm{d}v,\\ \nabla_{x}\cdot\bar{\mathbf{u}}=0\end{cases} \tag{2.23}\]

with initial data

\[\bar{f}(0,x,v)=0,\qquad\bar{\mathbf{u}}(0,x)=0.\]

Stokes' regularity [11, 1] yields

\[\|\bar{\mathbf{u}}\|_{X}^{2}\leq C\,\left\|\int_{\mathbb{R}^{3}}\left(v\bar{f}-\mathbf{u}_{2}\bar{f}-\bar{\mathbf{u}}f_{1}\right)\,\mathrm{d}v\right\|_{L^{2}((0,T)\times\Omega_{x})}^{2}. \tag{2.24}\]

Now, the Hölder inequality followed by the Sobolev imbedding shows

\[\begin{split}\left\|\int_{\mathbb{R}^{3}}\left(v\bar{f}-\mathbf{u}_{2}\bar{f}-\bar{\mathbf{u}}f_{1}\right)\,\mathrm{d}v\right\|_{L^{2}([0,T]\times\Omega_{x})}\leq\left\|\int_{\mathbb{R}^{3}}v\bar{f}\,\mathrm{d}v\right\|_{L^{2}([0,T]\times\Omega_{x})}\\ +T^{\frac{1}{6}}\|\mathbf{u}_{2}\|_{X}\left\|\int_{\mathbb{R}^{3}}\bar{f}\,\mathrm{d}v\right\|_{L^{3}([0,T]\times\Omega_{x})}+T^{\frac{1}{2}}\|\bar{\mathbf{u}}\|_{X}\|m_{0}f_{1}\|_{L^{\infty}(0,T;L^{3}(\Omega_{x}))}.\end{split} \tag{2.25}\]

For a sufficiently small \(T>0\), there holds

\[C\,T\|m_{0}f_{1}\|_{L^{\infty}(0,T;L^{3}(\Omega_{x}))}^{2}\leq\frac{1}{2}. \tag{2.26}\]

Hence for such a choice of \(T\), we obtain

\[\|\bar{\mathbf{u}}\|_{X}^{2}\leq C\left\|\int_{\mathbb{R}^{3}}v\bar{f}\,\mathrm{d}v\right\|_{L^{2}([0,T]\times\Omega_{x})}^{2}+C\|\mathbf{u}_{2}\|_{X}^{2}\left\|\int_{\mathbb{R}^{3}}\bar{f}\,\mathrm{d}v\right\|_{L^{3}([0,T]\times\Omega_{x})}^{2}. \tag{2.27}\]

Now, a similar calculation as in the proof of Lemma 2.9 implies

\[\left\|\int_{\mathbb{R}^{3}}v\bar{f}\,\mathrm{d}v\right\|_{L^{2}([0,T]\times\Omega_{x})}^{2}\leq C\int_{0}^{T}\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\,|v|^{5}|\bar{f}|\,\mathrm{d}x\,\mathrm{d}v\,\mathrm{d}t, \tag{2.28}\]

and

\[\left\|\int_{\mathbb{R}^{3}}\bar{f}\,\mathrm{d}v\right\|_{L^{3}([0,T]\times\Omega_{x})}^{2}\leq C\int_{0}^{T}\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\,|v|^{6}|\bar{f}|\,\mathrm{d}x\,\mathrm{d}v\,\mathrm{d}t.
\tag{2.29}\]

Multiply equation (2.22) by \(|v|^{k}\frac{\bar{f}}{\sqrt{\bar{f}^{2}+\delta}}\) with \(k\geq 1\) and \(\delta>0\) to obtain

\[|v|^{k}\partial_{t}\left(\sqrt{\bar{f}^{2}+\delta}\right)+|v|^{k}\,v\cdot\nabla_{x}\left(\sqrt{\bar{f}^{2}+\delta}\right)+|v|^{k}\frac{\bar{f}}{\sqrt{\bar{f}^{2}+\delta}}\bar{\boldsymbol{u}}^{*}\cdot\nabla_{v}f_{1}+|v|^{k}\boldsymbol{u}_{2}^{*}\cdot\nabla_{v}\left(\sqrt{\bar{f}^{2}+\delta}\right)-|v|^{k}\,\frac{\bar{f}}{\sqrt{\bar{f}^{2}+\delta}}\,\nabla_{v}\cdot\left(v\bar{f}\right)=0.\]

An integration with respect to \(x,v\) shows

\[\partial_{t}\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}|v|^{k}\left(\sqrt{\bar{f}^{2}+\delta}\right)\,\mathrm{d}x\,\mathrm{d}v-k\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\,|v|^{k-2}\,\frac{\bar{f}}{\sqrt{\bar{f}^{2}+\delta}}\,f_{1}\,\bar{\boldsymbol{u}}^{*}\cdot v\,\mathrm{d}x\,\mathrm{d}v-\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\,|v|^{k}\,f_{1}\,\bar{\boldsymbol{u}}^{*}\cdot\nabla_{v}\left(\frac{\bar{f}}{\sqrt{\bar{f}^{2}+\delta}}\right)\,\mathrm{d}x\,\mathrm{d}v+k\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\,|v|^{k}\,\frac{\bar{f}^{2}}{\sqrt{\bar{f}^{2}+\delta}}\,\mathrm{d}x\,\mathrm{d}v-k\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\,|v|^{k-2}\,\left(\sqrt{\bar{f}^{2}+\delta}\right)\,\boldsymbol{u}_{2}^{*}\cdot v\,\mathrm{d}x\,\mathrm{d}v+\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\,|v|^{k}\,\bar{f}\,\nabla_{v}\left(\frac{\bar{f}}{\sqrt{\bar{f}^{2}+\delta}}\right)\cdot v\,\mathrm{d}x\,\mathrm{d}v=0.\]

A use of the Sobolev inequality with integration in time yields

\[\begin{split}\sup_{t\in[0,T]}\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\,|v|^{k}\left(\sqrt{\bar{f}^{2}+\delta}\right)\,\mathrm{d}x\,\mathrm{d}v+k\int_{0}^{T}\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}|v|^{k}\frac{\bar{f}^{2}}{\sqrt{\bar{f}^{2}+\delta}}\,\mathrm{d}x\,\mathrm{d}v\,\mathrm{d}t\\ \leq\int_{0}^{T}k\|\bar{\boldsymbol{u}}^{*}\|_{\boldsymbol{H}^{2}}\|m_{k-1}f_{1}\|_{L^{1}(\Omega_{x})}\,\mathrm{d}t+|T_{k}^{1}|+|T_{k}^{2}|+\int_{0}^{T}k\|\boldsymbol{u}_{2}^{*}\|_{\boldsymbol{H}^{2}}\left\|\int_{\mathbb{R}^{3}}|v|^{k-1}\left(\sqrt{\bar{f}^{2}+\delta}\right)\,\mathrm{d}v\right\|_{L^{1}(\Omega_{x})}\,\mathrm{d}t\\ \leq T^{\frac{1}{2}}\|\bar{\boldsymbol{u}}^{*}\|_{X}\|m_{k-1}f_{1}\|_{L^{\infty}(0,T;L^{1}(\Omega_{x}))}+|T_{k}^{1}|+|T_{k}^{2}|+T^{\frac{1}{2}}\|\boldsymbol{u}_{2}^{*}\|_{X}\left\|\int_{\mathbb{R}^{3}}|v|^{k-1}\left(\sqrt{\bar{f}^{2}+\delta}\right)\,\mathrm{d}v\right\|_{L^{\infty}(0,T;L^{1}(\Omega_{x}))}.\end{split} \tag{2.30}\]

Here

\[T_{k}^{1}=\int_{0}^{T}\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}|v|^{k}\,f_{1}\,\bar{\boldsymbol{u}}^{*}\cdot\nabla_{v}\left(\frac{\bar{f}}{\sqrt{\bar{f}^{2}+\delta}}\right)\,\mathrm{d}x\,\mathrm{d}v\,\mathrm{d}t=\int_{0}^{T}\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\,|v|^{k}\,f_{1}\,\bar{\boldsymbol{u}}^{*}\cdot\frac{\delta\,\nabla_{v}\bar{f}}{\left(\bar{f}^{2}+\delta\right)^{\frac{3}{2}}}\,\mathrm{d}x\,\mathrm{d}v\,\mathrm{d}t\]

and

\[T_{k}^{2}=\int_{0}^{T}\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}|v|^{k}\nabla_{v}\left(\frac{\bar{f}}{\sqrt{\bar{f}^{2}+\delta}}\right)\cdot v\bar{f}\,\mathrm{d}x\,\mathrm{d}v\,\mathrm{d}t=\int_{0}^{T}\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\,|v|^{k}\frac{\delta\,\nabla_{v}\bar{f}}{\left(\bar{f}^{2}+\delta\right)^{\frac{3}{2}}}\cdot v\bar{f}\,\mathrm{d}x\,\mathrm{d}v\,\mathrm{d}t.\]

As \(\bar{f}\in L^{1}(0,T;L^{\infty}(\Omega_{x}\times\mathbb{R}^{3}))\) and as the fifth order velocity moments of \(|\nabla_{v}\bar{f}|^{2}\) and \(\bar{f}\) are bounded (see Lemma
2.9), \(|T_{k}^{1}|\to 0\) and \(|T_{k}^{2}|\to 0\) as \(\delta\to 0\) for \(k=1,2,3,4,5,6\). Next, we multiply equation (2.22) by \(\frac{\bar{f}}{\sqrt{\bar{f}^{2}+\delta}}\) and integrate with respect to the \(x,v\) and \(t\) variables to obtain

\[\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\,\sqrt{\bar{f}^{2}+\delta}\,\mathrm{d}x\,\mathrm{d}v-\int_{0}^{T}\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\,\nabla_{v}\left(\frac{\bar{f}}{\sqrt{\bar{f}^{2}+\delta}}\right)\cdot\left(\bar{\mathbf{u}}^{*}f_{1}+\mathbf{u}_{2}^{*}\bar{f}-v\bar{f}\right)\,\mathrm{d}x\,\mathrm{d}v=\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\,\sqrt{\bar{f}^{2}(0,x,v)+\delta}\,\mathrm{d}x\,\mathrm{d}v.\]

Note that \(\bar{f}(0,x,v)=0\) and \(\nabla_{v}\left(\frac{\bar{f}}{\sqrt{\bar{f}^{2}+\delta}}\right)=\frac{\delta\,\nabla_{v}\bar{f}}{\left(\bar{f}^{2}+\delta\right)^{\frac{3}{2}}}\). Hence, arguing as we did with the \(T_{k}^{1}\) and \(T_{k}^{2}\) terms, in the \(\delta\to 0\) limit, the above equation yields

\[\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}|\bar{f}|\,\mathrm{d}x\,\mathrm{d}v=0. \tag{2.31}\]

Using the recurrence relation in (2.30), we arrive at

\[\begin{split}\int_{0}^{T}\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}|v|^{5}\,|\bar{f}|\,\mathrm{d}x\,\mathrm{d}v\,\mathrm{d}t\lesssim T^{\frac{1}{2}}\|\bar{\mathbf{u}}^{*}\|_{X}\left(\|m_{4}f_{1}\|_{L^{\infty}(0,T;L^{1}(\Omega_{x}))}+\|\mathbf{u}_{2}^{*}\|_{X}\|m_{3}f_{1}\|_{L^{\infty}(0,T;L^{1}(\Omega_{x}))}+\|\mathbf{u}_{2}^{*}\|_{X}^{2}\|m_{2}f_{1}\|_{L^{\infty}(0,T;L^{1}(\Omega_{x}))}\right.\\ \left.+\|\mathbf{u}_{2}^{*}\|_{X}^{3}\|m_{1}f_{1}\|_{L^{\infty}(0,T;L^{1}(\Omega_{x}))}+\|\mathbf{u}_{2}^{*}\|_{X}^{4}\|m_{0}f_{1}\|_{L^{\infty}(0,T;L^{1}(\Omega_{x}))}\right),\end{split} \tag{2.32}\]

and

\[\begin{split}\int_{0}^{T}\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}|v|^{6}\,|\bar{f}|\,\mathrm{d}x\,\mathrm{d}v\,\mathrm{d}t\lesssim T^{\frac{1}{2}}\|\bar{\mathbf{u}}^{*}\|_{X}\left(\|m_{5}f_{1}\|_{L^{\infty}(0,T;L^{1}(\Omega_{x}))}+\|\mathbf{u}_{2}^{*}\|_{X}\|m_{4}f_{1}\|_{L^{\infty}(0,T;L^{1}(\Omega_{x}))}+\|\mathbf{u}_{2}^{*}\|_{X}^{2}\|m_{3}f_{1}\|_{L^{\infty}(0,T;L^{1}(\Omega_{x}))}\right.\\ \left.+\|\mathbf{u}_{2}^{*}\|_{X}^{3}\|m_{2}f_{1}\|_{L^{\infty}(0,T;L^{1}(\Omega_{x}))}+\|\mathbf{u}_{2}^{*}\|_{X}^{4}\|m_{1}f_{1}\|_{L^{\infty}(0,T;L^{1}(\Omega_{x}))}+\|\mathbf{u}_{2}^{*}\|_{X}^{5}\|m_{0}f_{1}\|_{L^{\infty}(0,T;L^{1}(\Omega_{x}))}\right).\end{split} \tag{2.33}\]

Using (2.28), (2.29), (2.32) and (2.33) in (2.27), while employing (2.17) to handle the \(m_{k}f_{1}\) terms, for sufficiently small \(T>0\) we obtain

\[\|\bar{\mathbf{u}}\|_{X}\leq\alpha\|\bar{\mathbf{u}}^{*}\|_{X},\quad\text{for some}\quad\alpha\in(0,1).\]

This shows that \(\mathcal{T}\) is a contraction map.

**Remark 2.12**.: _In [13], the author treats the difference \(\overline{f}:=f_{1}-f_{2}\) as non-negative (see, in particular, the two inequalities at the end of page 290 in [13]). This is a misstep and the above proof fixes that. The correct versions of those two inequalities in three dimensions are given above (see (2.28) and (2.29)). Furthermore, in our above analysis, we encountered the terms \(T_{k}^{1}\) and \(T_{k}^{2}\). Understanding their behaviour in the \(\delta\to 0\) limit requires the boundedness of the velocity moments associated with the first-order derivatives of the distribution function. Such a bound was established in Lemma 2.9 above.
It is not clear whether one can prove that the map \(\mathcal{T}\) is a contraction under assumptions on the initial datum analogous only to those in [13]._

**Remark 2.13**.: _Hofer in [14] sets up the proof of well-posedness in a fashion similar to the above proof of Theorem 2.1. In our scheme, we solve the Vlasov equation for a fixed \(\mathbf{u}^{*}\), followed by solving the unsteady Stokes' equation with the local density and local momentum associated with the solution \(f(\mathbf{u}^{*})\). In [14], however, the author's scheme is to solve the steady Stokes' equation for a fixed \(f^{*}\in W^{1,\infty}((0,T)\times\mathbb{R}^{3}\times\mathbb{R}^{3})\), followed by solving the Vlasov equation with the fluid velocity \(\mathbf{u}(f^{*})\). The contraction property is demonstrated by analyzing the Vlasov equation. Hence, it goes via the analysis of the characteristics, and the Banach space where the contraction property is established turns out to be \(W^{1,\infty}((0,T)\times\mathbb{R}^{3}\times\mathbb{R}^{3})\)._
This paper deals with the Vlasov-Stokes system in three dimensions under periodic boundary conditions in the spatial variable. We prove the existence of a unique strong solution to this two-phase model under the assumption that initial velocity moments of certain order are bounded. The global-in-time solution is obtained via a fixed point argument.
2308.16861
Facing Unknown: Open-World Encrypted Traffic Classification Based on Contrastive Pre-Training
Traditional Encrypted Traffic Classification (ETC) methods face a significant challenge in classifying large volumes of encrypted traffic in the open-world assumption, i.e., simultaneously classifying the known applications and detecting unknown applications. We propose a novel Open-World Contrastive Pre-training (OWCP) framework for this. OWCP performs contrastive pre-training to obtain a robust feature representation. Based on this, we determine the spherical mapping space to find the marginal flows for each known class, which are used to train GANs to synthesize new flows similar to the known parts but do not belong to any class. These synthetic flows are assigned to Softmax's unknown node to modify the classifier, effectively enhancing sensitivity towards known flows and significantly suppressing unknown ones. Extensive experiments on three datasets show that OWCP significantly outperforms existing ETC and generic open-world classification methods. Furthermore, we conduct comprehensive ablation studies and sensitivity analyses to validate each integral component of OWCP.
Xiang Li, Beibei Feng, Tianning Zang, Shuyuan Zhao, Jingrun Ma
2023-08-31T17:04:20
http://arxiv.org/abs/2308.16861v1
# Facing Unknown: Open-World Encrypted Traffic Classification Based on Contrastive Pre-Training

###### Abstract

Traditional Encrypted Traffic Classification (ETC) methods face a significant challenge in classifying large volumes of encrypted traffic in the open-world assumption, _i.e._, simultaneously classifying the known applications and detecting unknown applications. We propose a novel Open-World Contrastive Pre-training (OWCP) framework for this. OWCP performs contrastive pre-training to obtain a robust feature representation. Based on this, we determine the spherical mapping space to find the marginal flows for each known class, which are used to train GANs to synthesize new flows similar to the known parts but do not belong to any class. These synthetic flows are assigned to Softmax's unknown node to modify the classifier, effectively enhancing sensitivity towards known flows and significantly suppressing unknown ones. Extensive experiments on three datasets show that OWCP significantly outperforms existing ETC and generic open-world classification methods. Furthermore, we conduct comprehensive ablation studies and sensitivity analyses to validate each integral component of OWCP.

Index Terms: Encrypted Traffic Classification, Open-World Assumption, Unknown Applications, Contrastive Pre-Training, Marginal Flows, Generative Adversarial Networks

## I Introduction

Traffic classification, which groups similar or related traffic data by protocol or source, is crucial for ensuring Quality of Service (QoS), Quality of Experience (QoE), network management, Web measurement, and threat detection [1]. In recent years, we have witnessed the rapid development of new network technologies and mobile ecosystems, accompanied by one key evolution, _i.e._, transforming traffic from plaintext to encrypted form. Many websites and mobile applications (apps) now use Transport Layer Security (TLS) to protect privacy and ensure secure communication. According to the Annual Report of Let's Encrypt [2], HTTPS page loads have reached 84% globally. Unfortunately, even malware apps have started using TLS to conceal communications for Command and Control (C&C) and data theft. As reported by Sophos [3], encrypted traffic now accounts for 46% of all malware.

Large amounts of encrypted traffic, particularly those of unknown classes, present a new challenge to traditional Encrypted Traffic Classification (ETC) methods. Most of the existing ETC methods [4, 5] operate under the closed-world assumption, which means that the apps presented in the classification phase must also be present during the model training phase. If an app is invisible during training, it will be misclassified as a known app, as shown in Fig. 1. Enumerating all the apps and collecting their traffic for model training is impossible, as Google Play Store has over 2.6 million available apps [6]. To make matters worse, recent research indicates that app developers widely use Third-Party Libraries (TPLs), leading to homogeneous network behavior [7]. Many app developers use standard repositories and associated domains to implement functions such as authentication, advertising, and analytics, which may increase the risk of false positives. For example, when _TaoBao_ app runs, not all traffic is directly related to itself. Some actions lead to *_.amap.com_ or *_.alach.com_, which may also appear when running new apps, such as _Youku_ app. Moreover, over 3.17 million new malware apps are discovered yearly, many of which do not belong to any known app family [8].
Traditional ETC methods fail to handle the open-world classification task of classifying known apps and detecting unknown apps simultaneously. One potential solution for detecting unknown apps is based on the Softmax output, which provides the predicted probability distribution of a multiclass classification model: if all the dimensional values of the distribution are lower than a threshold probability, the input is considered to belong to an unknown class. This approach has proven successful in the Computer Vision field [9, 10]. However, encrypted traffic presents unique challenges compared to images, such as unreadability and a homogeneity that results in significant overlap between apps, which makes this approach inappropriate for ETC.

Fig. 1: An ETC Model Trained in the Closed-World Scenario cannot Handle the Open-World Classification Task.

To tackle this issue, we propose OWCP, a novel Open-World Contrastive Pre-training framework for ETC that addresses the limitations of existing methods. OWCP provides a robust feature representation by adopting traffic contrastive pre-training. Specifically, we resample the training data by constructing positive and negative flow pairs, bringing positive pairs closer together and pushing negative ones farther apart to optimize a discriminable feature representation. On this basis, we determine the spherical mapping space for each known class to find non-homogeneous flows at the margins, which are used to train Generative Adversarial Networks (GANs) to synthesize new flows that are similar to known flows but do not belong to any known class. The above operations bring two salient advantages: i) any flow that deviates from the distribution of known app classes can be classified as unknown app flow, so that the synthesized marginal flows approach the known classes while falling into the simulated unknown region; ii) by filtering out homogeneous marginal flows that are generated by reusing TPLs, we can focus on unknown flows triggered by the behavior of new apps themselves. Then, the synthetic flows are assigned to the unknown node to modify the classifier, reducing and flattening the recognition probability of unknown flows overall, making the classifier more sensitive to known flows and significantly suppressing unknown ones. Our major contributions can be concluded as follows:

* We propose a novel open-world ETC paradigm that addresses the challenge of unknown classes in real network environments. To the best of our knowledge, OWCP is the first method that uses pre-training to solve the open-world ETC task.
* OWCP discovers non-homogeneous marginal flows in spherical space and synthesizes simulated unknown flows to jointly improve the performance of known classification and unknown detection.
* We evaluate the proposed framework in extensive experiments across three publicly available datasets and demonstrate its superiority over existing ETC models and multiple open-world classification methods.

## II Related work

As machine learning and deep learning become more widely adopted, researchers have increasingly focused on developing ETC methods and have reported high accuracy. In this section, we discuss classification models that rely on different types of features, which can be divided into statistical-feature-based methods and sequence-feature-based methods.

### _Statistical-Feature-Based Methods_

Early work combined statistical features with various traditional machine learning algorithms to solve ETC tasks.
Representatively, Taylor _et al._ [11] first designed burst and flow statistical features and offered a robust app classification method. More recent work captures complex dependencies among statistical features through deep learning. Shi _et al._ [12] built a deep learning framework to select and combine statistical features to enhance the performance of traffic classification. Chen _et al._ [13] exploited attribute features and statistical features to predict the app to which an encrypted flow belongs. However, these methods rely heavily on expert experience, domain knowledge, and substantial human effort.

### _Sequence-Feature-Based Methods_

Encrypted apps leak information about the dependencies and transitions between data messages, which can be captured as sequence features. Korczynski _et al._ [14] proposed representing the message type sequence of TLS handshake phase data with a Markov transition matrix to classify encrypted traffic. Fu _et al._ [15] extracted packet length and time delay sequences to build a Hidden Markov Model, enriching the information available to the model. Meanwhile, researchers have started to explore convolutional neural networks and recurrent neural networks combined with the non-plaintext payload sequence. Wang _et al._ [16] used the first 784 bytes of the payloads to construct an end-to-end classification model. Lotfollahi _et al._ [17] proposed the Deep Packet method and adopted a stacked autoencoder to extract features from encrypted payloads. Most existing ETC methods rely on a closed-world assumption, meaning they can only classify traffic within a static dataset of pre-defined app classes. Under the open-world assumption, these classifiers must be able to detect unknown classes in order to provide accurate traffic classification in support of network measurement and management.

## III The Proposed OWCP

This section provides a formal problem definition for the open-world ETC task and details our proposed OWCP framework. As shown in Fig. 2, we first obtain a pre-trained model using contrastive learning to identify non-homogeneous flows at the margins by mapping them onto a spherical space. A GAN generator is then used to synthesize simulated unknown flows from the distribution of marginal non-homogeneous flows, which support the classifier's training. By adding an unknown node in Softmax, we can improve the performance of classification and detection by lowering and flattening the recognition probabilities of unknown flows while relatively raising those of known flows.

Fig. 2: Overview of OWCP workflow.

#### Iii-A1 Non-Plaintext Payload Sequence

Although TLS encrypts the plaintext, some side-channel information is still leaked from the encrypted payload. To obtain more context-sensitive information, we partition the payload by two bytes, _e.g._, from {1a, 2b, 03, 45, 62, aa, ...} to {1a2b, 0345, 62aa, ...}.

#### Iii-A2 Packet Length Sequence

The packet length is measured in bytes, and we use "+" to indicate packets sent from the client to the server and "-" to indicate packets sent from the server to the client, _e.g._, {+328, -1074, -180, +328, ...}, to represent the bi-directional nature of the flow.

### _Traffic Contrastive Pre-training_

Without loss of generality, we assume \(x\) is one of the bidirectional flows of the training set, which is obtained according to the traffic five-tuple. We filter out biased information from each packet, such as the IP address, port number, MAC address, _etc_.
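For concreteness, the two sequence features of Sections III-A1 and III-A2 can be sketched in a few lines of Python. This is an illustrative reconstruction rather than the authors' code; the packet attributes (`payload`, `length`, `client_to_server`) are hypothetical placeholders for whatever capture parser is used.

```python
def payload_tokens(payload: bytes, max_bytes: int = 64):
    # Partition a packet's non-plaintext payload into two-byte tokens,
    # e.g. bytes 1a 2b 03 45 62 aa -> ['1a2b', '0345', '62aa'].
    h = payload[:max_bytes].hex()
    return [h[i:i + 4] for i in range(0, len(h), 4)]

def length_tokens(packets, max_packets: int = 128):
    # Signed packet lengths: '+' for client->server, '-' for server->client.
    return [("+" if p.client_to_server else "-") + str(p.length)
            for p in packets[:max_packets]]

def flow_input(packets, m: int = 6, n: int = 128):
    # Assemble the CLS + [NP] + SEP + [PL] token sequence used for
    # pre-training (Eq. (1) below).
    np_seq = [tok for p in packets[:m] for tok in payload_tokens(p.payload)]
    return ["CLS"] + np_seq + ["SEP"] + length_tokens(packets, n)
```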
Based on existing works [17], we extract the \(M*64\)-byte non-plaintext payload sequence of \(M\)=6 packets and the packet length sequence of \(N\)=128 packets, which are reconstructed and encoded into a coding dictionary ordered by frequency of occurrence. Four unique markers, CLS, SEP, PAD, and UNK, are added to the dictionary, indicating the start flag of the input, the separator flag between the non-plaintext payload sequence and the packet length sequence, the padding flag, and the unregistered word flag, respectively.

Fig. 3: Non-Plaintext Payload and Packet Length Sequence Features.

Let the Non-plaintext Payload (NP) and Packet Length (PL) sequences after dictionary encoding be \(NP=[np_{0},\ldots,np_{M}]\) and \(PL=[pl_{0},\ldots,pl_{N}]\); the input sequence of \(x\) can then be expressed as

\[x=CLS+[NP]+SEP+[PL] \tag{1}\]

To capture more internal connections, we apply word embedding to \(x\) using the parameter matrix \(\mathrm{W}\in\mathbb{R}^{V\times d}\), which transforms the discrete input sequence \(x\in\mathbb{R}^{(M*64/2+N+2)\times 1}\) into a high-dimensional vector \(x\in\mathbb{R}^{(M*64/2+N+2)\times d}\), where \(V\) denotes the size of \(\mathrm{W}\) and \(d\) is the embedding dimension. Additionally, we incorporate positional encoding information into the embedded vector to enhance its contextual representation [18].

OWCP resamples the training data by constructing positive and negative flow pairs to obtain the pre-training set. Specifically, we randomly select samples \(y^{+}\) and \(y^{-}\), one from the same class as the positive example and the other from a different class as the negative example, respectively. The triple \([x,y^{+},y^{-}]\) is then fed to the multi-headed attention encoder, consisting of multi-headed self-attention and feed-forward neural networks, as shown in Fig. 4, which can be stacked six times to enhance the representation capability. The multi-headed self-attention network encodes contextual information, and the feed-forward neural network, consisting of two linear layers and one ReLU layer, provides nonlinear variation; the sublayers are connected by residual connections and layer normalization.

Fig. 4: Multi-Headed Attention Encoder Structure. Add & Norm Means the Residual Network and Layer Normalization [18].

Meanwhile, we use the InfoNCE loss function to pre-train the multi-headed attention encoder in order to bring the positive example closer and push the negative example further, calculated as follows,

\[\begin{split} L_{CTL}=&-\log\frac{\exp\left(\mathrm{sim}\left(x,y_{i}^{+}\right)/\tau\right)}{\sum_{k=1}^{N}\exp\left(\mathrm{sim}\left(x,y_{k}^{+}\right)/\tau\right)}\\ &-\log\frac{\exp\left(\mathrm{sim}\left(x,y_{i}^{-}\right)/\tau\right)}{\sum_{k=1}^{N}\exp\left(\mathrm{sim}\left(x,y_{k}^{-}\right)/\tau\right)}\end{split} \tag{2}\]

where \(\mathrm{sim}(\cdot,\cdot)\) denotes the similarity between encoder outputs and \(\tau\) denotes the temperature parameter used to control the shape of the distribution of logits.
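A minimal PyTorch sketch of a contrastive objective in this spirit is given below. It uses the common in-batch InfoNCE formulation with cosine similarity, which differs in detail from the two-term form of Eq. (2); it is an assumption-laden illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor: torch.Tensor, pos: torch.Tensor, neg: torch.Tensor,
             tau: float = 0.07) -> torch.Tensor:
    # anchor, pos, neg: (batch, dim) encoder outputs for the triples [x, y+, y-].
    a = F.normalize(anchor, dim=-1)
    sim_pos = a @ F.normalize(pos, dim=-1).T / tau   # cosine sim(x, y+) logits
    sim_neg = a @ F.normalize(neg, dim=-1).T / tau   # cosine sim(x, y-) logits
    logits = torch.cat([sim_pos, sim_neg], dim=1)    # positives sit in the first block
    labels = torch.arange(a.size(0), device=a.device)
    # Cross-entropy against the matching positive pulls y+ closer and pushes
    # all other candidates, including y-, farther away.
    return F.cross_entropy(logits, labels)
```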
### _Synthesize Unknown Flows_

Theoretically, any flow that deviates from the distribution of known app classes can be classified as unknown app flow [19]. We first discover the marginal flows with discriminative attributes in each known class using spherical decision boundaries. Specifically, we use the pre-trained multi-headed attention encoder \(\theta\) as the feature extractor for known app classes and determine the spherical center by computing the average feature vector of each class. Let \(A_{k}\) denote the set of flows labeled with class \(k\). The centroid \(c_{k}\) is the mean vector of flows in \(A_{k}\),

\[\mathbf{c}_{k}=\frac{1}{\left|A_{k}\right|}\sum_{(x_{i},l_{i})\in A_{k}}\theta\left(\mathbf{x}_{i}\right) \tag{3}\]

where \(\left|A_{k}\right|\) denotes the number of flows in \(A_{k}\). We define \(\Delta_{k}\) as the radius of the decision boundary with respect to the centroid. For each flow of \(A_{k}\), we aim to satisfy the constraint

\[\forall\mathbf{x}_{i}\in A_{k},\left\|\theta\left(\mathbf{x}_{i}\right)-\mathbf{c}_{k}\right\|_{2}\leq\Delta_{k} \tag{4}\]

where \(\left\|\theta\left(\mathbf{x}_{i}\right)-\mathbf{c}_{k}\right\|_{2}\) denotes the Euclidean distance between \(\theta\left(\mathbf{x}_{i}\right)\) and \(c_{k}\). We collect the marginal flow set \(\mathcal{D}^{\prime}=\left\{x_{0},\ldots,x_{m}\right\}\), where each instance is located at or close to the margin of \(\Delta_{k}\), _i.e._, \(\left(\Delta_{k}-\left\|\theta\left(\mathbf{x}_{i}\right)-\mathbf{c}_{k}\right\|_{2}\right)<\varepsilon\), and \(m\ll n\). Some marginal flows in \(\mathcal{D}^{\prime}\) are homogeneous flows generated by the reuse of TPLs, which is inconsistent with our goal of focusing on the behavior of unknown apps themselves. We therefore filter a homogeneous marginal flow out of \(\mathcal{D}^{\prime}\) based on background similarity if either of the following conditions is satisfied:

* The {destination IP address, destination port} tuple of the marginal flow appears in multiple classes.
* The {SNI} or {TLS Certificate} of the marginal flow appears in multiple classes.

Inspired by GANs [20], flow generation is achieved without explicitly modeling probability densities. We use a generator \(Gen\) that samples a latent variable \(z\) from a prior distribution, _e.g._, a Gaussian \(\mathcal{N}\), as the input to generate an output \(Gen(z)\). Meanwhile, a discriminator \(Dis\) is trained to distinguish whether an input \(x\) is from the target data distribution by mapping \(x\in\mathcal{D}^{\prime}\) or \(Gen(z)\) to a probability in \([0,1]\). \(Gen\) aims to synthesize simulated flows as accurately as possible while \(Dis\) is frozen, whereas \(Dis\) aims to distinguish real from synthetic flows while \(Gen\) is frozen; the two contest with each other in a zero-sum game. The generation of simulated unknown flows \(\mathcal{D}^{u}=\{Gen(z)\}\) can be optimized by a min-max objective of the compact form

\[\min_{Gen}\max_{Dis}\mathbb{E}_{x\in\mathcal{D}^{\prime}}[\log Dis(x)]+\mathbb{E}_{z\in\mathcal{N}}[\log(1-Dis(Gen(z)))] \tag{5}\]

### _Open-world Traffic Classification_

We construct the classification model \(\delta\), which adds a fully connected layer adapted to the classification task, initialized with the parameters of the pre-trained \(\theta\) and fine-tuned on \(\mathcal{D}\) and \(\mathcal{D}^{u}\). Correspondingly, we add an unknown decision node in the Softmax layer to implicitly modify \(\delta\): the recognition probability of unknown flows is reduced and flattened overall, making the classifier relatively more sensitive to known flows while significantly suppressing unknown ones. We use a modified two-stage recognition to provide the final classification results as follows,

\[\hat{l}=\begin{cases}\operatorname*{argmax}_{l\in\{0,\ldots,k\}}P(l\mid\delta(x))&\text{ if }P(l\mid\delta(x))\geq\sigma\\ unknown&\text{otherwise.}\end{cases} \tag{6}\]

where \(P(l\mid\delta(x))\) is the output of the Softmax layer, and \(\sigma\) is a hyperparameter threshold that can be selected by a grid search calibration procedure using a set of training flows plus a sampling of open-world flows.
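A minimal sketch of this decision rule follows, assuming (as an illustration) that the classifier's last output index is the explicit unknown node trained on the GAN-synthesized flows.

```python
import torch

@torch.no_grad()
def predict_open_world(model, x, sigma: float = 0.7):
    # Two-stage rule of Eq. (6): keep the arg-max class only if its softmax
    # probability clears the calibrated threshold sigma; otherwise report
    # the unknown class (here taken to be the last node).
    probs = torch.softmax(model(x), dim=-1)      # (batch, K+1) incl. unknown node
    conf, label = probs.max(dim=-1)
    label[conf < sigma] = probs.size(-1) - 1     # route low-confidence flows to unknown
    return label
```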
## IV Experiment

### _Experiment Setting_

#### Iv-A1 Dataset Description

To comprehensively evaluate the performance of our proposed OWCP, we conduct experiments on three publicly available datasets:

* CrossPlatform [21] consists of traffic from popular apps on the Android platform in China, the United States, and India, with a total of 215 classes.
* ISCX17 [22] includes 7 types of traffic for VPN and non-VPN communications, combined by apps, resulting in 17 different classes.
* USTC-TFC [23] contains 10 classes of benign traffic and 10 classes of malicious traffic.

We randomly select 80% of the apps from each dataset as known classes, while the remaining apps are treated as unknown classes. The data from known classes are randomly divided into training, validation, and test sets with a ratio of 8:1:1. The test set of the known classes is used as the CW-test set. All data from unknown classes are added to the CW-test set to form the OW-test set. The detailed dataset settings are provided in Table I.

#### Iv-A2 Implementation Details and Evaluation Metrics

All experiments are conducted on a server with 128GB RAM, an Intel(R) i7-8700 CPU, and NVIDIA 3090 GPUs, implemented with PyTorch 1.10.0. The multi-head attention encoder contains 8 heads, with a vector dimension of 64 for \(q_{i}\), \(k_{i}\), \(v_{i}\), and the feed-forward neural network has 1024 neurons. We use the BertAdam optimizer with a learning rate of 5e-5 and a warmup of 0.03. To select the hyperparameter \(\sigma\), we perform a grid search calibration procedure with cross-validation (\(\sigma=0.7\) in this paper). We evaluate and compare the performance by the closed-world metrics, including Accuracy (AC) and F1 score [4]. Following [9, 10], we also use open-world metrics (AC\({}_{ow}\), F1\({}_{ow}\)) to measure the performance; _e.g._, AC\({}_{ow}\) is defined as

\[AC_{ow}=(\frac{\mathrm{TP}_{(K)}}{\mathrm{TP}_{(K)}+\mathrm{FN}_{(K)}}+\frac{\mathrm{TP}_{(U)}}{\mathrm{TP}_{(U)}+\mathrm{FN}_{(U)}})/2 \tag{7}\]

where \(TP_{(K/U)}\) and \(FN_{(K/U)}\) are the true positives and false negatives for known/unknown classes, respectively. Macro averaging [7] is used to avoid results biased by imbalance between the classes.

### _Comparison with Existing Methods_

To get a comprehensive understanding of OWCP's performance, we compare it with four ETC methods and two generic open-world classification methods:

* Deep Fingerprinting (DF) [1] and FlowPrint [7], which both support open-world ETC.
* Fs-Net [4] and PERT [5] (with pre-training), which are closed-world ETC methods.
* Pre-trained encoders equipped with generic open-world classification methods, namely PT-OM (with Openmax [9]) and PT-ST (with thresholding Softmax [10]), which are fine-tuned only on the training set.

We perform closed-world and open-world experiments on the three datasets, according to which scenarios each comparison method supports.
As can be seen from Table II (focusing on the closed world) and Table III (focusing on the open world), OWCP outperforms all methods. In the closed-world scenario, our model achieves 3.85% and 7.06% improvements in F1 over existing methods, _e.g._, FlowPrint and PERT, on CrossPlatform, respectively, indicating that OWCP achieves a strong feature representation through contrastive pre-training. Meanwhile, in the open-world scenario, OWCP achieves up to a 14.58% improvement in F1\({}_{ow}\) over the best open-world ETC model, FlowPrint, on CrossPlatform. Our recognition paradigm also improves by 2.18% and 2.36% in F1\({}_{ow}\) compared to the two generic open-world solutions. Furthermore, a joint analysis of the two tables shows that OWCP declines by only 1.17% on ISCX17 and 1.59% on CrossPlatform when challenged with unknown apps, demonstrating that our model can cope with the unknowns that arise in real network environments.

### _Ablation Study_

We present ablation results in Table IV to evaluate the contribution of each component on the widely compared CrossPlatform. NP and PL refer to the non-plaintext payload and packet length sequences, respectively, and are used to assess the impact of the different sequence features. The decreases of 2.17%\(\downarrow\) and 0.87%\(\downarrow\) in F1, and 2.75%\(\downarrow\) and 1.25%\(\downarrow\) in F1\({}_{ow}\), for the w/o NP and w/o PL models, respectively, suggest that both sequence features are beneficial for classification. Additionally, the effect of NP is superior to that of PL on CrossPlatform. CPT and BSF denote the contrastive pre-training and background similarity filtering, respectively. We remove the pre-trained model to evaluate the impact of contrastive pre-training. For the w/o CPT model, the loss of pre-training dampens classification performance, most notably a 3.03%\(\downarrow\) drop in F1\({}_{ow}\) in the open-world scenario, indicating that discovering marginal flows without scene knowledge is unreliable and leads to unknown-class recognition even worse than thresholding Softmax (0.85%\(\downarrow\)). We also evaluated the model without background similarity filtering (w/o BSF) and found that not filtering homogeneous marginal flows has a more significant impact on known classes. In the closed-world scenario, F1 decreased by 1.15%, indicating that using homogeneous marginal flows to simulate unknown ones leads to misjudgment of homogeneous flows of known classes.

### _Sensitivity Analysis_

We also conduct a sensitivity analysis to investigate the effect of the percentage of unknown classes on CrossPlatform. We sequentially select the percentage of unknown classes from [10%, 20%, 30%, 40%, 50%] and observe the variation of the closed-world and open-world metrics, as shown in Fig. 5. As the percentage of unknown classes increases, there is a decreasing trend in F1\({}_{ow}\) in the open-world scenario and an increasing trend in F1 in the closed-world scenario. However, the gain from the decline in known classes is smaller than the loss in the open-world scenario, which matches our perception of real network environments. The percentage of unknown classes should be kept within a reasonable range; once it exceeds 30%, the impact becomes non-negligible.

## V Conclusion

In this paper, we propose a novel Open-World Contrastive Pre-training (OWCP) framework for ETC, which can effectively classify the known apps and detect the unknowns simultaneously.
OWCP provides a robust feature representation by traffic contrastive pre-training. On this basis, we determine the spherical mapping space for each known class to find non-homogeneous flows at the margins, which are used to train GANs to synthesize new flows that are close to the known classes but belong to none of them, serving as simulated unknown flows. The synthetic flows are then assigned to the unknown node of Softmax to modify the classifier, reducing and flattening the recognition probability of unknown flows overall, making the classifier more sensitive to known flows and significantly suppressing unknown ones. Our proposed method is evaluated in extensive experiments on three publicly available datasets, which show that OWCP outperforms existing closed-world/open-world ETC methods and the generic open-world classification methods, with F1 and F1\({}_{ow}\) reaching 96.21% and 94.62%. Furthermore, extensive ablation studies prove that the payload/packet length sequences, contrastive pre-training, and background similarity filtering contribute significantly to the performance improvements of OWCP. Meanwhile, the sensitivity analysis also shows that the percentage of unknown classes should be kept within a reasonable range; once it exceeds 30%, the impact becomes non-negligible.
Traditional Encrypted Traffic Classification (ETC) methods face a significant challenge in classifying large volumes of encrypted traffic under the open-world assumption, that is, simultaneously classifying known applications and detecting unknown ones. To address this, we propose a novel Open-World Contrastive Pre-training (OWCP) framework. OWCP performs contrastive pre-training to obtain a robust feature representation. Based on this representation, we determine a spherical mapping space for each known class and locate its marginal flows, which are used to train GANs to synthesize new flows that resemble the known parts but belong to no known class. These synthetic flows are assigned to Softmax's unknown node to modify the classifier, raising its sensitivity to known flows while significantly suppressing unknown ones.
2306.00211
Astrophysical foreground cleanup using non-local means
To create high-fidelity cosmic microwave background maps, current component separation methods rely on availability of information on different foreground components, usually through multi-band frequency coverage of the instrument. Internal linear combination (ILC) methods provide unbiased estimators for CMB which are easy to implement, but component separation quality crucially depends on the signal to noise ratio of the input maps. In the present paper, we develop an efficient non-linear filter along the lines of non-local means used in digital imaging research which significantly improves signal to noise ratio for astrophysical foreground maps, while having minimal signal attenuation, and evaluate its performance in map and spectral domains. Noise reduction is achieved by averaging ``similar'' pixels in the map. We construct the rotationally-invariant feature vector space and compute the similarity metric on it for the case of non-Gaussian signal contaminated by an additive Gaussian noise. The proposed filter has two tuneable parameters, and with minimal tweaking achieves a factor of two improvement in signal to noise spectral density in Planck dust maps. A particularly desirable feature is that signal loss is extremely small at all scales.
Guillermo F. Quispe Peña, Andrei V. Frolov
2023-05-31T22:01:19
http://arxiv.org/abs/2306.00211v1
# Astrophysical foreground cleanup using non-local means

###### Abstract

Context: To create high-fidelity cosmic microwave background maps, current component separation methods rely on the availability of information on different foreground components, usually through multi-band frequency coverage of the instrument. Internal linear combination (ILC) methods provide unbiased estimators for the CMB which are easy to implement, but the component separation quality crucially depends on the signal to noise ratio of the input maps. In the present paper, we describe a non-linear filter which significantly improves the signal to noise ratio of astrophysical foreground maps, while having minimal signal attenuation.

Aims: We develop an efficient non-linear filter along the lines of non-local means used in digital imaging research which is suitable (and fast enough) for application to full resolution Planck foreground maps, and evaluate its performance in map and spectral domains.

Methods: Noise reduction is achieved by averaging "similar" pixels in the map. We construct the rotationally-invariant feature vector space and compute the similarity metric on it for the case of non-Gaussian signal contaminated by an additive Gaussian noise.

Results: The proposed filter has two tuneable parameters, and with minimal tweaking achieves a factor of two improvement in signal to noise spectral density in Planck dust maps. A particularly desirable feature is that signal loss is extremely small at all scales.

## 1 Introduction

The Cosmic Microwave Background (CMB) provides essential information about all the epochs of our Universe and plays a fundamental role in understanding its structure and dynamical evolution, e.g. see the recent overview by Planck Collaboration et al. (2020). Precise measurements and theoretical predictions enable an accurate reconstruction of CMB sky maps that aid in measuring cosmological parameters (Bond et al., 1994; Planck Collaboration et al., 2020) and constraining various physical phenomena (Planck Collaboration et al., 2020). The mapping of small anisotropies in intensity and polarization of the CMB has had the most significant impact, providing stringent constraints on models of the early Universe (Hinshaw et al., 2013; Planck Collaboration et al., 2020). In recent decades, an important objective of CMB experiments has been the measurement of CMB polarization, with a particular focus on detecting curl modes known as \(B\)-modes, e.g. as discussed by Wolz et al. (2023). The detection of these modes would carry significant implications, as they could potentially provide evidence of primordial gravitational waves and enhance our understanding of the early Universe (Crittenden et al., 1993). However, observing the CMB is challenging due to the presence of local contamination from various astrophysical sources, collectively referred to as CMB foregrounds. Some of these foreground emissions exhibit polarization, including \(B\)-modes, which introduce contamination into our observations of the primary CMB \(B\)-modes (Planck Collaboration et al., 2020; Ade et al., 2021). Consequently, a crucial step in analyzing CMB data involves effectively separating the (polarized) foreground emissions from the overall observed sky signal in order to retrieve valuable information from the CMB (Leach et al., 2008; Planck Collaboration et al., 2020). This could be accomplished either at the map level or at the anisotropy spectrum level of data reduction.
The characterization of astrophysical components is currently based on their frequency dependence, which enables us to effectively separate them and obtain clean maps of the CMB. Recent advancements in sensitivity and frequency coverage of CMB experiments have led to significant progress in component separation techniques, which can be broadly categorized into maximal likelihood estimators, usually Gibbs samplers (Wandelt et al., 2004; Eriksen et al., 2008), and unbiased linear estimators, usually referred to as internal linear combination (ILC) in the CMB literature (Martinez-Gonzalez et al., 2003; Delabrouille et al., 2003; Cardoso et al., 2008; Remazeilles et al., 2011). However, even the most sophisticated foreground removal processes cannot completely eliminate instrumental noise and residual foreground contamination from the final data. Accurate frequency modelling of astrophysical foregrounds is essential for maximal likelihood estimators, whereas ILC estimators could potentially be biased by the noise. As any inaccuracies may introduce biases in the estimation of cosmological parameters, it is crucial to continually improve the characterization of foreground components and enhance their signal-to-noise ratios to ensure reliable and accurate CMB analysis. In this paper, we present a new method which significantly attenuates the noise while keeping the signal mostly unaffected for strongly non-Gaussian data.

This paper is structured as follows. In Section 2, we introduce a denoising algorithm known as non-local means, initially proposed by Buades et al. (2005). We extend this filter by incorporating covariant functions that capture morphological features of the input map on a sphere, thereby modifying the non-local averaging procedure. The specific set of functions and the criteria for their selection are outlined in Section 3. The application of our filter to Planck thermal dust and CMB maps is presented in Sections 4 and 5, where we discuss the obtained results. Possible extensions to polarization data are considered in Section 6. Finally, in Section 7, we discuss our results. Technical details concerning the calculation of the covariance matrix associated with our chosen feature space estimators can be found in Appendix A. Appendix B contains the derivation of the characteristic parameters for the angular two-point correlation functions of the Gaussian noise models employed in this study. Appendix C describes the pre-processing of the 353GHz maps that we used as test samples.

## 2 Non-local means

Sky emission maps are essentially digital images that can be represented as arrays of real numbers. Each pixel in such an image can be expressed as a pair \((i,d_{i})\), where \(i\) denotes a point on a 2-dimensional grid and \(d_{i}\) represents the associated real value. A common pixelization scheme of the sphere used for most CMB data is HEALPix (Hierarchical Equal Area iso-Latitude Pixelization) by Gorski et al. (2005). The accuracy of digital images is often limited by the presence of noise. In the context of measured sky emission data, this noise can arise from various sources such as photon noise, phonon noise, and glitch residuals. To model the observed data \(d\), we can express it as the sum of the true signal value \(s\) and the noise perturbation \(n\), yielding the equation

\[d=s+n. \tag{1}\]
While noise is typically dependent on instrument properties and scan strategy, a commonly employed approximation in data analysis is to assume a zero-mean additive Gaussian noise model with known covariance. In the simplest and most common case, it is diagonal in pixel space, i.e. the pixel noise is independent, although the variance can vary from pixel to pixel. The signal could either be a Gaussian random field, as is the case for the CMB, or completely non-Gaussian and full of features, as most astrophysical foregrounds are. In order to denoise the image and restore the underlying true signal \(s\), we employ the non-local means denoising method by Buades et al. (2005). This method operates on the noisy image \(d\) and estimates the true value \(s_{i}\) at each pixel \(i\) by computing the mean of the values of all pixels whose neighbourhood exhibits _similarity_ to the neighbourhood of pixel \(i\). In contrast with the usual smoothing, where the averaging weight depends on pixel _proximity_, the assessment of similarity between different pixels in the image is performed by comparing their values, as well as other characteristics of the pixel neighbourhood, the effective size of which can be controlled by the Gaussian smoothing beam

\[\tilde{s}=b*d, \tag{2}\]

where the convolution operation \(*\) is performed using a Gaussian convolution kernel \(b\). The width of the kernel depends on a free smoothing parameter, often specified as the full width at half maximum (FWHM), allowing for control over the level of smoothing applied to the image for feature identification. In the simplest version, the estimated value \(s_{i}\) is given by

\[s_{i}=\frac{\sum_{j}w(\tilde{s}_{i},\tilde{s}_{j})\,d_{j}}{\sum_{j}w(\tilde{s}_{i},\tilde{s}_{j})}, \tag{3}\]

where the sum is over the entire map, and the weight

\[w(\tilde{s}_{i},\tilde{s}_{j})=\exp\left[-\frac{1}{2}\frac{(\tilde{s}_{i}-\tilde{s}_{j})^{2}}{h^{2}}\right] \tag{4}\]

quantifies the similarity between pixels \(i\) and \(j\) by comparing their corresponding values in the Gaussian-smoothed map \(\tilde{s}\). The parameter \(h\) determines the width of the Gaussian kernel in the feature space for filtering the map, and could be defined as

\[h^{2}=\alpha^{2}\text{Var}\,(\delta\tilde{s}), \tag{5}\]

where \(\alpha\) is a user-specified parameter that determines the filtering strength, and \(\delta\tilde{s}\) is the noise component of the smoothed map. As the noise contribution might not be known outright, it could be useful to bootstrap it from the map itself as \(d-\tilde{s}\), and tweak the parameter \(\alpha\) for the desired filter strength.
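A minimal sketch of this simplest filter for a HEALPix map, assuming the `healpy` package, reads as follows. The brute-force loop is written for clarity only; a practical implementation would vectorize, restrict the sum to a neighbourhood, or run on a GPU, as discussed below.

```python
import numpy as np
import healpy as hp

def nonlocal_means(d: np.ndarray, fwhm_arcmin: float = 20.0,
                   alpha: float = 16.0) -> np.ndarray:
    # Basic non-local means of Eqs. (2)-(5): every output pixel is an average
    # of all input pixels, weighted by similarity of the smoothed values.
    s_tilde = hp.smoothing(d, fwhm=np.radians(fwhm_arcmin / 60.0))  # Eq. (2)
    h2 = alpha**2 * np.var(d - s_tilde)   # bootstrap the noise variance, Eq. (5)
    out = np.empty_like(d)
    for i in range(d.size):               # O(Npix^2) -- illustration only
        w = np.exp(-0.5 * (s_tilde[i] - s_tilde)**2 / h2)   # weights, Eq. (4)
        out[i] = np.dot(w, d) / w.sum()                     # estimate, Eq. (3)
    return out
```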
In contrast to just using a smoothed pixel value, we propose a refined method to compute the similarity between Gaussian neighbourhoods by incorporating additional morphological features extracted from the input map \(d\). This is achieved by constructing a feature space, represented by a collection of maps \(\mathcal{F}^{(n)}\) that capture relevant morphological information from \(d\). In the original implementation of Buades et al. (2005), field values in a square pixel neighbourhood were used as a feature vector. This is obviously not optimal for statistically isotropic maps such as the CMB, since it is not rotationally invariant, and in any case problematic with HEALPix as the number of nearest neighbours of a pixel varies. Instead, we propose to use covariant invariants of the map and its derivatives, which can be graded by field and derivative power. These maps are combined to form a feature vector field \(\mathcal{F}=[\mathcal{F}^{(1)},\mathcal{F}^{(2)},\cdots]^{T}\). The expansion of the feature space enables the incorporation of additional information beyond the comparison of values in \(\tilde{s}\) alone. Consequently, the estimation of the true value \(s_{i}\) in Eq. (3) can be generalized as

\[s_{i}=\frac{\sum_{j}w(\mathcal{F}_{i},\mathcal{F}_{j})\,d_{j}}{\sum_{j}w(\mathcal{F}_{i},\mathcal{F}_{j})}, \tag{6}\]

where the weight function now evaluates the similarity between pixels \(i\) and \(j\) based on the comparison of their corresponding feature vectors in \(\mathcal{F}\). The weight function in Eq. (4) can be generalized as

\[w(\mathcal{F}_{i},\mathcal{F}_{j})=\exp\left[-\frac{1}{2}(\mathcal{F}_{i}-\mathcal{F}_{j})^{T}\omega^{-2}\left(\mathcal{F}_{i}-\mathcal{F}_{j}\right)\right], \tag{7}\]

where \(\omega^{-2}\) defines a similarity metric on the feature space with

\[\omega^{2}=\alpha^{2}\,\text{Var}\,(\delta\mathcal{F}). \tag{8}\]

The feature space extends the 1-dimensional variance \(\text{Var}\,(\delta\tilde{s})\) to a multidimensional covariance matrix of noise perturbations \(\text{Var}\,(\delta\mathcal{F})\), capturing the statistical relationships among the feature estimators. Once again, the adjustable parameter \(\alpha\) controls the degree of filtering. Further details regarding the calculation of the covariance matrix for the specific feature space employed in this study can be found in Appendix A. As the sum (6) has to be done for every pixel, the computational cost of the algorithm scales as the number of pixels squared, and increases with the dimensionality of the feature space. While expensive, the algorithm is trivially parallelizable, and lends itself well to computation on GPUs. To increase computational speed and enforce some locality, the sum (6) could be restricted to a specific neighbourhood of a pixel (say, within a certain angular radius), or even outfitted with a weight based on proximity, providing a bridge to the usual convolutions. In essence, the generalized non-local means (6) is an extension of a regular convolution to a distance measured on a surface embedded into a higher-dimensional space-feature manifold.

## 3 Feature space

The feature space \(\mathcal{F}^{(n)}\) should include information that captures relevant and non-redundant morphological information of the maps, in particular hot and cold spots in emission. This inclusion aims to enhance the accuracy of pixel similarity comparisons. In the previous section, we discussed limitations of the original non-local means algorithm, where the feature vector is tied down to a particular pixelization scheme. This does not make sense for our application, and instead a tower of differential invariants seems like a natural choice. Starting from the scalar field \(\varphi\) and grading by the number of derivatives applied, these would be \(\varphi\), \((\nabla\varphi)^{2}\), \(\Delta\varphi\), \(\varphi_{;ab}\varphi^{;a}\varphi^{;b}\), \(\varphi_{;ab}\varphi^{;a}\epsilon^{bc}\varphi_{;c}\), \(\varphi_{;ab}\epsilon^{ac}\varphi_{;c}\,\epsilon^{bd}\varphi_{;d}\), \(\varphi_{;ab}\varphi^{;ab}\), and so on. Here a semicolon denotes the covariant derivative \(\nabla\) on a sphere, \(\Delta\equiv\nabla_{a}\nabla^{a}\) is the Laplace operator, while \(\epsilon_{ab}\) is the totally antisymmetric symbol in two dimensions, used to form duals.
Not all of these invariants are of degree one with respect to the field \(\varphi\), but they could be made so by taking fractional powers or by dividing by lower-order invariants. The number of combinations rapidly increases with the rank of the derivative operator. The trick is to pick as few as possible, while still having access to enough morphological information. Besides the field value \(\varphi\), the obvious candidate linear in \(\varphi\) is \(|\nabla\varphi|=\sqrt{\varphi_{;a}\varphi^{;a}}\), the vanishing of which distinguishes peaks. The next in line are three expressions with four derivatives, normalized to be linear in \(\varphi\) by dividing them by \((\nabla\varphi)^{2}\), namely \(\varphi_{;ab}\varphi^{;a}\varphi^{;b}/(\nabla\varphi)^{2}\), \(\varphi_{;ab}\varphi^{;a}\epsilon^{bc}\varphi_{;c}/(\nabla\varphi)^{2}\), and \(\varphi_{;ab}\epsilon^{ac}\varphi_{;c}\,\epsilon^{bd}\varphi_{;d}/(\nabla\varphi)^{2}\). These have the meaning of the components of the field Hessian matrix written in the orthonormal basis \(\mathbf{g}_{a}^{(1)}=\varphi_{;a}/|\nabla\varphi|\) and \(\mathbf{g}_{a}^{(2)}=\epsilon_{ab}\varphi^{;b}/|\nabla\varphi|\) aligned with the field gradient direction. Notably, the third expression enters the Minkowski functional integral \(\mathcal{I}_{2}\) in Schmalzing & Gorski (1998), while the zero set of the second one corresponds to the map skeleton, as described in Novikov et al. (2006), and is the most interesting morphologically. After some experimentation, we settled on the feature space consisting of the field value, the length of its gradient, and the skeleton invariant. As we mentioned in the previous section, these are constructed from the smoothed field \(\tilde{s}\). The selection of these features was guided in part by the objective of minimizing the complexity of the covariance matrix \(\mathrm{Var}\left(\delta\mathcal{F}\right)\) to reduce computational cost, while still incorporating useful information provided by the Gaussian-smoothed map \(\tilde{s}\) and its first and second covariant derivatives. The covariant derivatives at a point \((\theta,\phi)\) on the unit sphere, projected onto the orthonormal basis vectors \(\boldsymbol{\epsilon}^{(1)}=\hat{\theta}\) and \(\boldsymbol{\epsilon}^{(2)}=\hat{\phi}\), are

\[\tilde{s}_{;1}=\partial_{\theta}\tilde{s},\qquad\tilde{s}_{;2}=\frac{1}{\sin\theta}\,\partial_{\phi}\tilde{s}, \tag{9}\]

\[\tilde{s}_{;11}=\partial_{\theta}^{2}\tilde{s},\qquad\tilde{s}_{;12}=\frac{1}{\sin\theta}\left(\partial_{\theta}\partial_{\phi}\tilde{s}-\cot\theta\,\partial_{\phi}\tilde{s}\right),\qquad\tilde{s}_{;22}=\frac{1}{\sin^{2}\theta}\,\partial_{\phi}^{2}\tilde{s}+\cot\theta\,\partial_{\theta}\tilde{s}. \tag{10}\]
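The first two of these feature maps are straightforward to evaluate in harmonic space; a sketch using `healpy`'s first-derivative synthesis is shown below (the skeleton invariant also needs second derivatives, which can be obtained by differentiating the first-derivative maps again, and is omitted here for brevity).

```python
import numpy as np
import healpy as hp

def feature_maps(d: np.ndarray, nside: int, fwhm_arcmin: float = 20.0):
    # Smoothed field and gradient length on a HEALPix map: alm2map_der1
    # returns the map together with d/dtheta and d/dphi/sin(theta),
    # i.e. the orthonormal-basis derivative components used in the text.
    alm = hp.map2alm(hp.smoothing(d, fwhm=np.radians(fwhm_arcmin / 60.0)))
    s, d_theta, d_phi = hp.alm2map_der1(alm, nside)
    grad_len = np.sqrt(d_theta**2 + d_phi**2)   # F2 = |grad s~|
    return s, grad_len                          # F1, F2
```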
## 4 Results

Figure 1: Application of the non-local means filter to a thermal dust emission map at 353GHz, with resolution \(N_{\rm side}=2048\). Units are \(K_{\rm CMB}\). _Upper:_ Intensity channel of a thermal dust emission map is the input map. _Middle:_ The output map obtained by non-local means filtering, using a \(20^{\prime}\) FWHM smoothing for the feature space construction and a filtering parameter \(\alpha=16\). _Lower:_ The difference between the input and output maps showing what was removed by the filter, which we will call a residual map.

Fig. 1 displays the results of our modified non-local means algorithm applied to a minimally processed full mission Planck 353GHz intensity map (a good proxy for thermal dust emission) contaminated with noise. The top image illustrates the original map, which exhibits noticeable noise, particularly in the high latitude regions. In the middle image, we present the output of our algorithm, demonstrating noise reduction and improved image quality. Finally, the bottom image shows the difference between the input and output maps, representing what was removed by the filter. Visually, this difference map (which we will call the residual) appears to be mostly noise in regions with low signal-to-noise ratios, and gradually fades to zero towards the galactic plane where the signal-to-noise ratio is significantly higher. This is due to the fact that the similarity distance in equation (7) between different bright pixels is large, and the weight function effectively concentrates on a single pixel. Fig. 2 presents a gnomonic projection of these maps at a specific location to facilitate visual comparison. The second image shows an additional map smoothed with a 20' FWHM Gaussian kernel for reference. It is observed that our proposed non-local means algorithm effectively eliminates noise while preserving the morphological characteristics of the original map, with no apparent loss in resolution. In contrast, convolution with a Gaussian kernel noticeably smooths the map and alters the shape of the hot spots.

The three features described in Section 3 were employed to obtain the aforementioned results. One might ask how they characterize the underlying signal itself. For example, Minkowski functionals were used to characterize the non-Gaussianity of CMB maps (Schmalzing & Gorski 1998). The situation with foregrounds is more complex. Fig. 3 displays the bivariate distributions of these features for the dust map, illustrating a non-trivial correlation between them. Additionally, the marginal distributions of each feature demonstrate distinct characteristics: \(\mathcal{F}^{(1)}\) and \(\mathcal{F}^{(2)}\) are closer to a log-normal distribution (actually still skewed even on a logarithmic scale), while \(\mathcal{F}^{(3)}\) more closely resembles a centered normal distribution (with some kurtosis).

To quantitatively assess the effectiveness of our proposed algorithm, one can independently apply the non-local means filter to odd and even splits of the Planck 353 GHz thermal dust emission maps. The cross-spectrum \(C_{\ell,\rm input}^{\rm OE}\) of these splits enables us to characterize the true clean signal in the input maps,

\[C_{\ell,\rm clean}=C_{\ell,\rm input}^{\rm OE}, \tag{12}\]

while the excess power in the autocorrelation of each split contains the split-specific noise contributions (and residual systematics, which are lower in the odd-even split than in the half-mission one). We can estimate the power spectrum associated with the noise in the input maps by subtracting the cross-spectrum \(C_{\ell,\rm input}^{\rm OE}\) from the average power spectrum of the splits,

\[C_{\ell,\rm noise}=\frac{C_{\ell,\rm input}^{\rm O}+C_{\ell,\rm input}^{\rm E}}{2}-C_{\ell,\rm input}^{\rm OE}. \tag{13}\]

The residuals removed by the algorithm may not be perfect and could include contributions from the true signal, which is an undesirable but often unavoidable effect of any filter. To characterize the power spectrum of the lost signal, the cross-spectrum of the residuals removed from the odd and even maps can be considered,

\[C_{\ell,\rm lost}=C_{\ell,\rm residual}^{\rm OE}. \tag{14}\]

Figure 2: Gnomonic projection of a neighbourhood of the two complex-shaped hot spots for visual comparison. The Gaussian-smoothed map was obtained with a smoothing parameter \(\rm FWHM=20^{\prime}\). Units are \(K_{\rm CMB}\).

Figure 3: Pairwise bivariate distributions of the feature space components in the lower triangle and marginal distribution of each feature in the feature space on the diagonal.
To estimate the noise that has been removed during the process, we subtract the cross-spectrum \(C^{\rm OE}_{\ell,\,\rm residual}\) from the average power spectrum of the residuals,

\[C_{\ell,\,\rm removed}=\frac{C^{\rm O}_{\ell,\,\rm residual}+C^{\rm E}_{\ell,\,\rm residual}}{2}-C^{\rm OE}_{\ell,\,\rm residual}. \tag{15}\]

The four spectra mentioned are presented in Fig. 4 on a logarithmic scale (evaluated for the full sky coverage). Inspecting the plot, it is evident that the power spectra of the true clean signal are higher than those of the true noise, which means that the signal to noise ratio of the Planck 353GHz map is already quite high as is. The lost signal power is orders of magnitude below the signal, so the filtering has minimal impact on the signal. Furthermore, as the multipole moment \(\ell\) increases, the spectrum of the removed noise progressively approaches the spectrum of the true noise. To quantify the improvement in the signal to noise ratio due to the applied filter, we can extract signal and noise power spectra from the output maps exactly as we did with the input ones, namely

\[C^{\prime}_{\ell,\,\rm clean}=C^{\rm OE}_{\ell,\,\rm output} \tag{16}\]

and

\[C^{\prime}_{\ell,\,\rm noise}=\frac{C^{\rm O}_{\ell,\,\rm output}+C^{\rm E}_{\ell,\,\rm output}}{2}-C^{\rm OE}_{\ell,\,\rm output}. \tag{17}\]

The spectral density signal to noise (SN) ratio for the original data can be expressed as

\[{\rm SN}_{\ell}=\frac{C_{\ell,\,\rm clean}}{C_{\ell,\,\rm noise}}, \tag{18}\]

while for the filtered maps it is

\[{\rm SN}^{\prime}_{\ell}=\frac{C^{\prime}_{\ell,\,\rm clean}}{C^{\prime}_{\ell,\,\rm noise}}. \tag{19}\]

Fig. 5 depicts the enhancement of the signal to noise ratio, quantified by \({\rm SN}^{\prime}_{\ell}/{\rm SN}_{\ell}\), and the signal attenuation, quantified by the ratio \(C^{\prime}_{\ell,\,\rm clean}/C_{\ell,\,\rm clean}\). It can be observed that our non-local means algorithm achieves a significant spectral SN enhancement, which increases at higher multipole moment \(\ell\). Additionally, the signal attenuation remains negligible across all scales. For comparison, an optimal linear filter would attenuate signal and noise spectral densities equally, leaving the spectral signal to noise ratio unchanged, with any gains realized only in the integrated signal.

Figure 4: Input signal (red), input noise (green), lost signal (purple) and removed noise (blue) power spectra for even-odd split of the full resolution dust intensity map with \(N_{\rm side}=2048\). Vertical black line corresponds to feature space smoothing scale.

Figure 5: Spectral density signal to noise enhancement (red) and signal attenuation (green) for the full resolution dust intensity map (\(N_{\rm side}=2048\)). Vertical black line corresponds to feature space smoothing scale.
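These split-based estimators are easy to reproduce; a sketch assuming `healpy` power spectra follows, returning the spectral signal-to-noise enhancement and the signal attenuation plotted in Fig. 5.

```python
import healpy as hp

def signal_noise_spectra(odd, even):
    # Eqs. (12)-(13) / (16)-(17): the cross-spectrum estimates the clean
    # signal, the excess mean auto-power estimates the noise.
    clean = hp.anafast(odd, even)
    noise = 0.5 * (hp.anafast(odd) + hp.anafast(even)) - clean
    return clean, noise

def filter_performance(odd_in, even_in, odd_out, even_out):
    # SN'_l / SN_l enhancement (Eqs. 18-19) and signal attenuation C'/C.
    c_in, n_in = signal_noise_spectra(odd_in, even_in)
    c_out, n_out = signal_noise_spectra(odd_out, even_out)
    return (c_out / n_out) / (c_in / n_in), c_out / c_in
```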
## 5 Component-separated CMB maps

We also evaluated the non-local means filter for noise reduction in Planck component separated maps (Planck Collaboration et al., 2020). Planck provides component separation by four different methods, known as SMICA, SEVEM, NILC, and Commander. Commander uses spectral energy density models for foregrounds to evaluate the best fit for components on a per-pixel basis, while the other three methods are based on various linear combination strategies. For our test map, we use the 2018 SMICA component separated CMB map, which is supplied at \(N_{\text{side}}=2048\) resolution with a 5' FWHM Gaussian beam. Like other Planck data products, it is available as splits as well. We tried several smoothing scales for feature space construction and different filter strengths. Fig. 6 shows the four spectral densities we used to characterize the dust map filtering, as described in the previous section, for a 20' FWHM Gaussian smoothing scale with filter strength \(\alpha=32\). Red and green curves represent the input signal and noise spectra, while purple and blue ones show the lost signal and removed noise.

Figure 6: Input signal (red), input noise (green), lost signal (purple) and removed noise (blue) power spectra for even-odd split of the full resolution CMB component separated map with \(N_{\rm side}=2048\). Vertical black line corresponds to feature space smoothing scale.

Unlike for the dust map, signal and noise at higher \(\ell\) are removed with about the same efficiency, which means there is little gain in the spectral signal to noise ratio. In this sense, the non-local means filter performance is not much better than what a linear filter (for example, a matched Wiener filter) could achieve at less computational expense. The reason for this is simple. The CMB temperature anisotropy is an isotropic Gaussian random field, and the only thing distinguishing it from noise is the different spectra, which are hard to disentangle from a single map. Unlike dust emission, which has prominent features the non-local means filter can use to separate signal from noise components based on morphology, CMB features are not as distinctive. The non-local means filter still reduces noise, of course, but struggles to separate signal from noise based on morphology only. This does not mean it cannot improve component separated CMB maps, but the way to do it would be to remove noise from the _foreground_ maps used as input for component separation. Reduced noise in the foreground templates used to extract the CMB signal would directly translate into reduced noise in the linear combination map. In many ILC strategies, it would also help with determining the linear weights more accurately, as it would tighten up the covariance matrices used to determine them.

## 6 Extensions to polarization

Polarization measurements present much larger possibilities for feature space construction, while coming with a number of specific challenges related to how the polarization data is used for scientific inference in cosmology. Unlike the scalar intensity maps we discussed up to this point, linearly polarized emission is described by a rank two tensor, with components in the local orthonormal frame represented by the Stokes parameters \(I\), \(Q\) and \(U\) as

\[\mathcal{P}_{ab}=\left[\begin{array}{cc}I+Q&U\\ U&I-Q\end{array}\right]. \tag{20}\]

The intensity \(I=\frac{1}{2}\mathcal{P}_{a}^{a}\) is a scalar, as is the total polarization power \(P^{2}=\frac{1}{2}P_{ab}P^{ab}=Q^{2}+U^{2}\) constructed from the traceless tensor \(P_{ab}=\mathcal{P}_{ab}-I\delta_{ab}\) describing the purely polarized component.
A number of invariants involving derivatives can be readily constructed, for example \(P^{ab}I_{;a}I_{;b}\), \(P^{ab}I_{;a}\epsilon_{bc}I^{;c}\), and \(P^{ab}\epsilon_{ac}I^{;c}\epsilon_{bd}I^{;d}\), as well as higher-order ones like \(P^{ab}{}_{;ab}\).

## 7 Conclusions

In this paper, we discuss a new non-linear noise reduction algorithm for scalar data on a sphere, and its implementation for the HEALPix pixelization of the maps used in cosmic microwave background anisotropy studies. It is based on the ideas of the non-local means algorithm in digital signal processing by Buades et al. (2005), but is specifically adapted to the symmetries of the CMB and astrophysical foreground maps. The noise is removed by averaging "similar" pixels, with similarity determined by a tower of differential invariants forming a feature space, outfitted with a distance measure calculated from the noise covariance of the feature estimators. The algorithm is substantially more effective than anything else we are aware of for non-Gaussian emission maps, realizing a factor of two gain in spectral signal to noise ratio without any apparent signal loss for Planck 353GHz dust maps. Application to component separated CMB maps is less spectacular, with efficiency roughly comparable to linear filters.
Although we mostly focused on emission intensity maps, the same techniques can be applied to polarization data, with a potentially larger feature space constructed from the polarization tensor. To avoid unintended correlations and mode conversions, it seems prudent to apply the filter separately to parity-definite scalar maps, namely \(E\)- and \(B\)-modes. These are easily constructed from full-sky maps, but are far less trivial to estimate on a masked sky. The impact of the noise reduction is more apparent for the astrophysical foreground maps, which have strong features the algorithm can use to separate signal from noise based on morphology only, without resorting to the frequency dependence of the emission. This is advantageous especially in the context of "spectral confusion", i.e. when foregrounds are hard to disentangle based on their spectral energy density profiles alone, which is an ever-present worry for component separation techniques. Even for clearly spectrally distinguished foregrounds, reducing the noise in foreground templates would correspondingly decrease it in component-separated CMB maps. Given that a factor of two is feasible at least for some maps with moderate computational expenses (the same gain would increase hardware cost or integration time by a factor of four if a brute-force data accumulation strategy were used to reduce statistical noise), it seems like a promising technique to explore. A potential downside is that the non-linear nature of the filter might complicate statistical analysis. However, the foreground properties and instrumental noise models are already complicated as they are, and a frequentist approach with full forward simulations of signal and noise is already used in Planck. This trend might continue in future experiments, opening the window of opportunity for non-linear signal processing, which is already widely used in image processing and computer vision.

###### Acknowledgements.

This work was supported in part by NSERC Discovery Grant "Testing fundamental physics with B-modes of Cosmic Microwave Background anisotropy".
To create high-fidelity cosmic microwave background maps, current component separation methods rely on information about the different foreground components, usually obtained through multi-band frequency coverage of the instrument. Internal linear combination (ILC) methods provide unbiased estimators of the CMB, but the quality of the component separation depends strongly on the signal-to-noise ratio of the input maps. In this paper, we develop an efficient non-linear filter along the lines of the non-local means used in digital imaging research, which significantly improves the signal-to-noise ratio of astrophysical foreground maps while causing minimal signal attenuation. Noise reduction is achieved by averaging "similar" pixels in the map. We construct a rotationally invariant feature vector space and compute a similarity metric on it for the case of a non-Gaussian signal contaminated by additive Gaussian noise. The proposed filter has two tunable parameters and, with minimal tweaking, achieves a factor of two improvement in signal-to-noise spectral density in Planck dust maps, with extremely small signal loss at all scales.
2309.10687
EchoPrompt: Instructing the Model to Rephrase Queries for Improved In-context Learning
Language models are achieving impressive performance on various tasks by aggressively adopting inference-time prompting techniques, such as zero-shot and few-shot prompting. In this work, we introduce EchoPrompt, a simple yet effective approach that prompts the model to rephrase its queries before answering them. EchoPrompt is adapted for both zero-shot and few-shot in-context learning with standard and chain-of-thought prompting. Experimental results show that EchoPrompt yields substantial improvements across all these settings for four families of causal language models. These improvements are observed across various numerical reasoning (e.g. GSM8K, SVAMP), reading comprehension (e.g. DROP), and logical reasoning (e.g. Coin Flipping) tasks. On average, EchoPrompt improves the Zero-shot-CoT performance of code-davinci-002 by 5% in numerical tasks and 13% in reading comprehension tasks. We investigate the factors contributing to EchoPrompt's effectiveness through ablation studies, which reveal that both the original query and the model-generated rephrased version are instrumental in its performance gains. Our empirical results indicate that EchoPrompt is an effective technique that enhances in-context learning performance. We recommend incorporating EchoPrompt into various baseline prompting strategies to achieve performance boosts.
Rajasekhar Reddy Mekala, Yasaman Razeghi, Sameer Singh
2023-09-16T00:55:08
http://arxiv.org/abs/2309.10687v3
# EchoPrompt: Instructing the Model to Rephrase Queries ###### Abstract Language models are achieving impressive performance on various tasks by aggressively adopting inference-time prompting techniques, such as zero-shot and few-shot prompting. In this work, we introduce EchoPrompt, a _simple_ yet effective approach that prompts the model to rephrase its queries before answering them. EchoPrompt is adapted for both zero-shot and few-shot in-context learning with standard and chain-of-thought prompting. Experimental results show that EchoPrompt yields substantial improvements across all these settings for four families of causal language models. These improvements are observed across various numerical reasoning (e.g. GSM8K, SVAMP), reading comprehension (e.g. DROP), and logical reasoning (e.g. Coin Flipping) tasks. On average, EchoPrompt improves the Zero-shot-CoT performance of code-davinci-002 by 5% in numerical tasks and 13% in reading comprehension tasks. We investigate the factors contributing to EchoPrompt's effectiveness through ablation studies, which reveal that both the original query and the model-generated rephrased version are instrumental in its performance gains. Our empirical results indicate that EchoPrompt is an effective technique that enhances in-context learning performance. We recommend incorporating EchoPrompt into various baseline prompting strategies to achieve performance boosts. ## 1 Introduction Large language models have revolutionized natural language task-solving through prompting Brown et al. (2020). This technique involves conditioning the language model with an instruction (zero-shot) or augmenting the prompt with a small set of task-specific examples (few-shot), enabling the model to generalize and respond effectively to tasks. A rapidly advancing body of research has introduced techniques to enhance these prompting methodologies. Notably, chain-of-thought prompting Wei et al. (2023); Kojima et al. (2022) has emerged as a powerful method for enhancing language model performance in reasoning tasks. Least-to-most prompting Zhou et al. (2022) and Tree of Thoughts Yao et al. (2023) further support chain-of-thought by breaking down complex problems into simpler subproblems. While both standard prompting and chain-of-thought prompting exhibit impressive capabilities and find applications across various domains, they can sometimes lead to inaccurate responses due to logical errors, symbol mapping issues, and omission of intermediate steps Kojima et al. (2022), indicating potential oversights in adequately addressing various facets of the queries. In this paper, we propose EchoPrompt, a prompting strategy that builds upon existing prompting approaches by incorporating _Query-Rephrasing_ as a preliminary task in the in-context learning process. EchoPrompt draws inspiration from the innate cognitive strategies employed by humans, specifically the act of self-questioning, when answering queries. By verbalizing queries before answering them, humans establish a cognitive checkpoint to refine their thoughts, uncovering misconceptions that might have otherwise gone unnoticed (Joseph and Ross, 2018; Joseph et al., 2019). Figure 1: Comparison of prompts in Zero-shot-CoT with and without EchoPrompt, highlighting the modification in prompts. Zero-shot-CoT with EchoPrompt uses the prompt "Let's repeat the question and also think step by step" to aid the model in recalling the query before solving it.
Figure 1 provides an illustrative example of EchoPrompting in Zero-shot-CoT settings. While the approach proposed by (Kojima et al., 2022) uses the prompt "_Let's think step by step."_ to elicit chain-of-thought reasoning and then extracts the answer using the prompt _"Therefore, the answer is"_, we modify the first prompt to "_Let's repeat the question and also think step by step."_ or similar texts. This modification guides the model to generate a version of the original query before solving it. We empirically evaluate our approach against various prompting baselines using a wide variety of model families with different sizes, including code-davinci-002, GPT-3.5-Turbo1, Starcoder-15B, Llama-13B, and GPT-J-6B. Our results show that EchoPrompt significantly improves the performance of language models on arithmetic, reading comprehension, and logical reasoning tasks. We observe substantial performance gains with both standard and chain-of-thought prompting, particularly in zero-shot scenarios for large language models (code-davinci-002, GPT-3.5-turbo) and with standard prompting on smaller models (Starcoder-15B, Llama-13B, and GPT-J-6B). For example, EchoPrompt increases the Zero-shot-CoT performance from 56.8% to 67.3% on DROP (Census) and from 75.1% to 82.6% on GSM8K with chain-of-thought prompting on GPT-3.5(gpt-3.5-turbo). Footnote 1: [https://openai.com/blog/chatgpt/](https://openai.com/blog/chatgpt/). We use gpt-3.5-turbo-0301 snapshot from March 2023 We conduct a series of ablation studies to gain deeper insights into the effectiveness of the EchoPrompt technique. First, we examine whether the accuracy gains attributed to EchoPrompt resulted solely from rephrased queries. Our findings demonstrate that both the original query and the rephrased query are essential in achieving performance improvements. Next, we investigate whether EchoPrompt can be seen as a query augmentation technique by considering the alternative approach of directly augmenting the original query with a rephrased version. We observe comparable results between these two approaches, indicating that EchoPrompt serves as a query augmentation technique. Additionally, we explore whether instructing EchoPrompt to generate multiple rephrases can further enhance performance. Interestingly, we observe a slight performance drop as the number of rephrases increases. This suggests that the improvements achieved with EchoPrompt cannot be solely attributed to generating more tokens. Finally, we assess the performance of EchoPrompt in the presence of irrelevant text within the queries and find that it maintains improvements despite replicating irrelevant text in the rephrases. Our study indicates that EchoPrompt fundamentally improves in-context learning performance and finds broad applicability as a building block in emerging complex techniques that leverage prompting in multiple stages. ## 2 EchoPrompt EchoPrompt teaches language models to generate a version of the query before solving it. The fine-grained details of this technique are explained in the following two subsections, with examples. ### Zero-shot EchoPrompt In zero-shot prompting, the standard approach relies on a single prompt _"Therefore, the answer is"_ to directly extract the answer. In contrast, Zero-shot EchoPrompt introduces a two-stage prompting process. The language model is initially instructed to rephrase the query using a task-agnostic prompt, _"Let's repeat the question. "_" and then the answer is extracted using the same prompt as in zero-shot prompting. 
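The two-stage control flow is simple to state in code. Below is a minimal sketch; the `complete` callback and the `Q:`/`A:` formatting are placeholders assumed here rather than prescribed by the paper, while the two instruction strings are the ones quoted above.

```python
from typing import Callable

def zero_shot_echoprompt(question: str,
                         complete: Callable[[str], str]) -> str:
    """Two-stage Zero-shot-CoT EchoPrompt.

    `complete` is any function that sends a prompt to a language model
    and returns the generated continuation.
    """
    # Stage 1: ask the model to rephrase the query (and, in the CoT
    # variant, to also reason step by step) before answering.
    stage1 = (
        f"Q: {question}\n"
        "A: Let's repeat the question and also think step by step."
    )
    generated = complete(stage1)

    # Stage 2: append the model's rephrase + reasoning and extract the
    # final answer with the standard extraction prompt.
    stage2 = stage1 + generated + "\nTherefore, the answer is"
    return complete(stage2).strip()
```

Dropping the rephrasing clause from the stage-1 instruction recovers plain Zero-shot-CoT, which makes the two methods easy to compare under identical extraction prompts.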
Similarly, in Zero-shot-CoT, as proposed by Kojima et al. (2022), the conventional approach involves using the prompt _"Let's think step by step."_ to guide the model in generating its reasoning steps before producing the final answer. However, in Zero-shot-CoT with EchoPrompt, we introduce a query-rephrasing subtask by employing prompts like _"Let's repeat the question and also think step by step."_. This modification encourages the model to generate the query in its own words and then engage in multi-hop reasoning. The prompt used for answer extraction remains consistent in both zero-shot and Zero-shot-CoT scenarios. Figure-1 shows an example, highlighting the key differences between the two approaches. Tables 1 and 11 give a comprehensive overview of the prompts we experimented with in this approach. Footnote 2: In zero-shot prompting, EchoPrompt only focuses on repeating the exact query, whereas in Zero-shot-CoT, we explore both query-repetition and rephrasing. This is because we can easily identify the end of query repetition by using quotations. However, there is no clear way to detect when the rephrase is complete. ### Few-shot EchoPrompt In few-shot learning, we teach the language model to rephrase the test query in a particular structure before answering the query. We do this by providing exemplars demonstrating the rephrase structure and corresponding responses to example queries. In addition to teaching the model to repeat the exact query, we examine three distinct rephrasing structures, in the following formats: * **Rephrased to _Compound Sentences_**: Queries are formulated using compound sentences incorporating multiple clauses or phrases. * **Rephrased to putting the _Question First_**: Queries are structured to present the final question at the beginning, followed by contextual information. * **Rephrased to _Short and Simple Sentences_**: Queries are constructed by breaking down the original problem's context into simpler and shorter sentences. * **_Repetition_**: Repeating the original query itself can serve as a fundamental form of rephrasing, and we consider it one of the rephrase structures. Figure-2 shows an example of these rephrasing formats for a query. We use ChatGPT (OpenAI, 2021) to generate the rephrases for the exemplars in these structures. This way, even our exemplars are generated automatically and with minimum human effort, which makes EchoPrompt simple to use. The prompts used for generating the rephrases for the exemplars are shown in Table-10. In Figure-3, we present an illustrative example of the proposed _compound sentences_ rephrasing. The exemplars in the standard prompting approach (highlighted in blue) demonstrate a sample query and the corresponding answering format. Consequently, when the model is presented with a test query, it responds similarly. However, with the introduction of EchoPrompt, the exemplars now showcase an additional step: query-rephrasing. Consequently, when the model encounters a test query, it produces a rephrased variant and answers it using the original and generated query reformulation. ## 3 Evaluation Setup ### Benchmarks We evaluate EchoPrompt across a range of natural language processing tasks, specifically focusing on four types, including fourteen widely recognized benchmarks. We experiment with four categories of causal language models to ensure a broad and thorough evaluation.
In this section, we delve into the details of our evaluation setup. Figure 2: Example of rephrases used for the proposed rephrase structures in EchoPrompt in few-shot prompting exemplars. The rephrases of exemplars are generated using ChatGPT based on the prompts in Table-10. Figure 3: Example of EchoPrompt with Compound Sentences. The standard prompting approach showcases exemplars with queries and corresponding answering formats. In contrast, EchoPrompt incorporates a Query-Rephrase step, where the exemplars showcase a rephrased version of the query along with the answering format. **Numerical Reasoning.** We evaluate the numerical reasoning tasks from Wei et al. (2023) for a fair comparison between the methods, including **GSM8K** Cobbe et al. (2021), **SVAMP** Patel et al. (2021), **AQUA-RAT** Ling et al. (2017), and the **SingleOp** and **MultiArith** subsets from Roy and Roth (2016). Additionally, we examine the performance of EchoPrompt on the high school mathematics subset of the **MMLU** dataset Hendrycks et al. (2021) and the **GSMIC-4k** dataset Shi et al. (2023), which focuses explicitly on queries containing perturbations. **Logical Reasoning.** For logical reasoning, we assess the **Date Understanding** and **Shuffled Objects** (tracking three objects) tasks from BigBench Ghazal et al. (2013), **LogiQA** Liu et al. (2020), and generate 1000 random samples with two trials of flipping for the **Coin Flipping** task Wei et al. (2023). **Reading Comprehension.** While we evaluate multiple numerical subsets of **DROP** Dua et al. (2019) (including Football, Non-football, Census, and Break Wolfson et al. (2020) from the **QDMR** dev subset), which could also be included in the arithmetic benchmarks, we group them with **SQuAD** Rajpurkar et al. (2016) based on the query style. We evaluate EchoPrompt on **DROP** Dua et al. (2019) and **SQuAD** Rajpurkar et al. (2016) as two standard reading comprehension benchmarks. The Football subset of the DROP dataset was curated by applying keyword-based filtering with the keyword "yard" Zhou et al. (2022), and the Census subset was created by selectively filtering passages that contained the terms "population" and "census." **Commonsense Reasoning.** For commonsense reasoning, we use the **StrategyQA** Geva et al. (2021) and **Winogrande** ai2 (2019) datasets to assess the performance of EchoPrompt on tasks that involve simpler queries but require factual knowledge. ### Language models For our experiments, we use code-davinci-002 Chen et al. (2021) as the primary model for all tasks since this model is free to evaluate and has a strong in-context learning ability. Additionally, we present the results on a subset of datasets on GPT-3.5-Turbo, a model comparable in size to code-davinci-002. We also experiment with smaller and publicly available models such as StarCoder-15B Li et al. (2023), Llama-13B Touvron et al. (2023), and GPT-J-6B Wang and Komatsuzaki (2021), specifically on synthetic and simpler tasks. ### Prompts **Few-shot Exemplars.** For a fair comparison of methods, we use the same exemplars introduced in Wei et al. (2023) for the GSM8K, SVAMP, SingleOp, MultiArith, Date Understanding, and Coin-Flipping tasks across all models. Additionally, we evaluate with the prompts suggested by Zhou et al. (2022) for GSM8K, SVAMP, MultiArith, and the DROP subsets. Furthermore, we provide a new set of prompts specifically for the DROP Census subset since no prior proposals exist. **Zero-shot-CoT Prompts.** As proposed in Kojima et al. (2022), we employ the prompt _"Let's think step by step."_ in stage 1.
In stage 2, we extract the answer using different prompts depending on the type of task. For multiple-choice tasks, we utilize prompts like "From (a) through (e), the answer is." For other tasks, we use the phrase "Therefore, the answer is." ## 4 Results We conduct an extensive comparison of our approach against zero-shot, Zero-shot-CoT, few-shot, and few-shot-CoT prompting strategies. Figure-4 (and Table-9 in the Appendix) provides the overall results of EchoPrompt, while the extended results on code-davinci-002 and other models are presented in Appendix-A. The findings on individual models are summarized below. **Code-davinci-002.** Overall, we observe that EchoPrompt performs well regardless of the baseline prompting strategy. Notably, EchoPrompt shows significant improvements in zero-shot prompting scenarios, especially for tasks with longer query contexts, such as the different DROP and SQuAD subsets containing extraneous information. For example, we observed an 18.5% improvement in accuracy on the DROP (Census subset) dataset for zero-shot prompting. Similarly, EchoPrompt with Zero-shot-CoT on SVAMP achieves a 7.4% improvement in accuracy, which makes the overall accuracy comparable to few-shot-CoT prompting. However, it is worth noting that EchoPrompt does not yield improvements in cases where the baseline method cannot solve the task. For example, in the Shuffled Objects task involving three objects, EchoPrompt shows a slight drop in zero-shot performance (36.4% to 35.2%), which is close to random choice (33.3%). Nevertheless, it considerably improves the accuracy in Zero-shot-CoT (42.4% to 58.2%), where the model can partially solve the task. We also do not observe any consistent improvements in tasks involving multiple-choice questions, such as AQuA-RAT, MMLU, and LogiQA, where the model must select one option among several rather than explicitly generating the answer. **GPT-3.5-Turbo.** To assess the performance of the EchoPrompt technique on a non-code-trained model of similar size to code-davinci-002, we experiment with GPT-3.5-Turbo on a subset of tasks. Detailed results are in Table-9 in the Appendix. Overall, these results align with our previous experiments on code-davinci-002. For example, the EchoPrompt technique significantly improves accuracy on GSM8K, from 75.1% to 83.5% in few-shot-CoT. However, we observe a drop in performance on reading comprehension tasks (DROP and SQuAD) in zero-shot scenarios. From a manual qualitative analysis, we observe that the model generates descriptive answers rather than instruction-following, extractable ones, which explains some of the drop in performance. **StarCoder-15B, Llama-13B, GPT-J-6B.** Similarly, we evaluate the performance of EchoPrompt on smaller and publicly available models: StarCoder-15B, Llama-13B, and GPT-J-6B. Our evaluation includes tasks such as Coin Flipping, SingleOp, SVAMP, and Date Understanding, since these smaller models are less capable on challenging reasoning tasks. This set encompasses a toy task and two relatively simpler datasets, while Date Understanding is considered a challenging task on BigBench. Detailed results are in Table-9 in the Appendix. EchoPrompt improves performance with standard prompting, although we observe inconsistent results with chain-of-thought reasoning. This finding is not entirely surprising, as chain-of-thought is considered an emergent phenomenon in larger language models (Wei et al., 2023).
**Comparison with least-to-most prompting.** Table-2 shows a comparison of EchoPrompt in few-shot-CoT against least-to-most prompting, which is considered to be state-of-the-art for numerical reasoning tasks. While EchoPrompt utilizes rephrased queries, least-to-most prompting breaks down the problem into subproblems and solves these subproblems sequentially using chain-of-thought. For a fair comparison, we evaluate both numerical (GSM8K, SVAMP, MultiArith) and reading comprehension (DROP) tasks using the prompts proposed by Wei et al. (2023); Zhou et al. (2022). Although EchoPrompt is a relatively simpler approach, it outperforms least-to-most prompting on two of the three arithmetic reasoning tasks and all reading comprehension subsets.

| | GSM8K | SVAMP | MultiArith | DROP (Census) | DROP (Break) | DROP (Football) |
|---|---|---|---|---|---|---|
| CoT | 61.1 | 75.2 | 96.1 | 70.0 | 65.3 | 67.3 |
| CoT+Compound | **65.9** | 79.0 | **97.8** | **75.4** | **69.6** | **70.8** |
| LTM | 63.2 | **82.2** | 93.7 | 73.8 | 61.2 | 66.2 |

Table 2: **code-davinci-002.** Comparison of EchoPrompt with CoT against least-to-most prompting. EchoPrompt outperforms least-to-most prompting on most of the benchmarks.

Figure 4: Performance summary of EchoPrompt with repetition in zero-shot and compound sentence rephrasing in few-shot settings. Darker colored bars show EchoPrompt augmented with the baseline method. EchoPrompt consistently achieves performance gains across different prompting strategies, particularly in zero-shot scenarios. For details, see Table-9 in Appendix.

## 5 Analysis

To gain a deeper understanding of the factors that contribute to the success of EchoPrompt, we perform a series of ablation studies in the following sections.

**Effect of prompts on zero-shot EchoPrompt.** To investigate the impact of the prompts used to instruct the language model to rephrase queries in zero-shot settings, we conducted experiments using a variety of prompts on arithmetic tasks, including both standard and chain-of-thought prompting. The results shown in Table-1 indicate that EchoPrompt consistently enhances performance when compared to the baseline method, regardless of the chosen prompt. However, we observe a difference in performance between the various prompt selections in the Zero-shot-CoT setting. The prompt "Let's reiterate the question and also think step by step." achieves the best results.

| EchoPrompt? | Stage-1 Prompt | GSM8K | SVAMP | MultiArith | SingleOp |
|---|---|---|---|---|---|
| **Zero-shot** | | | | | |
| ✗ | – | 16.4 | 66.8 | 31.0 | 91.6 |
| ✓ | Let's repeat the question. " | **20.7** (+3.3) | **74.7** (+7.9) | 48.5 (+17.5) | 91.8 (+0.2) |
| ✓ | Let's reiterate the question. " | 19.7 (+3.3) | 73.4 (+6.6) | **51.0** (+20.0) | 93.0 (+1.4) |
| ✓ | Let's restate the question. " | 19.2 (+2.8) | 74.6 (+7.8) | 47.7 (+16.7) | 89.6 (−2.0) |
| ✓ | Let's summarize the question. " | 20.6 (+4.2) | 73.2 (+6.4) | 48.8 (+17.8) | **93.7** (+2.1) |
| **Zero-shot-CoT** | | | | | |
| ✗ | Let's think step by step. | 49.3 | 66.5 | 76.0 | 82.9 |
| ✓ | Let's repeat the question and also think step by step. | 44.6 (−4.7) | **74.7** (+8.2) | 70.9 (−5.1) | 92.3 (+9.4) |
| ✓ | Let's reiterate the question and also think step by step. | **51.1** (+1.8) | 73.9 (+7.4) | 78.7 (+2.7) | **92.4** (+9.5) |
| ✓ | Let's repeat the question and also think step by step. " | 42.0 (−7.3) | 60.4 (−6.1) | 78.1 (+2.1) | 88.3 (+5.4) |
| ✓ | Let's restate the question and also think step by step. | 47.0 (−2.3) | 73.9 (+7.4) | **79.3** (+3.3) | 90.2 (+7.3) |
| ✓ | Let's summarize the question and also think step by step. | 49.9 (+0.6) | 74.2 (+7.7) | 75.8 (−0.2) | 90.9 (+8.0) |

Table 1: **Code-davinci-002: Arithmetic reasoning.** Evaluation of EchoPrompt on various prompt templates. All the prompts improve performance in the zero-shot setting. However, we find that only the prompt "Let's reiterate the question and also think step by step." consistently outperforms baseline Zero-shot-CoT.

**Effect of rephrases on few-shot EchoPrompt.** In the few-shot setting, we assess the performance of the proposed rephrase structures compared to baseline techniques, focusing on arithmetic and reading comprehension tasks that require explicit answer generation. The results, as shown in Table-3, reveal that although there is variance among the performances, all the rephrase structures outperform standard and chain-of-thought prompting, highlighting the effectiveness of EchoPrompt. Notably, no single rephrase structure consistently outperforms the others.

**Are rephrased queries self-sufficient?** To assess whether the EchoPrompt performance gains are solely due to the rephrased queries or whether both the original and rephrased queries are essential, we isolate the LM-generated rephrases. This process involves two steps. First, through in-context learning, we generate the rephrased query using the same method as before and with the same exemplars. Then, we prompt the language model with revised exemplars that match the rephrased query structure, providing only the rephrased queries for the model to answer. The results in Table-4 show that standalone rephrases consistently yield lower accuracies than EchoPrompt. Although rephrased queries can improve accuracy compared to baseline prompting (compound sentence rephrases), the improvements are still considerably lower than those achieved with EchoPrompt. This suggests that the primary source of improvement in EchoPrompt lies in the provision of two query versions.

**Comparing the rephrases and the original queries.** We compare the BLEU scores for the rephrased queries alongside the original ones (refer to Table-16 in the Appendix). Additionally, we compute the fraction of tokens retained in the rephrased queries (see Table-15 in the Appendix). In numerical tasks, the rephrases retain most of the information from the original queries. However, we observe considerable differences in the scores of the standalone rephrases in reading comprehension tasks, particularly in the DROP Football and Break subsets. In these datasets, the original queries exhibit a huge variance in the token count distribution, leading to low-quality rephrase generation, which may be why we observe a significant drop in accuracy.
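The BLEU comparison above can be reproduced with standard tooling. A minimal sketch using NLTK follows; the whitespace tokenization and the particular smoothing method are illustrative assumptions, not the paper's exact configuration.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def rephrase_bleu(original: str, rephrase: str) -> float:
    """BLEU of a model-generated rephrase against the original query."""
    reference = [original.split()]        # single reference, as a token list
    candidate = rephrase.split()
    smooth = SmoothingFunction().method1  # avoid zero scores on short texts
    return sentence_bleu(reference, candidate, smoothing_function=smooth)

# A high score indicates the rephrase retained most of the original query.
q = "Natalia sold clips to 48 of her friends in April."
r = "In April, Natalia sold clips to 48 of her friends."
print(round(rephrase_bleu(q, r), 3))
```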
**Generating vs. augmenting the rephrases.** To study whether EchoPrompt can be considered a query augmentation technique, we compare the performance of EchoPrompt with directly augmenting the original question using a rephrase (generated in Section-5). In EchoPrompt, the model generates both the rephrase and the answer simultaneously, while in query augmentation, the rephrased query is provided to the language model beforehand, and the model only generates the answer. Table-18 (in the Appendix) shows an example highlighting the distinction between the two settings. The result of this experiment is summarized in Table-5, demonstrating that both approaches yield comparable improvements in accuracy. This indicates that although we introduce EchoPrompt as a subtask within in-context learning, it can also be considered a query augmentation technique: in both cases, the language model utilizes the same rephrased query together with the original query to solve the problem. **Stacking multiple rephrases for EchoPrompt.** The benefits observed with query-rephrasing in EchoPrompt naturally prompted us to investigate the effects of having the language model generate multiple rephrases. The results summarized in Table-6 show a drop in performance as the number of rephrases increases. When manually examining the generated answers, we observe a tendency towards repetition in the chain-of-thought reasoning, despite the model successfully generating the desired number of rephrases. This repetition phenomenon becomes particularly prominent when the question requires longer multi-hop reasoning. Examples illustrating this finding are shown in Table-17 in the Appendix. This observation aligns with expectations, since the task's focus shifts from chain-of-thought reasoning to rephrase generation when the number of rephrases is increased: the model prioritizes generating the requested number of rephrases over the reasoning process. **Robustness to irrelevant text.** Recent work Shi et al. (2023) has shed light on the sensitivity of large language models (LLMs) to irrelevant information under various prompting methods, including CoT reasoning. Intuitively, EchoPrompt could be particularly prone to such distractions, given that it rephrases or regenerates the query, distractions included. To evaluate whether the EchoPrompt technique works even in the presence of such perturbations, we study the performance of EchoPrompt on the GSMIC-4k dataset Shi et al. (2023).
The evaluation results in Table-7 demonstrate that EchoPrompt maintains improvements across all prompting techniques, even in the presence of perturbations.

| | EchoPrompt | GSM8K | SVAMP | MultiArith | DROP (Census) | DROP (Break) | DROP (Football) | SQuAD (F1) |
|---|---|---|---|---|---|---|---|---|
| Standard | – | 19.2 | 69.8 | 44.0 | 56.8 | 55.5 | 63.7 | 88.7 |
| | Repeat | 21.4 (+2.2) | 75.8 (+6.6) | 53.8 (+9.8) | 65.9 (+9.1) | **63.1** (+7.6) | **69.2** (+5.5) | 91.3 (+2.6) |
| | Compound | 20.8 (+1.6) | 75.1 (+5.3) | 54.0 (+10.0) | **67.3** (+10.5) | 62.7 (+6.9) | 67.7 (+4.0) | 90.6 (+1.9) |
| | Question First | 20.9 (+1.7) | 75.0 (+5.2) | 53.6 (+9.6) | 65.2 (+8.4) | 59.7 (+3.9) | 63.1 (−0.6) | **92.2** (+3.5) |
| | Simple | **21.5** (+2.3) | **76.6** (+6.8) | **55.6** (+11.6) | 65.1 (+8.3) | **63.1** (+7.6) | 67.1 (+3.4) | 90.9 (+2.2) |
| CoT | – | 61.1 | 75.2 | 96.1 | 70.0 | 65.3 | 67.3 | 90.5 |
| | Repeat | 63.5 (+2.4) | 77.6 (+2.4) | 98.8 (+2.7) | 71.6 (+1.6) | **70.0** (+4.7) | 71.3 (+4.0) | – |
| | Compound | **65.9** (+4.8) | **79.0** (+3.8) | 97.8 (+1.7) | **75.4** (+5.4) | 69.6 (+4.3) | 70.8 (+3.5) | 90.8 (+0.3) |
| | Question First | 64.4 (+3.3) | 77.0 (+1.8) | 98.3 (+2.2) | 75.3 (+5.3) | 68.1 (+2.8) | **72.0** (+4.7) | – |
| | Simple | 63.6 (+2.5) | 76.9 (+1.7) | **99.0** (+2.9) | 73.5 (+3.5) | 67.7 (+2.4) | 71.2 (+3.9) | – |

Table 3: **code-davinci-002.** Evaluation of EchoPrompt using the proposed rephrase structures and query-repetition. We compare these approaches with baseline methods in arithmetic and reading comprehension tasks. The results showcase improvements across all rephrase structures, with no single structure consistently outperforming the others.

| | Query Structure | GSM8K | SVAMP | DROP (Census) | DROP (Break) | DROP (Football) |
|---|---|---|---|---|---|---|
| Standard | Original | 19.2 | 69.8 | 56.8 | 55.5 | 63.7 |
| | Compound | 19.9 (+0.7) | 71.8 (+2.0) | 59.1 (+2.3) | 54.1 (−1.4) | 65.1 (+1.4) |
| | Question First | 14.6 (−4.6) | 58.5 (−11.3) | 28.2 (−28.6) | 36.2 (−19.3) | 48.8 (−14.9) |
| | Simple | 19.7 (+0.5) | 70.9 (+1.1) | 56.5 (−0.3) | 55.5 (+0.0) | 62.7 (−1.0) |
| Standard+Repeat | – | **21.5** | **76.6** | **65.1** | **63.1** | **67.1** |
| CoT | Original | 61.1 | 75.2 | 69.6 | 65.3 | 67.3 |
| | Compound | 62.1 (+1.0) | 78.0 (+2.8) | 71.9 (+2.3) | 66.7 (+1.4) | 68.2 (+0.9) |
| | Question First | 55.1 (−6.0) | 66.6 (−8.6) | 48.1 (−21.5) | 64.5 (−0.8) | 57.8 (−9.5) |
| | Simple | 61.3 (+0.2) | 75.8 (+0.6) | 70.3 (+0.7) | 67.3 (+2.0) | 67.1 (−0.2) |
| CoT+Compound | – | **65.9** | **79.0** | **74.3** | **69.6** | **70.8** |

Table 4: **Standalone Rephrases: code-davinci-002.** Compound Sentence rephrasing performs better than the original queries, while question-first rephrasing performs worse. We observe information loss in the rephrases for certain tasks (see Table-15), indicating that the performance gains of EchoPrompt are due to the combination of rephrasing and having multiple versions.

| | times | GSM8K | SVAMP | DROP |
|---|---|---|---|---|
| Repeat | 1 | **63.5** | **77.6** | **70.3** |
| | 2 | 61.7 | 77.6 | 68.5 |
| | 3 | 59.8 | 77.8 | 69.3 |
| | 5 | 59.9 | 76.9 | 67.5 |
| Compound | 1 | **65.9** | **79.0** | **69.6** |
| | 2 | 63.7 | 77.9 | 68.8 |
| | 3 | 63.2 | 78.9 | 67.9 |

Table 6: **code-davinci-002.** The accuracies drop as the number of rephrases/repetitions increases when generating multiple rephrases with EchoPrompt.

| EchoPrompt? | Standard ✗ | Standard ✓ | CoT ✗ | CoT ✓ | LTM ✗ | LTM ✓ |
|---|---|---|---|---|---|---|
| Zero-shot | 23.7 | 30.1 (+6.4) | 46.7 | 52.8 (+6.1) | N/A | N/A |
| 1-shot | 27.1 | 29.1 (+2.0) | 72.6 | 77.2 (+4.6) | 73.8 | 81.3 (+7.5) |
| 4-shot | 25.2 | 31.0 (+5.8) | 77.4 | 81.8 (+4.4) | 84.3 | 85.4 (+1.1) |

Table 7: **code-davinci-002.** Performance of EchoPrompt on GSMIC-4k (which contains irrelevant context in queries). EchoPrompt improves performance on both chain-of-thought and least-to-most prompting, even though it repeats the perturbation sentence in the rephrase.

## 6 Related Work

**Prompting.** Large language models' success has sparked interest in improving task performance through prompting techniques Brown et al. (2020). While recent studies focus on task-based instruction tuning, either by fine-tuning the entire model Raffel et al. (2020); Wei et al. (2021); Sanh et al. (2021); Wang et al. (2022b); Huang et al. (2022) or maintaining task-specific parameters Li and Liang (2021); Lester et al. (2021), our work is a general prompting approach that improves in-context learning abilities and does not require any fine-tuning.

**Intermediate steps.** The concept of employing language models to generate intermediate steps for process supervision has been extensively examined in the context of solving reasoning tasks, whether through training Nye et al. (2021); Zelikman et al. (2022), zero-shot Kojima et al. (2022), few-shot prompting Wei et al. (2022), or action planning Yao et al. (2022). Recent works focus on problem decomposition and teaching the language model to answer the subtasks, to eventually answer complex problems Zhou et al. (2022); Dua et al. (2022); Wang et al. (2022a); Zhou et al. (2022b). EchoPrompt is orthogonal to these approaches, augmenting the input query rather than the rationale generation. Consequently, it can be easily extended with any of these prompting strategies.

**Interpretability, Consistency and Outcome correction.** Another related research direction involves exploring interpretability and consistency in the rationale generated by large-scale models. Recent works (Imani et al., 2023; Miao et al., 2022, 2023; Madaan and Yazdanbakhsh, 2022) help improve the interpretability in arithmetic and reasoning tasks through validation.
Although these approaches are not directly tied to the EchoPrompt technique, they utilize chain-of-thought prompting, where we have shown that EchoPrompt exhibits promising results, particularly in zero-shot scenarios. In the domain of outcome correction, approaches such as Jung et al. (2022); Wang et al. (2023); Yao et al. (2023); Miao et al. (2021); Xie et al. (2023) leverage consistency among multiple generated rationales while Weng et al. (2023); Khalifa et al. (2023); Yang and Klein (2021); Ni et al. (2023); Chen et al. (2022) prioritize the ranking of plausible generations to enhance performance across arithmetic, reasoning, and code-generation tasks. Building upon these foundations, self-correction methodologies like Madaan et al. (2023); Jiang et al. (2023); Hao et al. (2023); Shinn et al. (2023), which employ feedback loops for refinement and multi-agent debating strategies Du et al. (2023); Cohen et al. (2023); Fu et al. (2023) have evolved. EchoPrompt distinguishes itself from these approaches by focusing on single rationale generation rather than considering multiple generated responses. ## 7 Limitations While the EchoPrompt subtask presents notable advantages, several limitations exist. Although we provide several ablation studies and qualitative examples, answering the question of when EchoPrompt works better, we could not explain why EchoPrompt results in performance gains, particularly in standard prompting. Additionally, it is worth noting that our approach involves regenerating the entire query before solving the tasks. Consequently, the model must generate many tokens when dealing with long queries, leading to increased compute requirements and time delays. ## 8 Conclusion We have proposed EchoPrompt, a simple yet effective approach that builds upon existing prompting approaches and integrates query-rephrasing as a subtask in the in-context learning process inspired by how humans think. It enables the language model to recall the query before attempting to solve it. EchoPrompt offers a direct approach to enhance in-context learning in pre-trained language models without fine-tuning, making it a simple and powerful approach to achieve performance boosts. ## 9 Reproducibility Statement Our primary results are on Code-davinci-002 and GPT-3.5-Turbo, which are publicly accessible OpenAI models. To increase reproducibility, we have included prompts used for all the tasks in the Appendix. We also plan to release the code soon.
Language models are achieving impressive performance on various tasks by aggressively adopting inference-time techniques such as zero-shot and few-shot prompting. In this work, we introduce EchoPrompt, a simple yet effective approach that prompts the model to rephrase its queries before answering them. EchoPrompt can be adapted to both zero-shot and few-shot in-context learning, with standard as well as chain-of-thought prompting. Experimental results show that EchoPrompt yields substantial improvements across all of these settings for four families of causal language models. These improvements are observed across various numerical reasoning (e.g. GSM8K, SVAMP), reading comprehension (e.g. DROP), and logical reasoning (e.g. Coin Flipping) tasks. On average, EchoPrompt improves the Zero-shot-CoT performance of code-davinci-002 by 5% on numerical tasks and …
2307.00138
Substrate suppression of oxidation process in pnictogen monolayers
2D materials present an interesting platform for device designs. However, oxidation can drastically change the system's properties, which need to be accounted for. Through {\it ab initio} calculations, we investigated freestanding and SiC-supported As, Sb, and Bi mono-elemental layers. The oxidation process occurs through an O$_2$ spin-state transition, accounted for within the Landau-Zener transition. Additionally, we have investigated the oxidation barriers and the role of spin-orbit coupling. Our calculations pointed out that the presence of SiC substrate reduces the oxidation time scale compared to a freestanding monolayer. We have extracted the energy barrier transition, compatible with our spin-transition analysis. Besides, spin-orbit coupling is relevant to the oxidation mechanisms and alters time scales. The energy barriers decrease as the pnictogen changes from As to Sb to Bi for the freestanding systems, while for SiC-supported, they increase across the pnictogen family. Our computed energy barriers confirm the enhanced robustness against oxidation for the SiC-supported systems.
R. L. H. Freire, F. Crasto de Lima, A. Fazzio
2023-06-30T21:07:37
http://arxiv.org/abs/2307.00138v1
# Substrate suppression of oxidation process in pnictogen monolayers ###### Abstract 2D materials present an interesting platform for device designs. However, oxidation can drastically change the system's properties, which need to be accounted for. Through _ab initio_ calculations, we investigated freestanding and SiC-supported As, Sb, and Bi mono-elemental layers. The oxidation process occurs through an O\({}_{2}\) spin-state transition, accounted for within the Landau-Zener transition. Additionally, we have investigated the oxidation barriers and the role of spin-orbit coupling. Our calculations pointed out that the presence of SiC substrate reduces the oxidation time scale compared to a freestanding monolayer. We have extracted the energy barrier transition, compatible with our spin-transition analysis. Besides, spin-orbit coupling is relevant to the oxidation mechanisms and alters time scales. The energy barriers decrease as the pnictogen changes from As to Sb to Bi for the freestanding systems, while for SiC-supported, they increase across the pnictogen family. Our computed energy barriers confirm the enhanced robustness against oxidation for the SiC-supported systems. The realization of two-dimensional (2D) materials through diverse experimental techniques have increased interest in their technological applications on electronic devices. Particularly, the arising topological insulating phase in bisunthene [1], antimonene [2], strained arsenene [3; 4], with the former robust against disorder [5; 6], leading to low-power spintronics [7]. However, the experimental conditions towards scalable production of these materials pose great challenges due to their relatively low stability [8], mainly at room temperature and in the presence of air (oxygen). Freestanding monoelemental materials, like phosphorene, were shown to be very unstable upon O\({}_{2}\)-exposure being degraded within a few hours [9]. Indeed, freestanding monolayer pnictogens (P and As) are more prone to oxidation than other 2D materials presenting the same atomic structure [10], while the presence of a substrate can alter the oxidation process [8]. The O\({}_{2}\) molecule occurs naturally in a triplet (\({}^{3}\Sigma_{g}^{-}\)) ground state. On the other hand, under experimental conditions (e.g., photoexcitation [11]), O\({}_{2}\) molecule can be found in excited singlet states, namely \({}^{1}\Delta_{g}\) and \({}^{1}\Sigma_{g}^{+}\). The singlet states are more reactive than the ground state triplet, being of great importance in oxidation process [12]. Experimental results over oxidation of 3D-stacked pnictogen systems (down to a few layers), show the robustness of oxidation for the internal layers, while the surface presents oxygen groups [13; 14]. Ruled by the higher interlayer bond of heavier pnictogens (compared with the phosphorene), the formation of surface oxide-layer protects the internal layers from oxidation [15; 16; 17]. There are studies about oxidation on 2D pnictogen materials, however focusing on the freestanding configuration [18; 19; 20; 21; 22; 23], while not taking into account fundamental aspects, such as the role of triplet-singlet transitions, and spin-orbit effects. At the same time, the realization of supported materials through molecular beam epitaxy (MBE) has attracted attention, for example, Sb/Ag(111) [24], Bi/SiC(0001) [25] and As/SiC(0001) [26] with a planar structure [27]. 
Particularly, the topological insulating phase of bisunthene and other pnictogens was predicted when supported on SiC substrate [1; 2]. While the presence of a substrate can alter the oxidation kinetics of 2D systems [28]. In this sense, understanding the mechanisms behind oxygen interaction with those substrate-supported materials is a key point for future experimental investigations upon applications and routes to improve their stability. In this paper, we show that the oxidation process of pnictogen monolayers is considerably lower (slower) when deposited on top of SiC substrate. Taking an _ab initio_ approach based on the density functional theory (DFT) we investigated the rate of formation of reactive oxygen species, i.e. O\({}_{2}\) triplet-singlet transition, close to the materials' surface in the buckled free-standing (FS) form and in the flat structure when on top of SiC substrate (SiC). We connected such rate of formation with the reaction barrier calculated within the nudge elastic band (NEB) method. The FS case reacts barrierless with the singlet O\({}_{2}\) molecule, while the supported one presents a non-negligible barrier. Additionally, the barriers found for the triplet O\({}_{2}\) molecule are considerably larger for the heavier pnictogen Bi. Our results draw attention to the possible atmospheric stability of supported pnictogens monolayer. Group-5A elemental monolayers were investigated through spin-polarized calculations based on density functional theory (DFT) [29; 30], performed within the semi-local exchange-correlation functional proposed by Perdew-Burke-Ernzerhof [31]. For the total energies, the electron-ion interactions were considered within the projector augmented wave (PAW) method [32; 33], as implemented in the vienna _ab-initio_ simulation package (VASP) [34; 35]. For all calculations, the cutoff energy for the plane-wave expansion of the Kohn-Sham orbitals was set to 400 eV, under an energy convergence parameter of \(10^{-6}\) eV, with all atoms relaxed until the atomic forces on every atom were smaller than \(10^{-2}\) eV A\({}^{-1}\). We considered \(3\times 3\) unit cells with 13 A and 16 A distance between periodic images for FS and SiC-supported sys tems respectively. A uniform \(4\times 4\times 1\) k-point mesh was considered for the Brillouin zone (BZ) integration. The oxidation process of pnictogen 2D allotropes is known in the literature to be an exothermic process. We calculate the adsorption energy (\(E_{a}\)) of a single oxygen atom on the pnictogen surface in its buckled freestanding geometry (FS) and in the flat geometry presented when supported on silicon-carbide (SiC-supported) [Fig. 1(a) and (b)]. It is worth pointing out that the bismuthene and antimonene on top of SiC form honeycomb lattices, while arsenene has a lower energy triangular lattice [26], which is considered here. In Table 1,we present our calculations for the adsorption energy \[E_{a}=E_{\rm X+O}-E_{\rm X}-\frac{1}{2}E_{\rm O_{2}}, \tag{1}\] where \(E_{X}\) is the pristine pnictogen configuration, \(E_{\rm X+O}\) the pnictogen with single oxygen adsorbed on its surface, and \(E_{\rm O_{2}}\) the isolated \(O_{2}\) molecule total energy. Indeed, the adsorption process is still exothermic even for the substrate-supported case. To obtain those adsorption energies we have considered different adsorption sites according to the surface geometry. Thus, in the FS case, we probed on-top, bridge, valley, and hollow sites, while for SiC were on-top, bridge, and hollow sites. 
For all cases, in the lowest-energy configuration the oxygen atom forms a bridge between adjacent pnictogen atoms. Comparing the FS with the SiC-supported system, we see higher adsorption energies for Sb and Bi, while a decrease is observed for As. Here, the supported As system has a larger tensile strain than Sb and Bi, when compared to their freestanding structures [26]. The oxygen adsorption, bridging two adjacent As atoms, contributes to lowering the tensile strain, therefore leading to a lower adsorption energy. Although this indicates a more exothermic process for As, oxidation can have different reaction time scales for each system. Here we will (i) explore the Landau-Zener probability of transition between the oxygen molecule's triplet state and its most reactive form, the singlet, close to the pnictogen surfaces, and (ii) explore energy barriers for the oxidation process, considering the role of spin-orbit coupling, through the nudged elastic band (NEB) method. Analyzing the total energy of an O\({}_{2}\) molecule close to a material's surface, we see a dependence of the singlet and triplet total energies on the molecule's distance from the pnictogen surface, as shown in Fig. 1(c). Away from the surface, the singlet and triplet states are separated in energy by \(\Delta E_{\rm vac}\sim 1\) eV, while close to the pnictogen surfaces they present an energy crossing. This crossing implies a transition probability between the two spin states of the O\({}_{2}\) molecule. Based on the slopes of the triplet and singlet curves we have obtained the triplet-singlet transition probabilities (\(P_{ts}\)) by employing the Landau-Zener relation (\(P_{LZ}\)) [36; 37; 8] \[P_{ts}=(1-P_{LZ})(1+P_{LZ}), \tag{2}\] where \[P_{LZ}=\exp\left(-\frac{V^{2}}{hv|F_{t}-F_{s}|}\right). \tag{3}\] Here, \(V\) is the spin-orbit matrix element of the O\({}_{2}\) molecule (122 cm\({}^{-1}\)), \(v\) the velocity of the O\({}_{2}\) molecule at room temperature (483.59 m s\({}^{-1}\)), and \(F_{i}\) the forces acting on the O\({}_{2}\) molecule for each spin state (triplet and singlet) [8]. It is worth noting that \(F_{i}\) will depend on the material's local adsorption site and the arrival geometry of the O\({}_{2}\) molecule. That is, a single adsorption site cannot capture the variations in the triplet-singlet transition, as under experimental conditions this should run over a large distribution of possible sites and molecule geometries (orientation with respect to the surface). Our analysis includes different adsorption sites for both FS and SiC-supported structures and different molecule geometries. This generates one-dimensional curves such as the example presented in Fig. 1(c), in which the singlet and triplet potential energy surfaces cross at some point (\(d_{cross}\)) [37]. We extracted information about (i) the triplet-singlet crossing distance (\(d_{cross}\)); (ii) the crossing point relative energy (\(\Delta E_{cross}\)); (iii) the singlet minimum relative energy (\(\Delta E_{min}\)); and (iv) the triplet-singlet transition probability (\(P_{ts}\)). \begin{table} \begin{tabular}{c c c c} phases & As & Sb & Bi \\ \hline FS & \(-1.01\) & \(-1.32\) & \(-1.06\) \\ SiC & \(-2.69\) & \(-1.15\) & \(-0.45\) \\ \end{tabular} \end{table} Table 1: Oxygen adsorption energy, \(E_{a}\) (eV/O-atom), on pnictogen surfaces in their freestanding (FS) configuration and in the silicon-carbide-supported (SiC) configuration. The most stable configuration is an epoxy-like bridge bond inclined towards the hexagonal center. Figure 1: O\({}_{2}\) adsorption model for (a) freestanding and (b) SiC-supported structures, and (c) an example for evaluating the Landau-Zener probabilities, including a few definitions like the distance of the molecule center-of-mass from the 2D material surface at the triplet-singlet crossing (\(d_{\rm cross}\)), and the singlet-triplet energy difference far from the surface (\(\Delta E_{vac}\)) and at the energy minimum (\(\Delta E_{min}\)). In Fig. 2, we present the triplet-singlet transition probability (\(P_{ts}\), in the color bar), mapping it with respect to the crossing relative energies (\(\Delta E_{cross}\)) and the distance from the surface at the crossing point (\(d_{cross}\)). In the right panel, close to the color bar, we represent the \(P_{ts}\) statistical distribution. Here, 50% of the FS configurations presented \(P_{ts}<5\%\), while 60% did so for SiC-supported. Additionally, the SiC-supported transition probabilities are more concentrated around 2%, while the FS configurations present values spreading to higher probabilities. That is, we have a statistical indication that the triplet-singlet transition is more probable in FS than in SiC-supported pnictogens. In Table 2, we summarize the average values and mean deviations for the different configurations probed. Despite the significant mean deviation values, we can see that the \(P_{ts}\) average for FS is larger than that for SiC-supported, indicating FS as more prone to the O\({}_{2}\) triplet-singlet transition than SiC-supported, thus facilitating the oxidation process. The crossing distance between the triplet-singlet curves is higher for the SiC-supported systems than for FS, given the buckled nature of the latter. We see a monotonic growth of \(P_{ts}\) when going from As\(\rightarrow\)Sb\(\rightarrow\)Bi in the FS case, which is not observed for the SiC system. Furthermore, we see a correlation of \(d_{cross}\) with \(P_{ts}\): the closer to the surface, the larger \(P_{ts}\); that is, the interaction of the surface orbitals with the molecule is ruling the transition. In fact, because of the different bonding nature of the two structures, their orbitals will spread differently into the vacuum region. In the FS structure, there is a hybridization between in-plane and out-of-plane orbitals forming \(sp^{3}\) (\(s,p_{x},p_{y},p_{z}\)) bonds, while in the flat SiC-supported structure, the absence of hybridization between in-plane and out-of-plane orbitals leads to the formation of \(sp^{2}\) bonds and a remaining out-of-plane orbital (\(p_{z}\)) [27]. Because the \(p_{z}\) orbital is not hybridized in the latter, it can possibly spread to larger distances within the vacuum region compared to the FS structure. Thus, the molecule will feel the presence of the SiC-supported structure at larger distances as a result of the interaction with this out-of-plane orbital, depending on the surface site and the geometry with which it approaches. The singlet configuration presents an energy minimum close to the system surface, with the singlet minimum relative energy (\(\Delta E_{min}\)) being lower for the SiC system. This singlet energy minimum is due to unstable physisorbed configurations of the O\({}_{2}\) that arise only when constraining the system to the singlet state. As we will show below, such a configuration presents a barrierless transition to oxidation and cannot be stabilized on FS systems.
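As a sanity check on the magnitudes in Table 2, Eqs. (2) and (3) can be evaluated directly. The sketch below uses the quoted values of \(V\) and \(v\); the slope difference of order 1 eV/Å is an assumed illustrative value, not a number taken from the actual potential energy curves.

```python
import math

H = 6.62607015e-34    # Planck constant, J s
HC = 1.98644586e-25   # h*c, J m, to convert wavenumbers to energy
V = 122e2 * HC        # spin-orbit matrix element: 122 cm^-1 -> J
v = 483.59            # O2 velocity at room temperature, m/s

def p_ts(dF_eV_per_A: float) -> float:
    """Double-passage triplet-singlet probability, Eqs. (2)-(3)."""
    dF = dF_eV_per_A * 1.602176634e-19 / 1e-10   # eV/Angstrom -> N
    p_lz = math.exp(-V ** 2 / (H * v * dF))
    return (1.0 - p_lz) * (1.0 + p_lz)

print(f"P_ts = {100 * p_ts(1.0):.1f} %")   # ~2 % for |F_t - F_s| ~ 1 eV/A
```

A slope difference of about 1 eV/Å yields \(P_{ts}\) of a couple of percent, the same order as the averages reported in Table 2.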
Given the scenario for triplet-singlet transition, the reaction rate is also dependent on the energy barrier for both configurations to adsorb on the pnictogen surface. \begin{table} \begin{tabular}{c c c c c} & phases & \(\Delta E_{min}\) & \(d_{cross}^{OX}\) & \(P_{ts}\) \\ \hline As & FS & 1.04 (0.02) & 1.51 (0.46) & 2.46 (1.78) \\ & SiC & 0.94 (0.11) & 1.87 (0.28) & 2.15 (1.25) \\ Sb & FS & 0.93 (0.07) & 1.41 (0.54) & 3.33 (2.03) \\ & SiC & 0.72 (0.15) & 1.74 (0.64) & 2.85 (1.67) \\ Bi & FS & 0.77 (0.09) & 1.30 (0.55) & 4.25 (2.31) \\ & SiC & 0.70 (0.10) & 1.74 (0.61) & 2.77 (1.34) \\ \end{tabular} \end{table} Table 2: Average values of \(\Delta E_{min}\) (eV), \(d_{cross}\) (Å) [shown in Fig. 1], and Landau-Zener triplet-singlet probability transition \(P_{ts}\) (%) for all configurations tested [Fig. 2]. Numbers in parentheses are the standard deviation for the respective quantity. Figure 2: Triplet-singlet Landau-Zener transition probabilities (\(P_{ts}\)) for free-standing (top) and SiC-supported (bottom) systems for a few different adsorption sites, depicted with the triplet-singlet crossing distance (\(d_{cross}\)) and crossing energy (\(\Delta E_{cross}\)). \(P_{ts}\) is indicated by the color bar, and the histogram indicates the \(P_{ts}\) value distribution. Here we have calculated the energy barrier through the nudge elastic band (NEB) method, considering three scenarios: (i) O\({}_{2}\) in an enforced singlet configuration, (ii) O\({}_{2}\) with a free spin degree of freedom without spin-orbit coupling, and (iii) a fully relativistic case taking spin-orbit coupling into account. Our results are presented in Fig. 3 and summarized in Table 3. First, analyzing the enforced singlet case the O\({}_{2}\) molecule finds no energy barrier to dissociate over the FS material surface, while for SiC-supported systems there always exists an energy barrier. The singlet energy barrier for the latter is lower for the As and Sb system (0.36 and 0.47 eV respectively) while a higher value of 1.52 eV was found for Bi. We see a different scenario when considering a free spin degree of freedom, here far from the surface the O\({}_{2}\) is in a triplet state while through the barrier it changes to a singlet state before dissociation (see the magnetization in the lower panels of Fig. 3). Such behavior is present with or without the spin-orbit effect. This spin transition before the dissociation is dictated by a spin selection rule given the non-magnetic character of oxidized pnictogens [38, 39]. The spin-orbit effect is negligible for As and Sb systems, while presenting different effects on Bi. For Bi-FS the spin-orbit coupling lowers both the barrier maximum and the initial state energies, while for Bi-SiC it lowers the initial state keeping the barrier maximum energy. In the singlet states s=0, the spin-orbit contribution vanishes (\(\tilde{L}\cdot\vec{s}\)), while on the triplet state it presents a non-vanishing contribution. For the Bi-FS the triplet state persists higher on the barrier which gives this barrier lowering, while on the Bi-SiC in the barrier maximum, the s=0 state is already defined. We see different behavior of the barrier for FS and SiC configuration across the pnictogen group. While for FS, heavier pnictogen present a lower barrier, for supported system the opposite is observed. The decrease in barrier towards heavier pnictogens in FS configuration was also previously observed [21]. For FS system, the Bi system presents a lower energy barrier. 
Indeed, our Landau-Zener transition probability analysis has shown that the triplet-singlet transition is more favorable for bismuth than for Sb and As. As indicated by the magnetization panels of Fig. 3, the barrier height in the non-strained FS system is ruled by the triplet-singlet transition. On the other hand, the SiC-supported pnictogens are under strain, which can change their interaction energy with O\({}_{2}\). Bismuth has the largest atomic radius among the pnictogens studied, being under the lowest strain, followed by Sb and As, for the SiC-supported structure [26]. Such a lower strain energy makes the initial configuration (before the O\({}_{2}\) reaction) lower in energy compared with the other pnictogens, leading to a higher barrier for the reaction. The rate of oxidation for the pristine pnictogen systems can be estimated as \[f_{0}=\nu e^{(-E_{b}/kT)} \tag{4}\] with \(\nu\) the attempt frequency and \(E_{b}\) the calculated barrier energy. In the kinetic theory of gases, at one atmosphere of pressure and 300 K (\(kT=0.026\) eV), the number of O\({}_{2}\) molecules arriving at a surface per unit area, per unit time is \[\frac{n}{4}\sqrt{\frac{8kT}{\pi m}}\sim\frac{1.87\cdot 10^{24}}{\rm s\cdot cm^{2}}, \tag{5}\] with \(n=5.1\cdot 10^{24}\) m\({}^{-3}\) the number density of O\({}_{2}\) molecules in air at atmospheric pressure and temperature, \(m=4.9\cdot 10^{-26}\) kg the O\({}_{2}\) mass, and \(kT=4.16\cdot 10^{-21}\) kg m\({}^{2}\) s\({}^{-2}\). Such a rate of oxidation, \(f_{0}\), is valid for the pristine non-oxidized surface. When the system approaches its most stable oxide phase X\({}_{2}\)O\({}_{3}\) (with X=As, Sb, Bi), this rate should vanish. Therefore, the rate of oxidation should decay with the surface oxygen concentration \(\eta\), from \(f_{0}\) down to zero at the critical concentration \(\eta_{c}\) equivalent to the oxygen density in the X\({}_{2}\)O\({}_{3}\) phase: \[f(\eta)=f_{0}e^{-\frac{\eta}{\eta_{c}-\eta}}. \tag{6}\] Given such an oxidation rate, the reaction time needed for the system to oxidize one cm\({}^{2}\) from oxygen concentration zero up to \(\eta\) is \[T=2\int_{0}^{\eta}[f(x)]^{-1}dx. \tag{7}\] Figure 3: O\({}_{2}\) reaction barriers (upper panels) and magnetization along the barrier (lower panels) calculated by the nudged elastic band method, for (a1)-(d1) the FS configuration and (a2)-(d2) the SiC configuration. The atoms' trajectory shown is for the Bi systems; similar geometries are observed for the other systems. In Fig. 4, we display the reaction time as a function of the relative O concentration \(\eta/\eta_{c}\), for different temperatures. Here we can see a fast oxidation process for the FS systems. Indeed, experimental results on multilayer pnictogen systems have shown a fast oxidation process on the exposed surface layer [13; 14; 15]. However, for SiC-supported systems, the time scale increases by several orders of magnitude. For the As and Sb systems, despite the increased time scale, the oxidation process still hinders the experimental realization of arsenene/antimonene at atmospheric conditions. On the other hand, supported Bi presents an oxidation process slow enough to allow exposure of its surface to atmospheric conditions. Increasing the temperature drastically reduces the oxidation reaction time. For instance, temperatures of about 390 K should be enough for the supported Bi system to lose its oxidation robustness. In summary, we have shown that the triplet-singlet spin transition of the O\({}_{2}\) molecule rules the oxidation process in monolayer pnictogens.
In summary, we have shown that the triplet-singlet spin transition of the O\({}_{2}\) molecule rules the oxidation process in monolayer pnictogens. Through our Landau-Zener statistical analysis, we have shown that the FS systems present higher spin-transition probabilities than the SiC-supported ones. By exploring the minimum-energy path of the O\({}_{2}\) dissociation, we have extracted the transition barrier energies, which are compatible with our spin-transition analysis. Moreover, spin-orbit coupling plays an important role in the oxidation mechanisms and time scales; in particular, it has a significant effect on the SiC-supported systems. The energy barrier decreases towards the heavier pnictogens for the FS systems (lowest for Bi), while the opposite trend is observed for the SiC-supported systems (highest for Bi). The computed barriers confirm the enhanced robustness against oxidation of the SiC-supported systems. Based on the reaction time scale for complete oxidation (at 300 K), we have established that SiC-supported Bi is robust against atmospheric conditions. Our results open a path to exploring the optimal 2D-system/substrate interplay, aiming at their experimental manipulation for further applications under atmospheric conditions. Figure 4: Reaction time for oxidation from zero oxygen concentration up to that of the X\({}_{2}\)O\({}_{3}\) phase (X=As, Sb, Bi), namely the critical concentration \(\eta_{c}\). Upper panels are for the FS configuration and lower panels for the SiC-supported one. \begin{table} \begin{tabular}{c c c c c c c} system & FS\({}_{soc}^{s=1}\) & FS\({}_{no-soc}^{s=1}\) & FS\({}_{soc}^{s=0}\) & SiC\({}_{soc}^{s=1}\) & SiC\({}_{no-soc}^{s=1}\) & SiC\({}_{soc}^{s=0}\) \\ \hline As & 0.90 & 0.91 & 0.00 & 1.17 & 1.17 & 0.36 \\ Sb & 0.59 & 0.59 & 0.00 & 1.20 & 0.91 & 0.47 \\ Bi & 0.40 & 0.40 & 0.00 & 2.38 & 2.11 & 1.16 \\ \end{tabular} \end{table} Table 3: Barrier energies \(E_{bar}\) (eV) for the O\({}_{2}\) reaction on pnictogen surfaces, for the initial state in the enforced singlet (\(s=0\)) and triplet (\(s=1\)) configurations without/with SOC (no-soc/soc). ###### Acknowledgements. The authors acknowledge financial support from the Brazilian agencies FAPESP (grants 20/14067-3, 19/20857-0, and 17/02317-2), CNPq, INCT-Nanocarbono, INCT-Materials Informatics, and Laboratório Nacional de Computação Científica for computer time (project ScafMat2 and emt2D).
2D materials offer an interesting platform for device design. However, oxidation can drastically change the properties of these systems and must be taken into account. Using {\it ab initio} calculations, we have investigated free-standing and SiC-supported As, Sb, and Bi monolayers. The oxidation process takes place through an O$_2$ spin-state transition, which we treat within the Landau-Zener picture. In addition, we have investigated the oxidation barriers and the role of spin-orbit coupling. Our calculations show that the SiC substrate increases the oxidation time scale compared with the free-standing monolayers. The energy barriers are consistent with our spin-transition analysis, and spin-orbit coupling affects the oxidation mechanisms and changes the time scales. For the free-standing systems, the energy barrier decreases going from As to Sb to Bi, while the opposite trend is observed for the SiC-supported systems.
2302.14793
TREXIO: A File Format and Library for Quantum Chemistry
TREXIO is an open-source file format and library developed for the storage and manipulation of data produced by quantum chemistry calculations. It is designed with the goal of providing a reliable and efficient method of storing and exchanging wave function parameters and matrix elements, making it an important tool for researchers in the field of quantum chemistry. In this work, we present an overview of the TREXIO file format and library. The library consists of a front-end implemented in the C programming language and two different back-ends: a text back-end and a binary back-end utilizing the HDF5 library which enables fast read and write operations. It is compatible with a variety of platforms and has interfaces for the Fortran, Python, and OCaml programming languages. In addition, a suite of tools has been developed to facilitate the use of the TREXIO format and library, including converters for popular quantum chemistry codes and utilities for validating and manipulating data stored in TREXIO files. The simplicity, versatility, and ease of use of TREXIO make it a valuable resource for researchers working with quantum chemistry data.
Evgeny Posenitskiy, Vijay Gopal Chilkuri, Abdallah Ammar, Michał Hapka, Katarzyna Pernal, Ravindra Shinde, Edgar Josué Landinez Borda, Claudia Filippi, Kosuke Nakano, Otto Kohulák, Sandro Sorella, Pablo de Oliveira Castro, William Jalby, Pablo López Ríos, Ali Alavi, Anthony Scemama
2023-02-28T17:44:54
http://arxiv.org/abs/2302.14793v2
# TREXIO: A File Format and Library for Quantum Chemistry ###### Abstract TREXIO is an open-source file format and library developed for the storage and manipulation of data produced by quantum chemistry calculations. It is designed with the goal of providing a reliable and efficient method of storing and exchanging wave function parameters and matrix elements, making it an important tool for researchers in the field of quantum chemistry. In this work, we present an overview of the TREXIO file format and library. The library consists of a front-end implemented in the C programming language and two different back-ends: a text back-end and a binary back-end utilizing the HDF5 library which enables fast read and write operations. It is compatible with a variety of platforms and has interfaces for the Fortran, Python, and OCaml programming languages. In addition, a suite of tools has been developed to facilitate the use of the TREXIO format and library, including converters for popular quantum chemistry codes and utilities for validating and manipulating data stored in TREXIO files. The simplicity, versatility, and ease of use of TREXIO make it a valuable resource for researchers working with quantum chemistry data. quantum chemistry, data, interoperability ## I Introduction Quantum chemistry relies on quantum mechanics to explain and predict the properties and behaviors of atoms, molecules, and materials. Although density functional theory (DFT) is one of the most widely used approaches thanks to its excellent ratio between computational cost and accuracy, another important tool is wave function theory (WFT), which describes the behavior of a quantum system in terms of its wave function. In order to perform WFT calculations, it is necessary to manipulate a large number of parameters, such as the expansion coefficients of the wave function and the matrix elements of the Hamiltonian operator. These parameters are typically numerous and difficult to handle, making it important to have a robust and efficient method for storing and accessing them. Reproducible research remains a challenging topic, despite recent advances such as the introduction of the FAIR (findable, accessible, interoperable, reusable) data principles.[1] A key aspect of reproducibility is software interoperability, which refers to the ability of different programs to work together and exchange information, allowing different systems to function as a cohesive whole. Interoperable software is prevalent nowadays and is a key component of the Unix philosophy.[2] In Unix shells, the most straightforward application of software interoperability is achieved through the _pipe_ operator, where the output of a program is the input of another program. Similarly, shell scripts are created through the composition of smaller programs, exchanging data through files or pipes. A major challenge of reproducible research is the uniformity of input/output (I/O) data within a particular research domain. The Unix philosophy recommends the use of text files because they are architecture-independent, readable in any language, and can be read as a stream, which is useful for making programs communicate over a network. However, storing data in a text format can result in large file sizes, and conversion from ASCII to binary format can be computationally expensive for large data sets.
To address this concern, domain-specific binary formats have been developed, such as the Joint Photographic Experts Group (JPEG) format[3] for digital images and the Moving Picture Experts Group (MPEG) format[4] for videos. These binary formats are utilized through standardized application programming interfaces (API). In the field of wave function theory, such a standard format and API is still lacking, and the purpose of the TREXIO library presented in this article is to fill this gap. This paper is organized as follows: firstly, a brief overview of the related work is presented. Secondly, the TREXIO format for the electronic wave functions is introduced together with some details concerning the internal representation and the associated API. Finally, some applications are demonstrated with a major focus on the interoperability achieved within the TREX Center of Excellence in Exascale Computing[5] due to the use of the TREXIO format. ## II Related Work It is worth mentioning that there have been several efforts to unify the data formats within different subdomains of quantum chemistry. Probably one of the earliest works in this direction was the definition of the Crystallographic Information File (CIF) for establishing databases of crystal structures.[6] A few years later, the Chemical Markup Language (CML)[7; 8] was introduced. It is a format based on the Extensible Markup Language (XML) which is used to describe chemical data: molecules, chemical properties, reactions, spectra, materials, _etc_. With formats like CIF or CML, the burden of following a standard is placed on the code _writing_ the data. As a consequence, any tool that can read the format will be able to interpret the data without needing to understand the specific code that was used to produce it. This means that data can be easily shared and reused across different programs, and new tools can be developed to work with the format without needing to know anything about the code used to produce the data. Recently, the cclib Python package[9], originally developed for parsing the results of computational chemistry calculations, has accumulated several internal converters capable of parsing and transforming the output of different programs into the internal representation called ccData. A similar approach has been taken by the developers of IOData[10], who have implemented converters and parsers for commonly used programs and their output files. However, there is currently no unified data representation or API that can be integrated into quantum chemistry codes to improve interoperability. Consequently, each time a given program modifies its input/output formatting, the IOData package must be adapted accordingly and promptly, which poses an additional challenge for maintainers. More recently, consolidated efforts have given rise to QCSchema[11], which provides API-like access to data generated by existing quantum chemistry codes, thereby addressing the issue of dependence on the output file's formatting style. In this case, the responsibility for adhering to conventions falls on the code _reading_ the data, as it must be aware of the conventions chosen by the code that generated the data. With the Electronic Structure Common Data Format (ESCDF)[12] and its associated library, codes that write data can supply metadata to assist codes that read data in comprehending the organization of the data in the files.
Hence, ESCDF aims to provide low-level tools and flexibility to facilitate the exchange of large datasets between codes with high-performance I/O. While this greatly reduces the difficulty of understanding conventions for developers reading the data, they may still need to apply different conversions depending on the code that generated the data. Consequently, implementing support for ESCDF may require more effort on the part of code developers compared to using a standardized format such as CML. Another popular format for storing quantum chemistry data is the Gaussian[13] fchk format. While it is a proprietary format specific to the Gaussian software package, its compatibility with several other software programs has contributed to its extensive utilization. However, the format's proprietary and closed-source nature prevents external developers from improving the format, leaving enhancements and compatibility updates solely in the hands of Gaussian developers. Recently, the mwfn[14] format was introduced with the primary goal of enhancing the existing solutions such as the wfn,[13] wfx,[15] and Molden[16] formats, which were designed to store parameters of molecular orbitals and atomic basis sets in view of reconstructing the one-particle density matrix. Although mwfn is an improvement on these other formats, it does not allow the user to store enough information for a wave function coming from a configuration interaction (CI) or coupled cluster (CC) calculation. For post-Hartree-Fock calculations, the FCIDUMP format[17] has become a _de facto_ standard because of its simplicity. It is a text-based format that only contains minimal information for building the second-quantized Hamiltonian, namely the one- and two-electron integrals in the basis of molecular orbitals (MO), the number of electrons, and information about the spin state and orbital symmetries. The nuclear coordinates and basis set are not saved in FCIDUMP files. The text format makes its adoption extremely simple, but it has a very high impact on the performance since FCIDUMP files are usually large. Although very practical, the use of the FCIDUMP format has other important limitations besides efficiency. Once a program has computed a post-Hartree-Fock wave function using an FCIDUMP file as an input, the parameters of the basis set and the molecular orbitals may have been lost unless they were stored in a separate file in another format. Although configuration interaction or coupled cluster calculations can be performed using FCIDUMP files, this format is too limited to be used for quantum Monte Carlo (QMC) calculations, which require _all_ the wave function parameters. The Q5Cost[18; 19; 20] initiative was one of the first attempts at standardizing WFT data by introducing both a format and an API to interact with it. With Q5Cost, it was possible to store all the wave function parameters of CI expansions together with the basis set, molecular orbitals, and even electron repulsion integrals. The Q5Cost library relied on the Hierarchical Data Format version 5 (HDF5)[21] to provide efficient I/O and keep the data well organized in the file. Nevertheless, Q5Cost had some severe drawbacks. First, Q5Cost was written in Fortran, which made its use tedious in other programming languages such as C++ or Python. In addition, to be able to interpret a Q5Cost file, it was often necessary to know which code had generated it.
Indeed, most WFT codes have different conventions in terms of normalization of the basis functions, ordering of the atomic orbitals, _etc._, and no conversion into a unique internal representation was imposed by the library. So the burden of understanding conventions was still on the shoulders of the readers of the files. Finally, Q5Cost had important technical limitations: the Q5Cost library was intended to be used as a compiled Fortran module (a so-called .mod file) that depended on the compiled Fortran modules provided by the HDF5 library. As the format of the compiled Fortran modules is specific to the compiler vendor and even to the version of the compiler, the Q5Cost library could not be simply linked as an external library to any code. Using the Q5Cost library in a Fortran code required the user's code to be compiled with the same Fortran compiler as the one that was used to compile both the HDF5 Fortran modules and the Q5Cost library. This contamination of dependencies could significantly impact the performance of the user's code, and the only way to solve that problem was to compile many different versions of the HDF5 Fortran interface and Q5Cost library with multiple compilers and compiler versions. The TREXIO initiative, heavily influenced by the Q5Cost project, aims to propose a standard format and library for wave function calculations. This initiative seeks to leverage the strengths of the Q5Cost project and learn from its design flaws that hindered its widespread adoption. One of the key improvements we aim to achieve is to shift the effort of adopting a format and conventions to the side of the code writing the data. This way, the files will be easily readable without any prior knowledge by any code, similar to CML or JPEG. ## III The TREXIO format The TREXIO format (version 2.3.0) is designed to store all the necessary information to represent a wave function, including: the number of up- and down-spin electrons, nuclear coordinates and charges, basis set and effective core potential (ECP) parameters, atomic and molecular orbital parameters, Slater determinants and CI coefficients, configuration state function (CSF) definitions, and metadata related to the description of excited states. It is also capable of storing data required for the computation of the wave function, such as one- and two-electron integrals, numerical integration grids used in DFT calculations, and one- and two-particle reduced density matrices. One notable feature of TREXIO is that it is self-contained, meaning that all the parameters needed to recreate the wave function are explicitly stored within the file, eliminating the need for external databases. For example, instead of storing the name of a basis set (such as cc-pVDZ), the actual basis set parameters used in the calculation are stored. All data are stored in atomic units for simplicity. The data in TREXIO are organized into _groups_, each containing multiple _attributes_ defined by their _type_ and _dimensions_. Each attribute within a group corresponds to a single scalar or array variable in a code. In what follows, the notation <group>.<attribute> will be used to identify an attribute within a group. For example, nucleus.charge refers to the charge attribute in the nucleus group. It is an array of type float with dimension nucleus.num, the attribute describing the number of nuclei. For simplicity, the singular form is always used for the names of groups and attributes.
### Data types So that TREXIO can be used in any language, we use a limited number of data types. It is important to keep in mind that these types are abstract in the sense that they are defined independently of their implementation, and are not tied to any specific representation on a computer. The main data types are int for integers, float for floating-point values, and str for character strings. The real and imaginary parts of complex numbers are stored separately as floats. To minimize the risk of integer overflow and accuracy loss, numerical data types are stored using 64-bit representations by default. However, in specific cases where integers are bounded (such as orbital indices in four-index integrals), the smallest possible representation is used to reduce the file size. The API presented in the next section handles any necessary type conversions. There are also two types derived from int: dim and index. dim is used for dimensioning variables, which are positive integers used to specify the dimensions of an array. In the previous example, nucleus.num is a dimensioning variable that specifies the dimensions of the nucleus.charge array. index is used for integers that correspond to array indices, because some languages (such as C or Python) use zero-based indexing, while others (such as Fortran) use one-based indexing by default. For convenience, values of the index type are shifted by one when TREXIO is used in one-based languages to be consistent with the semantics of the language. Arrays can be stored in either dense or sparse formats. If the sparse format is selected, the data is stored in coordinate format. For example, the element A(i,j,k,l) is stored as a quadruplet of integers \((i,j,k,l)\) along with the corresponding value. Typically, one- and two-dimensional arrays are stored as dense arrays, while arrays with higher dimensions are stored in sparse format. ### Stored data In this section, we provide a comprehensive overview of the data that can be stored in TREXIO files. A complete list of the groups and attributes is available as supplementary information or in the documentation of the library. In both resources, multi-dimensional arrays are expressed in column-major order, meaning that elements of the same column are stored contiguously. #### ii.2.1 Metadata In order to facilitate the archiving of TREXIO files in open-data repositories, users have the option to store metadata in the metadata group. This includes the names of the codes that were used to create the file, a list of authors, and a textual description. This allows for more information about the file to be easily accessible and transparent. #### ii.2.2 System information The chemical system consists of nuclei and electrons, where the nuclei are considered as fixed point charges with Cartesian coordinates. The wave function is stored in the spin-free formalism,[22] and therefore, it is necessary to explicitly store the number of spin-up (\(N_{\uparrow}\)) and spin-down (\(N_{\downarrow}\)) electrons. These numbers correspond to the normalization of the spin-up and spin-down single-particle reduced density matrices. Certain calculations, such as DFT calculations, require the use of a numerical integration grid. The grid group provides information for storing grids, inspired by the data required by the numgrid software.[23; 24] To keep things simple, TREXIO can only store a single wave function per file. 
When working with excited states, it is often the case that multiple states only differ in their CI coefficients, while other parameters (such as geometry, basis set, molecular orbitals, etc.) are the same. To facilitate the storage of multiple states, TREXIO provides the option to store all the data needed to describe one state in a main file, along with the names of additional TREXIO files that contain only the state-specific parameters. #### ii.2.3 Basis set In the basis group, the atomic basis set is defined as a list of shells. Each shell \(i\) is centered at a center \(A_{i}\), has a specified angular momentum \(l_{i}\), and a radial function \(R_{i}\). The radial function is a linear combination of \(N_{\text{prim}\,i}\)_primitive_ functions, which can be Slater type orbitals (STO, \(p=1\)) or Gaussian type orbitals (GTO, \(p=2\)). These primitive functions are parameterized by exponents \(\gamma_{ki}\) and coefficients \(a_{ki}\): \[R_{i}(\mathbf{r})=\mathcal{N}_{i}|\mathbf{r}-\mathbf{R}_{A_{i}}|^{n_{i}}\sum_ {k=1}^{N_{\text{prim}\,i}}a_{ki}\,f_{ki}(\gamma_{ki},p)\,e^{-\gamma_{ki}| \mathbf{r}-\mathbf{R}_{A_{i}}|^{p}}. \tag{1}\] Different codes have different normalization practices, so it is necessary to store normalization factors in the TREXIO file to ensure that it is self-contained and does not rely on the client program having the ability to compute overlap integrals. Some codes assume that the contraction coefficients are applied to _normalized_ linear combinations of primitives, so a normalization constant \(f_{ki}\) for each primitive must also be stored. Some codes assume that the functions \(R_{i}\) are normalized, requiring the computation of an additional normalization factor, \(\mathcal{N}_{i}\). #### ii.2.4 Atomic orbitals The ao group in TREXIO contains information related to the expansion of the shells in the basis set into atomic orbitals (AOs). For example, a \(p\)-shell is expanded into three AOs: \(p_{x}\), \(p_{y}\), and \(p_{z}\). AOs are defined as follows: \[\chi_{i}(\mathbf{r})=\mathcal{N}_{i}^{\prime}\,P_{\eta(i)}(\mathbf{r})\,R_{s( i)}(\mathbf{r}) \tag{2}\] where \(i\) is the atomic orbital index, \(P\) refers to either polynomials or spherical harmonics, and \(s(i)\) specifies the shell on which the AO is expanded. \(\eta(i)\) denotes the chosen angular function. The AOs can be expressed using real spherical harmonics or polynomials in Cartesian coordinates. In the case of real spherical harmonics, the AOs are ordered as \(0,+1,-1,+2,-2,\ldots,+m,-m\). In the case of polynomials, the canonical (or alphabetical) ordering is used, \[p :p_{x},p_{y},p_{z}\] \[d :d_{xx},d_{xy},d_{xz},d_{yy},d_{yz},d_{zz}\] \[f :f_{xxx},f_{xxy},f_{xxz},f_{xyy},f_{xyz},f_{xzz},f_{yyy},f_{yyz},f_{yzz},f_{zzz}\] \[\vdots\] Note that for \(p\) orbitals in real spherical harmonics, the ordering is \(0,+1,-1\) which corresponds to \(p_{z},p_{x},p_{y}\). \(\mathcal{N}_{i}^{\prime}\) is a normalization factor that allows for different normalization coefficients within a single shell, as in the GAMESS[25] convention where each individual function is unit-normalized. Using GAMESS convention, the normalization factor of the shell \(\mathcal{N}_{d}\) (Eq. 1) in the basis group is appropriate for instance for the \(d_{z^{2}}\) function (i.e. \(\mathcal{N}_{d}\equiv\mathcal{N}_{z^{2}}\)) but not for the \(d_{xy}\) AO, so the correction factor \(\mathcal{N}_{i}^{\prime}\) for \(d_{xy}\) in the \(\mathsf{ao}\) group is the ratio \(\frac{\mathcal{N}_{xy}}{\mathcal{N}_{z^{2}}}\).
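As an illustration of Eq. (1), the sketch below evaluates a contracted Gaussian radial function (\(p=2\), \(n_{i}=0\)) at a given distance from its center; the exponents and coefficients are hypothetical, and all normalization factors are set to one, sidestepping the convention issues discussed above.

```python
import math

def radial_function(r, exponents, coefficients, p=2, n=0):
    """Contracted radial function of Eq. (1) at a distance r from the
    center, with the normalization factors N_i and f_ki set to 1."""
    return r**n * sum(a * math.exp(-g * r**p)
                      for a, g in zip(coefficients, exponents))

# Hypothetical 3-primitive s-type GTO contraction (not a real basis set):
exponents = [13.0, 2.0, 0.4]
coefficients = [0.02, 0.14, 0.48]
print(radial_function(1.0, exponents, coefficients))
```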
#### ii.2.5 Effective core potentials An effective core potential (ECP) \(V_{A}^{\rm{ECP}}\) can be used to replace the core electrons of atom A. It can be expressed as: [26] \[V_{A}^{\rm{ECP}}=V_{A\ell_{\rm{max}}+1}+\sum_{\ell=0}^{\ell_{\rm{max}}}\delta V _{A\ell}\sum_{m=-\ell}^{\ell}|Y_{\ell m}\rangle\langle Y_{\ell m}| \tag{3}\] The first term in this equation is attributed to the local channel, while the remaining terms correspond to non-local channel projections. \(\ell_{\rm{max}}\) refers to the maximum angular momentum in the non-local component of the ECP. The functions \(\delta V_{A\ell}\) and \(V_{A\ell_{\rm{max}}+1}\) are parameterized as: \[\delta V_{A\ell}(\mathbf{r}) =\sum_{q=1}^{N_{\rm{eff}}}\beta_{Aq\ell}\left|\mathbf{r}-\mathbf{ R}_{A}\right|^{n_{Aq\ell}}e^{-\alpha_{Aq\ell}\left|\mathbf{r}-\mathbf{R}_{A} \right|^{2}}\] \[V_{A\ell_{\rm{max}}+1}(\mathbf{r}) =-\frac{Z_{\rm{eff}}}{\left|\mathbf{r}-\mathbf{R}_{A}\right|}+ \delta V_{A\ell_{\rm{max}}+1}(\mathbf{r}) \tag{4}\] where \(Z_{\rm{eff}}\) is the effective nuclear charge of the center. All the parameters can be stored in the ecp group. #### ii.2.6 Molecular orbitals The \(\mathsf{mo}\) group is devoted to the storage of the molecular orbitals (MOs). MO coefficients are stored in a two-dimensional array, with additional information such as symmetries or occupation numbers stored in separate arrays. It is also possible to store the spin to enable the description of unrestricted Hartree-Fock or unrestricted Kohn-Sham determinants. #### ii.2.7 Hamiltonian matrix elements One-electron integrals can be stored in the AO and MO bases in the groups \(\mathsf{ao}\)_\(\mathsf{1e}\)_\(\mathsf{int}\) and \(\mathsf{mo}\)_\(\mathsf{1e}\)_\(\mathsf{int}\), respectively. Similarly, two-electron integrals can be stored in the AO and MO bases in the groups \(\mathsf{ao}\)_\(\mathsf{2e}\)_\(\mathsf{int}\) and \(\mathsf{mo}\)_\(\mathsf{2e}\)_\(\mathsf{int}\), respectively. One-electron integrals are stored as two-dimensional arrays, while two-electron integrals are stored in a sparse format, with a quadruplet of indices and the corresponding value stored for each non-zero integral. The order of the indices follows Dirac's bra-ket notation. It is also possible to store a low-rank representation of the two-electron integrals, obtained via a Cholesky decomposition. #### ii.2.8 CI expansion The wave function \(\Psi\) can be represented as a combination of Slater determinants \(D_{I}\): \[\left|\Psi\right\rangle=\sum_{I}C_{I}\left|D_{I}\right\rangle \tag{5}\] In the determinant group of a TREXIO file, the definition of these Slater determinants, as well as the configuration interaction (CI) expansion coefficients, can be stored. Each Slater determinant is represented as a Waller-Hartree double determinant, [27] i.e. the product of a determinant with \(\uparrow\)-spin electrons and a determinant with \(\downarrow\)-spin electrons. To enable the storage of arbitrary CI expansions and to reduce the storage size, the determinants are stored as pairs of binary strings: one for the \(\uparrow\) spin sector and one for the \(\downarrow\) spin. Each binary string has a length equal to the number of MOs, with the \(i\)-th bit set to one if and only if the \(i\)-th MO is included in the determinant. As the creation of these binary strings may be tedious, we provide some helper functions to transform lists of orbital indices into binary strings.
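The library's helper functions are not reproduced here; as a minimal sketch of the convention just described, the snippet below packs a list of occupied MO indices into a binary string whose \(i\)-th bit is set when the \(i\)-th MO is occupied (the function name is hypothetical).

```python
def orbitals_to_bitstring(occupied, n_mo):
    """One spin sector of a determinant as a binary string of length n_mo:
    bit i is 1 iff MO i (zero-based) is occupied."""
    bits = 0
    for i in sorted(occupied):  # increasing order; see the sign remark below
        bits |= 1 << i
    return format(bits, f"0{n_mo}b")

# Up- and down-spin sectors of a 4-electron determinant in 6 MOs:
print(orbitals_to_bitstring([0, 1], 6))  # '000011'
print(orbitals_to_bitstring([0, 2], 6))  # '000101'
```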
If the orbital indices are not in increasing order, a reordering is made and the user is informed if a change of sign is needed in the corresponding CI coefficient. Alternatively, the wave function may be expanded in a basis of configuration state functions (CSFs), \[\left|\Psi\right\rangle=\sum_{I}\tilde{C}_{I}\left|\psi_{I}\right\rangle. \tag{6}\] where each CSF \(\psi_{I}\) is a linear combination of Slater determinants. The csf group allows for the storage of the CSF expansion coefficients, as well as the matrix \(\langle D_{I}|\psi_{J}\rangle\) in a sparse format. This enables the projection of the CSFs onto the basis of Slater determinants. #### ii.2.9 Amplitudes The wave function may also be expressed in terms of the action of the cluster operator \(\hat{T}\): \[\hat{T}=\hat{T}_{1}+\hat{T}_{2}+\hat{T}_{3}+\ldots \tag{7}\] on a reference wave function \(\Psi\), where \(\hat{T}_{1}\) is the single excitation operator, \[\hat{T}_{1}=\sum_{ia}t_{i}^{a}\,\hat{a}_{a}^{\dagger}\hat{a}_{i}, \tag{8}\] \(\hat{T}_{2}\) is the double excitation operator, \[\hat{T}_{2}=\frac{1}{4}\sum_{ijab}t_{ij}^{ab}\,\hat{a}_{a}^{\dagger}\hat{a}_{b} ^{\dagger}\hat{a}_{j}\hat{a}_{i}, \tag{9}\] _etc_. Indices \(i\), \(j\), \(a\) and \(b\) denote molecular orbital indices. Wave functions obtained with perturbation theory or configuration interaction are of the form: \[|\Phi\rangle=\hat{T}|\Psi\rangle \tag{10}\] and coupled-cluster wave functions are of the form: \[|\Phi\rangle=e^{\hat{T}}|\Psi\rangle \tag{11}\] The reference wave function \(\Psi\) is stored using the determinant and/or csf groups, and the amplitudes are stored using the amplitude group. The attributes with the exp suffix correspond to exponentialized operators. ### Reduced density matrices The reduced density matrices, stored in the rdm group, are defined in the basis of molecular orbitals. The \(\uparrow\)-spin and \(\downarrow\)-spin components of the one-body density matrix are given by \[\gamma^{\uparrow}_{ij} =\langle\Psi|\hat{a}^{\dagger}_{j\alpha}\,\hat{a}_{i\alpha}|\Psi\rangle \tag{12}\] \[\gamma^{\downarrow}_{ij} =\langle\Psi|\hat{a}^{\dagger}_{j\beta}\,\hat{a}_{i\beta}|\Psi\rangle \tag{13}\] and the spin-summed one-body density matrix is \[\gamma_{ij}=\gamma^{\uparrow}_{ij}+\gamma^{\downarrow}_{ij} \tag{14}\] The \(\uparrow\uparrow\), \(\downarrow\downarrow\), and \(\uparrow\downarrow\) components of the two-body density matrix are given by \[\Gamma^{\uparrow\uparrow}_{ijkl} =\langle\Psi|\hat{a}^{\dagger}_{k\alpha}\,\hat{a}^{\dagger}_{l \alpha}\hat{a}_{j\alpha}\,\hat{a}_{i\alpha}|\Psi\rangle \tag{15}\] \[\Gamma^{\downarrow\downarrow}_{ijkl} =\langle\Psi|\hat{a}^{\dagger}_{k\beta}\,\hat{a}^{\dagger}_{l \beta}\hat{a}_{j\beta}\,\hat{a}_{i\beta}|\Psi\rangle\] (16) \[\Gamma^{\uparrow\downarrow}_{ijkl} =\langle\Psi|\hat{a}^{\dagger}_{k\alpha}\,\hat{a}^{\dagger}_{l \beta}\hat{a}_{j\beta}\,\hat{a}_{i\alpha}|\Psi\rangle+\] \[\langle\Psi|\hat{a}^{\dagger}_{l\alpha}\,\hat{a}^{\dagger}_{k \beta}\hat{a}_{i\beta}\,\hat{a}_{j\alpha}|\Psi\rangle, \tag{17}\] and the spin-summed two-body density matrix is \[\Gamma_{ijkl}=\Gamma^{\uparrow\uparrow}_{ijkl}+\Gamma^{\downarrow\downarrow} _{ijkl}+\Gamma^{\uparrow\downarrow}_{ijkl}. \tag{18}\] ### Correlation factors Explicit correlation factors can be introduced in the wave function, such as in QMC, \(F_{12}\), or transcorrelated methods. In the current version of the library, it is possible to store two different types of Jastrow factors.
The Jastrow factor is an \(N\)-electron function which multiplies the reference wave function expansion: \(\Psi=\Phi\times\exp(J)\), where \[J(\mathbf{r},\mathbf{R})=J_{\mathrm{eN}}(\mathbf{r},\mathbf{R})+J_{\mathrm{ ee}}(\mathbf{r})+J_{\mathrm{eeN}}(\mathbf{r},\mathbf{R}). \tag{19}\] In the following, we use the notations \(r_{ij}=|\mathbf{r}_{i}-\mathbf{r}_{j}|\) and \(R_{i\alpha}=|\mathbf{r}_{i}-\mathbf{R}_{\alpha}|\), where indices \(i\) and \(j\) correspond to electrons and \(\alpha\) to nuclei. The first form of Jastrow factor is the one used in the CHAMP [28] program. [29]\(J_{\mathrm{eN}}\) contains electron-nucleus terms: \[J_{\mathrm{eN}}(\mathbf{r},\mathbf{R})=\sum_{i=1}^{N_{\mathrm{ elec}}}\sum_{\alpha=1}^{N_{\mathrm{nucl}}}\left[\frac{a_{1,\alpha}\,f_{\alpha}(R_{i \alpha})}{1+a_{2,\alpha}\,f_{\alpha}(R_{i\alpha})}\right.\] \[\left.+\sum_{p=2}^{N_{\mathrm{ord}}^{a}}a_{p+1,\alpha} \left[f_{\alpha}(R_{i\alpha})\right]^{p}-J_{\mathrm{eN}}^{\infty}\right] \tag{20}\] \(J_{\mathrm{ee}}\) contains electron-electron terms: \[J_{\mathrm{ee}}(\mathbf{r})=\sum_{i=1}^{N_{\mathrm{elec}}}\sum_{j=1}^{i-1} \left[\frac{\frac{1}{2}\big{(}1+\delta^{\uparrow\downarrow}_{ij}\big{)}\,b_{ 1}\,f_{\mathrm{ee}}(r_{ij})}{1+b_{2}\,f_{\mathrm{ee}}(r_{ij})}\right.\] \[\left.+\sum_{p=2}^{N_{\mathrm{ord}}^{b}}b_{p+1}\left[f_{ \mathrm{ee}}(r_{ij})\right]^{p}-J_{\mathrm{ee},ij}^{\infty}\right] \tag{21}\] where \(\delta^{\uparrow\downarrow}_{ij}\) is zero when the electrons \(i\) and \(j\) have the same spin, and one otherwise. \(J_{\mathrm{eeN}}\) contains electron-electron-nucleus terms: \[J_{\mathrm{eeN}}(\mathbf{r},\mathbf{R})=\sum_{\alpha=1}^{N_{ \mathrm{nucl}}}\sum_{i=1}^{N_{\mathrm{elec}}}\sum_{j=1}^{i-1}\sum_{p=2}^{N_{ \mathrm{ord}}}\sum_{k=0}^{p-1}\sum_{l=0}^{p-k-2\delta_{k,0}}c_{lkp\alpha} \left[g_{\mathrm{ee}}(r_{ij})\right]^{k} \tag{22}\] \[\left[\left[g_{\alpha}(R_{i\alpha})\right]^{l}+\left[g_{\alpha}( R_{j\alpha})\right]^{l}\right]\left[g_{\alpha}(R_{i\,\alpha})\,g_{\alpha}(R_{j \alpha})\right]^{(p-k-l)/2}, \tag{23}\] \(c_{lkp\alpha}\) being non-zero only when \(p-k-l\) is even. The terms \(J_{\mathrm{eN}}^{\infty}\) and \(J_{\mathrm{ee}}^{\infty}\) are shifts to ensure that \(J_{\mathrm{ee}}\) and \(J_{\mathrm{eN}}\) have an asymptotic value of zero. \(f\) and \(g\) are scaling functions defined as \[f_{\alpha}(r)=\frac{1-e^{-\kappa_{\alpha}\,r}}{\kappa_{\alpha}}\text{ and }g_{\alpha}(r)=e^{-\kappa_{\alpha}\,r}, \tag{24}\] and the possible presence of an index \(\alpha\) indicates that the scaling coefficient \(\kappa\) depends on the atom \(\alpha\). The second form of Jastrow factor is the \(\mu\) Jastrow factor [30] \[J_{\mathrm{ee}}(\mathbf{r})=\sum_{i=1}^{N_{\mathrm{elec}}}\sum_{j=1}^{i-1}\left[r_{ ij}\left(1-\mathrm{erf}(\mu\,r_{ij})\right)-\frac{1}{\mu\sqrt{\pi}}e^{-(\mu\,r_{ ij})^{2}}\right]. \tag{25}\] It is a single parameter correlation factor that has been recently introduced in the context of transcorrelated methods. It imposes the electron-electron cusp and is built such that the leading order in \(1/r_{12}\) of the effective two-electron potential reproduces the long-range interaction of the range-separated density functional theory. An envelope function has then been introduced to cancel out the Jastrow effects between two electrons when at least one electron is close to a nucleus, and standard one-body terms were also introduced to avoid the expansion of the one-body density.
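A minimal numerical sketch of Eq. (25) follows, with the exponential term placed inside the pair sum (as the appearance of \(r_{ij}\) requires) and hypothetical electron positions.

```python
import math
from itertools import combinations

def mu_jastrow_ee(positions, mu):
    """Electron-electron part of the single-parameter mu Jastrow factor,
    Eq. (25), summed over all electron pairs."""
    j = 0.0
    for p, q in combinations(positions, 2):
        r = math.dist(p, q)
        j += r * (1.0 - math.erf(mu * r)) \
             - math.exp(-(mu * r) ** 2) / (mu * math.sqrt(math.pi))
    return j

# Three electrons at hypothetical positions (bohr), mu = 0.87:
print(mu_jastrow_ee([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.5, 0.0)], 0.87))
```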
As there exist multiple forms of Jastrow factors in the literature, contributions to extend this section are welcome. ### QMC data We also provide in the qmc group some information specific to QMC calculations. In QMC methods, the wave function is evaluated at points in the \(3N\)-dimensional space, where \(N\) is the number of electrons. It might be convenient to store the coordinates of points together with the wave function, and to store the value of the wave function and the local energy \(\hat{H}\Psi(\mathbf{r})/\Psi(\mathbf{r})\) evaluated at these points, for example, to check that different codes give the same values. ## IV The TREXIO library The TREXIO library is written in the C language, and is licensed under the open-source 3-clause BSD license to allow for use in all types of quantum chemistry software, whether commercial or not. The design of the library is divided into two main sections: the front-end and the back-end. The front-end serves as the interface between users and the library, while the back-end acts as the interface between the library and the physical storage. ### The front-end By using the TREXIO library, users can store and extract data in a consistent and organized manner. The library provides a user-friendly API, including functions for reading, writing, and checking for the existence of data. The functions follow the pattern trexio_[has|read|write]_<group>_<attribute>, where the group and attribute specify the particular data being accessed. It also includes an error handling mechanism, in which each function call returns an exit code of type trexio_exit_code, explaining the type of error. This can be used to catch exceptions and improve debugging in the upstream user application. Figures 1 and 2 show examples of usage of the TREXIO library in C and Python, respectively. To ensure the consistency of the data, the attributes can only be written if all the other attributes on which they explicitly depend have been written. For example, as the nucleus.coord array is dimensioned by the number of nuclei nucleus.num, the nucleus.coord attribute can only be written after nucleus.num. However, the library is not aware of non-explicit dependencies, such as the relation between the electron repulsion integrals (ERIs) and MO coefficients. Complete control of the consistency of the data is therefore impossible, so the attributes were chosen to be _immutable_ by default. By allowing data to be written only once, the risk of modifying data in a way that creates inconsistencies is reduced. For example, if the ERIs have already been written, it would be inconsistent to later modify the MO coefficients. To allow for flexibility, the library also allows for the use of an _unsafe_ mode, in which data can be overwritten. However, this mode carries the risk of producing inconsistent files, and the metadata group's unsafe attribute is set to 1 to indicate that the file has potentially been modified in a dangerous way. This attribute can be manually reset to 0 if the user is confident that the modifications made are safe. Figure 1: C code writing the nuclear coordinates of a water molecule in a TREXIO file, with error handling. Figure 2: Python code writing the nuclear coordinates of a water molecule in a TREXIO file. ### The back-end At present, TREXIO supports two back-ends: one relying only on the C standard library to produce plain text files (the so-called text back-end), and one relying on the HDF5 library.
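As a rough Python counterpart to the examples of Figures 1 and 2 (whose listings are not reproduced in this excerpt), the sketch below writes the nuclear data of a water molecule, selecting the HDF5 back-end when opening the file. The writer names follow the trexio_[has|read|write]_<group>_<attribute> pattern described above; the exact file-opening signature is an assumption to be checked against the library documentation.

```python
import trexio

# Nuclear data of a water molecule (coordinates in bohr)
charges = [8.0, 1.0, 1.0]
coords = [[0.0, 0.0, -0.1294],
          [0.0, -1.4941, 1.0274],
          [0.0, 1.4941, 1.0274]]

f = trexio.File("h2o.trexio", mode="w", back_end=trexio.TREXIO_HDF5)
trexio.write_nucleus_num(f, 3)          # dimensioning attribute comes first
trexio.write_nucleus_charge(f, charges)
trexio.write_nucleus_coord(f, coords)
f.close()
```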
With the text back-end, the TREXIO "file" is a directory containing multiple text files, one for each group. This back-end is intended to be used in development environments, as it gives the user access to standard tools such as diff and grep. In addition, text files are better adapted than binary files for version control systems such as Git, so this format can also be used for storing reference data for unit tests. HDF5 is a binary file format and library for storing and managing large amounts of data in a hierarchical structure. It allows users to manipulate data in a way similar to how files and directories are manipulated within the file system. The HDF5 library provides optimal performance through its memory mapping mechanism and supports advanced features such as serial and parallel I/O, chunking, and compression filters. However, HDF5 files are in binary format, which requires additional tools such as h5dump to view them in a human-readable format. HDF5 is widely used in scientific and engineering applications, and is known for its high performance and ability to handle large data sets efficiently. The TREXIO HDF5 back-end is the recommended choice for production environments, as it provides high I/O performance. Furthermore, all data is stored in a single file, making it especially suitable for parallel file systems like Lustre. These file systems are optimized for large, sequential I/O operations and are not well-suited for small, random I/O operations. When multiple small files are used, the file system may become overwhelmed with metadata operations like creating, deleting, or modifying files, which can adversely affect performance. In a benchmarking program designed to compare the two back-ends of the library, the HDF5 back-end was found to be significantly faster than the text back-end. The program wrote a wave function made up of 100 million Slater determinants and measured the time taken to write the Slater determinants and CI coefficients. The HDF5 back-end achieved a speed of \(10.4\times 10^{6}\) Slater determinants per second and a data transfer rate of 406 MB/s, while the text back-end had a speed of \(1.1\times 10^{6}\) determinants per second and a transfer rate of 69 MB/s. These results were obtained on a DELL 960 GB mixed-use solid-state drive (SSD). The HDF5 back-end was able to achieve a performance level close to the peak performance of the SSD, while the text back-end's performance was limited by the speed of the CPU for performing binary-to-ASCII conversions. In addition to the HDF5 and text back-ends, it is also possible to introduce new back-ends to the library. For example, a back-end could be created to support object storage systems, such as those used in cloud-based applications [31] or for archiving in open data repositories. To use a new back-end, only a minor modification is required in the code using TREXIO: the correct back-end argument needs to be passed to the trexio_open function (see Figures 1 and 2). ### Supported languages One of the main benefits of using C as the interface for a library is that it is easy to use from other programming languages. Many programming languages, such as Python or Julia, provide built-in support for calling C functions, which means that it is relatively straightforward to write a wrapper that allows a library written in C to be called from another language.
In general, libraries with a C interface are the easiest to use from other programming languages, because C is widely supported and has a simple, stable application binary interface (ABI). Other languages, such as Fortran and C++, may have more complex ABIs and may require more work to interface with them. TREXIO has been employed in codes developed in various programming languages, including C, C++, Fortran, Python, OCaml, and Julia. As Julia is designed to enable the use of C functions without the need for additional manual interfacing, the TREXIO C header file was automatically integrated into Julia programs using the CBindings.jl package.[32] In contrast, specific bindings have been provided for Fortran, Python, and OCaml to simplify the user experience. In particular, the binding for Fortran is not distributed as multiple compiled Fortran module files (.mod), but instead as a single Fortran source file (.F90). The distribution of the source file instead of the compiled module has multiple benefits. It ensures that the TREXIO module is always compiled with the same compiler as the client code, avoiding the compatibility problem of .mod files between different compiler versions and vendors. The single-file model requires very few changes in the build system of the user's codes, and it facilitates the search for the interface of a particular function. In addition, advanced text editors can parse the TREXIO interface to propose interactive auto-completion of the TREXIO function names to the developers. Finally, the Python module, partly generated with SWIG [33] and fully compatible with NumPy,[34] allows Python users to interact with the library in a more intuitive and user-friendly way. Using the Python interface is likely the easiest way to begin using TREXIO and understanding its features. In order to help users get started with TREXIO and understand its functionality, tutorials in Jupyter notebooks are available on GitHub ([https://github.com/TREX-CoE/trexio-tutorials](https://github.com/TREX-CoE/trexio-tutorials)), and can be executed via the Binder platform. ### Source code generation and documentation Source code generation is a valuable technique that can significantly improve the efficiency and consistency of software development. By using templates to generate code automatically, developers can avoid manual coding and reduce the risk of errors or inconsistencies. This approach is particularly useful when a large number of functions follow similar patterns, as in the case of the TREXIO library, where functions are named according to the pattern trexio_[has|read|write]_<group>_<attribute>. By generating these functions from the format specification using templates, the developers can ensure that the resulting code follows a consistent structure and is free from errors or inconsistencies. The description of the format is written in a text file in the Org format.[35] Org is a structured plain text format, containing information expressed in a lightweight markup language similar to the popular Markdown language.[36] While Org was introduced as a mode of the GNU Emacs text editor, its basic functionalities have been implemented in most text editors such as Vim, Atom or VS Code. There are multiple benefits in using the Org format. The first benefit is that the Org syntax is easy to learn and allows for the insertion of equations in LaTeX syntax.
Additionally, Org files can be easily converted to HyperText Markup Language (HTML) or Portable Document Format (PDF) for generating documentation. The second benefit is that GNU Emacs is a programmable text editor and code blocks in Org files can be executed interactively, similar to Jupyter notebooks. These code blocks can also manipulate data defined in tables, and this feature is used to automatically transform tables describing groups and attributes in the documentation into a JavaScript Object Notation (JSON) file.[37; 38] This JSON file is then used by a Python script to generate the needed functions in the C language, as well as header files and some files required for the Fortran, Python, and OCaml interfaces. With this approach, contributions to the development of the TREXIO library can be made simply by adding new tables to the Org file, which can be submitted as _pull requests_ on the project's GitHub repository ([https://github.com/trex-coe/trexio](https://github.com/trex-coe/trexio)). Overall, this process allows for a more efficient and consistent development process and enables contributions from a wider range of individuals, regardless of their programming skills. ### Availability and reliability The TREXIO library is designed to be portable and easy to install on a wide range of systems. It follows the C99 standard to ensure compatibility with older systems, and can be configured with either the GNU Autotools or the CMake build systems. The only external dependency is the HDF5 library, which is widely available on HPC platforms and as packages on major Linux distributions. Note that it is possible to disable the HDF5 back-end at configuration time, allowing TREXIO to operate only with the text back-end and have zero external dependencies. This can be useful for users who may not be able to install HDF5 on certain systems. TREXIO is distributed as a tarball containing the source code, generated code, documentation, and Fortran interface. It is also available as a binary .deb package for Debian-based Linux distributions and as packages for Guix[39], Spack[40] and Conda.[41] The Python module can be found in the PyPI repository, the OCaml binding is available in the official OPAM repository, and the .deb packages are already available in Ubuntu 23.04. To ensure the reliability and quality of the TREXIO library, we have adopted standard continuous integration and deployment practices. For example, we use unit tests that are executed automatically using GitHub actions whenever modifications are made to the codebase. These tests cover a wide range of functionalities and help to identify any potential issues or bugs in the code. Additionally, the TREXIO library is regularly used by the authors of the present paper, and as such, it is continuously tested and validated in the context of ongoing research activities. TREXIO was built, tested and installed successfully on 20 different architectures supported by the Debian build farm. Furthermore, we ensure that the quality of our code meets the requirements of the CERT coding standards,[42] and we use the cppcheck[43] tool to validate the quality of our code. These measures demonstrate our commitment to ensuring that the TREXIO library is a reliable and trustworthy tool. ### Open-Source Governance and Sustainability Strategies Our approach to the development and governance of the TREXIO library follows the standard design of open-source projects, which typically involve a collaborative effort from a community of contributors.
The TREX European Center of Excellence initiated the project and proposed the first functional version of the software. However, we consider this to be just the starting point for a larger community effort. As an open-source project, we encourage contributions from anyone interested in the development of the library. This includes not only contributions to the codebase but also contributions to the documentation, testing, and other aspects of the project. We believe that this collaborative approach is the key to the success of any open-source project. Regarding governance, we have a small group of maintainers who oversee the development of the project, review and merge contributions, and ensure the quality of the code. However, we strive to make the development process as transparent and open as possible, and we encourage contributions from anyone interested in the project. Overall, our strategy for the governance and development of the TREXIO library follows the standard design of open-source projects, which emphasizes collaboration and transparency. We believe that this approach, combined with our commitment to seeking and securing funding for the continued development and maintenance of TREXIO, will ensure the long-term success and usefulness of the library to the quantum chemistry community. ## V Examples of applications The open-source Python package trexio_tools[44] has been created to enhance the use of the TREXIO library and corresponding data format. It includes converters for transforming output files from codes such as Gaussian, GAMESS,[25] or PySCF[45] into TREXIO files. However, in the future, it would be preferable if the developers of these codes were to offer the option to export data in TREXIO format in order to maintain numerical precision and ensure consistency in the stored data. In addition, the package includes utilities to convert certain data blocks from TREXIO files into FCIDUMP or Molden formats. It also has a feature that validates the consistency of a wave function by numerically calculating overlap integrals on a grid and comparing them to the overlap matrix stored in the file. This helps to confirm that all basis set parameters are consistent with the conventions of the original program. TREXIO is currently used to exchange wave function parameters between the selected CI code Quantum Package[46] and the QMC code CHAMP.[28] The QMC codes QMC=Chem[47] and TurboRVB[48] are also able to read TREXIO files, allowing for comparison of the three QMC codes using the same wave function. TREXIO is also used to transfer integrals between Quantum Package and the FCIQMC code NECI,[49] and to read density matrices produced by Quantum Package in GammCor[50] for symmetry-adapted perturbation theory (SAPT)[51] molecular interaction calculations with near-full CI density matrices.[52] In addition, the recent development of a code for calculating electron repulsion integrals using Slater-type orbitals[53] now produces TREXIO files, enabling FCIQMC calculations using Slater-type orbitals with NECI and similar selected CI calculations with Quantum Package, which can then be used as trial wave functions for QMC calculations. ## VI Conclusion The TREXIO format and library offer a convenient and flexible way to store and exchange quantum chemistry data. Its open-source nature allows for easy integration into various software applications and its compatibility with multiple programming languages makes it accessible to a wide range of users. 
The use of the HDF5 library as the default back-end ensures efficient storage and retrieval of data, while the option to disable HDF5 and use the text back-end allows for zero external dependencies. The development of TREXIO has been driven by the need to facilitate collaboration and reproducibility in quantum chemistry research, and its adoption in various codes and projects is a testament to its usefulness in achieving these goals. We would like to emphasize that the TREXIO library is a work in progress, and we are committed to expanding its scope and functionality in future releases. Our immediate priorities include supporting periodic boundary conditions and other basis sets such as grids and plane waves. Overall, the TREXIO format and library are a valuable resource for the quantum chemistry community, and their continued development and adoption will surely benefit the field. ###### Acknowledgements. The authors would like to thank Susi Lehtola for providing valuable feedback on an earlier version of this manuscript. This work was supported by the European Centre of Excellence in Exascale Computing TREX -- Targeting Real Chemical Accuracy at the Exascale. Hence, the name of the software is _TREX Input/Output_ (TREXIO). This project has received funding from the European Union's Horizon 2020 -- Research and Innovation program -- under grant agreement no. 952165. A CC-BY 4.0 ([https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/)) public copyright license has been applied by the authors to the present document and will be applied to all subsequent versions up to the Author Accepted Manuscript arising from this submission, in accordance with the grant's open access conditions.
TREXIO is an open-source file format and library for storing and manipulating the data produced by quantum chemistry calculations. It is designed to give researchers in quantum chemistry a reliable and efficient way to store and exchange wave function parameters and matrix elements, making it an important tool in the field. In this work, we present an overview of the TREXIO file format and library. The library consists of a front-end implemented in the C language and two different back-ends: a text back-end and a binary back-end based on the HDF5 library, which enables fast read and write operations. It is compatible with a variety of platforms and provides interfaces for the Fortran, Python, and OCaml programming languages. In addition, we have developed a suite of tools to facilitate the use of the TREXIO format and library, including converters for popular quantum chemistry codes and utilities for validating and manipulating data stored in TREXIO files.
2310.20627
Shapes of infinite conformally balanced trees
Numerical experiments by Werness, Lee and the third author suggested that dessins d'enfants associated to large trivalent trees approximate the developed deltoid introduced by Lee, Lyubich, Makarov and Mukherjee. In this paper, we confirm this conjecture. As a side product of our techniques, we give a new proof of a theorem of Bishop which says that ``true trees are dense.'' We also exhibit a sequence of trees whose conformally natural shapes converge to the cauliflower, the Julia set of $z\mapsto z^2+1/4$.
Oleg Ivrii, Peter Lin, Steffen Rohde, Emanuel Sygal
2023-10-31T16:57:59
http://arxiv.org/abs/2310.20627v1
# Shapes of infinite conformally balanced trees ###### Abstract Numerical experiments by Werness, Lee and the third author suggested that dessins d'enfants associated to large trivalent trees approximate the developed deltoid introduced by Lee, Lyubich, Makarov and Mukherjee. In this paper, we confirm this conjecture. As a side product of our techniques, we give a new proof of a theorem of Bishop which says that "true trees are dense." We also exhibit a sequence of trees whose conformally natural shapes converge to the cauliflower, the Julia set of \(z\mapsto z^{2}+1/4\). ## 1 Introduction A finite tree \(\mathcal{T}\) in the plane is called a _conformally balanced tree_ or a _true tree_ if * Every edge has the same harmonic measure as seen from infinity. * Harmonic measures on the two sides of every edge are identical. Conformally balanced trees are in one-to-one correspondence with Shabat polynomials: any conformally balanced tree is the pre-image of the segment \([-1,1]\) by an essentially unique polynomial \(p\) with critical values \(\pm 1\). (The polynomial \(p(z)\) is uniquely determined up to multiplication by \(-1\).) We say that two trees \(T_{1},T_{2}\) in the plane are _equivalent_ if there is an orientation-preserving homeomorphism of the plane which takes \(T_{1}\) onto \(T_{2}\). It is a classic fact that every finite tree \(T\) in the plane is equivalent to a conformally balanced tree \(\mathcal{T}\), which is unique up to post-composition with affine maps. A proof of these facts will be sketched in Section 3.1. It is natural to ask if infinite trees also have a natural shape. In [5], the second and third authors developed the theory of Gehring trees and showed that the Aldous continuum random tree possesses a natural conformal structure. In this paper, we consider the _infinite trivalent tree_\(\mathcal{T}\), which exhibits a different and surprising behaviour. To come up with a natural shape for \(\mathcal{T}\), we truncate it at level \(n\), form the conformally balanced tree \(\mathcal{T}_{n}\) and take \(n\to\infty\). In order for the finite trees \(\mathcal{T}_{n}\) to converge, we need to normalize them in some way. Throughout the rest of the paper, we use the _hydrodynamic_ normalization: we ask that each conformal map \(\varphi_{n}:\hat{\mathbb{C}}\setminus\mathbb{D}\to\hat{\mathbb{C}}\setminus \mathcal{T}_{n}\) has the expansion \(z\to z+O(1/z)\) near infinity. Our main theorem states: **Theorem 1.1**.: _The trees \(\mathcal{T}_{n}\) converge in the Hausdorff topology to an infinite trivalent tree union a Jordan curve \(\mathcal{T}_{\infty}\cup\partial\Omega\). The domain \(\Omega\) enclosed by \(\partial\Omega\) is the developed deltoid. The Shabat polynomials \(p_{n}\) converge to \(F\circ R^{-1}\) where \(F\) is a modular function invariant under an index 2 subgroup of \(\mathrm{PSL}(2,\mathbb{Z})\) and \(R:\mathbb{D}\to\Omega\) is the Riemann map._ Figure 1: The developed deltoid and its approximating conformally balanced tree. _Remark_. The choice of truncation is important: by considering other truncations of the infinite trivalent tree, one can obtain different limit sets. In fact, any compact connected set in the plane can be approximated in the Hausdorff topology by conformally balancing finite truncations of the infinite trivalent tree, thereby giving another proof of a theorem of Bishop [1]. See Appendix A.
### The developed deltoid

The _deltoid_ \(\triangle\subset\mathbb{C}\) is a remarkable domain in the plane bounded by a Jordan curve with three outward pointing cusps. Its boundary is the curve traced by a point on a circle of radius \(1/3\) as it rolls around in the interior of a circle of radius \(1\). Alternatively, one can describe the exterior of the deltoid \(\triangle_{e}=\mathbb{C}\setminus\overline{\triangle}\) as the image of \(\mathbb{D}_{e}=\hat{\mathbb{C}}\setminus\mathbb{D}\) under the conformal map \(z\to z+\frac{1}{2z^{2}}\).

The exterior of the deltoid is part of a somewhat mysterious family of domains called _quadrature domains_. Quadrature domains have several equivalent definitions, such as possessing a _Schwarz reflection_, which is an anti-holomorphic function \(\sigma:\triangle_{e}\to\mathbb{C}\) that is the identity on \(\partial\triangle_{e}\). By repeatedly reflecting the deltoid in its sides, one obtains the _developed deltoid_

\[\Omega=\bigcup_{k\geq 0}\sigma^{-k}(\triangle),\]

see Fig. 1. The developed deltoid was first studied by S.-Y. Lee, M. Lyubich, N. G. Makarov and S. Mukherjee [3], who showed that it fuses Fuchsian dynamics with anti-holomorphic dynamics:

**Theorem 1.2**.: (i) _The boundary of the developed deltoid \(\partial\Omega\) is the unique Jordan curve that realizes the mating of the ideal triangle group and \(z\to\overline{z}^{2}\)._

(ii) _The developed deltoid \(\Omega\) is a John domain. In particular, \(\partial\Omega\) is conformally removable._

We now describe their result in detail:

_Dynamics on \(\mathbb{D}_{e}\)._ In the exterior of the unit disk, we consider the dynamical system \(z\to\overline{z}^{2}\).

_Dynamics on \(\mathbb{D}\)._ Let \(\triangle_{\rm hyp}\subset\mathbb{D}\) be the ideal triangle in the unit disk with vertices at \(1,\omega,\omega^{2}\), where \(\omega=e^{2\pi i/3}\) is a third root of unity. Consider the group \(\Gamma=\langle R_{\rho_{1}},R_{\rho_{2}},R_{\rho_{3}}\rangle\subset{\rm Aut}(\mathbb{D})\) generated by the reflections in the sides \(s_{1},s_{2},s_{3}\) of \(\triangle_{\rm hyp}\). The images

\[\{\gamma(\triangle_{\rm hyp}):\gamma\in\Gamma\}\]

tessellate the unit disk. The Markov map \(\rho:\mathbb{D}\setminus\triangle_{\rm hyp}\to\mathbb{D}\) is defined as \(R_{\rho_{1}}\) on the (hyperbolic) half-plane cut off by \(s_{1}\), \(R_{\rho_{2}}\) on the half-plane cut off by \(s_{2}\) and \(R_{\rho_{3}}\) on the half-plane cut off by \(s_{3}\).

_What it means to be a mating._ A Jordan curve \(\gamma=\partial\Omega\) is a _mating_ of \(z\to\overline{z}^{2}\) and \(\Gamma\) if there exist conformal maps \(\varphi:\mathbb{D}\to\Omega\), \(\psi:\mathbb{D}_{e}\to\Omega_{e}\) that glue the dynamical systems together, i.e. \(\varphi\circ\rho\circ\varphi^{-1}=\psi\circ\overline{z}^{2}\circ\psi^{-1}\) on \(\partial\Omega\). In particular, this implies that

\[\sigma(z)=\begin{cases}\psi\circ\overline{z}^{2}\circ\psi^{-1},\quad z\in\overline{\Omega_{e}}\\ \varphi\circ\rho\circ\varphi^{-1},\quad z\in\overline{\Omega}\setminus\varphi(\triangle_{\rm hyp})\end{cases}\]

is a Schwarz reflection for \(\hat{\mathbb{C}}\setminus\varphi(\triangle_{\rm hyp})\), and hence \(\hat{\mathbb{C}}\setminus\varphi(\triangle_{\rm hyp})\) is a quadrature domain.

A set \(E\) is called _conformally removable_ if every conformal map \(h:\hat{\mathbb{C}}\setminus E\to\hat{\mathbb{C}}\setminus F\) which extends continuously to the Riemann sphere is a Möbius transformation.
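Returning to the two descriptions of the deltoid given at the beginning of this subsection: they agree up to scale, since the rolling-circle curve has its cusps on the unit circle, while the image of the unit circle under \(z\to z+\frac{1}{2z^{2}}\) has cusps at distance \(3/2\) from the origin. A short numerical sketch (ours, not from the paper; it only assumes numpy) confirms this:

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 2000)

# Boundary of the deltoid exterior: image of the unit circle under the
# conformal map z -> z + 1/(2 z^2).
z = np.exp(1j * t)
boundary = z + 0.5 / z**2

# Rolling-circle description: a point on a circle of radius 1/3 rolling
# inside the unit circle traces the hypocycloid
#   x = (2 cos t + cos 2t)/3,   y = (2 sin t - sin 2t)/3.
hypocycloid = ((2 * np.cos(t) + np.cos(2 * t))
               + 1j * (2 * np.sin(t) - np.sin(2 * t))) / 3

# The two curves coincide after rescaling the hypocycloid by 3/2.
print(np.max(np.abs(boundary - 1.5 * hypocycloid)))  # ~ 1e-16
```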
### Strategy of proof

Our proof of Theorem 1.1 proceeds in three steps:

_Step 1._ We first show that any subsequential limit of the true trees \(\mathcal{T}_{n}\) in the Hausdorff topology is homeomorphic to an infinite trivalent tree union a Jordan curve \(\mathcal{T}_{\infty}\cup\partial\Omega\), with \(\mathcal{T}_{\infty}\subset\Omega\). Among our key tools are estimates for the diameters of edges by means of conformal modulus estimates of certain curve families. A notable difference from the setting of random trees is that in the truncated trivalent tree, the diameters of a fixed edge do not shrink to zero as \(n\to\infty\), see also the remark at the end of Section 3.4.

_Step 2._ We then show that any subsequential limit \(\partial\Omega\) realizes the mating of \(z\to\overline{z}^{2}\) and \(\Gamma\). At this point, one can appeal to the uniqueness of the mating [3] to complete the proof of Theorem 1.1. However, appealing to [3] feels somewhat unsatisfactory since it relies on a priori knowledge of the deltoid, while ideally, one would want to "discover" the deltoid from the infinite trivalent tree.

_Step 3._ To show that the limit of the \(\mathcal{T}_{n}\) does not depend on the subsequence, we prove "partial conformal removability." Partial conformal removability is a much less stringent property than full conformal removability and it is easier to check. In essence, it asks that if \(h:\hat{\mathbb{C}}\,\backslash E\to\hat{\mathbb{C}}\,\backslash F\) is a conformal map (which extends continuously to the Riemann sphere) onto the complement of a set \(F\) which has roughly the same geometry as \(E\), then \(h\) is a Möbius transformation.

### Acknowledgements

In 2014, Brent Werness (oral communication) proposed to study the natural shape of the infinite trivalent tree and posed the question of the "shrinking of diameters." During a visit of Seung-Yeop Lee to Seattle in 2015, Brent, Seung-Yeop and the third author performed computer experiments and observed the similarity between the trees and the developed deltoid, leading to the conjecture regarding their convergence. We are grateful to Brent and Seung-Yeop for our discussions and their contributions. We would also like to thank Curt McMullen for his continued interest in this project and for his suggestion that Shabat polynomials converge to a modular function. This research was supported by the Israel Science Foundation (3134/21) and the National Science Foundation (DMS-1700069 and DMS-1954674).

## 2 Preliminaries

In this section, we gather a number of useful facts that will be used in this paper. We also describe the Farey tessellation and discuss weak conformal removability.

### Moduli of annuli and rectangles

It is well known that any doubly-connected domain \({\bf A}\subset{\mathbb{C}}\) can be mapped onto a round annulus \(\{z:r<|z|<R\}\). The number \(\operatorname{Mod}{\bf A}:=\frac{1}{2\pi}\log\frac{R}{r}\) is called the _modulus_ of \({\bf A}\). Two doubly-connected domains are conformally equivalent if and only if their moduli coincide.

A _metric_ \(\rho(z)\) is a non-negative measurable function defined on a domain \(\Omega\subset{\mathbb{C}}\).
One can use \(\rho(z)\) to measure lengths of rectifiable curves

\[\ell_{\rho}(\gamma)=\int_{\gamma}\rho(z)|dz|\]

and compute areas of shapes, for instance the total area of \(\rho\) is given by

\[A(\rho)=\int_{\Omega}\rho(z)^{2}|dz|^{2}.\]

The metric \(\rho\) is said to be _admissible_ for a family of rectifiable curves \(\Gamma\) contained in \(\Omega\) if the \(\rho\)-length of every curve \(\gamma\in\Gamma\) is at least \(1\). The _modulus_ of the curve family \(\Gamma\) is defined as

\[\operatorname{Mod}\Gamma:=\inf_{\rho}A(\rho),\]

where the infimum is taken over all admissible metrics \(\rho\). If one finds a conformal metric \(\rho\) such that \(\ell_{\rho}(\gamma)\geq L\) for any \(\gamma\in\Gamma\), then \(\operatorname{Mod}\Gamma\leq A(\rho)/L^{2}\).

The modulus of a doubly-connected domain is a special case of the above construction: \(\operatorname{Mod}{\bf A}\) is equal to the modulus of the family of curves \(\Gamma_{\circlearrowright}\) that separate the two boundary components, while \(1/\operatorname{Mod}{\bf A}\) is equal to the modulus of the family \(\Gamma_{\uparrow}\) of curves that connect the opposite boundary components of \({\bf A}\). Thus one uses \(\Gamma_{\circlearrowright}\) to give upper bounds for \(\operatorname{Mod}{\bf A}\) while one uses \(\Gamma_{\uparrow}\) to give lower bounds for \(\operatorname{Mod}{\bf A}\).

We will frequently use the following two simple rules for modulus, which follow from the definitions:

1. (Monotonicity rule) If \({\bf A}_{1}\subset{\bf A}\) is an essential doubly-connected subdomain, so that \(\Gamma_{\circlearrowright}({\bf A}_{1})\subset\Gamma_{\circlearrowright}({\bf A})\), then \(\operatorname{Mod}{\bf A}_{1}\leq\operatorname{Mod}{\bf A}\).

2. (Parallel rule) If a doubly-connected domain \({\bf A}={\bf A}_{1}\cup{\bf A}_{2}\) can be represented as a union of two essential doubly-connected domains, then

\[\operatorname{Mod}{\bf A}_{1}+\operatorname{Mod}{\bf A}_{2}\leq\operatorname{Mod}{\bf A}.\]

We will also use the following standard estimates:

**Lemma 2.1**.: _Let \(\Omega\) be a simply-connected domain in the plane._

(a) _Suppose \(F\) is a compact connected set contained in \(\Omega\). If \(\operatorname{Mod}(\Omega\setminus F)\geq m\) is bounded from below, then_

\[\operatorname{dist}(\partial\Omega,F)\geq c\,\operatorname{diam}F,\]

_for some \(c>0\) which depends only on \(m>0\). Furthermore, \(c\to\infty\) as \(m\to\infty\). Conversely, if \(\operatorname{dist}(\partial\Omega,F)\geq c\,\operatorname{diam}F,\) then \(\operatorname{Mod}(\Omega\setminus F)\geq m(c).\)_

(b) _Suppose \(E\subset F\) are two compact connected sets contained in \(\Omega\). If_

\[m_{1}\,\leq\,\operatorname{Mod}(\Omega\setminus E)\,\leq\,\operatorname{Mod}(\Omega\setminus F)\,\leq\,m_{2},\]

_then \(\operatorname{diam}E\asymp\operatorname{diam}F.\) In fact, there exists a constant \(C=C(m_{1},m_{2})>1\) so that \(F\subset B(e,\,C\cdot\operatorname{diam}E)\) for any point \(e\in E\), where \(B(x,r)\) denotes the ball of radius \(r\) centered at \(x\)._
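As a worked illustration of the modulus of a curve family, here is the classical computation for the separating family in a round annulus; it is a standard textbook argument, recorded here only to fix ideas and not part of the paper's reasoning.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% The modulus of the separating family in A = {r < |z| < R}.
% Any separating curve gamma winds once around the origin, so
\[
  2\pi \;\le\; \Big|\int_{\gamma}\frac{dz}{z}\Big| \;\le\; \int_{\gamma}\frac{|dz|}{|z|},
\]
% and hence rho(z) = 1/(2*pi*|z|) is admissible.  Its total area is
\[
  A(\rho) \;=\; \int_{0}^{2\pi}\!\!\int_{r}^{R}\frac{s\,ds\,d\theta}{4\pi^{2}s^{2}}
          \;=\; \frac{1}{2\pi}\log\frac{R}{r},
\]
% which shows Mod(Gamma) <= Mod(A); a length-area argument gives the
% reverse inequality, so this metric is in fact extremal.
\end{document}
```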
A _conformal rectangle_ \(\mathbf{R}\) is a simply connected domain with four marked prime ends \(z_{1},z_{2},z_{3},z_{4}\). In this paper, all conformal rectangles will be _marked_, i.e. equipped with a distinguished pair of opposite sides. The Schwarz–Christoffel formula provides a conformal map from \(\mathbf{R}\) onto a geometric rectangle \([0,m]\times[0,1]\). If one insists that the marked sides of \(\mathbf{R}\) are mapped onto the vertical sides of \([0,m]\times[0,1]\), then the number \(m\in(0,\infty)\) is determined uniquely. The number \(m:=\operatorname{Mod}\mathbf{R}\) is known as the _modulus_ of \(\mathbf{R}\) and is equal to the modulus of the curve family \(\Gamma_{\updownarrow}\) which separates the distinguished pair of opposite sides. For further properties of conformal modulus, we refer the reader to [2, Chapter 4] and [8, Chapter 2].

### Farey tessellation

Let \(\triangle_{\mathrm{hyp}}\subset\mathbb{D}\) be the ideal triangle in the unit disk with vertices \(1\), \(\omega=e^{2\pi i/3}\) and \(\overline{\omega}=e^{4\pi i/3}\). Repeatedly reflecting \(\triangle_{\mathrm{hyp}}\) in its sides, one obtains a tessellation of the unit disk by ideal triangles. The dual graph (which joins centers of the triangles by hyperbolic geodesics) is called the _Farey tree_ \(\mathcal{F}\), see Figure 2.

Figure 2: The Farey tessellation and Farey tree.

We designate the center of \(\triangle_{\rm hyp}\) as the root vertex. Each non-root triangle \(\triangle\) can be labeled by a digit \(1,2,3\) followed by a finite sequence of \(L\)'s and \(R\)'s, which indicates the path one travels from \(\triangle_{\rm hyp}\) to \(\triangle\). For example, in the word

\[2\;\underbrace{L}_{k_{1}=1}\;\underbrace{R}_{k_{2}=1}\;\underbrace{LLL}_{k_{3}=3}\;\underbrace{RR}_{k_{4}=2}\;\underbrace{LL}_{k_{5}=2}\;\underbrace{R}_{k_{6}=1}\;\underbrace{LLLLL}_{k_{7}=5}\;\underbrace{RR}_{k_{8}=2},\]

the digit \(2\) indicates that we start by walking along the dual tree from the root vertex to its second child. After the first step, each vertex has two children and we have to decide whether to turn left or right. The options are indicated by '\(L\)' and '\(R\)' respectively.

**Lemma 2.2**.: _For a non-root triangle \(\triangle\) in the Farey tessellation,_

\[\log\frac{1}{\operatorname{diam}\triangle}\asymp\sum_{i=1}^{m}\log(1+k_{i}).\]

Proof.: It is easier and clearly equivalent to work in the upper half plane \(\mathbb{H}\) where \(\triangle_{\rm hyp}\) has vertices \(0\), \(1\) and \(\infty\) and in the first step, we walk down. Let

\[\triangle_{0}=\triangle_{\rm hyp},\;\triangle_{1}=(0,1/2,1),\;\triangle_{2},\;\dots,\;\triangle_{n}=\triangle\]

be the sequence of triangles from \(\triangle_{\rm hyp}\) to \(\triangle\). Each triangle \(\triangle_{j}\) in this sequence has three vertices on the real axis \(a_{j}<b_{j}<c_{j}\). To estimate \(\operatorname{diam}\triangle_{j}\), we keep track of the ratio

\[r(\triangle_{j}):=\frac{b_{j}-a_{j}}{c_{j}-a_{j}},\]

which measures the distortion of the triangle \(\triangle_{j}\). Each time we do a right turn after a left turn or vice versa, the ratio is "reset" to a value in \([1/3,2/3]\). After a series of \(k\) consecutive left turns, \(1-r\asymp 1/k\), while after a series of \(k\) consecutive right turns, \(r\asymp 1/k\). After making \(k\) left or right turns in a row, the diameter goes down by a factor of roughly \(k+1\): for \(1\leq k\leq k_{j+1}\),

\[\log\frac{1}{\operatorname{diam}\triangle_{k_{1}+k_{2}+\dots+k_{j}+k}}-\log\frac{1}{\operatorname{diam}\triangle_{k_{1}+k_{2}+\dots+k_{j}+1}}\,\asymp\,\log(k+1).\]

When we make a right turn after a series of \(k_{j}\) left turns (or a left turn after a series of \(k_{j}\) right turns), the diameter goes down by a factor of \(k_{j}+1\), i.e.

\[\log\frac{1}{\operatorname{diam}\triangle_{k_{1}+k_{2}+\dots+k_{j}+1}}-\log\frac{1}{\operatorname{diam}\triangle_{k_{1}+k_{2}+\dots+k_{j}}}\,\asymp\,\log(k_{j}+1).\]

The above equations give the desired bound for \(\operatorname{diam}\triangle\).
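Lemma 2.2 is easy to test numerically. The following sketch (ours, not from the paper) encodes an ideal triangle in the upper half-plane by its three real vertices, starting from the first triangle \((0,1/2,1)\) below the root, descends via Farey mediants, and compares \(\log\frac{1}{\operatorname{diam}\triangle}\) with \(\sum\log(1+k_{i})\):

```python
from fractions import Fraction
from math import log

def triangle_for_word(word):
    """Descend the Farey tessellation of the upper half-plane.

    Triangles are encoded by their real vertices a < b < c, where b is the
    Farey mediant of a and c.  We start from (0, 1/2, 1), the first step
    down from the root triangle (0, 1, infinity).
    """
    def mediant(x, y):
        return Fraction(x.numerator + y.numerator,
                        x.denominator + y.denominator)
    a, b, c = Fraction(0), Fraction(1, 2), Fraction(1)
    for turn in word:
        if turn == 'L':
            a, b, c = a, mediant(a, b), b   # child across the side (a, b)
        else:
            a, b, c = b, mediant(b, c), c   # child across the side (b, c)
    return a, b, c

# The Euclidean diameter of an ideal triangle with real vertices a < b < c
# is c - a (the geodesic from a to c is a semicircle of that diameter).
for word, runs in [("LLLLL", [5]), ("LRLRLR", [1] * 6), ("LLLRRLL", [3, 2, 2])]:
    a, b, c = triangle_for_word(word)
    lhs = log(1 / float(c - a))             # log(1/diam)
    rhs = sum(log(1 + k) for k in runs)     # sum of log(1 + k_i)
    print(f"{word}: log(1/diam) = {lhs:.2f}, sum log(1+k_i) = {rhs:.2f}")
```

For a single run of \(k\) left turns the two sides agree exactly (the triangle is \((0,\frac{1}{k+2},\frac{1}{k+1})\)), in accordance with the proof above.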
### Weak conformal removability

Suppose \(X\) and \(X^{\prime}\) are two compact sets in the complex plane and \(\varphi:\hat{\mathbb{C}}\setminus X\to\hat{\mathbb{C}}\setminus X^{\prime}\) is a conformal map that extends continuously to a homeomorphism of the sphere. We describe a condition which guarantees that \(\varphi\) is a Möbius transformation:

**Lemma 2.3**.: _Suppose that there is a countable exceptional set \(E\subset X\) and a countable collection of closed subsets \(s_{1},s_{2},\dots\) of \(X\), called shadows, such that every point in \(X\setminus E\) belongs to infinitely many sets \(s_{i}\). If_

\[\sum_{i=1}^{\infty}\operatorname{diam}^{2}s_{i}<\infty,\qquad\sum_{i=1}^{\infty}\operatorname{diam}^{2}\varphi(s_{i})<\infty, \tag{2.1}\]

_then \(\varphi\) is a Möbius transformation._

For convenience, we write \(s^{\prime}_{i}=\varphi(s_{i})\). Note that (2.1) implies that \(X\) and \(X^{\prime}\) have 2-dimensional Lebesgue measure 0.

Proof.: Call a direction \(v\) _good_ if for almost every line \(\ell\) pointing in the direction of \(v\), the set \(\varphi(\ell\cap X)\) has linear Lebesgue measure 0. One says that \(\varphi\) is _absolutely continuous on lines_ (ACL) if the directions parallel to the coordinate axes are good. It is well known that if \(\varphi\in W^{1,2}_{\rm loc}(\mathbb{C}\setminus X)\) is ACL, then \(\varphi\in W^{1,2}_{\rm loc}(\mathbb{C})\). Weyl's lemma then guarantees that \(\varphi\) is conformal on the Riemann sphere, and therefore, a Möbius transformation. Below, we will show that every direction is good.

Instead of showing that a set has zero 1-dimensional Lebesgue measure \(m_{1}\), we may instead show that it has zero 1-dimensional content \(m_{1}^{\infty}\). The definition of 1-dimensional content is similar to that of 1-dimensional measure, but allows covers by balls of arbitrary size. Therefore, the lemma reduces to showing that for almost every line \(\ell\) parallel to \(v\), the 1-dimensional content of \(\varphi(\ell\cap X)\) is 0.

Since the set \(E\) is countable, almost every line \(\ell\) parallel to \(v\) misses \(E\). For such a line,

\[m_{1}^{\infty}(\varphi(\ell\cap X))\leq\sum_{s_{i}\cap\ell\neq\emptyset,\,i>N}\operatorname{diam}s^{\prime}_{i}. \tag{2.2}\]

The last equation holds for _any_ \(N\geq 1\) since any point in \(X\setminus E\) is contained in infinitely many shadows, which allows us to avoid putting the first \(N\) shadows in the cover. In other words,

\[\sum_{s_{i}\cap\ell\neq\emptyset}\operatorname{diam}s^{\prime}_{i}<\infty\quad\implies\quad m_{1}^{\infty}(\varphi(\ell\cap X))=0. \tag{2.3}\]

As

\[\int_{\ell\mid\mid v}\biggl\{\sum_{s_{i}\cap\ell\neq\emptyset}\operatorname{diam}s^{\prime}_{i}\biggr\}\,d\ell\;\leq\;\sum_{i=1}^{\infty}\operatorname{diam}s_{i}\cdot\operatorname{diam}s^{\prime}_{i}\;\leq\;\frac{1}{2}\biggl(\sum_{i=1}^{\infty}\operatorname{diam}^{2}s_{i}+\operatorname{diam}^{2}s^{\prime}_{i}\biggr)\;<\;\infty,\]

the integrand must be finite for a.e. \(\ell\). This completes the proof.

## 3 Background on true trees

In this section, we discuss the link between true trees and Shabat polynomials. We then describe the local geometry of true trees whose vertices have bounded valence.
Finally, we define shortcuts and obstacles that will be used to give moduli estimates to control the global geometry of trees.

### True trees and Shabat polynomials

Let \(T\) be a finite tree in the plane. To find its conformally balanced shape \(\mathcal{T}\), label the sides of edges of \(T\) in counter-clockwise order: \(\vec{e}_{1},\vec{e}_{2},\ldots,\vec{e}_{2N}\). For each half-edge \(\vec{e}_{i}\), form an equilateral triangle \(\triangle(\vec{e}_{i},\infty)\) whose sides have unit length. We first glue these equilateral triangles into a \(2N\)-gon \(\boldsymbol{D}_{2N}\) with sides \(\vec{e}_{i}\), labeled counter-clockwise, and central vertex \(\infty\). We then glue \(\vec{e}_{i}\) with \(\vec{e}_{j}\) whenever \(\vec{e}_{i},\vec{e}_{j}\) are opposite sides of the same edge \(e\in T\). This construction produces a topological sphere which has a flat structure away from the cone points at the vertices of the triangles. Uniformizing this sphere produces the desired tree \(\mathcal{T}\subset\hat{\mathbb{C}}\).

Associated to a true tree \(\mathcal{T}\) is a _Shabat polynomial_ \(p(z)\) with critical values \(\pm 1\) such that \(\mathcal{T}=p^{-1}([-1,1])\). To construct \(p\), colour each triangle \(\triangle(\vec{e}_{i},\infty)\subset\hat{\mathbb{C}}\setminus\mathcal{T}\) black or white, so that adjacent triangles have opposite colours. On each black triangle \(\triangle(\vec{e}_{i},\infty)\), define \(p(z)\) to be the conformal map onto the upper half-plane \(\mathbb{H}\) which takes \(\vec{e}_{i}\to[-1,1]\) and \(\infty\to\infty\). Similarly, on each white triangle \(\triangle(\vec{e}_{i},\infty)\), define \(p\) to be the conformal map onto the lower half-plane \(\mathbb{L}\) which takes \(\vec{e}_{i}\to[-1,1]\) and \(\infty\to\infty\). Since \(\mathcal{T}\) is a true tree, \(p\) extends to a continuous function on the Riemann sphere. As \(\mathcal{T}\) is made up of real-analytic arcs, \(p\) is meromorphic on the Riemann sphere, and hence a rational function. As the only pole of \(p\) is at infinity, it is a polynomial. Finally, since \(p\) is \(N:1\) at infinity, \(p\) is a polynomial of degree \(N\). From the construction, it is readily seen that \(p\) has critical values \(\pm 1\) and \(\mathcal{T}=p^{-1}([-1,1])\).

In order to define the Shabat polynomial uniquely, we need to specify which vertices are sent to \(+1\) and which vertices are sent to \(-1\). Making a different choice amounts to multiplying \(p(z)\) by \(-1\). If \(\mathcal{T}\) has a distinguished vertex \(v_{\text{root}}\), then it is natural to choose the Shabat polynomial so that \(p(v_{\text{root}})=1\).

### Trees of bounded valence

We now present some general results on the local behaviour of true trees. The following lemmas say that true trees whose vertices have bounded valence are well-behaved: neighbouring edges have comparable size and the relative distance between non-adjacent edges is bounded below.

**Lemma 3.1**.: _Let \(d\geq 2\) be an integer. Suppose \(e=\overline{v_{1}v_{2}}\) is an edge in a true tree \(\mathcal{T}\) with \(\deg v_{1}\leq d\) and \(\deg v_{2}\leq d\). There is a simply connected neighbourhood \(U\supset e\) with \(\operatorname{Mod}(U\setminus e)\geq m(d)\) such that only edges adjacent to \(e\) can intersect \(U\)._

**Lemma 3.2**.: _Fix an integer \(d\geq 2\). Let \(v\) be a vertex of a conformally balanced tree \(\mathcal{T}\).
If the degrees of all vertices in \(\{w:d_{\mathcal{T}}(v,w)\leq 2\}\) are \(\leq d\), then the diameters of the edges \(\overline{vv_{i}}\) emanating from \(v\) are comparable (with the comparison constant depending on \(d\))._

The proofs use the concept of a _star_ of a vertex in a true tree. For a vertex \(v\) of \(\mathcal{T}\), we define \(\star_{v}\) as the union of the triangles \(\triangle(\vec{e},\infty)\) that contain \(v\). We enumerate the \(2\deg v\) triangles in \(\star_{v}\) counter-clockwise: \(\triangle_{1},\triangle_{2},\ldots,\triangle_{2\deg v}\). Now, decompose the unit disk \(\mathbb{D}\) into \(2\deg v\) sectors \(\sigma_{1},\sigma_{2},\ldots,\sigma_{2\deg v}\) using \(2\deg v\) equally-spaced radial rays. For each \(i=1,2,\ldots,2\deg v\), let \(\psi_{i}\) be the conformal map from \(\sigma_{i}\) to \(\triangle_{i}\) which takes vertices to vertices, with \(0\) mapping to \(v\). Since \(\mathcal{T}\) is a true tree, the maps \(\psi_{i}\) glue along radial rays to form a conformal map \(\psi_{v}:\mathbb{D}\to\star_{v}\).

On an edge \(e_{i}=\overline{vv_{i}}\) of \(\mathcal{T}\) emanating from \(v\), we mark the points \(a_{i},b_{i}\) such that the segments \(\overline{va_{i}},\overline{a_{i}b_{i}},\overline{b_{i}v_{i}}\) have equal length in the equilateral triangle model of \(\triangle(\vec{e}_{i},\infty)\). Note that the points \(a_{i},b_{i}\) do not depend on which one of the two sides of \(e_{i}\) is used. Applying Koebe's distortion theorem to \(\psi_{v}\) tells us that the diameters of the \(2\deg v\) segments

\[\big\{\overline{va_{i}},\ \overline{a_{i}b_{i}}\,:\,i=1,2,\ldots,\deg(v)\big\}\]

are comparable. By considering stars centered at the neighbouring vertices \(v_{i}\), we see that the diameters of the segments

\[\big\{\overline{a_{i}b_{i}},\ \overline{b_{i}v_{i}}\,:\,i=1,2,\ldots,\deg(v)\big\}\]

are also comparable. Putting these estimates together proves Lemma 3.2. Lemma 3.1 follows from Lemma 2.1 (a) after applying Koebe's distortion theorem to \(\psi_{v_{1}}\) and \(\psi_{v_{2}}\).

Similar reasoning shows:

**Lemma 3.3**.: _Suppose \(\{\mathcal{T}_{n}\}_{n=0}^{\infty}\) is an infinite sequence of conformally balanced trees whose vertices have uniformly bounded degrees. Then any subsequential Hausdorff limit of a sequence of edges \(e^{(n)}\subset\mathcal{T}_{n}\) is either a point or a real-analytic arc._

Proof.: Suppose the edge \(e^{(n)}\) connects the vertices \(v_{1}^{(n)}\) and \(v_{2}^{(n)}\). As above, we mark the points \(a^{(n)}\) and \(b^{(n)}\) which trisect the edge \(e^{(n)}\). We pass to a subsequence so that the maps \(\psi_{v_{1}}^{(n)}\) and \(\psi_{v_{2}}^{(n)}\) converge uniformly on compact subsets of the unit disk. If the limiting maps \(\psi_{v_{1}}=\lim_{n\to\infty}\psi_{v_{1}}^{(n)}\) and \(\psi_{v_{2}}=\lim_{n\to\infty}\psi_{v_{2}}^{(n)}\) are constant, then the edges \(e^{(n)}\) collapse to a point. Otherwise, the limiting edge \(e=\lim e^{(n)}\) is covered by two compatible real-analytic arcs \(\overline{v_{1}b}=\lim_{n\to\infty}\overline{v_{1}^{(n)}b^{(n)}}\) and \(\overline{av_{2}}=\lim_{n\to\infty}\overline{a^{(n)}v_{2}^{(n)}}\).

### Shortcuts and obstacles

Let \(\mathcal{T}\) be a conformally balanced tree in the plane, normalized so that the Riemann map \(\varphi:\hat{\mathbb{C}}\setminus\overline{\mathbb{D}}\to\hat{\mathbb{C}}\setminus\mathcal{T}\) satisfies \(\varphi(z)=z+O(1/z)\) as \(z\to\infty\).
To control the geometry of \(\mathcal{T}\), we estimate conformal moduli of various path families \(\Gamma\) contained in doubly-connected domains \(\mathbf{A}\subset\mathbb{C}\). An instructive example is the family of closed curves surrounding an edge of the tree, which will be discussed in detail in Section 3.4. The idea behind our estimates is as follows: since we will only estimate moduli in the setting of finite balanced trees, we will not have to worry about the possibility that the area of \(\mathcal{T}\) might be positive. By conformal invariance, we may estimate the modulus in any of the three conformally equivalent models \(\hat{\mathbb{C}}\setminus\mathcal{T}\), \(\boldsymbol{D}_{2N}/\!\sim\) or \((\hat{\mathbb{C}}\setminus\mathbb{D})/\!\sim\). In the latter model, the equivalence relation on \(\partial\mathbb{D}\) is given by the identifications of \(\varphi\) and the family \(\varphi^{-1}(\Gamma)\) consists of sets \(\varphi^{-1}(\gamma)\) that may be disconnected: if a curve \(\gamma\in\Gamma\) crosses an edge \(e\in\mathcal{T}\), then \(\varphi^{-1}(\gamma)\) _enters_ one side of \(\varphi^{-1}(e)\), _teleports_ through the identification provided by \(\varphi\), and _exits_ on the other side of \(\varphi^{-1}(e)\).

We will construct admissible metrics of the form

\[\rho=\alpha_{0}\Big{(}\rho_{0}+\sum_{e\in\mathcal{T}}\alpha_{e}\rho_{e}\Big{)}, \tag{3.1}\]

where the _background metric_ \(\rho_{0}={\bf 1}_{\varphi^{-1}({\bf A})}\) serves the purpose of controlling the length of curves \(\gamma\) that do not intersect \({\cal T}\), while the _obstacles_ \(\rho_{e}\) have the purpose of penalizing teleportation so that shortcuts are not worthwhile. The constant \(\alpha_{0}\) is chosen so that curves that do not intersect \({\cal T}\) have length \(\geq 1\) under \(\alpha_{0}\rho_{0}\).

We build the obstacles \(\rho_{e}\) so that they assign length \(\geq 1\) to all curves \(\gamma\) that intersect \(e\) (and are not confined to the union of the triangles that are incident to \(e\)). It is easiest to describe the construction in \(\boldsymbol{D}_{2N}/\!\sim\), which is a surface composed of \(2N\) equilateral triangles \(\triangle(\vec{e}_{i},\infty)\) of side length \(1\): namely, we define \(\rho_{e}\) as three times the characteristic function of the \(1/3\)-neighborhood of \(e\) in the flat metric, i.e.

\[\rho_{e}=3\times{\bf 1}_{B_{1/3}(e)},\]

where \(B_{1/3}(e)\) is the set of points of distance at most \(1/3\) from \(e\). Indeed, any curve which meets \(e\) and leaves \(B_{1/3}(e)\) travels a Euclidean distance of at least \(1/3\) inside the support of \(\rho_{e}\), and therefore has \(\rho_{e}\)-length at least \(1\). We denote the conformal transport of this metric to \(\hat{\mathbb{C}}\setminus\overline{\mathbb{D}}\) again by \(\rho_{e}\).

Since any point \(z\in\hat{\mathbb{C}}\setminus\overline{\mathbb{D}}\) can be in the support of at most \(D=\max_{v\in{\cal T}}\deg(v)\) obstacles, it can be in at most \(D+1\) of the sets \(\mathop{\rm supp}\rho_{0}\cup\{\mathop{\rm supp}\rho_{e}\}\), and the area of \(\rho\) can be estimated by

\[A(\rho)\,\leq\,(D+1)^{2}\,\alpha_{0}^{2}\Big{(}A(\rho_{0})+\sum_{e}\alpha_{e}^{2}\cdot A(\rho_{e})\Big{)}\,\lesssim\,\alpha_{0}^{2}\Big{(}A(\rho_{0})+\sum_{e}\alpha_{e}^{2}\Big{)}. \tag{3.2}\]

For an edge \(e\) in \({\cal T}\), we denote by \({\cal T}(e)\subset{\cal T}\) the subtree consisting of the edge \(e\) and its descendants (as measured from the root vertex). It is easy to see that \(S(e)=\varphi^{-1}({\cal T}(e))\) is an arc in the unit circle \(\partial\mathbb{D}\).
We define the _outer shortcut_ of \(e\) as the Euclidean length of \(S(e)\):

\[s(e)=\mathop{\rm length}(S(e)).\]

We define \({\cal T}^{-}(e)={\cal T}(e)\setminus e\) as the union of all the descendants of \(e\). Naturally, we define \(S^{-}(e)=\varphi^{-1}({\cal T}^{-}(e))\) and the _inner shortcut_ \(s^{-}(e)=\mathop{\rm length}(S^{-}(e))=s(e)-\pi/N\). Unless \(e\) is a boundary edge, the difference between the outer and inner shortcuts is not significant.

### A lower bound for the diameters of edges

Let \({\cal T}\) be a true tree and \(\Omega_{2}\subset\mathbb{C}\) be the simply-connected domain bounded by the equipotential curve \(\varphi\big(\{z:|z|=2\}\big).\) The hydrodynamic normalization of the conformal map \(\varphi\) implies that \(\operatorname{diam}\mathcal{T}\geq c_{0}>0\) is bounded from below by a universal constant (the sharp value \(c_{0}=2\) is irrelevant for our purpose). In view of Lemma 2.1, to give a lower bound for the diameter of an edge \(e_{0}\) in \(\mathcal{T}\), it is enough to give an upper bound for the modulus of the family of curves \(\Gamma_{\circlearrowright}(\mathbf{A})\) that separate the boundary components of \(\mathbf{A}=\Omega_{2}\setminus e_{0}\).

We will now show that the metric

\[\rho=\frac{1}{s^{-}(e_{0})}\Big{(}\mathbf{1}_{A(0;1,2)}+\sum_{e\in\mathcal{T}_{n}^{-}(e_{0})}s(e)\rho_{e}\Big{)}\]

is admissible for \(\varphi^{-1}(\Gamma_{\circlearrowright}(\mathbf{A}))\), where the summation is over the descendants of \(e_{0}\). Consider a curve \(\gamma\in\Gamma_{\circlearrowright}(\mathbf{A})\). If we pull \(\gamma\) back by \(\varphi^{-1}\), we get a path in the annulus

\[A(0;1,2)=\{z:1<|z|<2\},\]

which may teleport from \(x\in\partial\mathbb{D}\) to \(y\in\partial\mathbb{D}\) if \(\varphi(x)=\varphi(y)\in\mathcal{T}_{n}\setminus e_{0}\). If \(\gamma\) does not pass through any edge in \(\mathcal{T}_{n}^{-}(e_{0})\), then the radial projection of \(\varphi^{-1}(\gamma)\) onto \(\partial\mathbb{D}\) contains \(S^{-}(e_{0})\) and the metric \(\rho_{0}=\mathbf{1}_{A(0;1,2)}\) assigns length \(\geq s^{-}(e_{0})\) to \(\varphi^{-1}(\gamma)\). In general, the inclusion

\[S^{-}(e_{0})\subset\pi_{\operatorname{rad}}(\varphi^{-1}(\gamma))\cup\bigcup_{\begin{subarray}{c}e\in\mathcal{T}_{n}^{-}(e_{0})\\ \gamma\cap e\neq\emptyset\end{subarray}}\varphi^{-1}(\mathcal{T}(e))\]

shows that

\[\int_{\varphi^{-1}(\gamma)}\Big{(}\mathbf{1}_{A(0;1,2)}+\sum_{e\in\mathcal{T}_{n}^{-}(e_{0})}s(e)\rho_{e}\Big{)}|dz|\geq s^{-}(e_{0}),\]

which proves the admissibility of \(\rho\). Together with (3.2), this shows the upper bound

\[\operatorname{Mod}\Gamma_{\circlearrowright}(\mathbf{A})\,\leq\,A(\rho)\,\lesssim\,\frac{1}{s^{-}(e_{0})^{2}}\bigg{[}1+\sum_{e\in\mathcal{T}_{n}^{-}(e_{0})}s(e)^{2}\bigg{]}.\]

We have thus proved the following theorem:

**Theorem 3.4**.: _Suppose \(\{\mathcal{T}_{n}\}_{n=0}^{\infty}\) is an infinite sequence of conformally balanced trees whose vertices have uniformly bounded degrees, for which the sums \(S_{n}=\sum_{e\in\mathcal{T}_{n}}s(e)^{2}\) are uniformly bounded. If \(e_{0}^{(n)}\subset\mathcal{T}_{n}\) is a sequence of edges with \(\inf s(e_{0}^{(n)})>0\), then any subsequential Hausdorff limit of the edges \(e_{0}^{(n)}\) is a real-analytic arc._

We now apply the above theorem to the sequence of the finite truncations \(\{\mathcal{T}_{n}\}\) of the infinite trivalent tree. Inspection shows that

\[s(e)\asymp 2^{-d_{\mathcal{T}_{n}}(v_{\mathrm{root}},e)}.\]

As the number of edges \(e\in\mathcal{T}_{n}\) with \(d_{\mathcal{T}_{n}}(v_{\mathrm{root}},e)=m\) is \(\asymp 2^{m}\), the sums

\[S_{n}=\sum_{e\in\mathcal{T}_{n}}s(e)^{2}\]

are uniformly bounded in \(n=1,2,\dots\). If \(e\subset\mathcal{T}\) is an edge in the infinite tree, then its representative \(e^{(n)}\subset\mathcal{T}_{n}\) has \(s(e^{(n)})\asymp 2^{-d_{\mathcal{T}_{\infty}}(v_{\mathrm{root}},e)}\). By the theorem above, the diameters of the edges \(e^{(n)}\) are bounded from below.
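These estimates are straightforward to check numerically. In the sketch below (ours, not from the paper) we assume that each side of an edge of \(\mathcal{T}_{n}\) occupies harmonic measure \(1/(2N)\), where \(N\) is the total number of edges, so that \(s(e)=2\pi\,|\mathcal{T}_{n}(e)|/N\); the sums \(S_{n}\) then stay bounded as \(n\) grows:

```python
import math

def shortcut_sum(n):
    """Sum of squared shortcuts S_n for the trivalent tree truncated at depth n.

    The truncation has N = 3*(2^n - 1) edges, with 3*2^(m-1) edges at depth m,
    and the subtree T(e) hanging below a depth-m edge has 2^(n-m+1) - 1 edges.
    Assuming every side of an edge carries harmonic measure 1/(2N), the
    shortcut of a depth-m edge is s(e) = 2*pi*|T(e)|/N.
    """
    N = 3 * (2**n - 1)
    total = 0.0
    for m in range(1, n + 1):
        edges_at_depth = 3 * 2**(m - 1)
        subtree_edges = 2**(n - m + 1) - 1
        s = 2.0 * math.pi * subtree_edges / N
        total += edges_at_depth * s**2
    return total

for n in (2, 4, 6, 8, 10, 12):
    print(n, round(shortcut_sum(n), 4))   # bounded (indeed convergent) in n
```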
_Remark_.: If \(\mathcal{T}_{n}\) is a _random_ conformally balanced trivalent tree with \(n\) edges, chosen uniformly among all of them, then it is not hard to show that the expectation

\[E[S_{n}]=\sum_{e\in\mathcal{T}_{n}}E[s(e)^{2}]\]

tends to \(\infty\) as \(n\to\infty\), suggesting that the diameters of the edges tend to zero. Indeed, it is known [5] that the diameters tend to zero with a power of \(1/n\), with high probability.

## 4 Structure of a subsequential limit

Let \(\mathcal{T}_{n}\) be the conformally balanced trivalent tree of depth \(n\). In this section, we show that any subsequential limit of the \(\mathcal{T}_{n}\) has the right topological type:

**Theorem 4.1**.: _For any subsequential Hausdorff limit of the \(\mathcal{T}_{n}\), one can find a homeomorphism of the plane which takes it onto the Farey tree \(\mathcal{F}\) of Section 2.2 union the unit circle \(\partial\mathbb{D}\)._

We first pass to a subsequence so that every edge in the infinite trivalent tree has a limit along this sequence. In the previous section, we saw that the limit of each edge is a real-analytic arc. We write \(\mathcal{T}_{\infty}\) for the union of the Hausdorff limits of the individual edges. We pass to a further subsequence so that the finite trees \(\mathcal{T}_{n}\) also possess a Hausdorff limit, which we denote by \(\mathcal{T}_{\infty}\sqcup\Lambda\). We refer to \(\Lambda\) as the _limit set_.

The proof of Theorem 4.1 is based on a number of moduli estimates, which control the geometry of the finite trees \(\mathcal{T}_{n}\). With the help of these moduli estimates, we prove the following assertions:

* (SL1) \(\mathcal{T}_{\infty}\) is dense in the Hausdorff limit of the finite trees \(\mathcal{T}_{n}\).
* (SL2) For any branch \([v_{0},v_{1},v_{2},v_{3},\dots]\) of \(\mathcal{T}_{\infty}\) with \(d_{\mathcal{T}_{\infty}}(v_{m},v_{\mathrm{root}})=m\), \(\lim_{m\to\infty}v_{m}\) exists.
* (SL3) Given two branches \([v_{0},v_{1},v_{2},v_{3},\dots]\), \([w_{0},w_{1},w_{2},w_{3},\dots]\), \(\lim_{m\to\infty}v_{m}=\lim_{m\to\infty}w_{m}\) if and only if the limits of the corresponding branches in the Farey tree are the same.

We then show the following two topological assertions:

* (SL4) The limit set \(\Lambda\) is a Jordan curve \(\partial\Omega\) which encloses \(\mathcal{T}_{\infty}\).
* (SL5) There is a natural correspondence between the complementary regions of \(\mathcal{T}_{\infty}\cup\partial\Omega\) and \(\mathcal{F}\cup\partial\mathbb{D}\).

From here, the proof of Theorem 4.1 runs as follows:

Proof of Theorem 4.1.: Let \(h\) be a homeomorphism of \(\mathcal{T}_{\infty}\) onto the Farey tree \(\mathcal{F}\), which takes vertices to the corresponding vertices. The above properties imply that \(h\) extends to a homeomorphism of the closures: \(\mathcal{T}_{\infty}\cup\partial\Omega\) and \(\mathcal{F}\cup\partial\mathbb{D}\). Since the complementary regions are Jordan domains, we can extend \(h\) to a homeomorphism of the plane.
### Shrinking of diameters

For a vertex \(v\in\mathcal{T}_{n}\), we denote the subtree which consists of \(v\) and its descendants by \(\mathcal{T}_{n}(v)\). To prove (SL1) and (SL2), we show:

**Lemma 4.2**.: (i) _The diameters of \(\mathcal{T}_{n}(v)\) tend to zero as \(d_{\mathcal{T}_{n}}(v_{\mathrm{root}},v)\to\infty\), uniformly in \(n\)._

(ii) _The diameters of \(\mathcal{T}_{n}(vLR^{k})\cup\mathcal{T}_{n}(vRL^{k})\) tend to zero if either \(d_{\mathcal{T}_{n}}(v_{\mathrm{root}},v)\to\infty\) or \(k\to\infty\), again uniformly in \(n\)._

As in the case of the Farey tree \(\mathcal{F}\) in the unit disk, the diameter of \(\mathcal{T}_{n}(v)\) depends on the nature of the word representing \(v\). If the path joining \(v_{\mathrm{root}}\) to \(v\) switches between left and right turns regularly, then the diameters of \(\mathcal{T}_{n}(v)\) decrease exponentially quickly. On the other hand, if the word for \(v\) has long sequences of consecutive \(L\)'s and \(R\)'s, then the diameters of \(\mathcal{T}_{n}(v)\) shrink at a polynomial rate. This dichotomy is reflected in the two types of estimates below.

_Hyperbolic decay._ At an interior vertex \(v\in\mathcal{T}_{n}\), the domain \(\hat{\mathbb{C}}\setminus\mathcal{T}_{n}\) has three prime ends. Assuming that \(v\neq v_{\mathrm{root}}\), we can name the three prime ends as left, right and middle. The _left_ prime end lies between \(\overline{v_{\mathrm{parent}}v}\) and \(\overline{vv_{L}}\), while the _right_ prime end lies between \(\overline{v_{\mathrm{parent}}v}\) and \(\overline{vv_{R}}\). Naturally, the _middle_ prime end lies between \(\overline{vv_{L}}\) and \(\overline{vv_{R}}\).

Let \(\gamma(v)\) denote the hyperbolic geodesic in \(\hat{\mathbb{C}}\setminus\mathcal{T}_{n}\) which joins the left and right prime ends at \(v\) and \(V(v)\) be the domain enclosed by \(\gamma(v)\), see Fig. 3. With this definition, a vertex \(w\) is contained in \(V(v)\) if and only if \(w\) is represented by a word which begins with \(v\). Moreover, if \(v_{2}\) is a descendant of \(v_{1}\), then \(V(v_{2})\subset V(v_{1})\).

**Lemma 4.3**.: _Suppose \(v\) is an interior vertex of \(\mathcal{T}_{n}\), other than the root vertex. Then, \(\operatorname{Mod}V(v)\setminus V(vLR)\asymp 1\) and \(\operatorname{Mod}V(v)\setminus V(vRL)\asymp 1\)._

It is enough to show the statement regarding \(\operatorname{Mod}\mathbf{A}\) for \(\mathbf{A}=V(v)\setminus V(vLR)\) as the situation with \(\operatorname{Mod}V(v)\setminus V(vRL)\) is entirely symmetric. To prove the lemma, we need to give uniform upper bounds for the moduli of the curve families \(\Gamma_{\circlearrowright}(\mathbf{A})\) and \(\Gamma_{\uparrow}(\mathbf{A})\), which are independent of \(n\) and \(v\in\mathcal{T}_{n}\).

To deal with the first curve family, simply note that every \(\gamma\in\Gamma_{\circlearrowright}(\mathbf{A})\) intersects at least one of the two edges \(\overline{vv_{L}}\) and \(\overline{v_{L}v_{LR}}\) so that the sum of the two obstacles \(\rho=\rho_{\overline{vv_{L}}}+\rho_{\overline{v_{L}v_{LR}}}\) is an admissible metric of area \(A(\rho)=O(1)\).

To deal with the second curve family, by conformal invariance, we may give an upper bound for the modulus of the curve family \(\varphi^{-1}(\Gamma_{\uparrow}(\mathbf{A}))\) in \(\varphi^{-1}(\mathbf{A})\subset\hat{\mathbb{C}}\setminus\mathbb{D}\) which allows teleportation, as we did before in Section 3.4.
Cutting \(\mathbf{A}\) along the tree, we obtain a conformal rectangle \(\mathbf{R}=\mathbf{A}\,\backslash\mathcal{T}_{n}\) whose vertices are the prime ends where \(\gamma(v)\) and \(\gamma(vLR)\) meet \(\mathcal{T}_{n}\). Its pre-image \(\hat{\mathbf{R}}=\varphi^{-1}(\mathbf{R})\subset\hat{\mathbb{C}}\,\backslash\mathbb{D}\) is a conformal rectangle whose vertices are the points where the geodesics \(\varphi^{-1}(\gamma(v))\) and \(\varphi^{-1}(\gamma(vLR))\) meet the unit circle. We label the vertices \(z_{1},z_{2},z_{3},z_{4}\) in counter-clockwise order such that \(z_{1}\) corresponds to the right prime end of \(v\). Due to the "left-right" turn between \(v\) and \(vLR\), the distances between the points \(z_{i}\), \(1\leq i\leq 4\), are comparable:

\[|z_{1}-z_{2}|\asymp|z_{2}-z_{3}|\asymp|z_{3}-z_{4}|\asymp 2^{-d},\qquad d=d_{\mathcal{T}_{n}}(v_{\rm root},v).\]

Hence, the background metric \(\rho_{0}=\mathbf{1}_{\varphi^{-1}(\mathbf{A})}\) assigns length \(\gtrsim 2^{-d}\) to every curve in \(\varphi^{-1}(\Gamma_{\uparrow}(\mathbf{A}))\) that does not teleport. Arguing as in Section 3.4 shows that the metric

\[\rho=C_{0}2^{d}\Big{(}\rho_{0}+\sum_{e\in V(v)}s(e)\rho_{e}\Big{)} \tag{4.1}\]

is admissible if \(C_{0}\) is sufficiently large (independent of \(n\) and \(v\)). More precisely, while the set \(\varphi^{-1}(\gamma)\) may be disconnected,

\[\sigma=\varphi^{-1}(\gamma)\cup\bigcup_{e\cap\gamma\neq\emptyset}S(e)\]

is connected and intersects both geodesics \(\varphi^{-1}(\gamma(v))\) and \(\varphi^{-1}(\gamma(vLR))\).

Figure 3: To a non-root vertex \(v\in\mathcal{T}_{n}\), we associate the domain \(V(v)\), bounded by the curve \(\gamma(v)\).

Inspection shows that the integral \(\int_{\sigma}\rho_{0}|dz|\) computes the Euclidean length of \(\sigma\setminus\partial\mathbb{D}\), whereas \(\int_{\sigma}\bigl{(}\sum_{e\in V(v)}s(e)\rho_{e}\bigr{)}|dz|\) is bounded below by the Euclidean length of \(\sigma\cap\partial\mathbb{D}\). As a result,

\[\int_{\sigma}\biggl\{\rho_{0}+\sum_{e\in V(v)}s(e)\rho_{e}\biggr\}|dz|\]

is greater than or equal to the Euclidean distance between the geodesics \(\varphi^{-1}(\gamma(v))\) and \(\varphi^{-1}(\gamma(vLR))\), which is comparable to \(2^{-d}\). Consequently, the factor \(C_{0}2^{d}\) in (4.1) makes the metric \(\rho\) admissible. From \(s(e)\asymp 2^{-d_{\mathcal{T}_{n}}(v_{\mathrm{root}},e)}\), it is clear that \(\sum_{e\in V(v)}s(e)^{2}\lesssim 2^{-2d}\). The area bound \(A(\rho)=O(1)\) now follows from (3.2). Putting the above information together shows the desired modulus bound.

_Parabolic decay._ We continue to assume that \(v\in\mathcal{T}_{n}\) is an interior vertex, other than the root vertex. For each \(0\leq j\leq n-1-d_{\mathcal{T}_{n}}(v_{\mathrm{root}},v)\), we connect the vertices \(vLR^{j}\) and \(vRL^{j}\) by two hyperbolic geodesics \(\alpha_{j},\beta_{j}\subset\hat{\mathbb{C}}\setminus\mathcal{T}_{n}\), with the _inner geodesic_ \(\alpha_{j}\) joining

\[(vLR^{j})_{\mathrm{right}}\quad\text{with}\quad(vRL^{j})_{\mathrm{left}}\]

and the _outer geodesic_ \(\beta_{j}\) joining

\[(vLR^{j})_{\mathrm{left}}\quad\text{with}\quad(vRL^{j})_{\mathrm{right}}.\]

We then define \(W_{j}=W_{j}(v)\) as the simply-connected domain bounded by the Jordan curve \(\alpha_{j}\cup\beta_{j}\). See Fig. 4.

**Lemma 4.4**.: _Suppose \(v\) is an interior vertex of \(\mathcal{T}_{n}\), other than the root vertex.
Then,_

\[\mathrm{Mod}\ W_{0}(v)\setminus W_{k}(v)\asymp\log(1+k),\]

_for any \(1\leq k\leq n-1-d_{\mathcal{T}_{n}}(v_{\mathrm{root}},v)\)._

Since the annulus \(V(v)\setminus V(vLR^{k})\supset W_{0}\setminus W_{k}\), its modulus is at least as large, by the monotonicity rule. In particular, the lemma implies that \(\mathrm{Mod}\,V(v)\setminus V(vLR^{k})\gtrsim\log(1+k)\).

Proof.: For brevity, we write \(\mathbf{A}=W_{0}\setminus W_{k}\). To show the upper bound for \(\operatorname{Mod}\mathbf{A}\), we need to estimate the modulus of the family of curves \(\Gamma_{\circlearrowright}\) which separate the boundary components of \(\mathbf{A}\). The tree \(\mathcal{T}_{n}\) splits \(\mathbf{A}\) into two conformal rectangles \(\mathbf{R}_{\alpha}\) and \(\mathbf{R}_{\beta}\), with \(\partial\mathbf{R}_{\alpha}\supset\alpha_{0}\cup\alpha_{k}\) and \(\partial\mathbf{R}_{\beta}\supset\beta_{0}\cup\beta_{k}\). Since a curve in \(\Gamma_{\circlearrowright}(\mathbf{A})\) contains a crossing that joins the \(\mathcal{T}_{n}\)-sides of \(\mathbf{R}_{\alpha}\), \(\operatorname{Mod}\Gamma_{\circlearrowright}(\mathbf{A})\leq\operatorname{Mod}\mathbf{R}_{\alpha}\). The latter modulus may be computed in the exterior unit disk: \(\operatorname{Mod}\mathbf{R}_{\alpha}=\operatorname{Mod}\varphi^{-1}(\mathbf{R}_{\alpha})\asymp\log(1+k)\) as desired.

We now turn to the lower bound. For this purpose, we decompose \(\mathbf{A}\) into a union of shells:

\[\mathbf{A}\,=\,\bigcup_{j=1}^{k}\mathbf{A}_{j}\,=\,\bigcup_{j=1}^{k}W_{j-1}\setminus W_{j}.\]

By the parallel rule, it is enough to show that \(\operatorname{Mod}\Gamma_{\uparrow}(\mathbf{A}_{j})\lesssim j\), for each \(j=1,2,\ldots,k\). As usual, we estimate the modulus of the family

\[\varphi^{-1}(\Gamma_{\uparrow}(\mathbf{A}_{j}))\subset\varphi^{-1}(\mathbf{A}_{j})\subset\hat{\mathbb{C}}\setminus\mathbb{D}.\]

The pre-image \(\varphi^{-1}(\mathbf{A}_{j})=\hat{\mathbf{R}}_{\beta,j}\cup\hat{\mathbf{R}}_{\alpha,j}\) consists of two conformal rectangles in \(\hat{\mathbb{C}}\setminus\mathbb{D}\), with \(\hat{\mathbf{R}}_{\alpha,j}\) bounded by \(\hat{\alpha}_{j-1},\hat{\alpha}_{j}\) and the unit circle, and \(\hat{\mathbf{R}}_{\beta,j}\) bounded by \(\hat{\beta}_{j-1},\hat{\beta}_{j}\) and the unit circle.

Let \(\rho_{\alpha,0}(z)\) be the extremal metric on the conformal rectangle \(\hat{\mathbf{R}}_{\alpha,j}\) for the family of curves contained in \(\hat{\mathbf{R}}_{\alpha,j}\) that connect \(\hat{\alpha}_{j-1}\) and \(\hat{\alpha}_{j}\). It is easy to see that \(A(\rho_{\alpha,0})\asymp j+1\). As in the proof of Lemma 4.3, there is a metric \(\rho_{\beta,0}\) of the form (4.1) with \(A(\rho_{\beta,0})\asymp 1\) which assigns length \(\geq 1\) to every curve \(\gamma\) in \(\hat{\mathbf{R}}_{\beta,j}\) that connects \(\hat{\beta}_{j-1}\) and \(\hat{\beta}_{j}\) with or without teleportation. More precisely, since the four marked endpoints of \(\hat{\beta}_{j-1}\) and \(\hat{\beta}_{j}\) have mutually comparable distances \(\asymp 2^{-d_{\mathcal{T}_{n}}(v_{\mathrm{root}},v)-j},\) the reasoning in the proof of Lemma 4.3 shows that the metric

\[\rho_{\beta,0}=C_{0}2^{d_{\mathcal{T}_{n}}(v_{\mathrm{root}},v)+j}\bigg{(}\rho_{0}+\sum_{e\in V(vLR^{j-1})\cup V(vRL^{j-1})}s(e)\rho_{e}\bigg{)} \tag{4.2}\]

is admissible where \(\rho_{0}=\mathbf{1}_{\hat{\mathbf{R}}_{\beta,j}}\) and \(C_{0}\) is sufficiently large.
A path in \(\varphi^{-1}(\Gamma_{\uparrow}(\mathbf{A}_{j}))\) connects \(\hat{\alpha}_{j-1}\cup\hat{\beta}_{j-1}\) with \(\hat{\alpha}_{j}\cup\hat{\beta}_{j},\) where one is allowed to take shortcuts by teleporting from \(x\in\partial\mathbb{D}\) to \(y\in\partial\mathbb{D}\) if \(\varphi(x)=\varphi(y)\in\mathcal{T}_{n}\). Such a path is either contained in \(\hat{\mathbf{R}}_{\alpha,j},\) or contained in \(\hat{\mathbf{R}}_{\beta,j},\) or intersects one of the two edges \(e_{1}=[vLR^{j-1},vLR^{j}]\) and \(e_{2}=[vRL^{j-1},vRL^{j}].\) To obtain a metric admissible for \(\varphi^{-1}(\Gamma_{\uparrow}(\mathbf{A}_{j})),\) we modify \(\rho_{0}(z)=\rho_{\alpha,0}(z)+\rho_{\beta,0}(z)\) by adding two obstacles along the edges \(e_{1}\) and \(e_{2}\) which make it impractical for a path to teleport from \(\hat{\mathbf{R}}_{\beta,j}\) to \(\hat{\mathbf{R}}_{\alpha,j}\):

\[\rho=(\rho_{\alpha,0}+\rho_{\beta,0})+\rho_{e_{1}}+\rho_{e_{2}}.\]

As each obstacle has area \(O(1),\) the area \(A(\rho)\asymp j+1,\) which gives the desired modulus bound.

_Putting this together._ We are now ready to show Lemma 4.2:

Proof of Lemma 4.2.: Let \(v\in\mathcal{T}_{n}\) be an interior vertex, other than the root vertex. From the definitions, it is clear that \(\mathcal{T}_{n}(v)\subset V(v).\) Let

\[[v_{\mathrm{root}},v]=[v_{0}=v_{\mathrm{root}},v_{1},v_{2},v_{3},\ldots,v_{m}=v]\]

be the path in \(\mathcal{T}_{n}\) joining \(v_{\mathrm{root}}\) to \(v\). In view of the hydrodynamic normalization, \(V(v_{1})\subset\Omega_{2}\subset B(0,8)\) is contained in a ball of fixed size. Consequently, to prove (i), it is enough to show that \(\operatorname{Mod}V(v_{1})\setminus V(v)\) is large when \(d_{\mathcal{T}_{n}}(v_{\mathrm{root}},v)\) is large.

There are two cases to consider. If the path \([v_{\mathrm{root}},v]\) frequently switches between left and right turns, then \(\operatorname{Mod}V(v_{1})\setminus V(v)\) will be large by Lemma 4.3 and the parallel rule. If we turn left many times or turn right many times without switching, then \(\operatorname{Mod}V(v_{1})\setminus V(v)\) will be large by Lemma 4.4. In both cases, \(\operatorname{diam}V(v)\to 0\) uniformly in \(n\) as \(d_{\mathcal{T}_{n}}(v_{\operatorname{root}},v)\to\infty\).

To prove (ii), we note that

\[\mathcal{T}_{n}(vLR^{k})\cup\mathcal{T}_{n}(vRL^{k})\,\subset\,W_{k}(v)\,\subset\,V(v)\]

and appeal to Lemma 4.4.

### The limit set is a Jordan curve

Our next objective is to show (SL3) and (SL4). For two vertices \(v_{1},v_{2}\in\mathcal{T}_{n}\), we denote by \(d_{\omega}(v_{1},v_{2})\) the harmonic measure as seen from infinity of the shortest arc on the unit circle that contains a point of \(\varphi^{-1}(v_{1})\) and a point of \(\varphi^{-1}(v_{2})\). The following lemma says that if the harmonic measure between two boundary vertices \(v_{1},v_{2}\) is small, then the Euclidean distance \(|v_{1}-v_{2}|\) is also small:

**Lemma 4.5**.: _For any \(\varepsilon>0\), there exists an \(\eta>0\), such that if \(v_{1},v_{2}\in\mathcal{T}_{n}\) are two boundary vertices for which \(d_{\omega}(v_{1},v_{2})<\eta\), then the Euclidean distance \(|v_{1}-v_{2}|<\varepsilon\)._

We explain the proof via an analogy: if \(x_{1},x_{2}\) are two points on the unit circle, then either \(x_{1},x_{2}\) are contained in a single dyadic arc of length comparable to \(|x_{1}-x_{2}|\) or they are contained in the union of two adjacent dyadic arcs whose lengths are comparable to \(|x_{1}-x_{2}|\).
Similarly, in the trivalent tree, one has two non-mutually exclusive possibilities: Denote by \(v\) the last common ancestor of \(v_{1}\) and \(v_{2}\) so that \(v_{1},v_{2}\in\mathcal{T}_{n}(v)\) and \(v_{1}=vLX,v_{2}=vRY\) (or vice versa) for some sequences \(X,Y\). Then at least one of the following statements is true:

1. \(\omega_{\hat{\mathbb{C}}\setminus\mathcal{T}_{n},\infty}(\mathcal{T}_{n}(v))\asymp d_{\omega}(v_{1},v_{2})\),

2. For the maximal integer \(k\geq 1\) so that \(v_{1}\in\mathcal{T}_{n}(vLR^{k})\) and \(v_{2}\in\mathcal{T}_{n}(vRL^{k})\), we have \(\omega_{\hat{\mathbb{C}}\setminus\mathcal{T}_{n},\infty}\big{(}\mathcal{T}_{n}(vLR^{k})\cup\mathcal{T}_{n}(vRL^{k})\big{)}\asymp d_{\omega}(v_{1},v_{2})\).

In either case, one may use Lemma 4.2 to show that the Euclidean distance \(|v_{1}-v_{2}|\) is small if \(d_{\omega}(v_{1},v_{2})\) is small.

The same argument shows that for any \(\varepsilon>0\), there exists an \(\eta>0\), such that for any two boundary vertices \(v_{1},v_{2}\in\mathcal{T}_{n}\) with \(d_{\omega}(v_{1},v_{2})<\eta\), the union of the geodesics that join consecutive boundary vertices of \(\mathcal{T}_{n}\) between \(v_{1}\) and \(v_{2}\) has diameter \(<\varepsilon\). Indeed, these geodesics are contained in the region \(V(v)\) in the first case above and in the region \(W_{k}(v)\) in the second case.

We now show the converse to Lemma 4.5, namely, if the harmonic measure between two boundary vertices in \(\mathcal{T}_{n}\) is bounded below, then so is their Euclidean distance:

**Lemma 4.6**.: _For any \(\varepsilon>0\), there exists an \(\eta>0\), such that if \(v,w\in\mathcal{T}_{n}\) are two boundary vertices for which \(d_{\omega}(v,w)>\eta\), then the Euclidean distance \(|v-w|>\varepsilon\)._

Proof.: Let \([v_{0}=v_{\rm root},v_{1},v_{2},v_{3},\ldots,v_{n}=v]\) be the path in \(\mathcal{T}_{n}\) joining \(v_{\rm root}\) to \(v\) and \([w_{0}=v_{\rm root},w_{1},w_{2},w_{3},\ldots,w_{n}=w]\) be the path joining \(v_{\rm root}\) to \(w\). The assumption implies that there exists an \(n_{0}=n_{0}(\eta)\geq 1\) sufficiently large so that the harmonic measure between \(E=[v_{n_{0}},v]\) and \(F=[w_{n_{0}},w]\) is at least \(\eta/2\). Recall that in Section 3.4, we showed that the diameters of \(E\) and \(F\) are bounded from below.

To show that \(E\) and \(F\) are a definite distance apart, it is enough to give an upper bound for the modulus of the family of curves \(\Gamma_{E\leftrightarrow F}\) that connect \(E\) to \(F\) in \(\Omega_{2}\). By conformal invariance, we may instead estimate the modulus of \(\varphi^{-1}(\Gamma_{E\leftrightarrow F})\) in \(A(0;1,2)\) where teleportation is allowed between the pre-images of points in \(\mathcal{T}_{n}\). An argument similar to the one in Section 3.4 shows that

\[\int_{\varphi^{-1}(\gamma)}\Bigl{(}\mathbf{1}_{A(0;1,2)}+\sum_{e\in\mathcal{T}_{n}}s(e)\rho_{e}\Bigr{)}|dz|\geq\eta/2,\qquad\gamma\in\Gamma_{E\leftrightarrow F},\]

i.e. \(2/\eta\) times the metric appearing in the integrand is an admissible metric \(\rho\) with \(A(\rho)=O(1/\eta^{2})\).

Lemmas 4.5 and 4.6 imply that \(\Lambda=\partial\Omega\) is a Jordan curve: Indeed, joining consecutive boundary vertices of \(\mathcal{T}_{n}\) by hyperbolic geodesics, we obtain a sequence of Jordan curves \(\Lambda_{n}.\) If we parametrize these curves by harmonic measure from infinity, then they converge uniformly to a continuous limit curve by Lemma 4.5, and this limit curve is simple by Lemma 4.6. Furthermore, it is disjoint from \(\mathcal{T}_{\infty}\) by Lemma 3.1.
### Formation of \(\Omega\)-horoballs

We now turn to showing (SL5). Let \(v_{0}\neq v_{\rm root}\) be a vertex of the infinite trivalent tree. For \(j\geq 1\), set

\[v_{j}=v_{0}LR^{j-1}\qquad\text{and}\qquad v_{-j}=v_{0}RL^{j-1}.\]

From (SL3), the limits

\[\lim_{j\to+\infty}v_{j}\quad\text{and}\quad\lim_{j\to-\infty}v_{j}\]

exist and are equal. We refer to their common value \(p\) as a _cusp_ or _parabolic point_. In particular, the union of the edges

\[\bigcup_{j=-\infty}^{\infty}\overline{v_{j}v_{j+1}}\,\subset\,\mathcal{T}_{\infty},\]

together with the point \(p\), defines a Jordan curve, which we denote \(\partial\Omega_{p}\). At the root vertex, one can similarly construct three Jordan domains \(\Omega_{p_{1}},\Omega_{p_{2}},\Omega_{p_{3}}\). We refer to the regions \(\{\Omega_{p_{i}}\}\) as \(\Omega\)-horoballs.

**Lemma 4.7**.: _The regions \(\{\Omega_{p_{i}}\}\) enumerate the bounded components of \(\mathbb{C}\setminus\lim\mathcal{T}_{n}\)._

Proof.: We approximate the regions \(\Omega_{p_{i}}\) by Jordan domains \(\Omega_{p_{i}}^{(n)}\) constructed using the finite approximating trees \(\mathcal{T}_{n}\) as follows: Each finite tree \(\mathcal{T}_{n}\) contains only finitely many corresponding vertices \(\{v_{j}\}_{j=-m}^{m}\), where \(m=n-d(v_{\rm root},v_{0})\). The union of the edges \(\bigcup_{j=-m}^{m-1}\overline{v_{j}v_{j+1}}\subset\mathcal{T}_{n}\) is a Jordan arc. To form \(\partial\Omega_{p_{i}}^{(n)}\), we close this Jordan arc with the hyperbolic geodesic \(\alpha_{p_{i}}^{(n)}\subset\hat{\mathbb{C}}\setminus\mathcal{T}_{n}\) that connects the leaves \(v_{-m},v_{m}\in\mathcal{T}_{n}\). In view of Lemma 4.4, \(\operatorname{diam}\alpha_{p_{i}}^{(n)}\to 0\) and \(\Omega_{p_{i}}=\lim\Omega_{p_{i}}^{(n)}\). Since \(\Omega_{p_{i}}^{(n)}\) is disjoint from the tree \(\mathcal{T}_{n}\), the regions \(\Omega_{p_{i}}=\lim\Omega_{p_{i}}^{(n)}\) are indeed bounded components of the complement \(\mathbb{C}\setminus\lim\mathcal{T}_{n}\).

Can there be any more complementary components? If \(O\) is any connected component of \(\mathbb{C}\setminus\lim\mathcal{T}_{n}\), then \(\partial O\subset\mathcal{T}_{\infty}\cup\Lambda.\) If \(\partial O\) intersects one of the edges of \(\mathcal{T}_{\infty}\), then \(O\) is one of the four \(\Omega\)-horoballs which form a neighborhood of this edge. If \(\partial O\) does not intersect \(\mathcal{T}_{\infty}\), then \(\partial O\subset\Lambda\), and since \(\Lambda\) is a Jordan curve, \(O\) must be the unbounded component of \(\mathbb{C}\setminus\Lambda\).

Having established Properties (SL1)-(SL5), the proof of Theorem 4.1 is complete.

### Uniqueness of the limit

For a non-root vertex \(v\in\mathcal{T}\), we define the _shadow_ \(s_{v}\subset\partial\Omega\) as the shorter arc of \(\partial\Omega\) which joins \(vLRL^{\infty}=\lim_{m\to\infty}vLRL^{m}\) and \(vRLR^{\infty}=\lim_{m\to\infty}vRLR^{m}\). A brief inspection of the homeomorphic picture of the Farey tree \(\mathcal{F}\subset\mathbb{D}\) shows that any point on \(\partial\Omega\) that is not a cusp of an \(\Omega\)-horoball is contained in infinitely many shadows.
The following estimate will be used in Section 5.5 in conjunction with Lemma 2.3 to show that the Hausdorff limit of the true trees \(\mathcal{T}_{n}\) is unique:

**Lemma 4.8**.: _The sums_

\[\sum_{v\in\mathcal{T}_{n},\,v\neq v_{\rm root}}\Big{\{}{\rm diam}^{2}\,V(vRL)+{\rm diam}^{2}\,V(vLR)\Big{\}} \tag{4.3}\]

_are uniformly bounded above, independent of \(n\)._

Since \(s_{v}\) is the Hausdorff limit as \(n\to\infty\) of \((V(vRL)\cup V(vLR))\cap\partial\Omega\), the above lemma implies that \(\sum_{v\in\mathcal{T}_{\infty},\,v\neq v_{\rm root}}{\rm diam}^{2}\,s_{v}<\infty\). In particular, \(\partial\Omega\) has area zero.

Proof.: For a hyperbolic geodesic \(\hat{\gamma}\subset\{z\in\mathbb{C}:1<|z|<2\}\subset\hat{\mathbb{C}}\setminus\mathbb{D}\), let \(z_{\hat{\gamma}}\) be the Euclidean midpoint of \(\hat{\gamma}\) and \(B_{\hat{\gamma}}\subset\hat{\mathbb{C}}\setminus\mathbb{D}\) be the ball of hyperbolic radius \(1/10\) centered at \(\frac{1+|z_{\hat{\gamma}}|}{2}\cdot\frac{z_{\hat{\gamma}}}{|z_{\hat{\gamma}}|}\). In view of the restriction on \(\hat{\gamma}\), the ball \(B_{\hat{\gamma}}\) is contained in the bounded domain enclosed by \(\hat{\gamma}\) and the unit circle. Similarly, to a hyperbolic geodesic \(\gamma\subset\Omega_{2}\subset\hat{\mathbb{C}}\setminus\mathcal{T}_{n}\), we can associate the topological disk \(B_{\gamma}:=\varphi(B_{\varphi^{-1}(\gamma)})\). By Koebe's distortion theorem, \(B_{\gamma}\) is approximately round in the sense that its area is comparable to its diameter squared.

We apply the above construction to the geodesics \(\gamma(v)=\partial V(v)\subset\Omega_{2}\) from Section 4.1, where \(v\) ranges over the interior vertices of \(\mathcal{T}_{n}\) other than the root vertex. From the construction, it is clear that \(B_{\gamma(v)}\subset V(v)\). To prove the estimate (4.3), it is enough to show that

\[{\rm diam}\,V(vLR)\asymp{\rm diam}\,B_{\gamma(vLR)}, \tag{4.4}\]

as the topological disks \(B_{\gamma(vLR)}\) are disjoint and are contained in a bounded set. In view of Lemma 2.1, to prove (4.4), we may show the following two moduli estimates:

1. \({\rm Mod}\,V(v)\setminus V(vLR)\) is bounded below.

2. \(\operatorname{Mod}V(v)\setminus B_{\gamma(vLR)}\) is bounded above.

The first estimate was already established in Lemma 4.3. The second estimate is automatic from Koebe's distortion theorem.

_Remark_.: Let \(\Omega\subset\mathbb{C}\) be a Jordan domain, \(K\subset\Omega\) be a compact set and \(z_{0}\in\Omega\) be an interior point. In the work of Jones and Smirnov [7], the _shadow_ of \(K\) with respect to \(z_{0}\in\Omega\) is defined as the set of endpoints of hyperbolic rays emanating from \(z_{0}\) which pass through \(K\). It is not difficult to show that the set \(s_{v}\) described above and the Jones-Smirnov shadow of the closed ball of hyperbolic radius \(1\) centered at \(v\) with respect to \(v_{\operatorname{root}}\in\Omega\) intersect and have comparable diameters.

## 5 Convergence

In this section, we show that the Hausdorff limit \(\mathcal{T}_{\infty}\cup\partial\Omega\) of the finite trees \(\mathcal{T}_{n}\) is unique. The main step is to prove that it realizes the mating of \(z\to\overline{z}^{2}\), acting on the exterior unit disk \(\mathbb{D}_{e}\), and the Markov map \(\rho(z)\) associated to the reflection group of an ideal triangle \(\triangle_{\operatorname{hyp}}\), acting on the unit disk \(\mathbb{D}\).

### Relative harmonic measure

Suppose that \(U\) is a Jordan domain and \(p\in\partial U\).
While it does not make sense to talk about the harmonic measure of an arc \(I\subset\partial U\) as viewed from \(p\), one can talk about the _relative harmonic measure_ of two arcs \(I,J\subset\partial U\) that do not contain \(p\): \[\omega_{U,p}(I,J)=\lim_{z\to p}\frac{\omega_{U,z}(I)}{\omega_{U,z}(J)}.\] It is easy to see that the quantity \(\omega_{U,p}(I,J)\) varies continuously provided that \(p\) stays away from \(I\cup J\). More precisely, if a sequence of Jordan quadruples \((U_{n},p_{n},I_{n},J_{n})\) converges in the Hausdorff topology to a Jordan quadruple \((U,p,I,J)\), then \[\lim_{n\to\infty}\omega_{U_{n},p_{n}}(I_{n},J_{n})=\omega_{U,p}(I,J).\] _Example_.: When \(U=\mathbb{H}\), \(p=\infty\) and \(J=[0,1]\), the relative harmonic measure \(\omega_{\mathbb{H},\infty}(\cdot,[0,1])\) is just Lebesgue measure on the real line. ### Farey horoballs The Farey tree \(\mathcal{F}\) partitions the unit disk \(\mathbb{D}\) into regions which we call _Farey horoballs_ \(H_{p_{i}}\). We index the Farey horoballs by the point where they touch the unit circle. We label the vertices on \(\partial H_{p_{i}}\) in counter-clockwise order by \(v^{j}(H_{p_{i}})\), \(j\in\mathbb{Z}\), with \(v^{0}(H_{p_{i}})\) being the vertex with the smallest combinatorial distance to \(v_{\rm root}\). By construction, the Farey tree is invariant under the group generated by reflections in the sides of \(\triangle_{\rm hyp}\). As such, Farey horoballs enjoy the following two properties: * (F1) Any two edges \(e_{1},e_{2}\subset\partial H_{p_{i}}\) have the same relative harmonic measure as viewed from \(p_{i}\), i.e. \[\omega_{H_{p_{i}},p_{i}}(e_{1},e_{2})=1.\] * (F2) If an edge \(e\) belongs to two neighbouring Farey horoballs \(H_{p_{i}}\) and \(H_{p_{j}}\), then the relative harmonic measures are the same from both sides: \[\omega_{H_{p_{i}},p_{i}}(I,e)=\omega_{H_{p_{j}},p_{j}}(I,e),\qquad I\subseteq e.\] ### Interior Structure of \(\Omega\) In Section 4, we saw that any Hausdorff limit \(\mathcal{T}_{\infty}\cup\partial\Omega\) of the \(\mathcal{T}_{n}\) is ambiently homeomorphic to the Farey tree \(\mathcal{F}\) union the unit circle \(\partial\mathbb{D}\). Recall that the connected components of \(\Omega\setminus\mathcal{T}_{\infty}\) are called \(\Omega\)-horoballs and are labeled by the point where they meet \(\partial\Omega\). **Lemma 5.1**.: _The \(\Omega\)-horoballs also enjoy properties (F1) and (F2)._ Proof.: Since the arguments are very similar, we only present the proof of the second property and leave the proof of the first property to the reader. We approximate \(\Omega_{p_{i}}\) by Jordan domains \(\Omega_{p_{i}}^{(n)}\) as in the proof of Lemma 4.7. Pick an arbitrary point \(p_{i}^{(n)}\in\alpha_{p_{i}}^{(n)}\). As the diameters of \(\alpha_{p_{i}}^{(n)}\) tend to \(0\), the points \(p_{i}^{(n)}\to p_{i}\). Suppose two neighbouring \(\Omega\)-horoballs \(\Omega_{p_{i}}\) and \(\Omega_{p_{j}}\) meet along an edge \(e\). Given an arc \(I\subset e\), we can approximate it in the Hausdorff topology by arcs \(I_{n}\subset e^{(n)}\subset\mathcal{T}_{n}\). By the aforementioned continuity of the relative harmonic measure, we have \(\omega_{\Omega_{p_{i}},p_{i}}(I,e)=\lim\omega_{\Omega_{p_{i}}^{(n)},p_{i}^{(n)}}(I_ {n},e^{(n)})\) so that it is enough to show \(\omega_{\Omega_{p_{i}}^{(n)},p_{i}^{(n)}}(I_{n},e^{(n)})\sim\omega_{\Omega_{p_{j }}^{(n)},p_{j}^{(n)}}(I_{n},e^{(n)})\). 
An intuitive albeit somewhat informal proof of (F2) is as follows: Run Brownian motion from \(\infty\) until it hits \(\mathcal{T}_{n}\). If it is to hit the arc \(I_{n}\subset e^{(n)}\) from the side of \(\Omega_{p_{i}}\), denoted by \(I_{n}\,|\,\Omega_{p_{i}}^{(n)}\), then it must pass through the gate \(\alpha_{p_{i}}^{(n)}\). Since the diameter of the gate \(\alpha_{p_{i}}^{(n)}\) is very small, \[\omega_{\Omega_{p_{i}}^{(n)},p_{i}^{(n)}}(I_{n},e^{(n)})\,\sim\,\frac{\omega_{ \mathbb{C}\setminus\mathcal{T}_{n},\infty}(I_{n}\,|\,\Omega_{p_{i}}^{(n)})}{ \omega_{\hat{\mathbb{C}}\setminus\mathcal{T}_{n},\infty}(e^{(n)}\,|\,\Omega_ {p_{i}}^{(n)})}\,=\,\frac{\omega_{\hat{\mathbb{C}}\setminus\mathcal{T}_{n}, \infty}(I_{n}\,|\,\Omega_{p_{j}}^{(n)})}{\omega_{\hat{\mathbb{C}}\setminus \mathcal{T}_{n},\infty}(e^{(n)}\,|\,\Omega_{p_{j}}^{(n)})}\,\sim\,\omega_{ \Omega_{p_{j}}^{(n)},p_{j}^{(n)}}(I_{n},e^{(n)}).\] Here, we have used that the harmonic measures on the two sides of every edge \(e\) in a true tree are identical. The lemma follows after taking \(n\to\infty\). For a rigorous justification of these asymptotic equalities, notice that the pre-images of the approximate \(\Omega\)-horoballs \(\Omega_{p_{i}}^{(n)}\) and \(\Omega_{p_{j}}^{(n)}\) under the hydrodynamically normalized Riemann maps \(\varphi_{n}:\hat{\mathbb{C}}\setminus\mathbb{D}\to\hat{\mathbb{C}}\setminus \mathcal{T}_{n}\) are of the form \[\varphi_{n}^{-1}(\Omega_{p_{i}}^{(n)})=\mathbb{D}_{e}\cap B_{n},\] where the \(B_{n}\) are (small) disks with centers near \(\partial\mathbb{D}_{e}\) and \(\varphi_{n}^{-1}(p_{i}^{(n)})\in\partial B_{n}.\) The conformal maps \(f_{n}\) of \(B_{n}\cap\mathbb{D}_{e}\) onto the upper half-plane \(\mathbb{H}\) that send \(p_{i}^{(n)}\) to \(\infty\) and \(\varphi_{n}^{-1}(e^{(n)})\) to \([0,1]\) extend by reflection to \(B_{n}\). As the modulus of the annulus \(B_{n}\setminus\varphi_{n}^{-1}(e^{(n)})\) tends to infinity, by the Koebe distortion theorem, \[\omega_{\Omega_{p_{i}}^{(n)},p_{i}^{(n)}}(I_{n},e^{(n)})=\text{length}(f_{n}( \varphi_{n}^{-1}(I_{n})))\sim\frac{\text{length}(\varphi_{n}^{-1}(I_{n}))}{ \text{length}(\varphi_{n}^{-1}(e^{(n)}))}=\frac{\omega_{\hat{\mathbb{C}} \setminus\mathcal{T}_{n},\infty}(I_{n}\,|\,\Omega_{p_{i}}^{(n)})}{\omega_{ \hat{\mathbb{C}}\setminus\mathcal{T}_{n},\infty}(e^{(n)}\,|\,\Omega_{p_{i}}^ {(n)})},\] which is what we wanted to show. Since \(\partial\mathbb{D}\cup\mathcal{F}\) and \(\partial\Omega\cup\mathcal{T}\) are ambiently homeomorphic, one has a correspondence between the bounded complementary components of \(\partial\mathbb{D}\cup\mathcal{F}\) (Farey horoballs) and those of \(\partial\Omega\cup\mathcal{T}\) (\(\Omega\)-horoballs). For each pair of corresponding complementary regions, form the conformal mapping \(\varphi_{i}:H_{i}\to\Omega_{i}\) which takes \[p(H_{i})\to p(\Omega_{i}),\quad v^{0}(H_{i})\to v^{0}(\Omega_{i}),\quad v^{1}(H _{i})\to v^{1}(\Omega_{i}).\] (As Farey horoballs and \(\Omega\)-horoballs are Jordan domains, by Caratheodory's theorem, the maps \(\varphi_{i}\) extend to homeomorphisms between the closures.) Since Farey and \(\Omega\)-horoballs possess the property (F1), \(\varphi_{i}\) maps \(v^{j}(H_{i})\to v^{j}(\Omega_{i})\) for any \(j\in\mathbb{Z}\). 
Moreover, as Farey and \(\Omega\)-horoballs possess the property (F2), we have: **Lemma 5.2** (Interior structure lemma).: _The mappings \(\varphi_{i}:H_{i}\to\Omega_{i}\) glue up to form a conformal mapping \(\varphi:\mathbb{D}\to\Omega.\) In other words, if \(H_{i}\) and \(H_{j}\) share a common edge \(e\), then \(\varphi_{i}|_{e}=\varphi_{j}|_{e}\)._ Indeed, since the edges of \(\mathcal{F}\) are analytic arcs and the homeomorphism \(\varphi\) is conformal on \(\mathbb{D}\setminus\mathcal{F}\), it follows that \(\varphi\) extends analytically across the open edges. As the vertices are isolated points, they are removable singularities. ### Exterior Structure of \(\Omega\) By definition, the harmonic measure \(\omega_{\hat{\mathbb{C}}\setminus\mathcal{T}_{n},\infty}\) is supported on \(\mathcal{T}_{n}\). From Koebe's \(1/4\) theorem, we know that the true trees \(\mathcal{T}_{n}\subset B(0,8)\) are contained in a fixed compact set, so that any subsequential weak-\(*\) limit \(\omega\) of the \(\omega_{\hat{\mathbb{C}}\setminus\mathcal{T}_{n},\infty}\) is a probability measure supported on the Hausdorff limit \(\mathcal{T}_{\infty}\cup\partial\Omega\). As the harmonic measure of any individual edge tends to zero, the support of the limiting measure \(\omega\) is contained in \(\partial\Omega\). Finally, since \(\partial\Omega\) is uniformly perfect, being a Jordan curve, \(\omega=\omega_{\Omega_{e},\infty}\). Consider the map \(f(z)=\overline{z}^{2}\) acting on the unit circle. It has fixed points at \(1\), \(\omega=e^{2\pi i/3}\) and \(\omega^{2}=e^{4\pi i/3}\), which divide the circle into three equal arcs. We call this partition \(\Pi_{0}\). For \(k=1,2,\dots\), the partition \(\Pi_{k}=f^{-k}(\Pi_{0})\) divides the circle in \(3\cdot 2^{k}\) equal arcs. We now define an analogous sequence of partitions of \(\partial\Omega\). We define the _order_ of an \(\Omega\)-horoball \(\Omega_{p}\) as \[\operatorname{ord}\Omega_{p}=\min_{v\in\partial\Omega_{p}}d_{\mathcal{T}_{ \infty}}(v_{\operatorname{root}},v).\] There are three \(\Omega\)-horoballs of order \(0\), which contain \(v_{\operatorname{root}}\). Inspection shows that for \(k\geq 1\), there are \(3\cdot 2^{k-1}\)\(\Omega\)-horoballs of order \(k\) and thus \[3+3+6+\dots+3\cdot 2^{k-1}=3\cdot 2^{k}\] \(\Omega\)-horoballs of order at most \(k\). For \(k=1,2,\dots,\) we define \(\Lambda_{k}\) as the partition of \(\partial\Omega\) into \(3\cdot 2^{k}\) arcs by the cusps of order \(\leq k\), i.e. the points where \(\Omega\)-horoballs of order \(\leq k\) meet \(\partial\Omega\). Since each arc in \(\Lambda_{k}\) subtends the same number of edges of \(\mathcal{T}_{n}\) up to an additive error of \(O(k)\), the harmonic measures of each arc in \(\Lambda_{k}\) are equal and we have: **Lemma 5.3** (Exterior structure lemma).: _There is a conformal mapping_ \[\psi:(\hat{\mathbb{C}}\setminus\overline{\mathbb{D}},\infty)\to(\hat{\mathbb{ C}}\setminus\overline{\Omega},\infty)\] _which takes \(\Pi_{k}\) to \(\Lambda_{k}\) for any \(k\geq 0\)._ ### Uniqueness of the Hausdorff limit The interior and exterior structure lemmas (Lemmas 5.2 and 5.3) show that any subsequential limit \(\partial\Omega\) realizes the mating of \(z\to\overline{z}^{2}\) and the Markov map \(\rho(z)\) of the reflection group of an ideal triangle (see Section 1.1). The structure lemmas also show that Hausdorff limit of the \(\mathcal{T}_{n}\) is unique. 
Indeed, if \(\mathcal{T}_{\infty}^{\prime}\cup\partial\Omega^{\prime}\) was another subsequential limit of \(\mathcal{T}_{n}\), in addition to \(\mathcal{T}_{\infty}\cup\partial\Omega\), we could conformally map each complementary region in \(\hat{\mathbb{C}}\setminus(\mathcal{T}_{\infty}\cup\partial\Omega)\) to the corresponding complementary region in \(\hat{\mathbb{C}}\setminus(\mathcal{T}_{\infty}^{\prime}\cup\partial\Omega^{ \prime})\). Lemmas 5.2 and 5.3 guarantee that these conformal mappings patch together to form a continuous self-map of the sphere \(h:\hat{\mathbb{C}}\to\hat{\mathbb{C}}\) which is conformal on \(\hat{\mathbb{C}}\setminus(\mathcal{T}_{\infty}\cup\partial\Omega)\). The tree \(\mathcal{T}_{\infty}\) is conformally removable as it is a union of real analytic arcs. Thus, \(h\) is conformal on \(\hat{\mathbb{C}}\setminus\partial\Omega\). By Lemmas 2.3 and 4.8, \(h\) must be a Mobius transformation. ### Convergence of the Shabat polynomials We subdivide each \(\Omega\)-horoball \(\Omega_{p_{j}}\) into triangles \(\triangle(\vec{e}_{i},p_{j})\) by connecting the vertices of \(\mathcal{T}_{\infty}\) on \(\partial\Omega_{p_{j}}\) to \(p_{j}\) by hyperbolic geodesics of \(\Omega_{p_{j}}\). We colour the triangles \(\triangle(\vec{e}_{i},p_{j})\subset\Omega\) black and white, so that adjacent triangles have opposite colours. We conformally map each black triangle \(\triangle(\vec{e}_{i},p)\) onto the upper half-plane \(\mathbb{H}\) so that \(\vec{e}_{i}\to[-1,1]\), \(p_{j}\to\infty\) and each white triangle \(\triangle(\vec{e}_{i},p)\) onto the lower half-plane \(\mathbb{L}\) so that \(\vec{e}_{i}\to[-1,1]\), \(p_{j}\to\infty\). Properties (F1) and (F2) from Section 5.1 guarantee that these conformal maps glue together to form a holomorphic function \(h\) defined on \(\Omega\). By choosing the colouring scheme appropriately, we can ensure that \(h(v_{\mathrm{root}})=1\) rather than \(-1\). From the description of the Shabat polynomials \(p_{n}\) for the true trees \(\mathcal{T}_{n}\) given in Section 3.1, it is not difficult to see that the \(p_{n}\to h\), uniformly on compact subsets of \(\Omega\). Indeed, the Hausdorff convergence of \(\mathcal{T}_{n}\) to \(\mathcal{T}_{\infty}\) implies that the triangles \(\triangle(\vec{e}_{i},\infty)\subset\hat{\mathbb{C}}\) defined in Section 3.1 converge to the corresponding triangle \(\triangle(\vec{e}_{i},p)\subset\Omega_{p}\) in the Caratheodory topology. As \(p_{n}\) and \(h\) are conformal maps from these triangles to the upper or lower half-planes, this tells us that \(p_{n}\to h\) uniformly on compact subsets of any triangle \(\triangle(\vec{e}_{i},p)\subset\Omega\). By considering a pair of triangles that have a common edge, one obtains that \(p_{n}\to h\) uniformly on compact subsets of the union of these two triangles, which shows that \(p_{n}\to h\) uniformly on compact subsets of \(\Omega\) away from the vertices of the trees. Finally, one may use a similar argument to obtain the uniform convergence in a neighbourhood of each vertex \(v\in\mathcal{T}_{\infty}\) by examining the behaviour of the maps \(p_{n}\) and \(h\) on the stars \(\star_{v}\), which were defined in Section 3.2. If \(R\) is the Riemann map from \(\mathbb{D}\to\Omega\), then \(h\circ R\) is a function on the unit disk whose fundamental domain consists of two copies of the fundamental domain for \(\mathrm{PSL}(2,\mathbb{Z})\), see Figure 5.6. ## Appendix A Trivalent true trees are dense Let \(K\) be a connected compact set in the plane. 
In this appendix, we show that one can approximate \(K\) in the Hausdorff topology by finite trivalent true trees, thereby giving another proof of Bishop's theorem. Start with a finite trivalent tree \(\mathcal{T}_{1}^{\prime}\), for instance, with the tree on the left side of Fig. 6 which consists of five edges. At each step, add two edges to each boundary vertex. This gives us a sequence of true trees \(\{\mathcal{T}_{n}^{\prime}\}_{n=1}^{\infty}.\) The arguments presented in this paper show that the finite trees \(\mathcal{T}_{n}^{\prime}\) converge in the Hausdorff topology to an infinite trivalent tree union a Jordan curve: \(\mathcal{T}_{\infty}^{\prime}\cup\partial\Omega^{\prime}\). Our objective is to show that for any compact connected set \(K\subset\mathbb{C}\) and \(\varepsilon>0\) one can choose the starting tree \({\cal T}^{\prime}_{1}\) appropriately so that the Hausdorff distance \[d_{\rm H}(L\circ{\cal T}^{\prime}_{n},K)<\varepsilon,\] for some linear mapping \(L(z)=az+b\) in \({\rm Aut}\,{\mathbb{C}}\). (The linear mapping compensates for the fact that the conformal map to \({\mathbb{C}}\setminus{\cal T}^{\prime}_{n}\) is hydrodynamically normalized.) We now make two reductions. _Reduction 1._ It is enough to show that for any Jordan curve \(\gamma\), one can choose the starting tree \({\cal T}^{\prime}_{1}\) so that \(d_{\rm H}(L\circ\partial\Omega^{\prime},\gamma)<\varepsilon\). Indeed, one can approximate any compact connected set \(K\subset{\mathbb{C}}\) in the Hausdorff topology by Jordan curves \(\gamma_{k}\) that are \((1/k)\)-thin, i.e. any point in the domain \(\Gamma_{k}\) enclosed by \(\gamma_{k}\) lies within \(1/k\) of \(\gamma_{k}\). This ensures that the domains \(\Gamma_{k}\) converge in the Hausdorff topology to \(K\). See Fig. 7 for examples. Therefore, if \({\cal T}^{\prime}_{k,l}\) is a sequence of infinite trees such that \(d_{H}(L_{k,l}\circ\partial{\Omega}^{\prime}_{k,l},\gamma_{k})\to 0\), then \(d_{H}(L_{k,l}\circ{\cal T}^{\prime}_{k,l},\gamma_{k})\leq 2/k\) for all sufficiently large \(l\geq l_{0}(k)\). A diagonal argument produces a sequence of infinite trees which converges to \(K\) after linear rescaling. _Reduction 2._ We may assume that \(K=\partial\tilde{\Omega}\) is the image of the boundary of the developed deltoid \(\partial\Omega\) under a quasiconformal map \(f:{\mathbb{C}}\to{\mathbb{C}}\), which is conformal on \(\Omega\). Figure 5: The fundamental domain for the function \(h\), drawn in the upper half-plane instead of the disk, is twice as big as the fundamental domain for \({\mathbb{H}}/\,{\rm PSL}(2,{\mathbb{Z}})\). The blue part of the fundamental domain is sent to the upper half-plane \({\mathbb{H}}\), while the orange part is sent to the lower half-plane \({\mathbb{L}}\). It is well known that quasiconformal images of the unit circle (quasicircles) are dense in the collection of Jordan curves. It turns out that quasiconformal images of any fixed Jordan curve are also dense. An anti-symmetrization argument from [4, 9] allows one to choose \(f\) to be conformal on \(\Omega\). See Lemma A.2 below. ### Trivalent tree weldings Recall that any non-root vertex in the trees \(\mathcal{T}_{n}\) and \(\mathcal{T}_{\infty}\) can be labeled by a digit \(1,2,3\) followed by a sequence of left and right turns. Figure 6: Unbalanced truncations of the infinite trivalent tree. Figure 7: Approximating the unit circle and the unit disk in the Hausdorff topology by thin Jordan domains. In order to label the vertices of
\(\mathcal{T}^{\prime}_{n}\) and \(\mathcal{T}^{\prime}_{\infty}\) in a similar fashion, we designate a vertex in \(\mathcal{T}^{\prime}_{1}\) as the root vertex and select one of the adjacent vertices as the vertex labeled \(1\). Let \(\varphi:(\mathbb{D},0,1)\to(\Omega,v_{\mathrm{root}},p_{1})\) and \(\psi:(\mathbb{D}_{e},\infty,1)\to(\Omega_{e},\infty,p_{1})\) be conformal mappings to the interior and exterior of the developed deltoid respectively, where \[p_{1}\,=\,\lim_{k\to\infty}v_{1R^{k}}\,=\,\lim_{k\to\infty}v_{3L^{k}}\] is one of the three cusps of the developed deltoid of order \(0\). The composition \(h=\psi^{-1}\circ\varphi:\partial\mathbb{D}\to\partial\mathbb{D}\) defines a homeomorphism of the unit circle, which is called the _welding homeomorphism_ of \((\partial\Omega,v_{\mathrm{root}},p_{1},\infty)\). Form the analogous mappings \(\varphi^{\prime},\psi^{\prime}\) and \(h^{\prime}\) for \((\partial\Omega^{\prime},v^{\prime}_{\mathrm{root}},p^{\prime}_{1},\infty)\). Inspection shows that the weldings \(h\) and \(h^{\prime}\) are related by a piecewise linear homeomorphism \(F\) of the unit circle: \(h^{\prime}=F\circ h\). For instance, in the example depicted in Fig. 6, to describe \(F\), we divide the unit circle into three equal arcs and map these onto arcs of lengths \(\pi,\pi/2,\pi/2\) respectively, which indicates the fact that one third of the tree has the same number of edges as the other two thirds. Let \(\mathrm{TPL}_{1}\) denote the collection of piecewise linear homeomorphisms of the unit circle that arise in this way and \(\mathscr{T}\mathscr{W}=\{F\circ h:F\in\mathrm{TPL}_{1}\}\) be the collection of all trivalent tree weldings. It is not difficult to see that for any quasisymmetric homeomorphism of the unit circle \(F\in\mathrm{QS}_{1}\) which fixes \(1\in\partial\mathbb{D}\), there is a sequence of homeomorphisms \(F_{k}\in\mathrm{TPL}_{1}\) whose quasisymmetry constants are uniformly bounded such that \[F_{k}^{-1}\circ F:\partial\mathbb{D}\to\partial\mathbb{D}\] tend uniformly to the identity. (One may choose the homeomorphisms \(F_{k}\) so that their quasisymmetry constants are comparable to the quasisymmetry constant of \(F\).) ### An overview of the proof The curve \(K=\partial\tilde{\Omega}\) divides the Riemann sphere \(\hat{\mathbb{C}}\) into an interior domain \(\tilde{\Omega}\) and an exterior domain \(\tilde{\Omega}_{e}\). Composing \(f\) with conformal maps \(\psi:(\mathbb{D}_{e},\infty,1)\to(\Omega_{e},\infty,p_{1})\) and \(\tilde{\psi}^{-1}:(\tilde{\Omega}_{e},\infty,f(p_{1}))\to(\mathbb{D}_{e}, \infty,1)\), we get a quasiconformal self-mapping of the exterior unit disk \[F=\tilde{\psi}^{-1}\circ f\circ\psi.\] The following lemma (whose proof will be presented in Section A.4) allows us to approximate \(F\) by quasiconformal self-maps \(F_{k}\) of \(\mathbb{D}_{e}\) with \(F_{k}|_{\partial\mathbb{D}}\in\mathrm{TPL}_{1}\): **Lemma A.1**.: _Let \(F\) be a quasiconformal self-map of the exterior of the unit disk which fixes \(1\) and \(\infty\). We can approximate \(F\) by quasiconformal maps \(F_{k}:(\mathbb{D}_{e},1,\infty)\to(\mathbb{D}_{e},1,\infty)\) so that:_ 1. \(F_{k}=F\) _on_ \(\{z>1+1/k\}\)_._ 2. _The dilatations_ \(\|\mu_{F_{k}}\|_{\infty}<c<1\) _are uniformly bounded._ 3. 
_Restricted to the unit circle,_ \(F_{k}\) _is one of the piecewise linear homeomorphisms described above that relate the welding of the "genuine" developed deltoid_ \(\Omega\) _and a "generalized" developed deltoid_ \(\Omega^{\prime}_{k}\)_._ By Properties 1 and 2, we can select quasiconformal maps \(f_{k}:\mathbb{C}\to\mathbb{C}\) which tend to \(f\), are conformal on \(\Omega\) and have dilatations \(\psi_{*}\mu_{F_{k}}\) on \(\Omega_{e}\). (Since \(\partial\Omega\) has area zero by Lemma 4.8, the quasiconformal map \(f_{k}\) is uniquely specified by its dilatation off \(\partial\Omega\) up to post-composition with a Mobius transformation.) We set \(\partial\tilde{\Omega}_{k}=f_{k}(\partial\Omega)\). Property 3 tells us that \(\big{(}\partial\tilde{\Omega}_{k},f_{k}(v_{\mathrm{root}}),f_{k}(p_{1}),\infty \big{)}\) has welding homeomorphism \(h_{k}=F_{k}\circ h\in\mathscr{T}\mathscr{W}\). Let \(\Omega^{\prime}_{k}\) be the generalized developed deltoid with welding \(h_{k}\in\mathscr{T}\mathscr{W}\). We now use partial conformal removability techniques to show that \(\tilde{\Omega}_{k}=L(\Omega^{\prime}_{k})\) for some linear map \(L\in\mathrm{Aut}\,\mathbb{C}\). Since \(\partial\tilde{\Omega}_{k}\) and \(\partial\Omega^{\prime}_{k}\) realize the same welding homeomorphism, the conformal map \[\phi:\big{(}\hat{\mathbb{C}}\setminus\!\partial\tilde{\Omega}_{k},f_{k}(v_{ \mathrm{root}}),f_{k}(p_{1}),\infty\big{)}\to\big{(}\hat{\mathbb{C}}\setminus \!\partial\Omega^{\prime}_{k},v^{\prime}_{\mathrm{root};\,k},p^{\prime}_{1; \,k},\infty\big{)}\] extends continuously to a homeomorphism on the Riemann sphere. We claim that \(\phi\) is conformal on the whole Riemann sphere (and therefore, a linear mapping since it fixes infinity). As in Section 4.4, for each non-root vertex \(v\) of \(\mathcal{T}^{\prime}_{\infty}\cong\mathcal{T}_{\infty}\), one can define the shadow \(s^{\prime}_{v}\subset\partial\Omega^{\prime}\) as the shorter arc of \(\partial\Omega^{\prime}\) between \(vLRL^{\infty}\) and \(vRLR^{\infty}\). Then, each point of \(\partial\Omega^{\prime}\) which is not a cusp lies in infinitely many shadows and \[\sum_{v\neq v_{\mathrm{root}}}\mathrm{diam}^{2}\,s^{\prime}_{v}<\infty.\] (A.1) We can also define a collection of shadows \(\{\tilde{s}_{v}\}\) for \(\tilde{\Omega}_{k}\) by taking the images of the shadows for the developed deltoid \(\Omega\) under the quasiconformal map \(f_{k}\). Since in each case, the shadows are defined in terms of the combinatorics of the infinite trivalent tree, \(\phi\) takes \(\tilde{s}_{v}\) onto \(s^{\prime}_{v}\) for each vertex \(v\neq v_{\rm root}\). Recall that in Section 4.4, to each shadow \(s_{v}\subset\partial\Omega\), \(v\neq v_{\rm root}\), we associated a round set \(B_{v}\) with \[\operatorname{Area}B_{v}\asymp\operatorname{diam}^{2}B_{v},\qquad\operatorname {diam}B_{v}\asymp\operatorname{diam}s_{v},\qquad\operatorname{dist}(B_{v},s_{ v})\lesssim\operatorname{diam}s_{v},\] so that \(\{B_{v}\}\) are disjoint and contained in a bounded set. From the quasisymmetry of \(f_{k}\), we deduce that \[\sum_{v\neq v_{\rm root}}\operatorname{diam}^{2}\tilde{s}_{v}<\infty.\] (A.2) In view of Lemma 2.3, the inequalities (A.1) and (A.2) show that \(\phi\) is a Mobius transformation. This completes the proof of Bishop's theorem, modulo the technical Lemmas A.1 and A.2 which will be proved below. 
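To spell out the deduction of (A.2), here is a sketch relying only on the stated properties of the round sets \(\{B_{v}\}\) and on standard quasiconformal distortion bounds: quasisymmetric maps preserve the relative size of nearby sets of comparable diameter, and quasiconformal images of round sets remain approximately round. With these inputs,

\[\sum_{v\neq v_{\rm root}}\operatorname{diam}^{2}\tilde{s}_{v}\;=\;\sum_{v\neq v_{\rm root}}\operatorname{diam}^{2}f_{k}(s_{v})\;\lesssim\;\sum_{v\neq v_{\rm root}}\operatorname{diam}^{2}f_{k}(B_{v})\;\asymp\;\sum_{v\neq v_{\rm root}}\operatorname{Area}f_{k}(B_{v})\;<\;\infty,\]

where the last sum is finite because the sets \(f_{k}(B_{v})\) are disjoint and contained in a bounded region.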
### Quasiconformal images of Jordan curves In the following lemma, we explain how to approximate Jordan curves by quasiconformal images of a given Jordan curve: **Lemma A.2**.: _Let \(\Omega\subset\mathbb{C}\) be a bounded Jordan domain. For any Jordan curve \(\gamma\) and \(\varepsilon>0\), one can find a quasiconformal map \(f:\mathbb{C}\to\mathbb{C}\), which is conformal on \(\Omega\) and takes \(\partial\Omega\) onto a Jordan curve \(\partial\tilde{\Omega}\) for which the Hausdorff distance \(d_{H}(\partial\tilde{\Omega},\gamma)<\varepsilon\)._ Before proving the above lemma, we first make a preliminary observation: **Lemma A.3**.: _Let \(\gamma\subset\mathbb{C}\) be a smooth Jordan curve. For any \(\delta,\varepsilon>0\), one can express \(\gamma\) as the image of the unit circle under a quasiconformal map \(f:\mathbb{C}\to\mathbb{C}\) such that \(f(A(0;1-\delta,1+\delta))\) contains an \(\varepsilon\)-neighbourhood of \(\gamma\)._ Proof.: It is not difficult to express \(\gamma\) as the image of the unit circle under a smooth quasiconformal map \(f_{1}:\mathbb{C}\to\mathbb{C}\). Pick \(\rho\in(0,1)\) so that \(f_{1}(A(0;\rho,1/\rho))\) contains an \(\varepsilon\)-neighbourhood of \(\gamma\). The desired quasiconformal map can be obtained by a radial reparametrization of \(f_{1}\): \[f(re^{i\theta}):=f_{1}(\phi(r)e^{i\theta}),\] where \(\phi:[0,\infty)\to[0,\infty)\) is a homeomorphism with \(\phi(1)=1\) which is the identity outside a slightly larger interval and takes \([1-\delta,1+\delta]\) onto \([\rho,1/\rho]\), so that \(f(A(0;1-\delta,1+\delta))=f_{1}(A(0;\rho,1/\rho))\). Proof of Lemma A.2.: Since smooth Jordan curves are dense in the set of all Jordan curves, one can find two smooth quasiconformal maps \(f_{1},f_{2}:\mathbb{C}\to\mathbb{C}\) such that \(d_{H}(f_{1}(\partial\mathbb{D}),\partial\Omega)<\varepsilon/2\) and \(d_{H}(f_{2}(\partial\mathbb{D}),\gamma)<\varepsilon/2\). Anti-symmetrizing as in [4, 9], one may assume that \(f_{1},f_{2}\) are conformal on the unit disk. The idea is to take \(f=f_{2}\circ f_{1}^{-1}\) and \(\partial\tilde{\Omega}=f_{2}\circ f_{1}^{-1}(\partial\mathbb{D})\). To make this argument work, one has to be slightly careful when choosing the maps \(f_{1}\) and \(f_{2}\). Here is the precise construction: 1. We first choose \(f_{2}\) so that \(d_{H}(f_{2}(\partial\mathbb{D}),\gamma)<\varepsilon/2\). 2. We then choose \(\delta>0\) sufficiently small so that \(f_{2}(A(0;1-\delta,1+\delta))\) is contained in an \(\varepsilon/2\)-neighbourhood of \(f_{2}(\partial\mathbb{D})\), and thus in an \(\varepsilon\)-neighbourhood of \(\gamma\). 3. Finally, we use Lemma A.3 to select a quasiconformal map \(f_{1}\) so that \[f_{1}^{-1}(\partial\Omega)\subset A(0;1-\delta,1+\delta).\] It is clear from the construction that the composition \(f=f_{2}\circ f_{1}^{-1}\) maps \(\partial\Omega\) into the \(\varepsilon\)-neighbourhood of \(\gamma\). ### Piecewise-linear approximations In the following lemma, we explain how to extend quasisymmetric maps from the boundary of a horizontal strip to the interior. This is a special case of a result of Vaisala, see [10]. **Lemma A.4**.: _Let \(\mathcal{S}=\{(x,y)\in\mathbb{R}^{2}:0<y<1\}\) be a horizontal strip of width 1. Suppose \(\phi_{0},\phi_{1}:\mathbb{R}\to\mathbb{R}\) are \(k\)-quasisymmetric maps which move points a bounded distance, i.e. \(|\phi_{i}(x)-x|<C\), \(i=0,1\). 
There exists a \(k_{1}\)-quasiconformal map \(\Phi:\mathcal{S}\to\mathcal{S}\) which takes \((x,0)\to(\phi_{0}(x),0)\) and \((x,1)\to(\phi_{1}(x),1)\), with \(k_{1}\) depending only on \(k\) and \(C\)._ Proof.: We begin by defining \(\Phi\) on \(\partial\mathcal{S}\) by \((x,0)\to(\phi_{0}(x),0)\) and \((x,1)\to(\phi_{1}(x),1)\). We can partition \(\mathcal{S}\) into a union of squares \(\{S_{n}\}\) using the vertical segments \(\{\ell_{n}\}_{n\in\mathbb{Z}}\) which connect \((n,0)\) and \((n,1)\). Similarly, we can partition \(\mathcal{S}\) into a union of conformal rectangles \(\{\tilde{S}_{n}\}\) using the line segments \(\{\tilde{\ell}_{n}\}_{n\in\mathbb{Z}}\) which connect \((\phi_{0}(n),0)\) to \((\phi_{1}(n),1)\). We first extend \(\Phi\) to the vertical segments \(\{\ell_{n}\}_{n\in\mathbb{Z}}\), so that it is linear on each segment \(\ell_{n}\) and takes \(\ell_{n}\) to \(\tilde{\ell}_{n}\). For each \(n\in\mathbb{Z}\), we extend \(\Phi\) from \(\partial S_{n}\to\partial\tilde{S}_{n}\) to \(\Phi:S_{n}\to\tilde{S}_{n}\) using the Beurling-Ahlfors extension. (The assumption on the maps \(\phi_{0}\) and \(\phi_{1}\) guarantees that \(\partial\tilde{S}_{n}\) are uniform quasicircles.) We now show how to approximate quasiconformal self-maps of \(\mathbb{D}_{e}\) by ones that are piecewise-linear on the unit circle. In the proof below, we will use a variant of the above lemma for the annulus \(A(0;1,1+1/k)\): Proof of Lemma A.1.: The idea is to define \(F_{k}=F\circ\Phi_{k}\) by composing \(F\) with a quasiconformal homeomorphism \(\Phi_{k}:\mathbb{D}_{e}\to\mathbb{D}_{e}\) which is the identity on \(|z|>1+1/k\). Let \(\Lambda_{k}\in\mathrm{TPL}_{1}\) be a piecewise linear map whose quasisymmetric constant is comparable to that of \(F\) such that \(\phi_{k}=F^{-1}\circ\Lambda_{k}\) moves points on the unit circle by \(O(1/k)\). By Lemma A.4, \(\phi_{k}\) admits a quasiconformal extension \(\Phi_{k}\) to the exterior unit disk which is the identity on \(\{|z|>1+1/k\}\). From the construction, it is clear that \(F_{k}=F\circ\Phi_{k}\) satisfies Properties 1-3 as desired. ## Appendix B True tree approximation of cauliflower In this appendix, we describe a sequence of true trees, whose limit is the cauliflower, the Julia set of \(f(z)=z^{2}+1/4\). Since the arguments are similar to the ones for the finite truncations of the infinite trivalent tree, we only give a brief sketch of the proofs, with an emphasis on the differences. Let \(T_{1}\) be a planar tree which consists of a root vertex \(v_{\mathrm{root}}\) and four edges \[\overline{v_{\mathrm{root}}v_{\uparrow}},\quad\overline{v_{\mathrm{root}}v_ {\leftarrow}},\quad\overline{v_{\mathrm{root}}v_{\downarrow}},\quad\overline{ v_{\mathrm{root}}v_{\rightarrow}},\] labeled counter-clockwise. We colour the edges \(\overline{v_{\mathrm{root}}v_{\uparrow}},\;\overline{v_{\mathrm{root}}v_{ \downarrow}}\) blue and \(\overline{v_{\mathrm{root}}v_{\leftarrow}},\;\overline{v_{\mathrm{root}}v_{ \rightarrow}}\) red. To form \(T_{n+1}\) from \(T_{n}\), we attach additional edges at each leaf vertex: * If a leaf edge is red, we attach another red edge at the leaf vertex. * If a leaf edge is blue, we attach three edges, coloured blue-red-blue in counter-clockwise order. The trees \(T_{1}\) and \(T_{2}\) are depicted in Fig. 8. From this description, it is easy to see that \(T_{n}\) is made out of \[4+8+16+\cdots+2^{n+1}=2^{n+2}-4\] edges, with the same number of red and blue edges. Let \(\mathcal{T}_{n}\) be the true tree representative of \(T_{n}\). 
Note that the colouring is only used to describe the combinatorics of \(T_{n}\), it plays no role in how the true tree \(\mathcal{T}_{n}\) is constructed from \(T_{n}\). **Theorem B.1**.: _The trees \(\mathcal{T}_{n}\) converge in the Hausdorff topology to an infinite tree union a Jordan curve \(\mathcal{T}\cup\partial\Omega\). The Jordan curve \(\partial\Omega\) is the Julia set of \(z^{2}+1/4\), while the set of vertices of \(\mathcal{T}\) is the grand orbit of the critical point 0 of \(f(z)=z^{2}+1/4\). Let \(\psi:\Omega\to\mathbb{C}\) be the Fatou coordinate at the parabolic fixed point \(1/2\in\mathcal{J}(f)\), with_ \[\psi(f(z))=\psi(z)+1,\qquad\psi(0)=0.\] _The Shabat polynomials \(p_{n}(z)\) of \(\mathcal{T}_{n}\), with \(p_{n}(0)=1\), converge uniformly on compact subsets of \(\Omega\) to \(\cos(\pi\cdot\psi(z))\)._ Figure 8: A sequence of true trees given by an inductive construction. ### Topology of a subsequential limit We first show that any Hausdorff limit of the trees \(\mathcal{T}_{n}\) is ambiently homeomorphic to the set depicted on the right side of Fig. 9. Since each vertex of \(\mathcal{T}_{n}\) has at most 4 neighbours and the sums \[S_{n}=\sum_{e\in\mathcal{T}_{n}}s(e)^{2}\] (B.1) are uniformly bounded above, we are in the setting of Theorem 3.4. For any blue edge \(e_{0}\), the numbers \(s(e_{0})\) are uniformly bounded below, so the blue edges do not shrink. By Lemma 3.2, the red edges also do not shrink. Therefore, any subsequential Hausdorff limit of \(\mathcal{T}_{n}\) contains an infinite tree \(\mathcal{T}_{\infty}\) whose edges are real-analytic arcs. Let \(\mathcal{B}_{n}\subset\mathcal{T}_{n}\) be the subtree consisting of blue edges. Arguing as in Section 4.1, one can show that the subsequential limit of the subtrees \(\mathcal{B}_{n}\) is an infinite tree union a Jordan curve \(\partial\Omega\). Perhaps the new feature of the trees \(\mathcal{T}_{n}\) are the red edges, so we discuss their behaviour in more detail. The red edges are naturally grouped into _twigs_. There are two twigs emanating from the root vertex, which we denote \(\mathrm{tw}_{v_{\mathrm{root}},\leftarrow}\) and \(\mathrm{tw}_{v_{\mathrm{root}},\rightarrow}\), while a single twig \(\mathrm{tw}_{v}\) emanates from each degree 4 vertex \(v\), other than the root Figure 9: A sequence of true trees which approximates \(\mathcal{J}(z^{2}+1/4)\). vertex, which we denote by \(\operatorname{tw}_{v}\). The following lemma says that each twig connects a degree 4 vertex in \(\mathcal{T}_{n}\) to a cusp in \(\partial\Omega\), where it meets the two enclosing blue branches, without protruding outside of \(\Omega\) : **Lemma B.2**.: _Let \(\operatorname{tw}_{v}^{(n)}\) be a twig in \(\mathcal{T}_{n}\)._ (i) _Any Hausdorff limit of \(\operatorname{tw}_{v}^{(n)}\) is contained in \(\overline{\Omega}\)._ (ii) _Any Hausdorff limit of \(\operatorname{tw}_{v}^{(n)}\) connects \(v\in\mathcal{T}_{\infty}\) to the cusp \(p_{v}\in\partial\Omega\)._ Sketch of proof.: We explain the argument for the twig \(\operatorname{tw}_{v_{\text{root}},\rightarrow}\) as the general case is similar. We pass to a subsequence so that \(\mathcal{B}_{n}\) and \(\operatorname{tw}_{v_{\text{root}},\rightarrow}\) converge in the Hausdorff topology as \(n\rightarrow\infty\). 
We denote the associated cusp by \[p_{v_{\text{root}},\rightarrow}=\lim_{m\rightarrow\infty}(\uparrow R^{m-1}) =\lim_{m\rightarrow\infty}(\downarrow L^{m-1})\in\partial\Omega.\] (i) For \(1\leq m\leq n-1\), we construct hyperbolic geodesics \(\gamma_{m}^{(n)}\subset\hat{\mathbb{C}}\setminus\mathcal{T}_{n}\) connecting \(\uparrow R^{m-1}\) and \(\downarrow L^{m-1}\) as in Figure 10. By construction, \(\operatorname{tw}_{v_{\text{root}},\rightarrow}^{(n)}\subset\mathcal{T}_{n}\) is contained in the subdomain of \(\hat{\mathbb{C}}\setminus\mathcal{B}_{n}\) enclosed by \(\gamma_{m}^{(n)}\). For any \(m\geq 1\), the Hausdorff limit of the geodesics \(\gamma_{m}^{(n)}\) as \(n\rightarrow\infty\) is composed of three pieces: two pieces \(\gamma_{m,1},\gamma_{m,3}\) are hyperbolic geodesics in the tiles that make Figure 10: The twigs are enclosed by the hyperbolic geodesics \(\gamma_{m}^{(n)}\). up \(\Omega\), while the middle piece is a hyperbolic geodesic \(\gamma_{m,2}\) in \(\hat{\mathbb{C}}\setminus\overline{\Omega}\) which connects two cusps \(p_{-m},p_{m}\in\partial\Omega\). Therefore, the Hausdorff limit of the twigs \(\operatorname{tw}_{v_{\operatorname{root}},\rightarrow}^{(n)}\) is contained in \(\overline{\Omega}\) union the subdomain of \(\hat{\mathbb{C}}\setminus\overline{\Omega}\) enclosed by \(\gamma_{m,2}\). Since the points \(p_{-m},p_{m}\) tend to the cusp \(p_{v_{\operatorname{root}},\rightarrow}\in\partial\Omega\) as \(m\rightarrow\infty\), the subdomains of \(\hat{\mathbb{C}}\setminus\overline{\Omega}\) enclosed by \(\gamma_{m,2}\) shrink down to \(p_{v_{\operatorname{root}},\rightarrow}\). It follows that the Hausdorff limit of the twigs \(\operatorname{tw}_{v_{\operatorname{root}},\rightarrow}^{(n)}\) is contained in \(\overline{\Omega}\) as desired. (ii) Fig. 10 depicts a decreasing sequence of simply-connected domains \[W_{1}^{(n)}\,\supset\,W_{2}^{(n)}\,\supset\,\ldots\,\supset\,W_{m-1}^{(n)},\] which contain the set \(\mathcal{T}_{n}(v_{\rightarrow^{m}})\cup\big{\{}\uparrow R^{m-1},\downarrow L ^{m-1}\big{\}}\). A moduli estimate similar to the one in Section 4.1 shows that \[\operatorname{Mod}\bigl{(}W_{j}^{(n)}\setminus W_{j+1}^{(n)}\bigr{)}\gtrsim 1 /j,\qquad j=1,2,\ldots,m-2.\] By the parallel rule, \[\operatorname{Mod}\bigl{(}W_{1}^{(n)}\setminus W_{m-1}^{(n)}\bigr{)}\, \gtrsim\,1+1/2+\cdots+1/(m-2)\,\asymp\,\log m.\] Since the initial domain \(W_{1}^{(n)}\) is contained in a ball \(B(0,R_{0})\) where \(R_{0}>0\) is a universal constant, Lemma 2.1 implies that the diameter of \(\mathcal{T}_{n}(v_{\rightarrow^{m}})\cup\big{\{}\uparrow R^{m-1},\downarrow L ^{m-1}\big{\}}\) is small, which means that the red twig and the two blue branches come together at \(p_{v_{\operatorname{root}},\rightarrow}\in\partial\Omega\). ### Tile decomposition As shown on the right side of Fig. 9, the repeated pre-images of the line segment \([-1/2,1/2]\) separate \(\Omega\), the interior of the filled Julia set of \(f(z)=z^{2}+1/4\), into a countable collection of _tiles_. The union of these curves forms a tree whose vertices are points in the grand orbit of the critical point \(0\). We designate the critical point \(0\) as the root vertex. Note that \([0,1/2]\) is not a single edge but the union of countably many edges: \[[0,1/2]\,=\,[0,f(0)]\,\cup\,[f(0),f^{\circ 2}(0)]\,\cup\,[f^{\circ 2}(0),f^{ \circ 3}(0)]\,\cup\,\ldots\] We label the tiles as \(\Omega_{p,L}\) or \(\Omega_{p,R}\), where \(p\) ranges over the cusps in \(\partial\Omega\). 
A _bi-tile_\(\Omega_{p}\) is a horoball-like region formed by taking the interior of the closure of \(\Omega_{p,L}\cup\Omega_{p,R}\). Thus, \(\Omega\) is organized into a union of bi-tiles, as well as a union of tiles. Under iteration, any tile is eventually mapped onto \(\Omega_{1/2,L}\) or \(\Omega_{1/2,R}\). The tiles \(\Omega_{1/2,L}\) or \(\Omega_{1/2,R}\) are invariant under \(f\), and \(f\) restricts as a conformal automorphism on \(\Omega_{1/2,L}\) and \(\Omega_{1/2,R}\). We record the following two properties of \(\Omega\), which come from the dynamics of \(f\) and the symmetry of \(\Omega\) with respect to the real axis: * If \(\Omega_{p,X}\) is a tile, then each edge in \(\partial\Omega_{p,X}\) has the same relative harmonic measure as viewed from \(p\), i.e. if \(e_{1},e_{2}\subset\Omega_{p,X}\), then \[\lim_{z\to p,\,z\in\Omega_{p,X}}\frac{\omega_{z}(e_{1})}{\omega_{z}(e_{2})}=1.\] * If \(e\) is an edge that belongs to two neighbouring tiles \(\Omega_{p,X}\) and \(\Omega_{q,Y}\), then the relative harmonic measures are the same from both sides. This means that for any measurable subset \(E\subset e\), \[\lim_{z\to p,\,z\in\Omega_{p,X}}\frac{\omega_{z}(E)}{\omega_{z}(e)}=\lim_{z\to q,\,z\in\Omega_{q,Y}}\frac{\omega_{z}(E)}{\omega_{z}(e)}.\] Arguing as in Section 5, one can show that the true trees \(\mathcal{T}_{n}\) converge to an infinite tree union the Julia set of \(z^{2}+1/4\). ### Limit of Shabat polynomials We write \(X\) for one of the symbols \(L,R\). We may further decompose each tile \(\Omega_{p,X}\subset\Omega\) into countably many triangles \(\triangle(e,p,X)\) by connecting the vertices in \(\partial\Omega_{p,X}\) to the cusp \(p\in\partial\Omega_{p,X}\) by hyperbolic geodesics in \(\Omega_{p,X}\). We colour the triangles \(\triangle(e,p,X)\subset\Omega\) black and white, so that \[\triangle\,=\,\triangle\big{(}\overline{v_{\mathrm{root}}f(v_{\mathrm{root}} )},1/2,R\big{)}\,\subset\,\Omega_{1/2,R}\,=\,\Omega_{1/2}\cap\mathbb{H}\] is white and adjacent triangles have different colours. Reflecting \(\triangle\) in the real line, we get a triangle \(\overline{\triangle}\subset\Omega_{1/2,L}=\Omega_{1/2}\cap\mathbb{L}\). The union \(\triangle\cup\overline{v_{\mathrm{root}}f(v_{\mathrm{root}})}\cup\overline{\triangle}\) constitutes a fundamental domain for the action of \(f\) on \(\Omega\). Mapping properties of the cosine.To describe the mapping properties of \(\kappa(z)=\cos(\pi z)\), we draw the lines \(\{y=0\}\) and \(\{x=n:n\in\mathbb{Z}\}\) in the complex plane. These lines divide \(\mathbb{C}\) into vertical half-strips \(\{\mathcal{S}_{n,\pm}\}\) of width 1. These may be coloured black and white so that adjacent half-strips have opposite colours, with \[\mathcal{S}_{0,+}=\{z\in\mathbb{C}\,:\,0<\operatorname{Re}z<1,\,0< \operatorname{Im}z<\infty\}\] being white. The map \(\kappa\) takes each black half-strip conformally onto the upper half-plane and each white half-strip conformally onto the lower half-plane. The horizontal side of each \(\mathcal{S}_{n,\pm}\) is mapped to the interval \([-1,1]\), while the vertical sides are mapped to the intervals \((-\infty,-1]\) and \([1,\infty)\). Mapping properties of the Fatou coordinate.The Fatou coordinate at the parabolic fixed point \(1/2\in\mathcal{J}(z^{2}+1/4)\) provides a conformal bijection between the quotient cylinder \(\Omega/(z\sim f(z))\) and \(\mathbb{C}/\mathbb{Z}\), which is uniquely determined up to adding a constant in \(\mathbb{C}/\mathbb{Z}\). 
Recall from the statement of Theorem B.1 that we use the normalization \(\psi(v_{\mathrm{root}})=0\). **Lemma B.3**.: _The Fatou coordinate \(\psi\) is a holomorphic function on \(\Omega\) which maps triangles \(\triangle(e,p,X)\) conformally onto half-strips of the same colour._ Proof.: Define \(\psi_{1}:\triangle\cup\overline{v_{\mathrm{root}}f(v_{\mathrm{root}})}\cup \overline{\triangle}\to\mathbb{C}\) to be the conformal map which takes \(\triangle\) to \(\mathcal{S}_{0,+}\) and \(\overline{\triangle}\) to \(\mathcal{S}_{0,-}\) with \[v_{\mathrm{root}}\to 0,\quad f(v_{\mathrm{root}})\to 1,\quad 1/2\to\infty.\] The map \(\psi_{1}\) extends to \(\Omega\) using the functional equation \(\psi_{1}(f(z))=\psi_{1}(z)+1\). Since \(\psi_{1}\) possesses the properties that uniquely determine \(\psi\), the two functions must be equal. Composing the above mappings, we get: **Corollary B.4**.: _The map \(z\to\cos(\pi\psi(z))\) takes each triangle \(\triangle(e,p,X)\subset\Omega\) conformally onto the upper half-plane or the lower half-plane, with black triangles mapping onto the upper half-plane \(\mathbb{H}\) and white triangles mapping onto the lower half-plane \(\mathbb{L}\). Furthermore, \(\cos(\pi\psi(z))\) takes edges to \([-1,1]\), cusps to infinity and \(v_{\mathrm{root}}\) to 1._ Considerations similar to the ones in Section 5.6 show that the limit \(h(z)\) of the Shabat polynomials \(p_{n}(z)\) has the same description as the function \(\cos(\pi\psi(z))\) described in Corollary B.4. This completes the proof of Theorem B.1.
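As an elementary consistency check of the colouring conventions used above, note that the white half-strip \(\mathcal{S}_{0,+}\) is indeed carried into the lower half-plane by \(\kappa(z)=\cos(\pi z)\): for \(z=x+iy\) with \(0<x<1\) and \(y>0\),

\[\operatorname{Im}\cos(\pi(x+iy))=-\sin(\pi x)\sinh(\pi y)<0,\]

since both \(\sin(\pi x)\) and \(\sinh(\pi y)\) are positive there. This matches the statement of Corollary B.4.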
Numerical experiments by Werness, Lee, and the third author suggested that the dessins d'enfants associated with the infinite trivalent tree approximate the deltoid introduced by Lee, Lyubich, Makarov, and Mukherjee. In this paper, we confirm this conjecture. As an application of these techniques, we also give a new proof of Bishop's theorem that true trees are dense in the plane. Furthermore, we exhibit a sequence of trees which converges to the Julia set of $z\mapsto z^2+1/4$.
2309.12090
Multi-Task Cooperative Learning via Searching for Flat Minima
Multi-task learning (MTL) has shown great potential in medical image analysis, improving the generalizability of the learned features and the performance in individual tasks. However, most of the work on MTL focuses on either architecture design or gradient manipulation, while in both scenarios, features are learned in a competitive manner. In this work, we propose to formulate MTL as a multi/bi-level optimization problem, and therefore force features to learn from each task in a cooperative approach. Specifically, we update the sub-model for each task alternatively taking advantage of the learned sub-models of the other tasks. To alleviate the negative transfer problem during the optimization, we search for flat minima for the current objective function with regard to features from other tasks. To demonstrate the effectiveness of the proposed approach, we validate our method on three publicly available datasets. The proposed method shows the advantage of cooperative learning, and yields promising results when compared with the state-of-the-art MTL approaches. The code will be available online.
Fuping Wu, Le Zhang, Yang Sun, Yuanhan Mo, Thomas Nichols, Bartlomiej W. Papiez
2023-09-21T14:00:11
http://arxiv.org/abs/2309.12090v1
# Multi-Task Cooperative Learning via Searching for Flat Minima

###### Abstract

Multi-task learning (MTL) has shown great potential in medical image analysis, improving the generalizability of the learned features and the performance in individual tasks. However, most of the work on MTL focuses on either architecture design or gradient manipulation, while in both scenarios, features are learned in a competitive manner. In this work, we propose to formulate MTL as a multi/bi-level optimization problem, and therefore force features to learn from each task in a cooperative approach. Specifically, we update the sub-model for each task alternatively taking advantage of the learned sub-models of the other tasks. To alleviate the negative transfer problem during the optimization, we search for flat minima for the current objective function with regard to features from other tasks. To demonstrate the effectiveness of the proposed approach, we validate our method on three publicly available datasets. The proposed method shows the advantage of cooperative learning, and yields promising results when compared with the state-of-the-art MTL approaches. _The code will be available online._

Keywords: Multi-Task Cooperative Learning, Optimization.

## 1 Introduction

With the development of deep learning, multi-task learning (MTL) has shown great potential to improve performance for individual tasks and to learn more transferable features (better generalizability), whilst reducing the number of the network parameters [16]. MTL has been widely studied in many domains including image classification [14] and image segmentation [9]. The core assumption behind MTL is that tasks could be correlated and thus provide complementary features for each other [4]. MTL is also applied in medical image analysis tasks [11, 6, 20, 5], where strong associations between multiple tasks commonly exist. For example, the diagnosis of cancer may indicate the extent of disease severity, which can be correlated with the patient's survival, thus diagnosis and prognosis of cancer could be learned simultaneously [18]. In clinical diagnosis, annotations of organs or tissues can support radiologists in grading disease; to mimic this process, Zhou _et al._ [24] proposed to simultaneously segment and classify (grade) tumors into benign or malignant classes using 3D breast ultrasound images. Similarly, to improve the prediction of lymph node (LN) metastasis [21], Zhang _et al._ proposed a 3D multi-attention guided multi-task learning network for joint gastric tumor segmentation and LN classification [23]. Typically, MTL methods can be broadly categorized into hard and soft parameter-sharing paradigms [16]. The former adopts one backbone as the encoder to extract common features for all tasks, and the latter designs encoders for each task while constraining their associated parameters. To exploit the correlation between tasks, a large amount of work focuses on the architecture design of the network to enable the cross-task interaction [23]. For example, Misra _et al._ designed a cross-stitch model to combine features from multiple networks [12]. Besides network design, many researchers pay more attention to the neural network optimization process to counter the _negative transfer_ issue [16]. As tasks could compete with each other for shared resources, the overall performance might be even poorer than that obtained by solving the tasks individually. 
To address this issue, previous works either change the weights of each task objective adaptively using heuristics [2], or manipulate the gradient to be a descent direction for each task [10]. However, as those methods formulate MTL in a competitive manner, it is difficult to guarantee that the complementary information is fully utilized by each task. Moreover, most of them are designed for or evaluated on a simple scenario, where only one domain is involved and the tasks are homogeneous, namely all tasks are either dense prediction or image-level classification. In this work, we propose a novel cooperative MTL framework (MT-COOL), which manages to update the features of one task while taking into account the current state of other features. Specifically, we adopt the soft parameter-sharing strategy and update each sub-model in an alternating manner, conditioning on the information learned by the other tasks. To avoid the _negative transfer_ problem during the training, we further propose to search for flat minima of the current task with regard to others at each iteration. As a proof of concept, we first validate this method on the simple MNIST dataset for classification tasks. To show the advantage of the proposed approach in the medical domain, we use the REFUGE2018 dataset for optic cup/disc segmentation and glaucoma classification, and the HRF-AV dataset for artery and vein segmentation tasks. The results demonstrate the promise of the proposed multi-task cooperative approach, compared to the state-of-the-art methods. The main contributions of this work are as follows:

* We propose a novel MTL framework, which learns features for each task in a cooperative manner.
* We propose an effective optimization strategy to alleviate convergence issues.
* We validate the proposed method on three MTL scenarios with different task settings. The proposed method delivers promising results in all settings, compared with the state-of-the-art MTL approaches.

## 2 Method

For a better explanation, here we take two-task learning as an example, which can be generalized to n-task problems easily.

### Bi-Level Optimization for Cooperative Two-Task Learning

Formally, let \(x_{i}\in\mathbb{R}^{W\times H\times C}\) denote an image with width \(W\), height \(H\) and \(C\) channels, \(y_{i}\in\mathbb{R}^{C_{0}}\) a label for classification (or \(y_{i}\in\mathbb{R}^{W\times H\times C_{0}}\) for segmentation), where \(C_{0}\) is the number of classes, \(F_{i}(\cdot;\theta_{i})\) a feature extractor, and \(G_{i}(\cdot;\phi_{i})\) a prediction function for task \(i=1,\ldots,T\), where \(T\) is the number of tasks, and here \(T=2\). \(\theta_{i}\) and \(\phi_{i}\) are the corresponding parameters to be learned. Our task is to predict the label \(\widehat{y}_{i}=G_{i}(F_{i}(x_{i}))\). For MTL, instead of using a shared backbone, _i.e._, \(F_{1}=F_{2}\), and updating them simultaneously with a single loss \(\ell\), we propose to optimize them in a cooperative manner, that is, learning \((F_{1},G_{1})\) conditioned on a fixed and informative \(F_{2}\), and vice versa. 
Generally, it can be formulated as a bi-level optimization problem: \[(U)\min_{\theta_{1},\phi_{1}}\mathcal{L}_{1}(\theta_{1},\phi_{1},\theta_{2})=\ell_{1}(G_{1}(\mathcal{M}(F_{1}(x_{1};\theta_{1}),F_{2}(x_{1}; \theta_{2}));\phi_{1}),y_{1}), \tag{1}\] \[(L)\min_{\theta_{2},\phi_{2}}\mathcal{L}_{2}(\theta_{2},\phi_{2},\theta_{1})=\ell_{2}(G_{2}(\mathcal{M}(F_{1}(x_{2};\theta_{1}),F_{2}(x_{2}; \theta_{2}));\phi_{2}),y_{2}), \tag{2}\] where \(\ell_{i}\) is the loss function, e.g. the cross-entropy loss for classification. \(\mathcal{M}\) denotes a feature fusion operation that facilitates the current task's learning by incorporating useful information from other tasks. A common choice for \(\mathcal{M}\) is a linear combination of features, also known as _cross-stitch_ [12], or a concatenation operation in multiple layers (which is used in this work due to its simplicity). To solve the problem in Eqs. (1)-(2), we propose to update \((\theta_{1},\phi_{1})\) and \((\theta_{2},\phi_{2})\) alternately, as traditional methods for bi-level optimization problems could be inefficient [1] due to the complexity of deep neural networks. However, without any constraint, this alternating optimization strategy could fail to converge to an optimal solution. For example, at the \(t\)-th iteration, we first optimize \(\mathcal{L}_{1}(\theta_{1},\phi_{1},\theta_{2}^{[t-1]})\) to obtain an optimum \((\theta_{1}^{[t]},\phi_{1}^{[t]})\). It is possible that for the second task, \(\mathcal{L}_{2}(\theta_{2}^{[t-1]},\phi_{2}^{[t-1]},\theta_{1}^{[t-1]})< \mathcal{L}_{2}(\theta_{2}^{[t-1]},\phi_{2}^{[t-1]},\theta_{1}^{[t]})\), which means that the update for the first task could increase the prediction risk of the second one, and cancel the gain from the optimization of \(\mathcal{L}_{2}\). Here, we also refer to this issue as _negative transfer_. To alleviate this effect, we propose to search for flat minima for one task with regard to the features from the other task in each iteration.

### Finding Flat Minima via Injecting Noise

As mentioned above, the network optimized for one task could be sensitive to the change of parameters for other tasks, which may lead to non-convergence. Hence, at each iteration, for each task, we search for an optimum that is insensitive to the update of the other parameters within a fixed neighborhood. We refer to such optima as _flat minima_. To formally state this idea, assume that the noise \(\epsilon_{i}\sim\{\mathcal{U}(-b,b)\}^{d_{\epsilon_{i}}}\) with \(b>0\), \(d_{\epsilon_{i}}=d_{\theta_{i}}\) and \(d_{\theta_{i}}\) the dimension of \(\theta_{i}\). Then for _task 1_, at the \(t\)-th iteration our target is to minimize the expected loss function with regard to the parameters \((\theta_{1},\phi_{1})\) and noise \(\epsilon_{2}\), _i.e.,_ \[(U)\ \mathcal{R}_{1}^{[t]}(\theta_{1},\phi_{1})=\int_{\mathbb{R}^{d_{\epsilon_{2}}}}\mathcal{L}_{1}(\theta_{1},\phi_{1},\theta_{2}^{[t-1]}+\epsilon_ {2})dP(\epsilon_{2})=\mathbb{E}[\mathcal{L}_{1}(\theta_{1},\phi_{1},\theta_{2} ^{[t-1]}+\epsilon_{2})], \tag{3}\] \[s.t.\ |\theta_{1}-\theta_{1}^{[t-1]}|<b,\] where \(P(\epsilon_{2})\) is the noise distribution, and the solution is denoted as \((\theta_{1}^{[t]},\phi_{1}^{[t]})\). 
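The expectation in Eq. (3) is intractable in general; in practice it can be estimated by sampling the injected noise, which is what the empirical losses below do. The following is a minimal PyTorch-style sketch of such a Monte-Carlo estimate; the function and argument names are ours, not from the paper:

```python
import torch

def expected_task1_loss(task1_loss, theta2, b, M):
    """Monte-Carlo estimate of E[L_1(theta_1, phi_1, theta_2^{[t-1]} + eps_2)].

    task1_loss: callable evaluating L_1 for a given (perturbed) copy of theta_2;
    theta2: list of frozen task-2 parameter tensors;
    eps_2 is drawn elementwise from the uniform distribution U(-b, b).
    """
    total = 0.0
    for _ in range(M):
        noisy = [p + torch.empty_like(p).uniform_(-b, b) for p in theta2]
        total = total + task1_loss(noisy)
    return total / M
```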
Similarly, for _task 2_, the loss function is as follows, \[(L)\ \mathcal{R}_{2}^{[t]}(\theta_{2},\phi_{2})=\int_{\mathbb{R}^{d_{\epsilon_{1}}}}\mathcal{L}_{2}(\theta_{2},\phi_{2},\theta_{1}^{[t]}+\epsilon_ {1})dP(\epsilon_{1})=\mathbb{E}[\mathcal{L}_{2}(\theta_{2},\phi_{2},\theta_{1 }^{[t]}+\epsilon_{1})], \tag{4}\] \[s.t.\ |\theta_{2}-\theta_{2}^{[t-1]}|<b.\] Note that it is hard to find an ideal flat minimum \((\theta_{1}^{[t]},\phi_{1}^{[t]})\) for Eq. (3), such that \(\mathcal{L}_{1}(\theta_{1}^{[t]},\phi_{1}^{[t]},\theta_{2}^{[t-1]}+\epsilon_ {2}^{(j_{1})})=\mathcal{L}_{1}(\theta_{1}^{[t]},\phi_{1}^{[t]},\theta_{2}^{[t -1]}+\epsilon_{2}^{(j_{2})})\), \(\forall\epsilon_{2}^{(j_{1})},\epsilon_{2}^{(j_{2})}\sim P(\epsilon_{2})\), and \(\mathcal{L}_{1}(\theta_{1}^{[t]},\phi_{1}^{[t]},\theta_{2}^{[t-1]})<\mathcal{ L}_{1}(\theta_{1}^{[t-1]},\phi_{1}^{[t-1]},\theta_{2}^{[t-1]})\), which would satisfy the requirement to avoid the optimization issue (see Sect. 2.1). Hence, our goal is to find an approximately flat minimum to alleviate this issue. A similar idea has been proposed for continual learning [19]. However, our method differs as follows: (1) the flat minimum in [19] is searched for the current task, while in our work, it is searched with regard to other tasks; (2) once the flat minimum is found for the first task in a continual learning problem, the search region for the remaining tasks is fixed, while in our work, the parameters for each task are only constrained within a single iteration, and the search region can change during the optimization. In practice, it is difficult to minimize the expected loss, so we instead minimize the empirical losses for Eq. (3) and Eq. (4) as follows, \[(U)\ L_{1}^{[t]}(\theta_{1},\phi_{1})=\frac{1}{M}\sum_{j=1}^{M} \mathcal{L}_{1}(\theta_{1},\phi_{1},\theta_{2}^{[t-1]}+\epsilon_{2}^{(j)})+ \lambda\cdot KL(\widehat{y}_{1}^{(j)},\frac{1}{M}\sum_{n=1}^{M}\widehat{y}_{1 }^{(n)}), \tag{5}\] \[(L)\ L_{2}^{[t]}(\theta_{2},\phi_{2})=\frac{1}{M}\sum_{j=1}^{M} \mathcal{L}_{2}(\theta_{2},\phi_{2},\theta_{1}^{[t]}+\epsilon_{1}^{(j)})+ \lambda\cdot KL(\widehat{y}_{2}^{(j)},\frac{1}{M}\sum_{n=1}^{M}\widehat{y}_{2 }^{(n)}), \tag{6}\] where \(\epsilon_{i}^{(j)}\) is a noise vector sampled from \(P(\epsilon_{i})\), \(M\) is the number of samples, and \(KL\) is the Kullback-Leibler divergence. The first term in Eq. (5) or Eq. (6) is designed to find a satisfactory minimum for the current task, and the second term enforces this minimum to be flat as desired. **Warm Up the Network.** To initialize the parameters for Eq. (3) and Eq. (4) with insensitive \((\theta_{1}^{[0]},\theta_{2}^{[0]})\), we minimize the following loss function, \[\mathcal{L}_{total}=\frac{1}{M}\sum_{j=1}^{M}(\mathcal{L}_{1}( \theta_{1}+\epsilon_{1}^{(j)},\phi_{1},\theta_{2}+\epsilon_{2}^{(j)})+ \mathcal{L}_{2}(\theta_{2}+\epsilon_{2}^{(j)},\phi_{2},\theta_{1}+\epsilon_{1 }^{(j)})). \tag{7}\] **Algorithm.** We term the proposed **m**ulti-**t**ask **coo**perative **l**earning method MT-COOL. The algorithm is described in Algorithm 1. Note that to alleviate the optimization issue discussed in Section 2.1, after the update for each task, we clamp the parameters to ensure that they fall within the flat region, as described in Line 17 in Algorithm 1.

### Network Configuration

Fig. 1 illustrates the framework for two-task cooperative learning. Our framework consists of an encoder and task-specific decoders. 
## Network Configuration Fig. 1 illustrates the framework for two-task cooperative learning. Our framework consists of an encoder and task-specific decoders. The parameters at each layer of the encoder are evenly allocated to each task, and the learned features are then concatenated as the input of the next layer; a minimal sketch of such a block is given below. Figure 1: A general framework for our MTL method. (a) is the conventional convolution block, (b) illustrates the structure of a convolution block for cooperative two-task learning, and (c) shows the general framework for MTL, which contains an encoder and task-specific decoders.
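A minimal sketch of the Fig. 1(b)-style block, assuming a two-task split with 3x3 convolutions (the layer type and channel split are illustrative assumptions):

```python
# A cooperative convolution block: the layer's output channels are split evenly
# between the two tasks, and each task-specific half consumes the concatenated
# features of BOTH halves from the previous layer.
import torch
import torch.nn as nn

class CoopConvBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        half = out_ch // 2
        self.task1_conv = nn.Sequential(           # parameters belonging to theta_1
            nn.Conv2d(in_ch, half, 3, padding=1), nn.BatchNorm2d(half), nn.ReLU())
        self.task2_conv = nn.Sequential(           # parameters belonging to theta_2
            nn.Conv2d(in_ch, half, 3, padding=1), nn.BatchNorm2d(half), nn.ReLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f1 = self.task1_conv(x)                    # task-1 features
        f2 = self.task2_conv(x)                    # task-2 features
        return torch.cat([f1, f2], dim=1)          # fused input for the next layer

# e.g. CoopConvBlock(3, 32)(torch.randn(2, 3, 64, 64)).shape == (2, 32, 64, 64)
```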
## 3 Experiments We validate our MTL framework in three scenarios: (1) classification tasks on different classes of the MNIST dataset [8], (2) one domain for simultaneous segmentation and classification tasks using the REFUGE2018 dataset [13], and (3) one domain for two segmentation tasks with the HRF-AV dataset [7]. For our method, we adopt the stochastic gradient descent (SGD) optimizer, and empirically set the bound value \(b=0.05\) and the learning rates \(\alpha=\beta=0.1\). To reduce training time and memory cost, we simply set the sampling number \(M=1\). All experiments are implemented on one GTX 1080Ti GPU. ### Dataset (1) **MNIST.** This dataset contains 50,000 training and 10,000 test images. To simulate a multi-task learning setting, we divide both the training and test images into two subsets, containing either the even digits \(\{0,2,4,6,8\}\) (denoted as _Task 1_) or the odd digits \(\{1,3,5,7,9\}\) (denoted as _Task 2_). For the network, we adopt the widely used LeNet architecture for the MNIST dataset [8], whose last layer contains 50 hidden units, followed by a final prediction output. (2) **REFUGE2018.** The REFUGE2018 challenge [13] provides 1200 retinal color fundus photographs. The targets of this challenge are glaucoma detection and optic disc/cup segmentation. We divide this dataset into 800 training samples and a 400-sample test subset, where the ratio of glaucoma to non-glaucoma images is \(1:9\) in both subsets. As discussed in [13], glaucoma is mostly characterized by the optic nerve head area. Hence, we cropped all images around the optic disc to \(512\times 512\). We used the U-Net [15] for the segmentation task, with the four down-sampling modules as the shared encoders. The output of the segmentation and the features from the bottom layers are taken as the input of the decoder for classification. (3) **HRF-AV.** This dataset [7] contains 45 fundus images with a high resolution of \(3504\times 2336\). The tasks for this dataset are binary vessel segmentation and artery/vein (A/V) segmentation. We randomly split the dataset into 15 training and 30 test samples. We adopt the U-Net as the backbone, with 256 feature channels at the bottom layer. During training, we randomly crop patches of size \(2048\times 2048\) as input. ### Results on MNIST Dataset #### 3.2.1 Ablation Study To validate the effectiveness of the two terms in Eq. (5) and Eq. (6), we conduct two experiments: (1) **Vanilla.** We simply optimize the objective of each task alternately, without any constraints or sampling operations. (2) **Ours (_w/o_ Reg).** We sample noise during training, and optimize solely the first term in Eq. (5) and Eq. (6), _i.e.,_ without the similarity regularization. We run each method 5 times, and report the mean and standard deviation. As shown in the top four rows of Table 1, compared to the **Independent** approach, the proposed **Vanilla** bi-level optimization method can utilize the features from other tasks and boost the performance of the current one. By introducing noise to find flat minima during training, **Ours (_w/o_ Reg)** further achieves higher prediction accuracy, particularly on _Task 2_. Finally, by adding the similarity regularization, our method obtains the best results. #### 3.2.2 Comparison Study We compare the proposed method with four state-of-the-art (SOTA) MTL approaches: MGDA [17], PCGrad [22], GradDrop [3] and CAGrad [10]. We also implement the **Joint** method as a baseline, which simply sums the losses of all tasks as the total training loss. As shown in Table 1, all MTL methods improve the performance on each task compared to **Independent**. Among all the compared methods, our technique performs best on both tasks. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Methods & Params & _Task 1_ & _Task 2_ \\ \hline Independent & \(\approx\) 2 & 99.41 \(\pm\) 0.03492 & 98.77 \(\pm\) 0.06029 \\ \hline \hline Ours (Vanilla) & 1 & 99.61\(\pm\)0.06210 & 99.37\(\pm\)0.04494 \\ \hline Ours (_w/o_ Reg) & 1 & 99.66\(\pm\)0.03765 & 99.56\(\pm\)0.07203 \\ \hline MT-COOL (Ours) & 1 & **99.72\(\pm\)0.03978** & **99.62\(\pm\)0.01576** \\ \hline \hline Joint & 1 & 99.60 \(\pm\) 0.03765 & 99.51 \(\pm\)0.06281 \\ \hline CAGrad [10] & 1 & 99.67\(\pm\)0.05293 & 99.51\(\pm\)0.05229 \\ \hline GradDrop [3] & 1 & 99.65\(\pm\) 0.03492 & 99.53\(\pm\)0.04245 \\ \hline MGDA [17] & 1 & 99.63\(\pm\) 0.05883 & 99.47\(\pm\)0.05078 \\ \hline PCGrad [22] & 1 & 99.66\(\pm\)0.04180 & 99.51\(\pm\)0.09108 \\ \hline \end{tabular} \end{table} Table 1: Performance of SOTA MTL methods on the MNIST dataset. We set the number of parameters of the **Joint** method as the base 1, and the values in the column ‘Params’ are the ratio of the parameter number of each method to that of **Joint**. ### Comparison on REFUGE2018 Dataset For the REFUGE2018 dataset, we compare our method with CAGrad, GradDrop, MGDA, PCGrad, and Joint. We run each method three times, and report the \(mean\pm std\) values of the Dice score on optic cup and disc for the segmentation task, and accuracy (Acc), Area Under the Receiver Operating Characteristics (AUROC), sensitivity (Sen) and specificity (Spe) for the classification task. As shown in Table 2, our method achieves results comparable with **Independent** on the segmentation task, while the other MTL methods degrade significantly, particularly on Disc. For the classification task, our method achieves the best performance in terms of all metrics. Fig. 2 provides visualization results for qualitative comparison. One can see that the proposed method obtains the best prediction shape among all MTL methods. Figure 2: Visualization results from MTL methods on the REFUGE2018 dataset. The selected samples are ranked at the 1st quartile, median and 3rd quartile in terms of the segmentation performance of **Independent**. ### Comparison on HRF-AV Dataset We also conduct a comparison study on the HRF-AV dataset. Each method is repeated three times, and the mean results are presented in Table 3. One can see that, compared to **Independent**, all the other MTL methods perform poorly, especially on the A/V segmentation task. For example, the best F1 scores on A/V segmentation among the five MTL methods are 0.5127 and 0.5736, respectively, obtained by GradDrop, which are much lower than those from **Independent**. By contrast, our method performs comparably with **Independent** on A/V segmentation, and even slightly better on binary segmentation. For qualitative comparison, please refer to Fig. 1 in the Supplementary material.
\begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{Params} & \multicolumn{2}{c|}{Segmentation} & \multicolumn{4}{c|}{Classification} \\ \cline{3-8} & & Cup (Dice\%) & Disc (Dice\%) & Acc & AUROC & Sen & Spe \\ \hline Independent & \(\approx\) 2 & 95.14\(\pm\)0.05110 & 86.87\(\pm\)0.5644 & 0.900\(\pm\)0.00235 & 0.902\(\pm\)0.0106 & 0.658\(\pm\)0.0117 & 0.927\(\pm\)0.00392 \\ \hline \hline Joint & 1 & 91.19\(\pm\)0.7600 & 77.36\(\pm\)0.5236 & 0.907\(\pm\)0.0183 & 0.895\(\pm\)0.0221 & 0.658\(\pm\)0.0656 & 0.935\(\pm\)0.0264 \\ \hline CAGrad [10] & 1 & 92.67\(\pm\)0.7702 & 81.71\(\pm\)0.2874 & 0.914\(\pm\)0.00513 & 0.904\(\pm\)0.00562 & 0.658\(\pm\)0.0235 & 0.942\(\pm\)0.00796 \\ \hline GradDrop [3] & 1 & 91.70\(\pm\)0.6376 & 78.91\(\pm\)1.439 & 0.909\(\pm\)0.00424 & 0.922\(\pm\)0.0115 & 0.716\(\pm\)0.0471 & 0.930\(\pm\)0.00988 \\ \hline MGDA [17] & 1 & 93.87\(\pm\)0.5017 & 83.87\(\pm\)0.9732 & 0.895\(\pm\)0.0154 & 0.914\(\pm\)0.00610 & 0.633\(\pm\)0.0824 & 0.924\(\pm\)0.0260 \\ \hline PCGrad [22] & 1 & 91.74\(\pm\)0.5569 & 79.80\(\pm\)0.8748 & 0.911\(\pm\)0.00849 & 0.898\(\pm\)0.0136 & 0.675\(\pm\)0.0204 & 0.937\(\pm\)0.00796 \\ \hline MT-COOL (Ours) & 1 & **94.37\(\pm\)0.1706** & **86.18\(\pm\)0.3046** & **0.937\(\pm\)0.0113** & **0.942\(\pm\)0.0149** & **0.750\(\pm\)0.000** & **0.958\(\pm\)0.0126** \\ \hline \end{tabular} \end{table} Table 2: Performance of SOTA MTL methods on the REFUGE2018 dataset. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{Params} & \multicolumn{6}{c|}{A/V Segmentation} & \multicolumn{2}{c|}{Binary Segmentation} \\ \cline{3-10} & & Acc (A) & F1 (A) & Acc (V) & F1 (V) & Acc (A/V) & F1 (A/V) & Acc & F1 \\ \hline Independent & \(\approx\) 2 & 0.9814 & 0.6999 & 0.9821 & 0.7492 & 0.9692 & 0.7698 & 0.9691 & 0.7831 \\ \hline Joint & 1 & 0.9622 & 0.3537 & 0.9661 & 0.5171 & 0.9664 & 0.7360 & 0.9691 & 0.7835 \\ \hline CAGrad [10] & 1 & 0.9687 & 0.4754 & 0.9696 & 0.5520 & 0.9668 & 0.7364 & 0.9690 & 0.7790 \\ \hline GradDrop [3] & 1 & 0.9708 & 0.5127 & 0.9716 & 0.5736 & 0.9666 & 0.7343 & 0.9686 & 0.7742 \\ \hline MGDA [17] & 1 & 0.9636 & 0.2343 & 0.9632 & 0.5315 & 0.9660 & 0.7263 & 0.9691 & 0.7793 \\ \hline PCGrad [22] & 1 & 0.9671 & 0.4262 & 0.9681 & 0.5387 & 0.9667 & 0.7357 & 0.9687 & 0.7763 \\ \hline MT-COOL (Ours) & 1 & **0.9801** & **0.6671** & **0.9811** & **0.7135** & **0.9674** & **0.7424** & **0.9701** & **0.7912** \\ \hline \end{tabular} \end{table} Table 3: Performance of SOTA MTL methods on the HRF-AV dataset. ## 4 Conclusion In this work, we propose a novel MTL framework via bi-level optimization. Our method learns the features of each task in a cooperative manner, instead of having tasks compete with each other for resources. We validate our model on three datasets, and the results demonstrate its strong potential for MTL. However, some issues still need to be studied in the future. For example, we need to validate our method on large-scale tasks and find a more efficient learning strategy, such as distributed learning. Moreover, how to allocate the parameters to each task automatically and effectively is important for model generalization. For better interpretability, learning features specific to each task should also be studied.
Multi-task learning (MTL) has shown great potential in medical image analysis, improving the generalizability of the learned features and the performance on individual tasks. However, most MTL research focuses on architecture design or gradient manipulation, and in these scenarios the features are learned in a competitive manner. In this work, we aim to learn the features of each task cooperatively by formulating MTL as a multi/bi-level optimization problem. Specifically, we update the sub-model of each task alternately, taking advantage of the learned sub-models of the other tasks. To alleviate the negative transfer problem during optimization, we search for flat minima of the current objective with respect to the learned features of the other tasks. To demonstrate the effectiveness of this approach, we validate our method on three publicly available datasets.
2309.15927
Sharp Estimates on Coefficient functionals of Ozaki close-to-convex functions
The goal of this manuscript is to establish the best possible estimates on coefficient functionals, such as the second-order Hermitian-Toeplitz determinant involving logarithmic coefficients, the initial logarithmic inverse coefficients, and the initial-order Schwarzian derivatives, of Ozaki close-to-convex functions.
Sushil Kumar, Rakesh Kumar Pandey, Pratima Rai
2023-09-27T18:07:58
http://arxiv.org/abs/2309.15927v1
# Sharp estimates on coefficient functionals of Ozaki close-to-convex functions ###### Abstract. The goal of this manuscript is to establish the best possible estimates on coefficient functionals, such as the second-order Hermitian-Toeplitz determinant involving logarithmic coefficients, the initial logarithmic inverse coefficients, and the initial-order Schwarzian derivatives, of Ozaki close-to-convex functions. Key words and phrases: Ozaki close-to-convex functions; logarithmic inverse coefficients; logarithmic coefficients; Hermitian-Toeplitz determinants; Schwarzian derivatives 2010 Mathematics Subject Classification: 30C45, 30C50 ## 1. Introduction Let \(\mathcal{A}\) denote the class of functions \(f(z)=z+\sum_{n=2}^{\infty}a_{n}z^{n}\) which are analytic in \(\mathbb{D}=\{z:|z|<1\}\), and let \(\mathcal{S}\) be the subclass of \(\mathcal{A}\) consisting of univalent functions. Let \(\mathcal{P}\) denote the class of functions \(p(z)=1+\sum_{n=1}^{\infty}p_{n}z^{n}\) which are analytic and satisfy \(\Re(p(z))>0\) for \(|z|<1\). The notation \(h_{1}\prec h_{2}\) means that the analytic function \(h_{1}\) is subordinate to the analytic function \(h_{2}\), that is, there is a Schwarz function \(w\) in \(|z|<1\) such that \(h_{1}(z)=h_{2}(w(z))\). If \(h_{2}\in\mathcal{S}\), then \(h_{1}\prec h_{2}\) if and only if \(h_{1}(0)=h_{2}(0)\) and \(h_{1}(\mathbb{D})\subset h_{2}(\mathbb{D})\). In other words, the behaviour of the function \(h_{1}\) is constrained by that of the function \(h_{2}\) [8]. In 1941, Ozaki [19] introduced the class \(\mathcal{F}\) as \[\mathcal{F}=\bigg{\{}f\in\mathcal{A}:\Re\bigg{(}1+\frac{zf^{\prime\prime}(z)}{f^{\prime}(z)}\bigg{)}>-\frac{1}{2},\quad\text{for}\;\;|z|<1\bigg{\}}.\] If the function \(f\in\mathcal{F}\), then \(f\) satisfies the subordination relation \(1+(zf^{\prime\prime}(z)/f^{\prime}(z))\prec(1+2z)/(1-z)\), for \(|z|<1\).
The function \(f_{1}\) defined by \(zf_{1}^{\prime\prime}(z)/f_{1}^{\prime}(z)=3z/(1-z)\), or \[f_{1}(z)=z+\frac{3}{2}z^{2}+2z^{3}+\frac{5}{2}z^{4}+\cdots, \tag{1.1}\] and the function \(f_{2}\) defined by \(zf_{2}^{\prime\prime}(z)/f_{2}^{\prime}(z)=3z^{2}/(1-z^{2})\), or \[f_{2}(z)=z+\frac{1}{2}z^{3}+\frac{3}{8}z^{5}+\cdots, \tag{1.2}\] belong to the class \(\mathcal{F}.\) Further, the author of [19] studied the class \(\mathcal{G}\) given by \[\mathcal{G}=\bigg{\{}f\in\mathcal{A}:\Re\bigg{(}1+\frac{zf^{\prime\prime}(z)}{f^{\prime}(z)}\bigg{)}<\frac{3}{2},\quad\text{for}\;\;|z|<1\bigg{\}}.\] If the function \(f\in\mathcal{G}\), then \(f\) satisfies the subordination relation \(1+(zf^{\prime\prime}(z)/f^{\prime}(z))\prec(1-2z)/(1-z)\), for \(|z|<1\). The function \(g_{1}\) defined by \(zg_{1}^{\prime\prime}(z)/g_{1}^{\prime}(z)=-z/(1-z)\), or \[g_{1}(z)=z-\frac{1}{2}z^{2}, \tag{1.3}\] and the function \(g_{2}\) defined by \(zg_{2}^{\prime\prime}(z)/g_{2}^{\prime}(z)=-z^{2}/(1-z^{2})\), or \[g_{2}(z)=z-\frac{1}{6}z^{3}-\frac{1}{40}z^{5}-\cdots, \tag{1.4}\] belong to the class \(\mathcal{G}.\) By the Koebe one-quarter theorem, every \(f\in\mathcal{S}\) has an inverse \(f^{-1}\), defined at least on a disk of radius \(1/4\), with the expansion \(f^{-1}(w)=w+\sum_{n=2}^{\infty}A_{n}w^{n}\), where \[A_{2}=-a_{2},\quad A_{3}=2a_{2}^{2}-a_{3},\quad A_{4}=-(5a_{2}^{3}-5a_{2}a_{3}+a_{4}). \tag{1.5}\] The logarithmic coefficients \(\gamma_{n}\) of \(f\in\mathcal{S}\) are defined through \(F_{f}(z):=\log(f(z)/z)=2\sum_{n=1}^{\infty}\gamma_{n}z^{n}\), so that \[\gamma_{1}=\frac{1}{2}a_{2},\qquad\gamma_{2}=\frac{1}{2}\Big{(}a_{3}-\frac{1}{2}a_{2}^{2}\Big{)}. \tag{1.6}\] Similarly, the logarithmic inverse coefficients \(\Gamma_{n}\) are defined through \(F_{f^{-1}}(w):=\log(f^{-1}(w)/w)=2\sum_{n=1}^{\infty}\Gamma_{n}w^{n}\), so that \(\Gamma_{1}=\frac{1}{2}A_{2}\), \(\Gamma_{2}=\frac{1}{2}(A_{3}-\frac{1}{2}A_{2}^{2})\) and \(\Gamma_{3}=\frac{1}{2}(A_{4}-A_{2}A_{3}+\frac{1}{3}A_{2}^{3})\). Further, on substituting the values of \(A_{2},A_{3}\) and \(A_{4}\) from (1.5), we get \[\Gamma_{1}= -\frac{1}{2}a_{2}, \tag{1.7}\] \[\Gamma_{2}= -\frac{1}{2}(a_{3}-\frac{3}{2}a_{2}^{2}),\] (1.8) \[\Gamma_{3}= -\frac{1}{2}(a_{4}-4a_{2}a_{3}+\frac{10}{3}a_{2}^{3}).
\tag{1.9}\] For locally univalent functions \(f\), the Schwarzian derivative is defined by \(S_{f}(z)=\left(\frac{f^{\prime\prime}(z)}{f^{\prime}(z)}\right)^{\prime}-\frac{1}{2}\left(\frac{f^{\prime\prime}(z)}{f^{\prime}(z)}\right)^{2}.\) Denote \(\sigma_{3}(f)=S_{f}(z)\); from [22], the higher-order Schwarzian derivatives are given by \(\sigma_{n+1}(f)=(\sigma_{n}(f))^{\prime}-(n-1)\sigma_{n}(f)f^{\prime\prime}/f^{\prime},\)\(n\geq 3.\) We write \(\sigma_{n}(f)(0)=:\mathbf{S}_{n}\), so that the third- and fourth-order Schwarzian derivatives become \[\mathbf{S}_{3}=\sigma_{3}(f)(0)=6(a_{3}-a_{2}^{2})\quad\text{and}\quad\mathbf{S}_{4}=\sigma_{4}(f)(0)=24(a_{4}-3a_{2}a_{3}+2a_{2}^{3}). \tag{1.10}\] Nehari [16] gave a criterion for the univalence of an analytic function using Schwarzian derivatives. For two natural numbers \(q\) and \(n\), the \(q^{\text{th}}\) Hankel determinant \(H_{q}(n)\) for the function \(f\in\mathcal{S}\) is given by \(H_{q}(n):=\det\{a_{n+i+j-2}\}_{i,j}^{q},\)\(1\leq i,j\leq q,\)\(a_{1}=1\). The \(q^{th}\) Hermitian-Toeplitz determinant for the function \(f\in\mathcal{S}\) is given by \(T_{q,n}(F_{f}):=\det\{a_{ij}\},\) where \(a_{ij}=a_{n+j-i}\) for \(j\geq i\) and \(a_{ij}=\overline{a_{ji}}\) for \(j<i\). Thus, the second-order Hermitian-Toeplitz determinant for the function \(f\in\mathcal{S}\) is given as \(T_{2,1}(F_{f})=\left|\begin{matrix}1&a_{2}\\ \bar{a}_{2}&1\end{matrix}\right|=1-|a_{2}|^{2}.\) In terms of the logarithmic coefficients, this determinant becomes \(T_{2,1}(F_{f}/\gamma)=\left|\begin{matrix}\gamma_{1}&\gamma_{2}\\ \bar{\gamma}_{2}&\gamma_{1}\end{matrix}\right|=\gamma_{1}^{2}-|\gamma_{2}|^{2}.\) On substituting the values of \(\gamma_{1}\) and \(\gamma_{2}\) from (1.6), we get \[T_{2,1}(F_{f}/\gamma)=\frac{1}{16}(-a_{2}^{4}+4a_{2}^{2}+4a_{2}^{2}\Re a_{3}-4|a_{3}|^{2}). \tag{1.11}\] We recall that in [7], the authors computed sharp estimates on second- and third-order Hermitian-Toeplitz determinants involving the initial coefficients of certain univalent functions. Obradovic and Tuneski [17] computed bounds on the third-order Hermitian-Toeplitz determinant involving the initial coefficients of univalent functions. The authors of [21] computed bounds on the third-order Hermitian-Toeplitz determinant for starlike functions associated with the tan hyperbolic function. The authors of [15] computed bounds on Hankel and Toeplitz determinants of logarithmic coefficients of inverse functions for certain univalent functions. In [4], the authors determined bounds on certain coefficients, together with growth estimates, for Ozaki close-to-convex functions. Further, the sharp bound on the second Hankel determinant involving the initial coefficients as well as the inverse coefficients for the subclass of strongly Ozaki close-to-convex functions was determined in [23]. Further, the authors of [18] improved the upper bounds of the third-order Hankel determinant for the classes \(\mathcal{F}\) and \(\mathcal{G}\), respectively, and conjectured their sharpness. In a recent paper [9], the authors established a sharp bound on the second Hankel determinant involving logarithmic coefficients, with an invariance property, for strongly Ozaki close-to-convex functions.
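For orientation, substituting the coefficients of \(f_{1}\) from (1.1), namely \(a_{2}=3/2\), \(a_{3}=2\), \(a_{4}=5/2\), into (1.7)-(1.9) and (1.11) gives \[\Gamma_{1}=-\frac{3}{4},\qquad\Gamma_{2}=-\frac{1}{2}\Big{(}2-\frac{3}{2}\cdot\frac{9}{4}\Big{)}=\frac{11}{16},\qquad\Gamma_{3}=-\frac{1}{2}\Big{(}\frac{5}{2}-12+\frac{45}{4}\Big{)}=-\frac{7}{8},\] and \[T_{2,1}(F_{f_{1}}/\gamma)=\frac{1}{16}\Big{(}-\frac{81}{16}+9+18-16\Big{)}=\frac{95}{256},\] so \(f_{1}\) already attains the values \(3/4\), \(11/16\), \(7/8\) and \(95/256\) that appear as sharp bounds below.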
Motivated by the literature discussed above, in the second section we provide the sharp bounds on the second-order Hermitian-Toeplitz determinant \(T_{2,1}(F_{f}/\gamma)\), the initial logarithmic inverse coefficients \(|\Gamma_{i}|\), \(i=1,2,3\), the third-order Schwarzian derivative \(|S_{3}|\), the difference of successive inverse coefficients \(|A_{3}-A_{2}|\), and the difference of logarithmic inverse coefficients \(|\Gamma_{3}-\Gamma_{2}|\) for the functions \(f\in\mathcal{F}\). In the third section, we provide the sharp bounds on the second-order Hermitian-Toeplitz determinant \(T_{2,1}(F_{f}/\gamma)\), the initial logarithmic inverse coefficients \(|\Gamma_{i}|\), \(i=1,2,3\), and the third- and fourth-order Schwarzian derivatives \(|S_{3}|\) and \(|S_{4}|\) for the functions \(f\in\mathcal{G}\). The following lemmas play an important role in the proofs of the main results. **Lemma 1.1**.: _[_6_]_ _Let \(w(z)=c_{1}z+c_{2}z^{2}+c_{3}z^{3}+c_{4}z^{4}+\cdots\) be a Schwarz function. Then_ \[|c_{1}|\leq 1,\quad|c_{2}|\leq 1-|c_{1}|^{2},\quad|c_{3}|\leq 1-|c_{1}|^{2}-\frac{|c_{2}|^{2}}{1+|c_{1}|}.\] **Lemma 1.2**.: _[_14_]_ _Let \(\mathcal{P}\) be the class of analytic functions having the Taylor series of the form_ \[p(z)=1+p_{1}z+p_{2}z^{2}+p_{3}z^{3}+\cdots \tag{1.12}\] _satisfying the condition \(\Re(p(z))>0\)\((z\in\mathbb{D})\). Then_ \[2p_{2}= p_{1}^{2}+t\xi,\] \[4p_{3}= p_{1}^{3}+2p_{1}t\xi-p_{1}t\xi^{2}+2t(1-|\xi|^{2})\eta,\] \[8p_{4}= p_{1}^{4}+3p_{1}^{2}t\xi+(4-3p_{1}^{2})t\xi^{2}+p_{1}^{2}t\xi^{3}+4t(1-|\xi|^{2})(1-|\eta|^{2})\gamma\] \[\qquad\qquad\qquad+4t(1-|\xi|^{2})(p_{1}\eta-p_{1}\xi\eta-\bar{\xi}\eta^{2}),\] _for some \(\xi,\eta,\gamma\in\overline{\mathbb{D}}\) and \(t=4-p_{1}^{2}\)._ ## 2. **The class \(\mathcal{F}\)** In this section, we first determine the sharp bounds on \(T_{2,1}(F_{f}/\gamma)\) for the functions \(f\in\mathcal{F}\). **Theorem 2.1**.: _Let the function \(f(z)=z+\sum_{n=2}^{\infty}a_{n}z^{n}\) be in the class \(\mathcal{F}\). Then_ \[-\frac{1}{16}\leq T_{2,1}(F_{f}/\gamma)\leq\frac{95}{256}.\] _The upper and lower bounds are sharp for the functions \(f_{1}\) and \(f_{2}\) given by (1.1) and (1.2), respectively._ Proof.: Let the function \(f\in\mathcal{F}.\) Then \(1+\frac{zf^{\prime\prime}(z)}{f^{\prime}(z)}=\frac{1+2w(z)}{1-w(z)}\), where \(w(z)\) is a Schwarz function defined in \(|z|<1\). Since \(p(z)=(1+w(z))/(1-w(z))\in\mathcal{P}\), we have \[1+\frac{zf^{\prime\prime}(z)}{f^{\prime}(z)}=\frac{3p(z)-1}{2}. \tag{2.1}\] The Taylor series expansions of the left- and right-hand sides of (2.1) are given by \[1+\frac{zf^{\prime\prime}(z)}{f^{\prime}(z)}=1 +2a_{2}z+(-4a_{2}^{2}+6a_{3})z^{2}+(8a_{2}^{3}-18a_{2}a_{3}+12a_{4})z^{3}\] \[+(-16a_{2}^{4}+48a_{2}^{2}a_{3}-18a_{3}^{2}-32a_{2}a_{4}+20a_{5})z^{4}+\cdots \tag{2.2}\] and \[\frac{3p(z)-1}{2}=1+\frac{3p_{1}z}{2}+\frac{3p_{2}z^{2}}{2}+\frac{3p_{3}z^{3}}{2}+\cdots, \tag{2.3}\] respectively. On comparing (2.2) and (2.3), we get the initial coefficients \[a_{2}= \frac{3}{4}p_{1}, \tag{2.4}\] \[a_{3}= \frac{1}{8}(3p_{1}^{2}+2p_{2}),\] (2.5) \[a_{4}= \frac{1}{64}(9p_{1}^{3}+18p_{1}p_{2}+8p_{3}).
\tag{2.6}\] On substituting the values of \(a_{2}\) and \(a_{3}\) in (1.11), we get \[T_{2,1}(F_{f}/\gamma)=\frac{1}{16}\bigg{(}-\frac{81p_{1}^{4}}{256}+\frac{36p_{1}^{2}}{16}+\frac{36p_{1}^{2}}{128}\Re(3p_{1}^{2}+2p_{2})-\frac{1}{16}|3p_{1}^{2}+2p_{2}|^{2}\bigg{)}.\] By using Lemma 1.2, the above expression becomes \[T_{2,1}(F_{f}/\gamma)=\frac{1}{4096}(-49p_{1}^{4}+576p_{1}^{2}-56p_{1}^{2}(4-p_{1}^{2})\Re(\xi)-16(4-p_{1}^{2})^{2}|\xi|^{2}). \tag{2.7}\] Using \(-\Re(\xi)\leq|\xi|\), and setting \(x=|\xi|\in[0,1]\) and \(p_{1}=p\), expression (2.7) yields \[T_{2,1}(F_{f}/\gamma)\leq\frac{1}{4096}(-49p^{4}+576p^{2}+56p^{2}(4-p^{2})x-16(4-p^{2})^{2}x^{2})=\Upsilon(p,x).\] In order to prove our result, we determine the maximum value of the function \(\Upsilon\) over the rectangular region \(\Omega=[0,2]\times[0,1].\) We consider two cases. 1. First, the boundary points of \(\Omega\) are considered. A simple calculation gives \[\Upsilon(0,x)=-\frac{x^{2}}{16}\leq 0,\quad\Upsilon(2,x)=\frac{95}{256},\quad\Upsilon(p,0)=\frac{1}{4096}(-49p^{4}+576p^{2})\leq\frac{95}{256},\] \[\text{and}\quad\Upsilon(p,1)=\frac{1}{4096}(-121p^{4}+928p^{2}-256)\leq\frac{95}{256}.\] 2. Next, the interior points of \(\Omega\) are considered. A critical point of \(\Upsilon\) is a solution of the system of equations \(\partial\Upsilon(p,x)/\partial p=0\) and \(\partial\Upsilon(p,x)/\partial x=0.\) The equation \(\partial\Upsilon(p,x)/\partial x=0\) gives \(x=7p^{2}/(4(4-p^{2}))=x_{p}\in(0,1)\), which holds for \(p<\sqrt{16/11}\in(0,2).\) Further, substituting the value \(x_{p}\) into \(\partial\Upsilon(p,x)/\partial p=0\), we get \(p^{5}-8p^{3}+16p=0\), which is not possible for \(p\in(0,2).\) Thus, the function \(\Upsilon\) has no maximum value in the interior of \(\Omega.\) From cases (1) and (2), we conclude the desired upper bound \(95/256\) on \(T_{2,1}(F_{f}/\gamma).\) Using the inequality \(-\Re(\xi)\geq-|\xi|\) and setting \(x=|\xi|\) in (2.7), we get \[T_{2,1}(F_{f}/\gamma) \geq\frac{1}{4096}(-49p^{4}+576p^{2}-56p^{2}(4-p^{2})x-16(4-p^{2})^{2}x^{2})\] \[=\Psi(p,x).\] On the boundary of \(\Omega,\) a simple calculation on \(\Psi(p,x)\) yields \[\Psi(0,x)=-\frac{x^{2}}{16}\geq-\frac{1}{16},\quad\Psi(2,x)=\frac{95}{256},\quad\Psi(p,0)=\frac{1}{4096}(-49p^{4}+576p^{2})\geq 0,\] \[\text{and}\;\;\Psi(p,1)=\frac{1}{4096}(-9p^{4}+480p^{2}-256)\geq-\frac{1}{16}.\] In the interior of \(\Omega,\) we note that \(\partial\Psi(p,x)/\partial x=-((4-p^{2})(56p^{2}+32x(4-p^{2})))/4096\neq 0.\) Hence, the function \(\Psi\) has no minimum value in the interior of \(\Omega.\) Therefore, we conclude the desired lower bound \(-1/16\) on \(T_{2,1}(F_{f}/\gamma).\) In the next result, we compute the sharp bounds on \(|\Gamma_{i}|\), \(i=1,2,3\), for the functions \(f\in\mathcal{F}.\) **Theorem 2.2**.: _Let the function \(f(z)=z+\sum_{n=2}^{\infty}a_{n}z^{n}\) be in the class \(\mathcal{F}\).
Then_ \[|\Gamma_{1}|\leq\frac{3}{4},\quad|\Gamma_{2}|\leq\frac{11}{16},\quad|\Gamma_{3}|\leq\frac{7}{8}.\] _All inequalities are sharp for the function \(f_{1}\) given in (1.1)._ Proof.: As before, if the function \(f\in\mathcal{F},\) then there exists a Schwarz function \(w(z)=\sum_{k=1}^{\infty}c_{k}z^{k}\) in \(|z|<1\) such that \(1+\frac{zf^{\prime\prime}(z)}{f^{\prime}(z)}=\frac{1+2w(z)}{1-w(z)}.\) A simple calculation gives \[\frac{1+2w(z)}{1-w(z)}=1 +3c_{1}z+3(c_{1}^{2}+c_{2})z^{2}+3(c_{1}^{3}+2c_{1}c_{2}+c_{3})z^{3}\] \[+3(c_{1}^{4}+3c_{1}^{2}c_{2}+c_{2}^{2}+2c_{1}c_{3}+c_{4})z^{4}+\cdots \tag{2.8}\] On comparing (2.2) and (2.8), the initial coefficients in terms of the Schwarz function become \[a_{2}= \frac{3}{2}c_{1}, \tag{2.9}\] \[a_{3}= \frac{1}{2}(4c_{1}^{2}+c_{2}),\] (2.10) \[a_{4}= \frac{1}{8}(20c_{1}^{3}+13c_{1}c_{2}+2c_{3}). \tag{2.11}\] Using Lemma 1.1, from (1.7) and (2.9), we get \(|\Gamma_{1}|=3|c_{1}|/4\leq 3/4\). Further, using (2.9) and (2.10) in (1.8), we get \[|\Gamma_{2}|=\frac{1}{16}|-11c_{1}^{2}+4c_{2}|.\] Using Lemma 1.1, we get \[|\Gamma_{2}|\leq\frac{1}{16}(7|c_{1}|^{2}+4)\leq\frac{11}{16}.\] In view of (2.9), (2.10), (2.11) and (1.9), we have \[|\Gamma_{3}| =\frac{1}{48}\bigg{|}42c_{1}^{3}-33c_{1}c_{2}+6c_{3}\bigg{|}\] \[\leq\frac{1}{48}(42|c_{1}|^{3}+33|c_{1}||c_{2}|+6|c_{3}|).\] By making use of Lemma 1.1, we get \[|\Gamma_{3}|\leq\frac{1}{48}\bigg{(}42|c_{1}|^{3}+33|c_{1}||c_{2}|+6\Big{(}1-|c_{1}|^{2}-\frac{|c_{2}|^{2}}{1+|c_{1}|}\Big{)}\bigg{)}=\chi(|c_{1}|,|c_{2}|).\] Next, we find the maximum value of the function \(\chi\) over the region \(\Lambda=\{(|c_{1}|,|c_{2}|):|c_{1}|\leq 1,|c_{2}|\leq 1-|c_{1}|^{2}\}.\) The equation \(\partial\chi/\partial|c_{2}|=0\) gives \(|c_{2}|=(33|c_{1}|(1+|c_{1}|))/12\in(0,1)\) when \(|c_{1}|\in(0,\frac{-33+\sqrt{1221}}{66}).\) On substituting this value of \(|c_{2}|\) in the equation \[\frac{\partial\chi}{\partial|c_{1}|}=\frac{1}{48}\bigg{(}126|c_{1}|^{2}-12|c_{1}|+33|c_{2}|+\frac{6|c_{2}|^{2}}{(1+|c_{1}|)^{2}}\bigg{)}=0,\] we get \(6291|c_{1}|^{2}+1890|c_{1}|=0,\) which is not possible. Therefore, the function \(\chi\) has no maximum value in the interior of \(\Lambda.\) The continuity of the function \(\chi\) over the compact region \(\Lambda\) ensures that the maximum value of \(\chi\) is attained on the boundary of \(\Lambda.\) Therefore, we have 1. \(\chi(0,|c_{2}|)=(1-|c_{2}|^{2})/8\leq 1/8,\) for all \(0\leq|c_{2}|\leq 1,\) 2. \(\chi(|c_{1}|,0)=(42|c_{1}|^{3}-6|c_{1}|^{2}+6)/48\leq 7/8\) for all \(0\leq|c_{1}|\leq 1,\) 3. \(\chi(|c_{1}|,1-|c_{1}|^{2})=(3|c_{1}|^{3}+39|c_{1}|)/48\leq 7/8\) for all \(0\leq|c_{1}|\leq 1.\) Thus, the maximum value of \(\chi\) over \(\Lambda\) is \(7/8.\) The next result provides the sharp estimate on \(|S_{3}|\) for the functions \(f\in\mathcal{F}.\) **Theorem 2.3**.: _Let the function \(f(z)=z+\sum_{n=2}^{\infty}a_{n}z^{n}\) be in the class \(\mathcal{F}\). Then_ \[|S_{3}|\leq 3.\] _The inequality is sharp for the function \(f_{2}\) given by (1.2)._ Proof.: In view of (1.10), (2.9) and (2.10), we have \[|S_{3}|=\frac{3}{2}|-c_{1}^{2}+2c_{2}|\leq\frac{3}{2}(|c_{1}|^{2}+2|c_{2}|).\] By Lemma 1.1, for \(0\leq|c_{1}|\leq 1\), we conclude the result as \[|S_{3}|\leq\frac{3}{2}(-|c_{1}|^{2}+2)\leq 3.\] Next, we compute the sharp bounds on \(|A_{3}-A_{2}|\) and \(|\Gamma_{3}-\Gamma_{2}|\) for the functions \(f\in\mathcal{F}\). **Theorem 2.4**.: _Let the function \(f(z)=z+\sum_{n=2}^{\infty}a_{n}z^{n}\) be in the class \(\mathcal{F}\). Then_ (a) \(|A_{3}-A_{2}|\leq 4,\) (b)
\(|\Gamma_{3}-\Gamma_{2}|\leq\frac{25}{16}.\) _Both inequalities are sharp for the function \(f_{1}\) given by (1.1)._ Proof.: (a) Using (1.5), (2.9) and (2.10), we have \[|A_{3}-A_{2}|=\frac{1}{2}|5c_{1}^{2}+3c_{1}-c_{2}|.\] On applying Lemma 1.1, we get \[|A_{3}-A_{2}|\leq\frac{1}{2}(4|c_{1}|^{2}+3|c_{1}|+1)\leq 4,\quad|c_{1}|\leq 1.\] (b) In view of (1.8), (1.9), (2.9), (2.10) and (2.11), we have \[|\Gamma_{3}-\Gamma_{2}|=\frac{1}{48}\bigg{|}-42c_{1}^{3}+33c_{1}c_{2}-33c_{1}^{2}+12c_{2}-6c_{3}\bigg{|}.\] Using Lemma 1.1, we get \[|\Gamma_{3}-\Gamma_{2}| \leq\frac{1}{48}\bigg{(}42|c_{1}|^{3}+33|c_{1}||c_{2}|+33|c_{1}|^{2}+12|c_{2}|+6\Big{(}1-|c_{1}|^{2}-\frac{|c_{2}|^{2}}{1+|c_{1}|}\Big{)}\bigg{)}\] \[=M(|c_{1}|,|c_{2}|).\] Next, we find the maximum value of the function \(M\) over the region \(\Lambda=\{(|c_{1}|,|c_{2}|):|c_{1}|\leq 1,|c_{2}|\leq 1-|c_{1}|^{2}\}.\) First, let us consider the boundary of \(\Lambda\). We have 1. \(M(0,|c_{2}|)=(-|c_{2}|^{2}+2|c_{2}|+1)/8\leq 1/4,\quad 0\leq|c_{2}|\leq 1,\) 2. \(M(|c_{1}|,0)=(42|c_{1}|^{3}+27|c_{1}|^{2}+6)/48\leq 25/16,\quad 0\leq|c_{1}|\leq 1,\) 3. \(M(|c_{1}|,1-|c_{1}|^{2})=(3|c_{1}|^{3}+21|c_{1}|^{2}+39|c_{1}|+12)/48\leq 25/16,\quad 0\leq|c_{1}|\leq 1.\) It is noted that \(\partial M/\partial|c_{2}|=\Big{(}33|c_{1}|+12\big{(}1-|c_{2}|/(1+|c_{1}|)\big{)}\Big{)}/48\neq 0\) in the interior of \(\Lambda\). Thus, the function \(M\) has no critical point in the interior of \(\Lambda\), and we conclude the result. ## 3. **The class \(\mathcal{G}\)** In this section, we first investigate the sharpness of the bounds on the second-order Hermitian-Toeplitz determinant \(T_{2,1}(F_{f}/\gamma)\) for the functions \(f\in\mathcal{G}\). **Theorem 3.1**.: _Let \(f(z)=z+\sum_{n=2}^{\infty}a_{n}z^{n}\) be in the class \(\mathcal{G}.\) Then_ \[-\frac{1}{144}\leq T_{2,1}(F_{f}/\gamma)\leq\frac{15}{256}.\] _The upper and lower bounds are sharp for the functions \(g_{1}\) and \(g_{2}\) given by (1.3) and (1.4), respectively._ Proof.: Let the function \(f\in\mathcal{G}.\) Then \(1+\frac{zf^{\prime\prime}(z)}{f^{\prime}(z)}=\frac{1-2w(z)}{1-w(z)}\), where \(w(z)\) is a Schwarz function defined on \(\mathbb{D}.\) Since \(p(z)=(1+w(z))/(1-w(z))\in\mathcal{P}\), we have \[1+\frac{zf^{\prime\prime}(z)}{f^{\prime}(z)}=\frac{3-p(z)}{2}. \tag{3.1}\] The series expansion of the right-hand side of (3.1) is \[\frac{3-p(z)}{2}=1-\frac{p_{1}z}{2}-\frac{p_{2}z^{2}}{2}-\frac{p_{3}z^{3}}{2}-\frac{p_{4}z^{4}}{2}-\cdots \tag{3.2}\] On comparing (2.2) and (3.2), we get \[a_{2}= -\frac{p_{1}}{4}, \tag{3.3}\] \[a_{3}= \frac{1}{24}(p_{1}^{2}-2p_{2}),\] (3.4) \[a_{4}= \frac{1}{192}(-p_{1}^{3}+6p_{1}p_{2}-8p_{3}). \tag{3.5}\] On putting the values of \(a_{2}\) and \(a_{3}\) from (3.3) and (3.4) in (1.11), we have \[T_{2,1}(F_{f}/\gamma)=\frac{1}{16}\bigg{(}-\frac{p_{1}^{4}}{256}+\frac{p_{1}^{2}}{4}+\frac{p_{1}^{2}}{96}\Re(p_{1}^{2}-2p_{2})-\frac{1}{144}|p_{1}^{2}-2p_{2}|^{2}\bigg{)}.\] Using Lemma 1.2, the above expression becomes \[T_{2,1}(F_{f}/\gamma)=\frac{1}{36864}(-9p_{1}^{4}+576p_{1}^{2}-24p_{1}^{2}(4-p_{1}^{2})\Re(\xi)-16(4-p_{1}^{2})^{2}|\xi|^{2}). \tag{3.6}\] Using \(-\Re(\xi)\leq|\xi|\), and setting \(x=|\xi|\in[0,1]\) (with \(p_{1}=p\)), we get \[T_{2,1}(F_{f}/\gamma)\leq\frac{1}{36864}(-9p^{4}+576p^{2}+24p^{2}(4-p^{2})x-16(4-p^{2})^{2}x^{2})=\Phi(p,x).\] In order to prove our result, we determine the maximum value of the function \(\Phi\) over the rectangular region \(\Omega=[0,2]\times[0,1]\). 1.
At the boundary points of \(\Omega\), it is noted that \[\Phi(0,x)=-\frac{x^{2}}{144}\leq 0,\quad\Phi(2,x)=\frac{15}{256},\quad\Phi(p,0)=\frac{1}{36864}(-9p^{4}+576p^{2})\leq\frac{15}{256},\] and \(\Phi(p,1)=\frac{1}{36864}(-49p^{4}+800p^{2}-256)\leq\frac{15}{256}\). 2. A critical point of \(\Phi\) in the interior of \(\Omega\) must solve the system of equations \(\partial\Phi/\partial p=0\) and \(\partial\Phi/\partial x=0\). The equation \(\partial\Phi/\partial x=0\) gives \(x=3p^{2}/(4(4-p^{2}))=x_{p}\in(0,1)\), which holds for \(p<\sqrt{16/7}\in(0,2)\). Next, on putting the value of \(x_{p}\) in the equation \(\partial\Phi/\partial p=0\), we obtain the equation \(p^{5}-8p^{3}+16p=0\), which is not possible for any \(p\in(0,2)\). Therefore, the function \(\Phi\) has no maximum value in the interior of \(\Omega\). Hence, from cases (1) and (2), we conclude that the best upper bound on \(T_{2,1}(F_{f}/\gamma)\) is \(15/256\). Next, on using \(-\Re(\xi)\geq-|\xi|\) and setting \(x=|\xi|\) in (3.6), we get \[T_{2,1}(F_{f}/\gamma)\geq\frac{1}{36864}(-9p^{4}+576p^{2}-24p^{2}(4-p^{2})x-16(4-p^{2})^{2}x^{2})=N(p,x).\] Next, we determine the minimum value of the function \(N(p,x)\) over the rectangular region \(\Omega=[0,2]\times[0,1]\). First, we consider the boundary points of \(\Omega\): \[N(0,x)=-\frac{x^{2}}{144}\geq-\frac{1}{144},\quad N(2,x)=\frac{15}{256},\quad N(p,0)=\frac{1}{36864}(-9p^{4}+576p^{2})\geq 0,\] \[\text{and }N(p,1)=\frac{1}{36864}(-p^{4}+608p^{2}-256)\geq-\frac{1}{144}.\] Since \(\partial N/\partial x=-((4-p^{2})(24p^{2}+32x(4-p^{2})))/36864\neq 0\) for all \((p,x)\in(0,2)\times(0,1)\), the function \(N\) has no minimum value in the interior of \(\Omega\). Therefore, we conclude that the minimum value of the function \(N\) over \(\Omega\) is \(-1/144\). Next, we compute the sharp bounds on the initial logarithmic inverse coefficients \(|\Gamma_{i}|\), \(i=1,2,3\), for the functions \(f\in\mathcal{G}\). **Theorem 3.2**.: _Let \(f(z)=z+\sum_{n=2}^{\infty}a_{n}z^{n}\) be in the class \(\mathcal{G}\). Then_ \[|\Gamma_{1}|\leq\frac{1}{4},\quad|\Gamma_{2}|\leq\frac{3}{16},\quad|\Gamma_{3}|\leq\frac{5}{24}.\] _All three inequalities are sharp for the function \(g_{1}\) given by (1.3)._ Proof.: Since the function \(f\in\mathcal{G}\), there exists a Schwarz function \(w(z)=\sum_{k=1}^{\infty}c_{k}z^{k}\) in \(|z|<1\) such that \(1+\frac{zf^{\prime\prime}(z)}{f^{\prime}(z)}=\frac{1-2w(z)}{1-w(z)}.\) A simple calculation yields \[\frac{1-2w(z)}{1-w(z)}=1 -c_{1}z-(c_{1}^{2}+c_{2})z^{2}-(c_{1}^{3}+2c_{1}c_{2}+c_{3})z^{3}\] \[-(c_{1}^{4}+3c_{1}^{2}c_{2}+c_{2}^{2}+2c_{1}c_{3}+c_{4})z^{4}-\cdots \tag{3.7}\] On comparing (2.2) and (3.7), the initial coefficients become \[a_{2}= -\frac{c_{1}}{2}, \tag{3.8}\] \[a_{3}= -\frac{c_{2}}{6},\] (3.9) \[a_{4}= -\frac{1}{24}(c_{1}c_{2}+2c_{3}). \tag{3.10}\] In view of (1.7) and (3.8), we have \(|\Gamma_{1}|=|c_{1}|/4\leq 1/4\).
Further, from (3.8), (3.9) and (1.8), we have \[|\Gamma_{2}|=\frac{1}{48}|9c_{1}^{2}+4c_{2}|.\] By Lemma 1.1, we get \(|\Gamma_{2}|\leq(5|c_{1}|^{2}+4)/48\leq 3/16.\) Next, in view of (3.8), (3.9), (3.10) and (1.9), we have \[|\Gamma_{3}| =\frac{1}{48}\bigg{|}10c_{1}^{3}+9c_{1}c_{2}+2c_{3}\bigg{|}\] \[\leq\frac{1}{48}(10|c_{1}|^{3}+9|c_{1}||c_{2}|+2|c_{3}|).\] Using Lemma 1.1, we get \[|\Gamma_{3}|\leq\frac{1}{48}\bigg{(}10|c_{1}|^{3}+9|c_{1}||c_{2}|+2\Big{(}1-|c_{1}|^{2}-\frac{|c_{2}|^{2}}{1+|c_{1}|}\Big{)}\bigg{)}=S(|c_{1}|,|c_{2}|).\] To determine the maximum value of the function \(S\) over the region \(\Lambda=\{(|c_{1}|,|c_{2}|):|c_{1}|\leq 1,|c_{2}|\leq 1-|c_{1}|^{2}\}\), we consider the following cases. 1. On the boundary of \(\Lambda\), we have 1. \(S(0,|c_{2}|)=(1-|c_{2}|^{2})/24\leq 1/24\), for all \(0\leq|c_{2}|\leq 1\), 2. \(S(|c_{1}|,0)=(10|c_{1}|^{3}-2|c_{1}|^{2}+2)/48\leq 5/24\) for all \(0\leq|c_{1}|\leq 1\), 3. \(S(|c_{1}|,1-|c_{1}|^{2})=(-|c_{1}|^{3}+11|c_{1}|)/48\leq 5/24\) for all \(0\leq|c_{1}|\leq 1\). 2. We consider the interior of \(\Lambda\). A simple calculation gives \[\frac{\partial S}{\partial|c_{1}|}=\frac{1}{48}\bigg{(}30|c_{1}|^{2}-4|c_{1}|+9|c_{2}|+\frac{2|c_{2}|^{2}}{(1+|c_{1}|)^{2}}\bigg{)}=0,\] and \[\frac{\partial S}{\partial|c_{2}|}=\frac{1}{48}\bigg{(}9|c_{1}|-\frac{4|c_{2}|}{1+|c_{1}|}\bigg{)}=0.\] Arguing along the same lines as in Theorem 2.2, we conclude that the system of equations \(\partial S/\partial|c_{1}|=0\) and \(\partial S/\partial|c_{2}|=0\) has no common solution in the interior of \(\Lambda\). Hence, the function \(S\) has no maximum value in the interior of \(\Lambda\). From cases (1) and (2), we obtain the desired estimate on \(|\Gamma_{3}|\). The next result provides the sharp bounds on the third- and fourth-order Schwarzian derivatives \(|S_{3}|\) and \(|S_{4}|\), respectively, for the functions \(f\in\mathcal{G}\). **Theorem 3.3**.: _Let the function \(f(z)=z+\sum_{n=2}^{\infty}a_{n}z^{n}\) be in the class \(\mathcal{G}.\) Then_ \[|S_{3}|\leq\frac{3}{2}\,\text{ and }\,|S_{4}|\leq 6.\] _Both inequalities are sharp for the function \(g_{1}\) given by (1.3)._ Proof.: (a) In view of (1.10), (3.8) and (3.9), we have \[|S_{3}|=\frac{1}{2}|3c_{1}^{2}+2c_{2}|\leq\frac{1}{2}(3|c_{1}|^{2}+2|c_{2}|).\] Using Lemma 1.1, the above expression becomes \[|S_{3}|\leq\frac{1}{2}(|c_{1}|^{2}+2)\leq\frac{3}{2},\quad 0\leq|c_{1}|\leq 1.\] (b) In view of (1.10) and (3.8)-(3.10), we get \[|S_{4}|=|6c_{1}^{3}+7c_{1}c_{2}+2c_{3}|\leq 6|c_{1}|^{3}+7|c_{1}||c_{2}|+2|c_{3}|.\] From Lemma 1.1, the above expression becomes \[|S_{4}|\leq 6|c_{1}|^{3}+7|c_{1}||c_{2}|+2\Big{(}1-|c_{1}|^{2}-\frac{|c_{2}|^{2}}{1+|c_{1}|}\Big{)}=\delta(|c_{1}|,|c_{2}|).\] Next, we find the maximum value of the function \(\delta\) over the region \(\Lambda=\{(|c_{1}|,|c_{2}|):|c_{1}|\leq 1,|c_{2}|\leq 1-|c_{1}|^{2}\}.\) On the boundary of \(\Lambda\), it is noted that * \(\delta(0,|c_{2}|)=2(1-|c_{2}|^{2})\leq 2,\) for all \(0\leq|c_{2}|\leq 1,\) * \(\delta(|c_{1}|,0)=6|c_{1}|^{3}-2|c_{1}|^{2}+2\leq 6\) for all \(0\leq|c_{1}|\leq 1,\) * \(\delta(|c_{1}|,1-|c_{1}|^{2})=-3|c_{1}|^{3}+9|c_{1}|\leq 6\) for all \(0\leq|c_{1}|\leq 1.\) Next, we consider the interior of \(\Lambda\).
The equation \(\partial\delta/\partial|c_{2}|=0\) gives \(|c_{2}|=(7|c_{1}|(1+|c_{1}|))/4\in(0,1)\), which holds for \(|c_{1}|\in(0,\frac{-7+\sqrt{161}}{14}).\) Further, substituting this value of \(|c_{2}|\) into \[\frac{\partial\delta}{\partial|c_{1}|}=18|c_{1}|^{2}-4|c_{1}|+7|c_{2}|+\frac{2|c_{2}|^{2}}{(1+|c_{1}|)^{2}}=0\] gives \(291|c_{1}|^{2}+66|c_{1}|=0,\) which is not possible. Therefore, the function \(\delta\) has no maximum value in the interior of \(\Lambda\), and the desired bound \(|S_{4}|\leq 6\) follows. ## Acknowledgement The first and the third authors express their thanks to the Institute of Eminence, University of Delhi, Delhi, India-110007 for providing financial support for this research under grant number Ref. No./IoE/2023-24/12/FRP. The second author would like to thank the UGC Non-NET Fellowship for financial support vide Ref. No. Sch/139/Non-NET/Ext-156/2022-2023/722.
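The sharp values quoted in Theorems 3.1-3.3 admit a quick numerical confirmation; the following minimal sketch assumes the extremal function \(g_{1}(z)=z-\frac{1}{2}z^{2}\) of (1.3), i.e., \(a_{2}=-1/2\) and \(a_{3}=a_{4}=0\), together with the coefficient formulas (1.7)-(1.11):

```python
# Evaluate the coefficient functionals of Section 1 on g1(z) = z - z^2/2
# and compare with the sharp values in Theorems 3.1-3.3 of the class G.
from fractions import Fraction as Fr

a2, a3, a4 = Fr(-1, 2), Fr(0), Fr(0)

Gamma1 = -a2 / 2                                       # Eq. (1.7)
Gamma2 = -(a3 - Fr(3, 2) * a2**2) / 2                  # Eq. (1.8)
Gamma3 = -(a4 - 4 * a2 * a3 + Fr(10, 3) * a2**3) / 2   # Eq. (1.9)
S3 = 6 * (a3 - a2**2)                                  # Eq. (1.10)
S4 = 24 * (a4 - 3 * a2 * a3 + 2 * a2**3)               # Eq. (1.10)
# Eq. (1.11); a2, a3 are real here, so Re(a3) = a3 and |a3|^2 = a3^2:
T21 = (-a2**4 + 4 * a2**2 + 4 * a2**2 * a3 - 4 * a3**2) / 16

print(Gamma1, Gamma2, Gamma3)  # 1/4 3/16 5/24 -> bounds of Theorem 3.2
print(S3, S4)                  # -3/2 -6       -> |S_3| <= 3/2, |S_4| <= 6
print(T21)                     # 15/256        -> upper bound of Theorem 3.1
```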
The goal of this paper is to establish the best possible estimates on coefficient functionals, such as the second-order Hermitian-Toeplitz determinant involving logarithmic coefficients, the initial logarithmic inverse coefficients, and the initial-order Schwarzian derivatives, of Ozaki close-to-convex functions.
2309.04619
High-entropy effect at rare-earth site in DyNi
We report the structural and magnetic properties of RNi (R=Dy, Tb$_{1/3}$Dy$_{1/3}$Ho$_{1/3}$, and Gd$_{1/5}$Tb$_{1/5}$Dy$_{1/5}$Ho$_{1/5}$Er$_{1/5}$) to investigate the high-entropy effect at the rare-earth site. The lattice parameters are almost unchanged by the increase of configurational entropy, which is due to the successive partial substitution of Dy by a pair of rare-earth elements located on both sides of Dy in the periodic table. All compounds exhibit ferromagnetic ground states. The replacement of Dy with Tb+Ho, which does not introduce magnetic interactions in competition with those of Dy, does not affect the magnetic ordering temperature. Although (Gd$_{1/5}$Tb$_{1/5}$Dy$_{1/5}$Ho$_{1/5}$Er$_{1/5}$)Ni shows a Curie temperature close to that of DyNi, an additional magnetic anomaly, which would be a spin reorientation, is observed, probably due to the introduction of competing magnetic interactions between the R=Gd and Er compounds and the R=Tb, Dy, and Ho ones. We have also assessed the magnetocaloric effect, and the configurational entropy dependence of the magnetic entropy change reflects that of the temperature derivative of the magnetic susceptibility. Our analysis suggests the possibility of enhancing magnetocaloric properties by designing the anisotropy of rare-earth magnetic moments in the high-entropy state.
Yuito Nakamura, Koshin Takeshita, Terukazu Nishizaki, Jiro Kitagawa
2023-09-08T22:13:50
http://arxiv.org/abs/2309.04619v1
# High-entropy effect at rare-earth site in DyNi ###### Abstract We report the structural and magnetic properties of RNi (R=Dy, Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\), and Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\)) to investigate the high-entropy effect at the rare-earth site. The lattice parameters are almost unchanged by the increase of configurational entropy, which is due to the successive partial substitution of Dy by a pair of rare-earth elements located on both sides of Dy in the periodic table. All compounds exhibit ferromagnetic ground states. The replacement of Dy with Tb+Ho, which does not introduce magnetic interactions in competition with those of Dy, does not affect the magnetic ordering temperature. Although (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni shows a Curie temperature close to that of DyNi, an additional magnetic anomaly, which would be a spin reorientation, is observed, probably due to the introduction of competing magnetic interactions between the R=Gd and Er compounds and the R=Tb, Dy, and Ho ones. We have also assessed the magnetocaloric effect, and the configurational entropy dependence of the magnetic entropy change reflects that of the temperature derivative of the magnetic susceptibility. Our analysis suggests the possibility of enhancing magnetocaloric properties by designing the anisotropy of rare-earth magnetic moments in the high-entropy state. ## I Introduction High-entropy alloys (HEAs) are unique systems composed of multiple elements with near-equimolar ratios. They offer a vast compositional space and are a promising platform for studying novel phenomena [1; 2; 3]. Additionally, they have attracted considerable attention due to their rich functionalities, such as high strength, energy storage, radiation protection, magnetism, superconductivity, and biocompatibility [4; 5; 6; 7; 8; 9; 10; 11; 12]. The HEA concept has now been introduced into intermetallic compounds (high-entropy intermetallic compounds). Numerous rare-earth intermetallic compounds exhibit magnetic moments solely attributed to the rare-earth elements. However, the influence of a high-entropy state at the rare-earth site on the magnetic ordering temperatures of such systems remains insufficiently explored. We are primarily concerned with the robustness of the magnetic ordering of rare-earth atoms in the presence of the high-entropy state. In this study, we focus on the well-defined RNi (R: rare earth) system, wherein the magnetic ordering temperatures and magnetic structures have been elucidated. The highest magnetic ordering temperature in the series is 71 K, in GdNi [13]. This ordering temperature is moderately low compared to those of the R\({}_{2}\)In or R\({}_{6}\)CoTe\({}_{2}\) series, where Gd\({}_{2}\)In and Gd\({}_{6}\)CoTe\({}_{2}\) show the highest magnetic ordering temperatures of 190 K and 220 K, respectively [14]. Hence, we anticipate that increasing atomic disorder could destroy the magnetic ordering in all RNi compounds. Additionally, we are concerned with the potential modulation of magnetocaloric effects by introducing a high-entropy state. Certain RNi compounds demonstrate a significant magnetocaloric effect in proximity to the temperature of liquid hydrogen [13; 15]. This observation holds promise for magnetic refrigeration-based hydrogen liquefaction and is significant for realizing a hydrogen society. The magnetocaloric effects of HEAs have garnered considerable attention [16; 17; 18; 9; 19].
Notably, the equimolar quinary alloy GdTbDyHoEr exhibits a remarkable magnetocaloric effect [20]. A recent investigation into the configurational entropy dependence of magnetocaloric effects in rare-earth HEAs has revealed that the magnetic properties depend on the intrinsic magnetic characteristics of the rare-earth elements [21]. Another study [19] suggests a reduction in the peak value of the magnetic entropy change with an increase in configurational entropy in HEAs containing Dy. Transition-metal-based HEAs, such as FeMnNiGeSi, have emerged as a novel material class enabling the manipulation of magnetocaloric effects by introducing magnetocaloric transformations [22]. To the best of our knowledge, reports on the magnetocaloric effects of crystalline high-entropy rare-earth intermetallic compounds are rare, while there are many reports on amorphous HEAs containing rare-earth and transition-metal elements [19]. It is well-established that the lattice parameters and the number of 4\(f\) electrons significantly impact the magnetic properties of rare-earth intermetallic compounds. Therefore, we examined the configurational entropy dependence of the magnetic properties of DyNi through a successive replacement of Dy with a pair of rare-earth elements located on both sides of Dy in the periodic table: partial replacement by Tb+Ho or Gd+Tb+Ho+Er. Within our replacement sequence, we can maintain the lattice constants and the average number of 4\(f\) electrons. Consequently, we could explore the high-entropy effect at the rare-earth site while keeping the electronic state of DyNi intact. In RNi compounds, GdNi and (Dy, Ho, or Er)Ni crystallize into the orthorhombic CrB-type and the orthorhombic FeB-type structure, respectively [13; 23; 24]. The crystal structure of TbNi remains controversial: a monoclinic structure with the space group \(P2_{1}/m\) or an orthorhombic one [13]. All RNi (R=Gd to Er) compounds are ferromagnets with the Curie temperature \(T_{\rm C}\)=71 K for R=Gd, 67 K for R=Tb, 62 K for R=Dy, 37 K for R=Ho, and 13 K for R=Er, respectively [13; 25]. Despite the changes in crystal structure that occur upon going from R=Gd to R=Tb and from R=Tb to R=Dy, we synthesized DyNi, (Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\))Ni, and (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni, which are predominantly composed of FeB-type structure components. In this paper, we report on the structural and magnetic properties of RNi (R=Dy, Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\), and Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\)). Our findings confirm that the ferromagnetic ordering is robust and that \(T_{\rm C}\) is relatively unaffected by the increase of configurational entropy at the rare-earth site. (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni shows an additional magnetic anomaly below \(T_{\rm C}\), which suggests a possible spin reorientation. We evaluated the configurational entropy dependence of the magnetocaloric effect, which is discussed along with the anisotropy of rare-earth magnetic moments. ## II Materials and Methods Polycrystalline samples were prepared using a homemade arc furnace as detailed in Table 1. The materials used were rare earths (Gd, Tb, Dy, Ho, and Er) (99.9 %) and Ni (99.9 %). The constituent elements with the stoichiometric ratio were melted on a water-cooled Cu hearth under an Ar atmosphere. 
The button-shaped samples were remelted several times and flipped each time to ensure homogeneity. Each as-cast sample was then annealed in an evacuated quartz tube at 800 \({}^{\circ}\)C for four days. Room-temperature X-ray diffraction (XRD) patterns of powdered samples were obtained using an X-ray diffractometer (XRD-7000L, Shimadzu) with Cu-K\(\alpha\) radiation. The temperature dependence of the dc magnetic susceptibility \(\chi_{\rm dc}\) (\(T\)) between 50 K and 300 K was measured using the VSM (vibrating sample magnetometer) option of a VersaLab (Quantum Design). The isothermal magnetization curves between 50 K and 110 K were also taken using the VersaLab. ## III Results and Discussion Figure 1 displays the XRD patterns of DyNi, (Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\))Ni, and (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni, along with the simulation pattern of DyNi with the FeB-type structure taken from the ICSD database (Coll Code: 103332). All experimental patterns match the simulation pattern. As mentioned in the Introduction, GdNi or TbNi crystallizes into a structure different from the FeB-type of RNi (R=Dy, Ho, and Er). However, the FeB-type structure is stabilized when the dominant elements Dy, Ho, and Er are present. We note that extra diffraction peaks assigned to the R\({}_{2}\)O\({}_{3}\) (R=Dy, Tb+Dy+Ho, or Gd+Tb+Dy+Ho+Er) phase are detected (see * in Fig. 1). Figure 1: XRD patterns of DyNi, (Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\))Ni, and (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni. The origin of each pattern is shifted for clarity. Table 1 lists the lattice parameters determined with the help of a Rietveld refinement program [26; 27]. While the \(c\)-axis length is almost independent of the configurational entropy change at the rare-earth site, the \(a\)-axis (the \(b\)-axis) exhibits a slight expansion (contraction) with increasing configurational entropy. Figure 2 depicts \(\chi_{\rm dc}\) (\(T\)) under an external field of 100 Oe for the RNi system. Each sample exhibits a steep increase in \(\chi_{\rm dc}\) below approximately 70 K, which is indicative of ferromagnetic ordering. \(T_{\rm C}\) is defined by the minimum point of the temperature derivative of \(\chi_{\rm dc}\) (see the inset of Fig. 2 and Table 1). This is an effective way to obtain \(T_{\rm C}\) of ferromagnets [28; 29]. DyNi undergoes a ferromagnetic transition at \(T_{\rm C}\)=59 K, which is consistent with the literature data [25]. (Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\))Ni possesses \(T_{\rm C}\)=63 K, slightly enhanced compared to DyNi, and the \(T_{\rm C}\) value remains unchanged in (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni. We note that the \(\chi_{\rm dc}\) (\(T\)) of (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni shows a small anomaly around 57 K, which is discussed later. The results of \(\chi_{\rm dc}\) (\(T\)) indicate that ferromagnetic ordering is resistant to atomic disorder at the rare-earth site. DyNi, HoNi, and ErNi, which possess the orthorhombic FeB-type structure, exhibit a non-collinear magnetic structure below \(T_{\rm C}\) = 62 K, 37 K, and 13 K, respectively [13; 25]. In these compounds, rare-earth magnetic moments have a ferromagnetic arrangement parallel to the \(a\)-axis and an antiferromagnetic arrangement parallel to the \(c\)-axis. The angle between the 
rare-earth moment and the \(a\)-axis is 29\({}^{\circ}\) for DyNi, 25\({}^{\circ}\) for HoNi, or 61\({}^{\circ}\) for ErNi [13; 25]. Although the crystal structures of GdNi and TbNi differ from the FeB-type (GdNi: CrB-type, TbNi: monoclinic or orthorhombic) [13], they are also ferromagnets with \(T_{\rm C}\) = 69 K and 67 K, respectively [13]. The magnetic ordering temperatures of RNi (R=Dy, Ho, and Er) compounds follow the de Gennes scaling, which suggests a weak effect of the energy-level splitting of the \(J\)-multiplet due to the crystalline-electric-field effect [30; 31] at \(T_{\rm C}\). In such a case, the 4\(f\) electron distribution of a single rare-earth ion would be responsible for the magnetic structure [32; 33]. The 4\(f\) electron distribution of a single R\({}^{3+}\) ion (R=Dy or Ho) is oblate, and the direction of the rare-earth magnetic moment is perpendicular to the equatorially expanded 4\(f\)-electron charge cloud [32]. On the other hand, the 4\(f\) electron distribution of a single Er\({}^{3+}\) ion is prolate [32], causing the magnetic moment of the Er ion to be perpendicular to that of the R\({}^{3+}\) ion (R=Dy or Ho). In fact, the magnetic moments of DyNi and HoNi are nearly parallel to the \(a\)-axis, while the direction of the Er\({}^{3+}\) moment tilts toward the \(c\)-axis. The 4\(f\) electron distribution of a single Tb\({}^{3+}\) ion is oblate, the same as for Dy\({}^{3+}\) or Ho\({}^{3+}\). Therefore, the magnetic structure of (Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\))Ni would not be significantly different from that of DyNi. However, in (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni, competition between easy magnetization axes might occur, potentially leading to a spin reorientation as observed in (Gd\({}_{0.38}\)Tb\({}_{0.27}\)Dy\({}_{0.20}\)Ho\({}_{0.15}\))Mn\({}_{5}\)Sn [34]. As shown in Fig. 2, \(\chi_{\rm dc}\) (\(T\)) of (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni shows a small anomaly around 57 K, which is clearly detected by d\(\chi_{\rm dc}\)/d\(T\) as a double-dip structure (see the inset of Fig. 2). We speculate that the anomaly at the lower temperature of 57 K signals a change of magnetic structure such as a spin reorientation. The isothermal magnetization curves (\(M\): magnetization and \(H\): external field) measured around \(T_{\rm C}\) are shown in Fig. 3(a) for DyNi, Fig. 3(b) for (Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\))Ni, and Fig. 3(c) for (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni, respectively. In each sample, the steep increase of magnetization at low external fields below approximately \(T_{\rm C}\) supports the ferromagnetic ground state. We note that no noticeable irreversibility is observed in any of the samples. Figure 3(d) provides a comparison of the magnetization curves among the three compounds at temperatures of 50 K, 70 K, 90 K, and 110 K. With decreasing temperature, the \(M\)-\(H\) curve of (Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\))Ni deviates from the other curves, albeit displaying a resemblance to that of (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni. As illustrated in Fig. 2, \(\chi_{\rm dc}\) (\(T\)) of (Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\))Ni is smaller compared to the other two compounds at low temperatures, indicating a relatively weaker magnetic response in (Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\))Ni. 
Consequently, this might lead to the lowest \(M\) for (Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\))Ni at fixed temperature \(T\) and field \(H\). It should be noted that the variation in magnetic moment associated with each sample is another contributing factor to the differences in \(M\) at fixed \(T\) and \(H\). Further investigation is required to elucidate each element's specific contribution. Moreover, Fig. 3(d) reveals the intersection of the magnetization curves of DyNi and (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni at 50 K or 70 K. Such phenomena may be attributed to changes in magnetic anisotropy energy and saturation magnetic moment. The magnetic entropy change \(\Delta S_{\rm mag}\) (\(T\),\(H\)) is obtained by using Maxwell's relation as follows: \[\Delta S_{\rm mag}(T,H)=\int_{0}^{H_{\rm max}}\left[\frac{\partial M(T,H)}{ \partial T}\right]_{H}dH, \tag{1}\] where \(H_{\rm max}\) is the maximum external field. The temperature dependences of -\(\Delta S_{\rm mag}\) (\(T\)) at \(H_{\rm max}\)=10 kOe, 20 kOe, and 30 kOe for the RNi system are summarized in Fig. 4(a). All samples show a maximum of -\(\Delta S_{\rm mag}\) (\(T\)) at approximately \(T_{\rm C}\). According to Eq. (1), \(\Delta S_{\rm mag}\) (\(T\)) is influenced by \([\frac{\partial M(T,H)}{\partial T}]_{H}\). This implies that a significant change in \(M\) with decreasing temperature at a fixed \(H\) is necessary to enhance \(\Delta S_{\rm mag}\) (\(T\)). Therefore, it is worthwhile to compare -\(\Delta S_{\rm mag}\) (\(T\)) with \(-\)d\(\chi_{\rm dc}\)/d\(T\) (refer to Figs. 4(a) and 4(b)), as the latter represents the change in the initial slope of the \(M\)-\(H\) curve resulting from temperature variations. A larger value of \(-\)d\(\chi_{\rm dc}\)/d\(T\) has the potential to contribute to a more significant change in \(M\) when the temperature changes. The dependence of \(-\)d\(\chi_{\rm dc}\)/d\(T\) on configurational entropy resembles that of -\(\Delta S_{\rm mag}\) (\(T\)), particularly at \(H_{\rm max}\)=10 kOe. At each \(H_{\rm max}\), while the peak value of -\(\Delta S_{\rm mag}\) for (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni diminishes compared to DyNi, the temperature dependence of -\(\Delta S_{\rm mag}\) becomes broader. This broadening is advantageous for magnetic refrigeration applications. The presence of a spin reorientation below \(T_{\rm C}\) contributes to this advantage, as the modification in magnetic structure gives rise to an additional \(-\)d\(\chi_{\rm dc}\)/d\(T\). As mentioned earlier, this spin reorientation likely arises from the interaction between rare-earth elements with distinct magnetic anisotropy. Consequently, the present study suggests the potential enhancement of magnetocaloric properties by manipulating the anisotropy of rare-earth magnetic moments in the high-entropy state. Figure 2: Temperature dependences of \(\chi_{\rm dc}\) of the RNi system. The external field is 100 Oe. The inset is the temperature derivative of \(\chi_{\rm dc}\) for each sample. In this discussion, we aim to compare the magnetocaloric effect of RNi with that of rare-earth HEAs. Specifically, we examine the peak value of -\(\Delta S_{\rm mag}\), denoted as -\(\Delta S_{\rm mag}^{\rm peak}\). In the RNi system, the configurational entropy dependence of -\(\Delta S_{\rm mag}^{\rm peak}\) exhibits a non-systematic trend. 
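As an aside, Eq. (1) and the power-law extrapolation used below are straightforward to evaluate numerically. The following Python sketch is our own illustration, not part of the original study: the magnetization surface `M` is a synthetic stand-in for the measured isotherms, and every name in it is an assumption made for the example.

```python
import numpy as np

# Synthetic, illustrative M(T, H) surface: a stand-in for the measured
# isothermal magnetization curves (assumed shape, not real data).
T = np.linspace(50.0, 110.0, 61)           # temperature grid (K)
H = np.linspace(0.0, 30.0e3, 121)          # field grid (Oe), 0 to 30 kOe
TT, HH = np.meshgrid(T, H, indexing="ij")
M = 100.0 * np.tanh(HH / (2.0e3 * (1.0 + 0.1 * np.abs(TT - 60.0))))

def minus_dS_mag(T, H, M, H_max):
    """-Delta S_mag(T) at H_max via the Maxwell relation, Eq. (1):
    finite differences in T and the trapezoidal rule in H."""
    dMdT = np.gradient(M, T, axis=0)       # (dM/dT)_H on the grid
    mask = H <= H_max
    return -np.trapz(dMdT[:, mask], H[mask], axis=1)

# Peaks at several H_max, followed by the power-law fit -dS_peak ~ H^n
# that is used below to extrapolate to H_max = 50 kOe.
Hm = np.array([10.0e3, 20.0e3, 30.0e3])
peaks = np.array([minus_dS_mag(T, H, M, h).max() for h in Hm])
n_exp, logc = np.polyfit(np.log(Hm), np.log(peaks), 1)
print(f"fitted exponent n = {n_exp:.2f}")
print(f"extrapolated peak at 50 kOe = {np.exp(logc) * (50.0e3)**n_exp:.3g}")
```

On real data, one would first interpolate the measured isotherms onto a common field grid; the fitted exponent of this toy model has no relation to the reported value of 0.89.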
-\(\Delta S_{\rm mag}^{\rm peak}\) decreases in the order DyNi, (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni, and (Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\))Ni. In certain rare-earth HEAs [19], -\(\Delta S_{\rm mag}^{\rm peak}\) decreases in the order of GdTbDy, GdTbDyHo, GdTb, and GdTbDyHoEr (GdTbHoEr). In these rare-earth HEAs, changes occur in the average number of 4\(f\) electrons and the lattice constants, resulting in varying magnetic ordering temperatures ranging from 184 K to 258 K [19]. In contrast, our RNi system maintains a nearly constant \(T_{\rm C}\), likely due to minimal alterations in lattice parameters and the average number of 4\(f\) electrons. However, both RNi and rare-earth HEAs exhibit a non-systematic configurational entropy dependence of -\(\Delta S_{\rm mag}^{\rm peak}\). Therefore, it appears that factors other than configurational entropy may influence the control of -\(\Delta S_{\rm mag}^{\rm peak}\). Here we comment on the -\(\Delta S_{\rm mag}^{\rm peak}\) value of (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni. It is widely acknowledged that -\(\Delta S_{\rm mag}^{\rm peak}\) follows a power-law dependence on the magnetic field [35], represented as -\(\Delta S_{\rm mag}^{\rm peak}\)\(\propto\)\(H^{n}\). By applying this relation to (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni (see also Fig. 4(a)) and deducing the exponent \(n\) to be 0.89, we can estimate the -\(\Delta S_{\rm mag}^{\rm peak}\) value at \(H_{\rm max}\)=50 kOe to be 10.6 J/kg-K. This value is larger than those of equimolar quinary rare-earth HEAs such as GdTbDyHoEr and GdTbHoErPr, which exhibit -\(\Delta S_{\rm mag}^{\rm peak}\) values of 8.6 J/kg-K and 6.92 J/kg-K, respectively, at \(H_{\rm max}\)=50 kOe [20; 21]. ## IV Summary We have studied the effect of configurational entropy on the structural and magnetic properties of DyNi by successively replacing Dy with a pair of R elements located on both sides of Dy in the periodic table. This elemental substitution of Dy preserves the lattice parameters and the average number of 4\(f\) electrons. Although the crystal structures of GdNi and TbNi differ from the FeB-type of RNi (R=Dy, Ho, and Er), all RNi (R=Dy, Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\), and Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\)) samples crystallize into the FeB-type structure. \(T_{\rm C}\) of DyNi is almost unchanged by increasing the configurational entropy at the rare-earth site, and the ferromagnetic ordering is robust under the high-entropy state. In (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni, an additional magnetic anomaly is observed, which would be attributed to a spin reorientation resulting from the introduction of Gd+Er and the emergence of competing magnetic interactions. The competition does not disrupt the ferromagnetic ordering, even in the high-entropy state, but rather leads to a spin reorientation transition. Furthermore, we assessed the magnetocaloric effect of the RNi system. Although the peak value of -\(\Delta S_{\rm mag}\) of (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni is reduced compared to DyNi, the temperature dependence of -\(\Delta S_{\rm mag}\) becomes broader. Additionally, we observed a strong correlation between the configurational entropy dependence of -\(\Delta S_{\rm mag}\) (\(T\)) and that of \(-\)d\(\chi_{\rm dc}\)/d\(T\).
Hence, the broadening of -\(\Delta S_{\rm mag}\) (\(T\)) in (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni can be attributed to the spin reorientation arising from the mixing of rare-earth elements with distinct magnetic anisotropy. Consequently, our study suggests the potential for enhancing the magnetocaloric properties by designing the anisotropy of rare-earth magnetic moments in the high-entropy state. ###### Acknowledgements. J.K. is grateful for the support provided by the Comprehensive Research Organization of Fukuoka Institute of Technology. ## Author declarations ### Conflict of Interest The authors have no conflicts to disclose. ### Author Contributions Yuito Nakamura: Investigation, Formal analysis. Koshin Takeshita: Investigation, Formal analysis. Terukazu Nishizaki: Investigation, Formal analysis, Writing - reviewing & editing. Jiro Kitagawa: Supervision, Formal analysis, Writing - original draft, Writing - reviewing & editing. ## Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request.
2309.03176
Error analysis for local coarsening in univariate spline spaces
In this article we analyze the error produced by the removal of an arbitrary knot from a spline function. When a knot has multiplicity greater than one, this implies a reduction of its multiplicity by one unit. In particular, we deduce a very simple formula to compute the error in terms of some neighboring knots and a few control points of the considered spline. Furthermore, we show precisely how this error is related to the jump of a derivative of the spline at the knot. We then use the developed theory to propose efficient and very low-cost local error indicators and adaptive coarsening algorithms. Finally, we present some numerical experiments to illustrate their performance and show some applications.
Silvano Figueroa, Eduardo M. Garau, Pedro Morin
2023-09-06T17:30:22
http://arxiv.org/abs/2309.03176v1
# Error analysis for local coarsening in univariate spline spaces ###### Abstract In this article we analyze the error produced by the removal of an arbitrary knot from a spline function. When a knot has multiplicity greater than one, this implies a reduction of its multiplicity by one unit. In particular, we deduce a very simple formula to compute the error in terms of some neighboring knots and a few control points of the considered spline. Furthermore, we show precisely how this error is related to the jump of a derivative of the spline at the knot. We then use the developed theory to propose efficient and very low-cost local error indicators and adaptive coarsening algorithms. Finally, we present some numerical experiments to illustrate their performance and show some applications. **Keywords:** data reduction, knot removal, coarsening, compression, B-splines **Acknowledgements.** This work was partially supported by Consejo Nacional de Investigaciones Científicas y Técnicas through grant PIP 2021-2023 (11220200101180CO), by Agencia Nacional de Promoción Científica y Tecnológica through grant PICT-2020-SERIE A-03820, and by Universidad Nacional del Litoral through grant CAI+D-2020 50620190100136LI. This support is gratefully acknowledged. ## 1 Introduction Let us consider a univariate polynomial spline space \(\mathcal{S}\) of a fixed degree \(p\in\mathbb{N}\). It is usual to associate a corresponding knot vector \(\boldsymbol{\xi}\) and a B-spline basis [5, 15] so that each \(s\in\mathcal{S}\) is uniquely determined by \(n\coloneqq\dim\mathcal{S}=\#\boldsymbol{\xi}-(p+1)\) coefficients referred to as control points of the spline function \(s\). In computer aided design and for different practical purposes, it is useful to enrich such a spline space while maintaining the same spline function. This procedure is known as _knot insertion_ [1, 2], and a closed formula for updating the control points using the so-called Oslo algorithm is well established. On the other hand, it is not possible in general to represent a spline exactly in a coarser space. For the case when some knots can be removed without changing the spline function, some algorithms have been proposed in [16]. Additionally, the general problem of transforming a spline function between B-spline representations on two arbitrary knot vectors was considered in [7, Chapter 5], where the transformation was studied for splines belonging to both spline spaces. As we mentioned above, it is always possible to represent exactly a spline in a finer space, but the reverse procedure is not possible in general without changing the spline. It is thus interesting to analyze the problem of obtaining a suitable approximation \(\hat{s}\in\hat{\mathcal{S}}\) of a spline \(s\in\mathcal{S}\), where \(\hat{\mathcal{S}}\subset\mathcal{S}\) is a coarser spline space, i.e., a spline space of smaller dimension. This problem involves two main steps. * First, we need to decide which knots are the most suitable to remove from \(\boldsymbol{\xi}\) in order to define the coarser knot vector \(\hat{\boldsymbol{\xi}}\). * Then, once the coarser space \(\hat{\mathcal{S}}\) is determined, we have to define the spline \(\hat{s}\) that approximates \(s\). As far as we know, the notion of knot removal in this sense was first studied in [8], by considering a reversal of the Oslo algorithm. 
Although an in-depth analysis of the error incurred when removing knots from a given spline was not performed in that article, an interesting discussion on the (non-)uniqueness of the solution to the minimax problem was presented. The knot removal process can be considered as the inverse of the knot insertion algorithm. If these operations were reversible, the control point vector after knot removal would be the solution of a linear system associated to the knot insertion matrix. Such a system has no solution in general and it is thus necessary to consider some generalized solutions. A local construction of the control points after a single knot removal is proposed in [6], where an analysis and a strategy for choosing a good approximation are presented; a study of which knots are more convenient to remove from a given spline is not provided though. It is worth noting that finding the best approximation from the coarser space in a given norm can become costly and involve solving a global problem. It would be convenient to have a localized method, which is inexpensive and updates only the coefficients corresponding to knots which are close to the one being removed. The goal of this article is twofold. First, we analyze the error incurred when removing a single knot from a spline \(s\in\mathcal{S}\), defined as \[\mathbb{E}_{\boldsymbol{\xi},j_{0}}(s):=\min_{g\in\hat{\mathcal{S}}}\|g-s\|, \tag{1}\] where \(\hat{\mathcal{S}}\) is the spline space associated to the knot vector \(\hat{\boldsymbol{\xi}}\) obtained from \(\boldsymbol{\xi}\) by removing one knot1 and \(\|\cdot\|\) is a suitable norm in \(\mathcal{S}\). In particular, we deduce a simple formula for computing such an error and the control points of the spline in \(\hat{\mathcal{S}}\) attaining the minimum in (1). Notice that the magnitude of the error associated to each knot provides a criterion for deciding which knot is the most convenient to be removed. Additionally, we also quantify precisely the relationship between the error (1) when removing a knot and the jump of the derivative of a suitable order2 of the spline \(s\) at such a knot. At this point, it is important to mention that a knot can be _safely_ removed from a spline, i.e., leaving it geometrically unchanged, whenever such a derivative is continuous at the knot, or equivalently, if the jump of such a derivative vanishes. Based on this observation, a criterion involving third derivatives has already been considered for the design of approximation algorithms using cubic splines of maximum smoothness (cf. [18, 3]). Footnote 2: The order of this derivative is \(p+1\) minus the multiplicity of the knot being considered. Secondly, using the error analysis described above, we propose some algorithms to adaptively remove knots. More specifically, given a tolerance \(\mathrm{TOL}>0\) and a spline \(s\in\mathcal{S}\), the algorithms compute a coarser spline space \(\hat{\mathcal{S}}\subset\mathcal{S}\) and a spline \(\hat{s}\in\hat{\mathcal{S}}\) satisfying \(\|\hat{s}-s\|<\mathrm{TOL}\), where the norm \(\|\cdot\|\) considered can be the \(L^{2}\)-, the \(L^{\infty}\)-, or the \(H^{1}\)-norm. For our presentation we follow some ideas and the notation from [9] and [14, Chapter 6], because the framework used there is adequate for our developments regarding knot removal. Finally, it is worth mentioning that a data reduction strategy which automatically removes knots has already been presented in [10]. 
There, the authors assign a weight \(w\) to each interior knot of \(\boldsymbol{\xi}\), which encodes a rough measure of the significance of each knot in the representation of the spline \(s\). These weights approximate a certain distance between the spline \(s\) and some subspace of \(\mathcal{S}\). The knots to be removed are then those with the smallest weights. The proposed algorithm is very interesting and, although it removes several knots simultaneously, it involves an internal loop in which the error must be compared to the tolerance after each individual knot removal. After removing the knots, they compute the best approximation of \(s\) as the solution of a global linear least-squares problem; hence this strategy is effective but costly since the main work must be done using all the data. In this article, although we remove one knot at a time, the computational cost is negligible because the discrepancy parameter (estimator) depends only on a few data points and the best approximation can be computed by modifying only a small part of the coefficient vector; furthermore, at each step, only a few estimators and control points have to be recomputed (most of them remain unchanged). An extension of [10] to parametric curves and tensor-product B-spline surfaces was presented in [13]. See also the references therein for previous works on knot removal. This paper is organized as follows. In Section 2 we briefly review the classic theory of spline spaces, including B-spline bases, stability and knot insertion. In Section 3 we analyze the error for a single knot removal, derive a simple formula for its computation, and prove a characterization of such an error in terms of jumps of derivatives. In Section 4 we use the previous error analysis to define local coarsening error indicators and develop some efficient and low-cost adaptive knot removal algorithms which guarantee that the error is below a prescribed tolerance. Finally, in Section 5 we explore numerically the performance of the proposed algorithms and illustrate some practical applications. ## 2 Review on basics about univariate spline theory Let \([a,b]\subset\mathbb{R}\) and let \(Z=\{a=\zeta_{1}<\zeta_{2}<\dots<\zeta_{N}=b\}\) be a set of _breakpoints_. Let \(p\in\mathbb{N}\) be a polynomial degree, which from now on will remain fixed. To each interior breakpoint \(\zeta_{j}\), \(j=2,\dots,N-1\), we associate a number \(m_{j}\), called its multiplicity, such that \(1\leq m_{j}\leq p+1\). Let \(\mathcal{S}\) be the space of piecewise polynomial functions of degree \(\leq p\) on \(Z\) that have \(p-m_{j}\) continuous derivatives at the breakpoint \(\zeta_{j}\). It is known that \(\mathcal{S}\) is a vector space of finite dimension \(n\coloneqq\dim\mathcal{S}=p+1+\sum_{j=2}^{N-1}m_{j}\). ### The B-spline bases and their \(L^{2}\)-stability We consider a well-known B-spline basis for the spline space \(\mathcal{S}\), see e.g. [5, 15]. In order to define it we need to associate a _knot vector_ \(\boldsymbol{\xi}\) which takes into account the polynomial degree \(p\), the breakpoints in \(Z\) and the corresponding multiplicities, as we explain next. 
Let \(\boldsymbol{\xi}\coloneqq\{\xi_{j}\}_{j=1}^{n+p+1}\) be an associated \((p+1)\)-basic knot vector, i.e., \[\boldsymbol{\xi}=\{\xi_{1},\dots,\xi_{p+1},\underbrace{\zeta_{2},\dots,\zeta _{2}}_{m_{2}\text{ times}},\dots,\underbrace{\zeta_{N-1},\dots,\zeta_{N-1}}_{m_{N-1} \text{ times}},\xi_{n+1},\dots,\xi_{n+p+1}\}, \tag{2}\] where \(\xi_{1}\leq\dots\leq\xi_{p+1}=\zeta_{1}\) and \(\zeta_{N}=\xi_{n+1}\leq\dots\leq\xi_{n+p+1}\). There exists a basis for \(\mathcal{S}\), called the B-spline basis \(\mathcal{B}\coloneqq\{B_{1,p},B_{2,p},\dots,B_{n,p}\}\), where the \(i\)-th B-spline \(B_{i}\coloneqq B_{i,p}\) is non-negative, uniquely determined by the knots \(\{\xi_{i},\dots,\xi_{i+p+1}\}\), and locally supported in \([\xi_{i},\xi_{i+p+1}]\), for \(i=1,\dots,n\). Moreover, they constitute a convex partition of unity in \([a,b]\), i.e., \[\sum_{i=1}^{n}B_{i}(x)=1,\quad\forall x\in[a,b].\] We consider a useful norm in \(\mathcal{S}\), which we call the \(\boldsymbol{\xi}\)-norm, given by (cf. [10]) \[\|s\|_{\boldsymbol{\xi}}\coloneqq\left(\sum_{i=1}^{n}c_{i}^{2}\frac{\xi_{i+p +1}-\xi_{i}}{p+1}\right)^{\frac{1}{2}}, \tag{3}\] where \(\mathbf{c}=(c_{1},\dots,c_{n})^{T}\in\mathbb{R}^{n}\) is the vector of control points of the spline \(s\in\mathcal{S}\), i.e., \(s=\sum_{i=1}^{n}c_{i}B_{i}\). Defining the \(n\times n\) diagonal scaling matrix \(E_{\boldsymbol{\xi}}\) whose elements are \[\omega_{i}\coloneqq\left(\frac{\xi_{i+p+1}-\xi_{i}}{p+1}\right)^{\frac{1}{2}},\;\;i=1,\dots,n, \tag{4}\] we have that (3) can be written as \(\|s\|_{\boldsymbol{\xi}}=\|E_{\boldsymbol{\xi}}\mathbf{c}\|_{2}\). The following theorem states that the mesh-dependent norm \(\|\cdot\|_{\boldsymbol{\xi}}\) is equivalent to the standard \(L^{2}[a,b]\)-norm in \(\mathcal{S}\). At this point, it is key to emphasize that the equivalence constant depends on the polynomial degree \(p\) but is otherwise independent of the space \(\mathcal{S}\). A proof of this result can be found in [4, Theorem 5.2], cf. also [10, Proposition 5.2]. More recent proofs have been presented in [12, Theorem 11] and [12, Lemma 5]. **Theorem 2.1** (\(L^{2}\)-stability of the B-spline basis).: There exists a constant \(K_{p}>0\), which only depends on \(p\), such that \[K_{p}^{-1}\|s\|_{\boldsymbol{\xi}}\leq\|s\|_{L^{2}}\leq\|s\|_{\boldsymbol{\xi}}, \qquad\forall\,s\in\mathcal{S}.\] **Remark 2.2**.: This result is known as \(L^{2}\)-stability of the B-spline basis because it states the equivalence between the \(L^{2}\)-norm of a spline and a suitable vector norm of its coordinates in the B-spline basis. Indeed, this result holds for \(L^{q}\)-norms, with \(1\leq q\leq\infty\), but we presented Theorem 2.1 in this way to simplify the presentation. In particular, the classical \(L^{\infty}\)-stability of the B-spline basis reads: \[K_{p}^{-1}\|\boldsymbol{c}\|_{\infty}\leq\|s\|_{L^{\infty}}\leq\|\boldsymbol{c }\|_{\infty},\qquad\forall\,s\in\mathcal{S}, \tag{5}\] where, as before, \(\boldsymbol{c}\) denotes the vector of control points of \(s\), and \(\|\boldsymbol{c}\|_{\infty}=\max_{1\leq i\leq n}|c_{i}|\). ### Knot insertion In order to analyze precisely the error in the knot removal process, we first briefly recall some facts about knot insertion [1], its reverse operation. The purpose here is just to introduce a notation that is suitable for presenting the error analysis of knot removal in the next section. Let \(\boldsymbol{\xi}\) be a \((p+1)\)-basic knot vector as in (2) and \(\mathcal{S}\) be the corresponding spline space. 
Let \(j_{0}\) be such that \(2\leq j_{0}\leq N-1\), so that \(\zeta_{j_{0}}\) is an interior breakpoint in \(Z\). Let \(i_{0}:=p+1+\sum_{j=2}^{j_{0}}m_{j}\), whence \(\xi_{i_{0}}=\zeta_{j_{0}}\), \(p+2\leq i_{0}\leq n\) and \(\xi_{i_{0}-1}\leq\xi_{i_{0}}<\xi_{i_{0}+1}\). Let \(\hat{\boldsymbol{\xi}}:=\boldsymbol{\xi}\setminus\{\xi_{i_{0}}\}\), that is, \[\hat{\boldsymbol{\xi}}=\{\hat{\xi}_{1},\ldots,\hat{\xi}_{n+p}\}=\{\xi_{1}, \ldots,\xi_{i_{0}-1},\xi_{i_{0}+1},\ldots,\xi_{n+p+1}\}.\] Therefore, we can regard \(\boldsymbol{\xi}\) as obtained from the \((p+1)\)-basic knot vector \(\hat{\boldsymbol{\xi}}\) after _inserting_ the knot \(\xi_{i_{0}}\). We denote by \(\ell:=m_{j_{0}}-1\) the multiplicity of \(\xi_{i_{0}}\) in \(\hat{\boldsymbol{\xi}}\). Here, \(\ell=0\) means that \(\xi_{i_{0}}\) is not a knot in \(\hat{\boldsymbol{\xi}}\), and therefore, from \(\hat{\boldsymbol{\xi}}\) to \(\boldsymbol{\xi}\) we have inserted a new breakpoint; whereas if \(1\leq\ell\leq p\) we have increased by \(1\) the multiplicity of the knot corresponding to the breakpoint \(\zeta_{j_{0}}\). Let \(\hat{\mathcal{B}}:=\{\hat{B}_{1},\hat{B}_{2},\ldots,\hat{B}_{n-1}\}\) be the B-spline basis associated to \(\hat{\boldsymbol{\xi}}\) and \(\hat{\mathcal{S}}\) be the spline space spanned by \(\hat{\mathcal{B}}\), so that \(\hat{\mathcal{S}}\subset\mathcal{S}\). Thus, each \(\hat{s}\in\hat{\mathcal{S}}\) can be expressed by \[\hat{s}=\sum_{i=1}^{n-1}\hat{c}_{i}\hat{B}_{i},\] for some \(\boldsymbol{\hat{c}}:=(\hat{c}_{1},\ldots,\hat{c}_{n-1})^{T}\in\mathbb{R}^{n-1}\). Since \(\hat{\mathcal{S}}\subset\mathcal{S}\) there exists \(\boldsymbol{c}:=(c_{1},\ldots,c_{n})^{T}\in\mathbb{R}^{n}\) such that \[\hat{s}=\sum_{i=1}^{n}c_{i}B_{i}.\] It is well known that the control points of \(\hat{s}\) in \(\mathcal{B}\) can be uniquely determined by those in \(\hat{\mathcal{B}}\) through the so-called _knot insertion formula_ [1] as \[c_{i}=\begin{cases}\hat{c}_{i},&\text{for}\;\;i=1,\ldots,i_{0}-p-2,\\ \lambda_{i}\hat{c}_{i}+(1-\lambda_{i})\hat{c}_{i-1},&\text{for}\;\;i=i_{0}-p-1, \ldots,i_{0}-\ell,\\ \hat{c}_{i-1},&\text{for}\;\;i=i_{0}-\ell+1,\ldots,n,\end{cases}\] where \[\lambda_{i}=\frac{\xi_{i_{0}}-\xi_{i}}{\xi_{i+p+1}-\xi_{i}},\quad i=i_{0}-p-1,\ldots,i_{0}-\ell.\] Notice that the mapping \(\hat{\mathbf{c}}\mapsto\mathbf{c}\) can be expressed in matrix form as \[\mathbf{c}=A\hat{\mathbf{c}}, \tag{6}\] where the _knot insertion matrix_ \(A\in\mathbb{R}^{n\times(n-1)}\) is given by \[A=\begin{pmatrix}I_{i_{0}-p-2}&0&0\\ 0&A_{\text{loc}}&0\\ 0&0&I_{n-i_{0}+\ell}\end{pmatrix}, \tag{7}\] with the sub-matrix \(A_{\text{loc}}\in\mathbb{R}^{(p+2-\ell)\times(p+1-\ell)}\), hereafter called _local knot insertion matrix_, given by \[A_{\text{loc}}=\begin{pmatrix}\alpha_{1}&0&0&\cdots&0&0\\ 1-\alpha_{2}&\alpha_{2}&0&\cdots&0&0\\ 0&1-\alpha_{3}&\alpha_{3}&\cdots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&\alpha_{p-\ell}&0\\ 0&0&0&\cdots&1-\alpha_{p+1-\ell}&\alpha_{p+1-\ell}\\ 0&0&0&\cdots&0&1-\alpha_{p+2-\ell}\end{pmatrix}, \tag{8}\] and \[\alpha_{j}:=\lambda_{i_{0}-p-2+j}=\frac{\xi_{i_{0}}-\xi_{i_{0}-p-2+j}}{\xi_{i_ {0}-1+j}-\xi_{i_{0}-p-2+j}},\qquad j=1,\ldots,p+2-\ell. \tag{9}\] Notice that \(\{\alpha_{j}\}_{j=1}^{p+2-\ell}\) is monotonically (non-strictly) decreasing with \(\alpha_{1}=1\) and \(\alpha_{p+2-\ell}=0\). 
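To make the structure of (6)-(9) concrete, the following Python sketch (our own illustration, not code from the paper) assembles \(A_{\text{loc}}\) from a knot vector and applies the knot insertion formula to the local control-point block; indices are 0-based, so `xi[i0]` plays the role of \(\xi_{i_{0}}\), and the index arithmetic of (9) is unchanged by the shift.

```python
import numpy as np

def local_insertion_matrix(xi, i0, p, ell):
    """A_loc of (8) for inserting the knot xi[i0] (0-based index), where
    ell is its multiplicity in the coarse knot vector.  alpha holds the
    coefficients alpha_1, ..., alpha_{p+2-ell} of (9)."""
    m = p + 2 - ell                                  # A_loc is m x (m-1)
    alpha = np.array([(xi[i0] - xi[i0 - p - 2 + j]) /
                      (xi[i0 - 1 + j] - xi[i0 - p - 2 + j])
                      for j in range(1, m + 1)])
    A = np.zeros((m, m - 1))
    for j in range(m - 1):                           # bidiagonal columns
        A[j, j] = alpha[j]
        A[j + 1, j] = 1.0 - alpha[j + 1]
    return A

# Example: insert a simple knot (ell = 0) into a cubic (p = 3) knot vector.
p, ell = 3, 0
xi_hat = np.array([0., 0., 0., 0., 1., 2., 3., 4., 4., 4., 4.])  # coarse
t = 1.5                                              # knot to insert
i0 = np.searchsorted(xi_hat, t, side="right")        # its position in xi
xi = np.insert(xi_hat, i0, t)                        # refined knot vector
c_hat_loc = np.array([1.0, -2.0, 0.5, 3.0])          # p+1-ell local coeffs
print(local_insertion_matrix(xi, i0, p, ell) @ c_hat_loc)   # c_loc, by (6)
```

In this toy run one can check that the computed `alpha` values decrease from 1 to 0, as stated above.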
Additionally, it is important to notice that the matrix \(A_{\text{loc}}\) only depends on \(2p+1-\ell\) consecutive knots in \(\boldsymbol{\xi}\), namely \[\boldsymbol{\xi}_{\text{loc}}^{*}:=\{\xi_{i_{0}-p},\ldots,\xi_{i_{0}+p-\ell}\}. \tag{10}\] Throughout the next section, we make use of the following splitting of the control point vectors \(\mathbf{c}\) and \(\hat{\mathbf{c}}\), which allows us to emphasize the local nature of our developments: \[\mathbf{c}^{T}=(\underbrace{c_{1},\ldots,c_{i_{0}-p-2}}_{=\mathbf{c}_{\text{first}}^{T}},\underbrace{c_{i_{0}-p-1},\ldots,c_{i_{0}-\ell}}_{=\mathbf{c}_{\text{loc}}^{T}},\underbrace{c_{i_{0}-\ell+1},\ldots,c_{n}}_{=\mathbf{c}_{\text{last}}^{T}}), \tag{11}\] and \[\hat{\mathbf{c}}^{T}=(\underbrace{\hat{c}_{1},\ldots,\hat{c}_{i_{0}-p-2}}_{=\hat{\mathbf{c}}_{\text{first}}^{T}},\underbrace{\hat{c}_{i_{0}-p-1},\ldots,\hat{c}_{i_{0}-\ell-1}}_{=\hat{\mathbf{c}}_{\text{loc}}^{T}},\underbrace{\hat{c}_{i_{0}-\ell},\ldots,\hat{c}_{n-1}}_{=\hat{\mathbf{c}}_{\text{last}}^{T}}). \tag{12}\] In particular, notice that (6) means \(\mathbf{c}_{\text{first}}=\hat{\mathbf{c}}_{\text{first}}\), \(\mathbf{c}_{\text{last}}=\hat{\mathbf{c}}_{\text{last}}\) and \[\mathbf{c}_{\text{loc}}=A_{\text{loc}}\hat{\mathbf{c}}_{\text{loc}}.\] ## 3 Error analysis for single knot removal Let us consider a \((p+1)\)-basic knot vector \(\boldsymbol{\xi}\coloneqq\{\xi_{1},\ldots,\xi_{n+p+1}\}\) associated to a set of breakpoints \(Z=\{\zeta_{1},\ldots,\zeta_{N}\}\) as explained in the previous section. Let \(\mathcal{S}\) denote the spline space spanned by the B-spline basis \(\mathcal{B}=\{B_{1},\ldots,B_{n}\}\) corresponding to \(\boldsymbol{\xi}\) with \(\dim\mathcal{S}=n\). Throughout this section we consider a fixed interior breakpoint \(\zeta_{j_{0}}\), i.e., \(2\leq j_{0}\leq N-1\). The knot removal that we consider, understood as the natural reverse process of single knot insertion, consists of decreasing by one the multiplicity of \(\zeta_{j_{0}}\) in the sequence of knots \(\boldsymbol{\xi}\). Without loss of generality, we consider the removal of the last knot in \(\boldsymbol{\xi}\) that is equal to \(\zeta_{j_{0}}\), i.e., we remove from \(\boldsymbol{\xi}\) the knot \(\xi_{i_{0}}\), with \(i_{0}\coloneqq p+1+\sum_{j=2}^{j_{0}}m_{j}\), so that \(p+2\leq i_{0}\leq n\) and \(\xi_{i_{0}-1}\leq\xi_{i_{0}}=\zeta_{j_{0}}<\xi_{i_{0}+1}\). Now, we consider the \((p+1)\)-basic knot vector \(\hat{\boldsymbol{\xi}}\coloneqq\boldsymbol{\xi}\setminus\{\xi_{i_{0}}\}\) and let \(\hat{\mathcal{S}}\) be the spline space spanned by the B-spline basis \(\hat{\mathcal{B}}=\{\hat{B}_{1},\ldots,\hat{B}_{n-1}\}\) corresponding to \(\hat{\boldsymbol{\xi}}\). The main purpose of this section is to analyze the error \[\mathbb{E}^{\|\cdot\|}_{\boldsymbol{\xi},j_{0}}(s)\coloneqq\min_{g\in\hat{ \mathcal{S}}}\|g-s\|, \tag{13}\] for a given spline \(s\in\mathcal{S}\), where \(\|\cdot\|\) denotes a norm in \(\mathcal{S}\). A best approximation \(\hat{s}\in\hat{\mathcal{S}}\) is a spline function attaining the minimum, that is, \[\hat{s}\coloneqq\operatorname*{argmin}_{g\in\hat{\mathcal{S}}}\|g-s\|. \tag{14}\] Our analysis includes a characterization of the error in (13) considering the \(\boldsymbol{\xi}\)-norm defined in (3) and a derivation of a simple formula for computing it in terms of the neighboring knots of \(\xi_{i_{0}}\) and a few control points of the spline \(s\). In addition, we propose an efficient way of computing the best approximation in (14). 
From now on, the error \(\mathbb{E}^{\|\cdot\|_{\boldsymbol{\xi}}}_{\boldsymbol{\xi},j_{0}}(s)\) will be denoted simply by \(\mathbb{E}_{\boldsymbol{\xi},j_{0}}(s)\), and with a little abuse of notation, if \(\mathbf{c}\coloneqq(c_{1},c_{2},\ldots,c_{n})^{T}\in\mathbb{R}^{n}\) denotes the vector of coefficients of \(s\) in \(\mathcal{B}\), named _control points_, we denote \(\|s\|_{\boldsymbol{\xi}}=\|\mathbf{c}\|_{\boldsymbol{\xi}}\). It is then easy to verify that \[\mathbb{E}_{\boldsymbol{\xi},j_{0}}(s)=\min_{\mathbf{b}\in\mathbb{R}^{n-1}}\| A\mathbf{b}-\mathbf{c}\|_{\boldsymbol{\xi}}=\min_{\mathbf{b}\in\mathbb{R}^{n-1}}\|E_{ \boldsymbol{\xi}}(A\mathbf{b}-\mathbf{c})\|_{2}, \tag{15}\] where \(E_{\boldsymbol{\xi}}\) is the diagonal scaling matrix whose entries are given by (4) and \(A\) is the knot insertion matrix from \(\hat{\boldsymbol{\xi}}\) to \(\boldsymbol{\xi}\) defined in Section 2.2. Moreover, if \(\hat{s}\) denotes the best approximation given by (14), the vector \((\hat{c}_{1},\hat{c}_{2},\ldots,\hat{c}_{n-1})^{T}\in\mathbb{R}^{n-1}\) of coefficients of \(\hat{s}\) in \(\hat{\mathcal{B}}\) satisfies \[\mathbf{\hat{c}}\coloneqq\operatorname*{argmin}_{\mathbf{b}\in\mathbb{R}^{n-1 }}\|A\mathbf{b}-\mathbf{c}\|_{\boldsymbol{\xi}}=\operatorname*{argmin}_{ \mathbf{b}\in\mathbb{R}^{n-1}}\|E_{\boldsymbol{\xi}}(A\mathbf{b}-\mathbf{c})\| _{2}. \tag{16}\] We notice that when considering the \(L^{2}\)-norm in \(\mathcal{S}\), the spline \(\hat{s}\) in (14) is given by the \(L^{2}\)-projection of \(s\) onto \(\hat{\mathcal{S}}\), denoted by \(\Pi_{\hat{\mathcal{S}}}s\). The next theorem states that the errors in (13) are equivalent when considering the \(L^{2}\)-norm and the \(\boldsymbol{\xi}\)-norm. This result is a particular case of [9, Theorem 2.1] (see also [14, Theorem 6.1]), but we include a proof here for the sake of completeness. **Theorem 3.1**.: Let \(s\in\mathcal{S}\). Suppose that \(g=\Pi_{\hat{\mathcal{S}}}s\) and \(\hat{s}\) are the solutions of (14) when we consider \(\|\cdot\|=\|\cdot\|_{L^{2}}\) and \(\|\cdot\|=\|\cdot\|_{\boldsymbol{\xi}}\), respectively. Then \[\|g-s\|_{L^{2}}\leq\|\hat{s}-s\|_{L^{2}}\leq K_{p}\|g-s\|_{L^{2}},\] where \(K_{p}\) is the constant from Theorem 2.1, which depends only on \(p\), but is otherwise independent of \(\boldsymbol{\xi}\) and \(\hat{\boldsymbol{\xi}}\). Proof.: Let \(\mathbf{c}=(c_{i})_{i=1}^{n}\) denote the control points of an arbitrary \(s\in\mathcal{S}\) so that \(s=\sum_{i=1}^{n}c_{i}B_{i}\). Let \(g=\Pi_{\hat{\mathcal{S}}}s=\sum_{i=1}^{n-1}d_{i}\hat{B}_{i}\in\hat{\mathcal{S}}\) and \(\hat{s}=\sum_{i=1}^{n-1}\hat{c}_{i}\hat{B}_{i}\in\hat{\mathcal{S}}\) be the solutions of (14) considering the \(L^{2}\)- and the \(\boldsymbol{\xi}\)-norms, respectively. Therefore, if \(\mathbf{d}=(d_{i})_{i=1}^{n-1}\) and \(\mathbf{\hat{c}}=(\hat{c}_{i})_{i=1}^{n-1}\), we have \[\|g-s\|_{L^{2}}\leq\|\hat{s}-s\|_{L^{2}}\qquad\text{and}\qquad\|A\mathbf{\hat {c}}-\mathbf{c}\|_{\boldsymbol{\xi}}\leq\|A\mathbf{d}-\mathbf{c}\|_{ \boldsymbol{\xi}},\] with \(A\) the knot insertion matrix from \(\hat{\boldsymbol{\xi}}\) to \(\boldsymbol{\xi}\). Also, \(\hat{s}=\sum_{i=1}^{n}(A\mathbf{\hat{c}})_{i}B_{i}\) and \(g=\sum_{i=1}^{n}(A\mathbf{d})_{i}B_{i}\), whence Theorem 2.1 yields \[\|\hat{s}-s\|_{L^{2}}\leq\|A\mathbf{\hat{c}}-\mathbf{c}\|_{\boldsymbol{\xi}} \qquad\text{and}\qquad\|A\mathbf{d}-\mathbf{c}\|_{\boldsymbol{\xi}}\leq K_{p} \|g-s\|_{L^{2}}.\] The proof concludes by combining these four inequalities. 
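As a quick numerical illustration of the stability estimate used in this proof, the following sketch (ours; the knot vector and coefficients are arbitrary choices, and `scipy` is assumed to be available) compares \(\|s\|_{\boldsymbol{\xi}}\) from (3) with \(\|s\|_{L^{2}}\) computed by quadrature.

```python
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)
p = 3
xi = np.array([0., 0., 0., 0., 0.5, 1.2, 2., 3., 3., 3., 3.])  # (p+1)-basic
n = len(xi) - (p + 1)                 # dimension of the spline space
c = rng.standard_normal(n)            # random control points
s = BSpline(xi, c, p)

# xi-norm (3): a weighted Euclidean norm of the control points
xi_norm = np.sqrt(np.sum(c**2 * (xi[p + 1:] - xi[:n]) / (p + 1)))
# L2-norm on [a, b] = [xi[p], xi[n]] (0-based), by adaptive quadrature
l2_norm = np.sqrt(quad(lambda x: float(s(x))**2, xi[p], xi[n], limit=200)[0])
print(f"||s||_xi = {xi_norm:.4f}   ||s||_L2 = {l2_norm:.4f}")
```

By Theorem 2.1, the two printed values can differ at most by the factor \(K_{p}\), whatever knot vector is chosen.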
In view of the last result, although we are interested in the \(L^{2}\)-norm, we will focus on working with the \(\boldsymbol{\xi}\)-norm because it has the advantage of being localized, in the sense that the computations in (15) and (16) involve only a small part of the data, as we will see in the next two subsections. ### Computation of the best approximation in the \(\boldsymbol{\xi}\)-norm Our next goal is a characterization of the solution \(\mathbf{\hat{c}}\) of (16) which allows its computation without solving a global system. The results of this section have been briefly introduced in [9, Example 4.1]. In order to make our presentation clearer, we expand them here and state them more precisely using the notation followed in this article. Recall that \(A\) is the knot insertion matrix from \(\hat{\boldsymbol{\xi}}\) to \(\boldsymbol{\xi}\) defined in Section 2.2 and that \(\ell+1\) denotes the multiplicity of the knot \(\xi_{i_{0}}\) in the knot vector \(\boldsymbol{\xi}\). Notice that \[\|A\mathbf{b}-\mathbf{c}\|_{\boldsymbol{\xi}}=\|E_{\boldsymbol{\xi}}(A \mathbf{b}-\mathbf{c})\|_{2}=\|E_{\boldsymbol{\xi}}A\mathbf{b}-E_{\boldsymbol {\xi}}\mathbf{c}\|_{2}=\|B\mathbf{b}-\mathbf{d}\|_{2},\] where \(B\coloneqq E_{\boldsymbol{\xi}}A\in\mathbb{R}^{n\times(n-1)}\) and \(\mathbf{d}\coloneqq E_{\boldsymbol{\xi}}\mathbf{c}\in\mathbb{R}^{n}\). Thus, problem (16) has a unique solution \(\mathbf{\hat{c}}\), which is the least squares solution of the system \[B\mathbf{\hat{c}}=\mathbf{d}.\] In order to write the matrix \(B\) in blocks, we first remark that the matrix \(E_{\boldsymbol{\xi}}\) can be expressed as \[E_{\boldsymbol{\xi}}=\begin{pmatrix}E_{\text{first}}&0&0\\ 0&E_{\text{loc}}&0\\ 0&0&E_{\text{last}}\end{pmatrix} \tag{17}\] where \(E_{\rm first},E_{\rm loc}\) and \(E_{\rm last}\) are diagonal matrices, whose main diagonals are \[\mathbf{e}_{\rm first}=(\omega_{1},\ldots,\omega_{i_{0}-p-2})\in\mathbb{R}^{i_{0}-p-2},\] \[\mathbf{e}_{\rm loc}=(\omega_{i_{0}-p-1},\ldots,\omega_{i_{0}-\ell})\in\mathbb{R}^{p+2-\ell},\] \[\mathbf{e}_{\rm last}=(\omega_{i_{0}+1-\ell},\ldots,\omega_{n})\in\mathbb{R}^{n-i_{0}+\ell},\] with \(\omega_{j}\) given in (4). Notice that \(E_{\rm loc}\in\mathbb{R}^{(p+2-\ell)\times(p+2-\ell)}\) and defining \[e_{j}:=\omega_{i_{0}-p-2+j},\qquad\text{for }j=1,\ldots,p+2-\ell, \tag{18}\] we have that \(\mathbf{e}_{\rm loc}=(e_{1},\ldots,e_{p+2-\ell})\). Now, taking into account (7) and that \(B=E_{\mathbf{\xi}}A\) we have that \[B=\begin{pmatrix}E_{\rm first}&0&0\\ 0&B_{\rm loc}&0\\ 0&0&E_{\rm last}\end{pmatrix}, \tag{19}\] with \(B_{\rm loc}\coloneqq E_{\rm loc}A_{\rm loc}\) and \(A_{\rm loc}\) as in (8). It is worth noticing that the matrix \(B_{\rm loc}\) only depends on \(2p+3-\ell\) consecutive knots in \(\mathbf{\xi}\), namely \[\mathbf{\xi}_{\rm loc}\coloneqq\{\xi_{i_{0}-p-1},\ldots,\xi_{i_{0}+p+1-\ell}\}, \tag{20}\] because the matrix \(A_{\rm loc}\) depends on \(\mathbf{\xi}_{\rm loc}^{*}\) in (10) and \(E_{\rm loc}\) depends on \(\mathbf{\xi}_{\rm loc}\). In the next result we establish a formula to compute the solution \(\hat{\mathbf{c}}\) of (16) and a characterization of the error in (15), which, although it measures the \(\mathbf{\xi}\)-distance of \(s\) to the whole space \(\hat{\mathcal{S}}\), can be computed using only _local_ information. **Theorem 3.2**.: Let \(s\in\mathcal{S}\) and let \(\mathbb{E}_{\mathbf{\xi},j_{0}}(s)\) be defined by (13) using the \(\mathbf{\xi}\)-norm in \(\mathcal{S}\). 
Let \(\mathbf{c}=(c_{1},c_{2},\ldots,c_{n})^{T}\in\mathbb{R}^{n}\) be the vector of control points of \(s\), i.e., \(s=\sum\limits_{i=1}^{n}c_{i}B_{i}\). Let \(\hat{s}\in\hat{\mathcal{S}}\) be the best approximation of \(s\) in \(\hat{\mathcal{S}}\) in the \(\mathbf{\xi}\)-norm defined in (14), and let \(\hat{\mathbf{c}}:=(\hat{c}_{1},\hat{c}_{2},\ldots,\hat{c}_{n-1})^{T}\in\mathbb{ R}^{n-1}\) be the vector of control points of \(\hat{s}\), i.e., \(\hat{s}=\sum\limits_{i=1}^{n-1}\hat{c}_{i}\hat{B}_{i}\). Let us consider the splitting of \(\mathbf{c}\) and \(\hat{\mathbf{c}}\) given in (11) and (12), respectively. Then, the following assertions hold: 1. \(\hat{\mathbf{c}}_{\rm first}=\mathbf{c}_{\rm first}\), \(\hat{\mathbf{c}}_{\rm last}=\mathbf{c}_{\rm last}\) and \(\hat{\mathbf{c}}_{\rm loc}\in\mathbb{R}^{p+1-\ell}\) is the least squares solution of the system \[B_{\rm loc}\hat{\mathbf{c}}_{\rm loc}=E_{\rm loc}\mathbf{c}_{\rm loc}.\] 2. The error \(\mathbb{E}_{\mathbf{\xi},j_{0}}(s)\) satisfies \[\mathbb{E}_{\mathbf{\xi},j_{0}}(s)=\|B_{\rm loc}\hat{\mathbf{c}}_{\rm loc}-E_{\rm loc }\mathbf{c}_{\rm loc}\|_{2}.\] Proof.: Taking into account (19) and (17) we have that \[B^{T}B=\begin{pmatrix}E_{\rm first}^{2}&0&0\\ 0&B_{\rm loc}^{T}B_{\rm loc}&0\\ 0&0&E_{\rm last}^{2}\end{pmatrix},\quad\text{and}\quad B^{T}E_{\mathbf{\xi}}= \begin{pmatrix}E_{\rm first}^{2}&0&0\\ 0&B_{\rm loc}^{T}E_{\rm loc}&0\\ 0&0&E_{\rm last}^{2}\end{pmatrix}.\] Since \(B^{T}B\hat{\mathbf{c}}=B^{T}E_{\mathbf{\xi}}\mathbf{c}\) we have that \[\begin{cases}E_{\text{first}}^{2}\hat{\mathbf{c}}_{\text{first}}=E_{\text{first}}^ {2}\mathbf{c}_{\text{first}}\\ B_{\text{loc}}^{T}B_{\text{loc}}\hat{\mathbf{c}}_{\text{loc}}=B_{\text{loc}}^{T} E_{\text{loc}}\mathbf{c}_{\text{loc}},\\ E_{\text{last}}^{2}\hat{\mathbf{c}}_{\text{last}}=E_{\text{last}}^{2} \mathbf{c}_{\text{last}}\end{cases}\qquad\text{i.e.}\quad\begin{cases}\hat{ \mathbf{c}}_{\text{first}}=\mathbf{c}_{\text{first}}\\ B_{\text{loc}}^{T}B_{\text{loc}}\hat{\mathbf{c}}_{\text{loc}}=B_{\text{loc}}^ {T}E_{\text{loc}}\mathbf{c}_{\text{loc}},\\ \hat{\mathbf{c}}_{\text{last}}=\mathbf{c}_{\text{last}}\end{cases}\] which implies the first assertion of the theorem. Additionally, we have that \[B\hat{\mathbf{c}}-E_{\mathbf{\xi}}\mathbf{c}=\begin{pmatrix}E_{\text{first}}\hat{ \mathbf{c}}_{\text{first}}\\ B_{\text{loc}}\hat{\mathbf{c}}_{\text{loc}}\\ E_{\text{last}}\hat{\mathbf{c}}_{\text{last}}\end{pmatrix}-\begin{pmatrix}E_{ \text{first}}\mathbf{c}_{\text{first}}\\ E_{\text{loc}}\mathbf{c}_{\text{loc}}\\ E_{\text{last}}\mathbf{c}_{\text{last}}\end{pmatrix}=\begin{pmatrix}0\\ B_{\text{loc}}\hat{\mathbf{c}}_{\text{loc}}-E_{\text{loc}}\mathbf{c}_{\text{ loc}}\\ 0\end{pmatrix},\] and therefore, \[\mathbb{E}_{\mathbf{\xi},j_{0}}(s)=\|B\hat{\mathbf{c}}-E_{\mathbf{\xi}}\mathbf{c}\|_{2 }=\|B_{\text{loc}}\hat{\mathbf{c}}_{\text{loc}}-E_{\text{loc}}\mathbf{c}_{ \text{loc}}\|_{2},\] which is the second assertion. 
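Theorem 3.2 reduces the minimization over the whole space \(\hat{\mathcal{S}}\) to a least squares problem of size \((p+2-\ell)\times(p+1-\ell)\). The following sketch (our own illustrative code, with the hypothetical helper name `remove_knot_locally` and 0-based indices) implements both assertions: it assembles \(B_{\rm loc}=E_{\rm loc}A_{\rm loc}\) from the local knots in (20), replaces only the block \(\mathbf{c}_{\rm loc}\), and returns the error of assertion 2.

```python
import numpy as np

def remove_knot_locally(xi, c, i0, p, ell):
    """Best xi-norm approximation after removing the knot xi[i0]
    (0-based; multiplicity ell+1 in xi), following Theorem 3.2.
    Returns (coarse knots, coarse control points, error E_{xi,j0}(s))."""
    m = p + 2 - ell
    idx = np.arange(i0 - p - 1, i0 - ell + 1)           # local block, length m
    e = np.sqrt((xi[idx + p + 1] - xi[idx]) / (p + 1))  # weights (4)/(18)
    alpha = np.array([(xi[i0] - xi[i0 - p - 2 + j]) /
                      (xi[i0 - 1 + j] - xi[i0 - p - 2 + j])
                      for j in range(1, m + 1)])        # coefficients (9)
    A_loc = np.zeros((m, m - 1))                        # matrix (8)
    for j in range(m - 1):
        A_loc[j, j] = alpha[j]
        A_loc[j + 1, j] = 1.0 - alpha[j + 1]
    B_loc = e[:, None] * A_loc                          # B_loc = E_loc A_loc
    c_hat_loc, *_ = np.linalg.lstsq(B_loc, e * c[idx], rcond=None)
    err = np.linalg.norm(B_loc @ c_hat_loc - e * c[idx])  # assertion 2
    c_hat = np.concatenate([c[:idx[0]], c_hat_loc, c[idx[-1] + 1:]])
    return np.delete(xi, i0), c_hat, err

# Remove the knot 1.5 (0-based index i0 = 5) from a cubic spline with n = 8.
p = 3
xi = np.array([0., 0., 0., 0., 1., 1.5, 2., 3., 4., 4., 4., 4.])
c = np.array([0., 1., 3., 2., -1., 0.5, 2., 1.])
xi_hat, c_hat, err = remove_knot_locally(xi, c, i0=5, p=p, ell=0)
print(err)    # E_{xi,j0}(s): zero iff the knot can be removed exactly
```

Only \(p+2-\ell\) control points are read and \(p+1-\ell\) are written per removal, so for a fixed degree \(p\) the cost of testing every interior knot grows only linearly with \(n\).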
**Remark 3.3**.: Since \(B_{\text{loc}}=E_{\text{loc}}A_{\text{loc}}\), the assertions in Theorem 3.2 can be stated as \[\mathbb{E}_{\mathbf{\xi},j_{0}}(s)=\min_{\mathbf{x}\in\mathbb{R}^{p+1-\ell}}\|E_{ \text{loc}}(A_{\text{loc}}\mathbf{x}-\mathbf{c}_{\text{loc}})\|_{2},\] and \[\hat{\mathbf{c}}_{\text{loc}}=\operatorname*{argmin}_{\mathbf{x}\in\mathbb{R}^ {p+1-\ell}}\|E_{\text{loc}}(A_{\text{loc}}\mathbf{x}-\mathbf{c}_{\text{loc}}) \|_{2}.\] ### A local formula for the error in the \(\boldsymbol{\xi}\)-norm In this section we find a simple formula for computing the error \(\mathbb{E}_{\mathbf{\xi},j_{0}}(s)\) in terms of \(\mathbf{\xi}_{\text{loc}}\) defined in (20) and \(\mathbf{c}_{\text{loc}}=(c_{i_{0}-p-1},\ldots,c_{i_{0}-\ell})^{T}\). We need the following result, whose proof follows from elementary linear algebra. **Lemma 3.4**.: Let \(M\in\mathbb{R}^{(k+1)\times k}\) be a matrix with linearly independent columns. If \(\mathbf{q}\in\mathbb{R}^{k+1}\) satisfies \(\|\mathbf{q}\|_{2}=1\) and \(\mathbf{q}^{T}M=0\), then \[\min_{\mathbf{x}\in\mathbb{R}^{k}}\|M\mathbf{x}-\mathbf{y}\|_{2}=|\mathbf{q}^ {T}\mathbf{y}|,\] for all \(\mathbf{y}\in\mathbb{R}^{k+1}\). Proof.: Since \(M\) has \(k\) linearly independent columns, the column space \(C(M)\) of \(M\) has dimension \(k\) and its orthogonal complement \(C(M)^{\perp}\) has dimension one. Therefore, if \(\mathbf{q}\in\mathbb{R}^{k+1}\) satisfies \(\|\mathbf{q}\|_{2}=1\) and \(\mathbf{q}^{T}M=0\) we have that \(C(M)^{\perp}=\text{span}\{\mathbf{q}\}\). Given \(\mathbf{y}\in\mathbb{R}^{k+1}\), the minimum of \(\|M\mathbf{x}-\mathbf{y}\|_{2}\) is achieved when \(M\mathbf{x}-\mathbf{y}\) is orthogonal to \(C(M)\), whence \(\mathbf{y}-M\mathbf{x}\) is the orthogonal projection of \(\mathbf{y}\) onto \(C(M)^{\perp}\). Therefore \[\min_{\mathbf{x}\in\mathbb{R}^{k}}\|M\mathbf{x}-\mathbf{y}\|_{2}=\|(\mathbf{q} ^{T}\mathbf{y})\mathbf{q}\|_{2}=|\mathbf{q}^{T}\mathbf{y}|\,\|\mathbf{q}\|_{2} =|\mathbf{q}^{T}\mathbf{y}|.\] Taking into account Remark 3.3 and the last lemma, we are now in a position to establish a formula for computing the error. **Theorem 3.5** (Main result I).: Let \(\mathbf{\mu}=(\mu_{1},\ldots,\mu_{p+1-\ell})^{T}\in\mathbb{R}^{p+1-\ell}\) be defined by \[\mu_{j-1}:=\frac{1-\alpha_{j}}{\alpha_{j-1}},\qquad j=2,\ldots,p+2-\ell, \tag{21}\] where \(\{\alpha_{j}\}_{j=1}^{p+2-\ell}\) is the set of values defining the local knot insertion matrix given by (9). Let \(\mathbf{r}_{\mathrm{loc}}:=(r_{1},\ldots,r_{p+2-\ell})^{T}\in\mathbb{R}^{p+2-\ell}\) be defined by \(r_{p+2-\ell}=\gamma_{\mathrm{loc}}e_{p+2-\ell}\) and \[r_{j-1}:=-\mu_{j-1}r_{j},\qquad j=2,\ldots,p+2-\ell, \tag{22}\] where \(\gamma_{\mathrm{loc}}:=\left(1+e_{p+2-\ell}^{2}\sum_{j=1}^{p+1-\ell}\frac{1}{e_{j }^{2}}\prod_{i=j}^{p+1-\ell}\mu_{i}^{2}\right)^{-\frac{1}{2}}\), and \(e_{j}\) is given in (18). Then, for all \(s\in\mathcal{S}\), \[\mathbb{E}_{\mathbf{\xi},j_{0}}(s)=|\mathbf{r}_{\mathrm{loc}}^{T}\mathbf{c}_{ \mathrm{loc}}|,\] where \(s=\sum_{i=1}^{n}c_{i}B_{i}\) and \(\mathbf{c}_{\mathrm{loc}}=(c_{i_{0}-p-1},\ldots,c_{i_{0}-\ell})^{T}\). **Remark 3.6**.: We recall that \(\mathbb{E}_{\mathbf{\xi},j_{0}}(s)\) denotes the error of the best approximation of a spline \(s\in\mathcal{S}\) when reducing by one unit the multiplicity of the \(j_{0}\)-th breakpoint. 
This theorem shows that the error \(\mathbb{E}_{\mathbf{\xi},j_{0}}:\mathcal{S}\to\mathbb{R}_{+}\) can be characterized by the vector \(\mathbf{r}_{\mathrm{loc}}\in\mathbb{R}^{p+2-\ell}\), which can be easily computed from the \(2p+3-\ell\) consecutive knots collected in \(\mathbf{\xi}_{\mathrm{loc}}=\{\xi_{i_{0}-p-1},\ldots,\xi_{i_{0}+p+1-\ell}\}\). The vector \(\mathbf{r}_{\mathrm{loc}}\) proposed in the statement of this theorem is obtained by setting a value for the last component \(r_{p+2-\ell}\) and performing backward substitution on the upper triangular bidiagonal matrix \(A_{\mathrm{loc}}^{T}\). Proof.: From Remark 3.3 we have that \[\mathbb{E}_{\mathbf{\xi},j_{0}}(s)=\min_{\mathbf{x}\in\mathbb{R}^{p+1-\ell}}\|E_{ \mathrm{loc}}A_{\mathrm{loc}}\mathbf{x}-E_{\mathrm{loc}}\mathbf{c}_{\mathrm{ loc}}\|_{2}.\] We will apply Lemma 3.4 with \(M=E_{\mathrm{loc}}A_{\mathrm{loc}}\) and \(\mathbf{y}=E_{\mathrm{loc}}\mathbf{c}_{\mathrm{loc}}\). Let \(\mathbf{q}_{\mathrm{loc}}:=E_{\mathrm{loc}}^{-1}\mathbf{r}_{\mathrm{loc}}\) and let us prove that \(\mathbf{q}_{\mathrm{loc}}\) is orthogonal to the columns of the matrix \(M\), i.e. \(\mathbf{q}_{\mathrm{loc}}^{T}E_{\mathrm{loc}}A_{\mathrm{loc}}=0\), or equivalently, that \[\mathbf{r}_{\mathrm{loc}}^{T}A_{\mathrm{loc}}=0.\] Indeed, given \(j\in\{1,\ldots,p+1-\ell\}\), taking into account (8), we have that \[(\mathbf{r}_{\mathrm{loc}}^{T}A_{\mathrm{loc}})_{j}=\mathbf{r}_{\mathrm{loc}}^ {T}\operatorname{col}_{j}(A_{\mathrm{loc}})=\alpha_{j}r_{j}+(1-\alpha_{j+1})r_{ j+1},\] and since \(r_{j}=-\mu_{j}r_{j+1}=-\frac{1-\alpha_{j+1}}{\alpha_{j}}r_{j+1}\), we have \[(\mathbf{r}_{\mathrm{loc}}^{T}A_{\mathrm{loc}})_{j}=-\alpha_{j}\frac{1-\alpha _{j+1}}{\alpha_{j}}r_{j+1}+(1-\alpha_{j+1})r_{j+1}=-(1-\alpha_{j+1})r_{j+1}+(1 -\alpha_{j+1})r_{j+1}=0.\] Since the definition of \(\gamma_{\mathrm{loc}}\) guarantees that \(\|\mathbf{q}_{\mathrm{loc}}\|_{2}=1\), we can finally apply Lemma 3.4 to obtain \[\mathbb{E}_{\mathbf{\xi},j_{0}}(s)=|\mathbf{q}_{\mathrm{loc}}^{T}\mathbf{y}|=|(E_{ \mathrm{loc}}^{-1}\mathbf{r}_{\mathrm{loc}})^{T}(E_{\mathrm{loc}}\mathbf{c}_{ \mathrm{loc}})|=|\mathbf{r}_{\mathrm{loc}}^{T}\mathbf{c}_{\mathrm{loc}}|,\] which concludes the proof. **Remark 3.7**.: Notice that \(\|\cdot\|_{\mathrm{cp}}:\mathcal{S}\to\mathbb{R}\), defined by \[\|s\|_{\mathrm{cp}}=\|\mathbf{c}\|_{2}=\Big{(}\sum_{i=1}^{n}c_{i}^{2}\Big{)}^{ \frac{1}{2}},\] is a norm in \(\mathcal{S}\). Following the same argument from the last section we have that \[\mathbb{E}_{\boldsymbol{\xi},j_{0}}^{\|\cdot\|_{\mathrm{cp}}}(s)=\min_{ \mathbf{x}\in\mathbb{R}^{p+1-\ell}}\|A_{\mathrm{loc}}\mathbf{x}-\mathbf{c}_{ \mathrm{loc}}\|_{2}.\] Moreover, the same steps of the proof of Theorem 3.5 imply that this error can be computed by \[\mathbb{E}_{\boldsymbol{\xi},j_{0}}^{\|\cdot\|_{\mathrm{cp}}}(s)=|\tilde{ \mathbf{r}}_{\mathrm{loc}}^{T}\mathbf{c}_{\mathrm{loc}}|, \tag{23}\] with the last component of \(\tilde{\mathbf{r}}_{\mathrm{loc}}\in\mathbb{R}^{p+2-\ell}\) given by \(\tilde{r}_{p+2-\ell}:=\tilde{\gamma}_{\mathrm{loc}}:=\big{(}1+\sum_{j=1}^{p+1 -\ell}\prod_{i=j}^{p+1-\ell}\mu_{i}^{2}\big{)}^{-\frac{1}{2}}\), and the remaining components given by the recurrence formula (22). 
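The recurrences (21)-(22) make both indicators essentially free to evaluate. The sketch below (ours, illustrative, with the same 0-based conventions as the sketch after Theorem 3.2) returns \(\mathbb{E}_{\boldsymbol{\xi},j_{0}}(s)=|\mathbf{r}_{\mathrm{loc}}^{T}\mathbf{c}_{\mathrm{loc}}|\), and, with `weighted=False`, the variant \(\mathbb{E}^{\|\cdot\|_{\mathrm{cp}}}_{\boldsymbol{\xi},j_{0}}(s)\) of (23).

```python
import numpy as np

def error_indicator(xi, c, i0, p, ell, weighted=True):
    """Closed-form error of Theorem 3.5, E = |r_loc^T c_loc|, for removing
    the knot xi[i0] (0-based; multiplicity ell+1).  weighted=False gives
    the control-point-norm variant (23) of Remark 3.7 (all e_j = 1)."""
    m = p + 2 - ell
    idx = np.arange(i0 - p - 1, i0 - ell + 1)
    e = (np.sqrt((xi[idx + p + 1] - xi[idx]) / (p + 1))
         if weighted else np.ones(m))
    alpha = np.array([(xi[i0] - xi[i0 - p - 2 + j]) /
                      (xi[i0 - 1 + j] - xi[i0 - p - 2 + j])
                      for j in range(1, m + 1)])
    mu = (1.0 - alpha[1:]) / alpha[:-1]       # (21); alpha_j > 0 for j <= m-1
    tail = np.cumprod(mu[::-1])[::-1]         # tail[j] = prod_{i >= j} mu_i
    gamma = 1.0 / np.sqrt(1.0 + e[-1]**2 * np.sum(tail**2 / e[:-1]**2))
    r = np.empty(m)
    r[-1] = gamma * e[-1]
    for j in range(m - 2, -1, -1):            # backward recurrence (22)
        r[j] = -mu[j] * r[j + 1]
    return abs(r @ c[idx])

p = 3
xi = np.array([0., 0., 0., 0., 1., 1.5, 2., 3., 4., 4., 4., 4.])
c = np.array([0., 1., 3., 2., -1., 0.5, 2., 1.])
print(error_indicator(xi, c, i0=5, p=p, ell=0))          # E_{xi,j0}(s)
print(error_indicator(xi, c, i0=5, p=p, ell=0, weighted=False))
```

For the data of the previous example, the first printed value agrees, up to floating-point rounding, with the least squares residual `err` computed there, as Theorem 3.5 predicts.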
On the other hand, we remark that the value \(D=D_{\boldsymbol{\xi},j_{0}}(s)\) introduced in [6, Equation 24] satisfies \(|D|=|\mathbf{d}_{\mathrm{loc}}^{T}\mathbf{c}_{\mathrm{loc}}|\), where the last component of the vector \(\mathbf{d}_{\mathrm{loc}}\in\mathbb{R}^{p+2-\ell}\) is set as \(d_{p+2-\ell}:=1\), and the remaining components are given by the recurrence formula (22). It is worth noting that the three quantities \(\mathbb{E}_{\boldsymbol{\xi},j_{0}}(s)\), \(\mathbb{E}_{\boldsymbol{\xi},j_{0}}^{\|\cdot\|_{\mathrm{cp}}}(s)\), and \(|D|\), are the absolute value of a linear functional from \(\mathcal{S}\) into \(\mathbb{R}\) that vanishes on \(\hat{\mathcal{S}}\), the dimension of which is one smaller than that of \(\mathcal{S}\). This explains why they can all be computed as the scalar product with vectors \(\mathbf{r}\), \(\mathbf{\tilde{r}}\), \(\mathbf{d}\), respectively, which are parallel. In particular, \[\delta_{\mathrm{loc}}\mathbb{E}_{\boldsymbol{\xi},j_{0}}^{\|\cdot\|_{\mathrm{ cp}}}(s)=\mathbb{E}_{\boldsymbol{\xi},j_{0}}(s)=\beta_{\mathrm{loc}}|D_{ \boldsymbol{\xi},j_{0}}(s)|,\qquad\forall\,s\in\mathcal{S},\] with \(\delta_{\mathrm{loc}}=\frac{r_{p+2-\ell}}{\tilde{r}_{p+2-\ell}}\) and \(\beta_{\mathrm{loc}}=r_{p+2-\ell}\). It is also worth noticing that even though these three indicators are multiples of each other, the factors relating them depend on the knots in \(\boldsymbol{\xi}_{\mathrm{loc}}\) and therefore, such factors are expected to be different for each breakpoint \(\zeta_{j_{0}}\). In Section 5 we compare the behavior of coarsening algorithms based on these three estimators. We emphasize here that even though these three quantities vanish if and only if a knot can be safely removed without modifying the spline \(s\), the quantities \(\mathbb{E}_{\boldsymbol{\xi},j_{0}}(s)\) and \(\mathbb{E}_{\boldsymbol{\xi},j_{0}}^{\|\cdot\|_{\mathrm{cp}}}(s)\) also carry information about the size of the error that will be produced by such a knot removal, whereas the quantity \(D\) does not. ### On the jump of derivatives of spline functions In this section we deduce a formula for the jump of the derivatives of a spline function \(s\) at the breakpoint \(\zeta_{j_{0}}\), which we relate, in the next section, to the error analyzed above. The jump \(\mathcal{J}_{\xi}\) of a piecewise continuous function \(s\) at \(\xi\) is defined by \[\mathcal{J}_{\xi}(s):=\lim_{x\to\xi^{+}}s(x)-\lim_{x\to\xi^{-}}s(x).\] The next result can be found in [11, Lemma 3.21]; we include a proof here for the sake of completeness. **Lemma 3.8**.: Let \(\mathbf{\xi}=\{\xi_{j}\}_{j=1}^{n+p+1}\) be a \((p+1)\)-basic knot vector and let \(\mathcal{B}=\{B_{1},B_{2},\ldots,B_{n}\}\) be the B-spline basis of degree \(p\) associated to \(\mathbf{\xi}\). Let \(j\) be fixed such that \(1\leq j\leq n\). Let \(\xi\) be a knot from \(\xi_{j},\ldots,\xi_{j+p+1}\), and let \(m\) be its multiplicity among the knots \(\xi_{j},\ldots,\xi_{j+p+1}\). Then the \((p-m+1)\)-th derivative of \(B_{j}\) has a nonzero jump at \(\xi\) given by \[\mathcal{J}_{\xi}\left(D^{p-m+1}B_{j}\right)=\frac{p!}{(m-1)!}\,\frac{(\xi_{j +p+1}-\xi_{j})}{\prod_{k=j,\,\xi_{k}\neq\xi}^{j+p+1}\left(\xi_{k}-\xi\right)}. \tag{24}\] Proof.: We will proceed by induction on the degree \(p\), so we use a second subscript to make explicit the degree of the B-spline functions. Notice that the multiplicity \(m\) satisfies \(1\leq m\leq p+1\). 
It is easy to verify that (24) holds for the case \(m=p+1\), using the fact that \(\mathcal{J}_{\xi}(B_{j,p})\) is equal to \(1\) when \(\xi=\xi_{j}\) and equal to \(-1\) when \(\xi=\xi_{j+p+1}\). Thus, in particular, equality (24) holds when \(p=0\) and \(m=1\). Now, suppose that (24) holds for B-splines of degree \(p-1\). Taking into account the recurrence formula for the derivative of B-splines (see e.g. [12, Theorem 3]), and the linearity of \(D^{r-1}\) for \(r\geq 1\) and of \(\mathcal{J}_{\xi}\), we have \[\mathcal{J}_{\xi}(D^{r}B_{j,p})=p\left(\frac{\mathcal{J}_{\xi}(D^{r-1}B_{j,p-1})}{(\xi_{j+p}-\xi_{j})}-\frac{\mathcal{J}_{\xi}(D^{r-1}B_{j+1,p-1})}{(\xi_{j+p+1}-\xi_{j+1})}\right).\] Taking \(r=p-m+1\) we have that \[\mathcal{J}_{\xi}(D^{p-m+1}B_{j,p})=p\left(\frac{\mathcal{J}_{\xi}(D^{p-m}B_{j,p-1})}{(\xi_{j+p}-\xi_{j})}-\frac{\mathcal{J}_{\xi}(D^{p-m}B_{j+1,p-1})}{(\xi_{j+p+1}-\xi_{j+1})}\right). \tag{25}\] Since \(\xi\) is one of the knots from \(\xi_{j},\ldots,\xi_{j+p+1}\), there are now three possible cases: \(\xi=\xi_{j}\), \(\xi=\xi_{j+p+1}\) or \(\xi_{j}<\xi<\xi_{j+p+1}\). \(\bullet\) Case \(\xi=\xi_{j}\). We observe that \(\xi\) occurs \(m-1\) times among the knots that define \(B_{j+1,p-1}\), and \(B_{j+1,p-1}\) has \(p-m\) continuous derivatives at \(\xi\), whence \(\mathcal{J}_{\xi}(D^{p-m}B_{j+1,p-1})=0\). Thus, applying the inductive assumption, we have \[\mathcal{J}_{\xi}(D^{p-m+1}B_{j,p}) =p\frac{\mathcal{J}_{\xi}(D^{p-m}B_{j,p-1})}{\xi_{j+p}-\xi_{j}}=p\frac{(p-1)!}{(m-1)!}\,\frac{(\xi_{j+p}-\xi_{j})}{\prod_{k=j,\xi_{k}\neq\xi}^{j+p}(\xi_{k}-\xi)}\frac{1}{(\xi_{j+p}-\xi_{j})}\] \[=\frac{p!}{(m-1)!}\,\frac{1}{\prod_{k=j,\xi_{k}\neq\xi}^{j+p}(\xi_{k}-\xi)}=\frac{p!}{(m-1)!}\,\frac{\xi_{j+p+1}-\xi}{\prod_{k=j,\xi_{k}\neq\xi}^{j+p+1}(\xi_{k}-\xi)},\] which coincides with (24) because \(\xi=\xi_{j}\). \(\bullet\) Case \(\xi=\xi_{j+p+1}\). A similar argument implies the assertion. \(\bullet\) Case \(\xi_{j}<\xi<\xi_{j+p+1}\). Now, \(\xi\) is a knot of multiplicity \(m\) among the knots that define both \(B_{j,p-1}\) and \(B_{j+1,p-1}\). Applying (25) and the induction hypothesis, we then obtain \[\mathcal{J}_{\xi}(D^{p-m+1}B_{j,p}) =\frac{p!}{(m-1)!}\left(\prod_{k=j,\xi_{k}\neq\xi}^{j+p}\frac{1}{(\xi_{k}-\xi)}-\prod_{k=j+1,\xi_{k}\neq\xi}^{j+p+1}\frac{1}{(\xi_{k}-\xi)}\right)\] \[=\frac{p!}{(m-1)!}\prod_{k=j+1,\xi_{k}\neq\xi}^{j+p}\frac{1}{(\xi_{k}-\xi)}\left(\frac{1}{(\xi_{j}-\xi)}-\frac{1}{(\xi_{j+p+1}-\xi)}\right)\] \[=\frac{p!}{(m-1)!}\frac{(\xi_{j+p+1}-\xi_{j})}{\prod_{k=j,\xi_{k}\neq\xi}^{j+p+1}(\xi_{k}-\xi)},\] which completes the proof. As a consequence of the last lemma we can derive a simple and _local_ formula for computing the jump of the \((p-\ell)\)-th derivative of \(s\in\mathcal{S}\) at the breakpoint \(\zeta_{j_{0}}\). **Theorem 3.9**.: Let \(\mathcal{S}\) be a spline space and let \(\boldsymbol{\xi}\coloneqq\{\xi_{1},\ldots,\xi_{n+p+1}\}\) be an associated \((p+1)\)-basic knot vector. Let \(\zeta_{j_{0}}\) be an interior breakpoint and \(i_{0}\) be the index such that \(\zeta_{j_{0}}=\xi_{i_{0}}\) and \(\xi_{i_{0}-1}\leq\xi_{i_{0}}<\xi_{i_{0}+1}\). Let \(\ell+1\) denote the multiplicity of \(\xi_{i_{0}}\) in the knot vector \(\boldsymbol{\xi}\) and let \(\mathbf{j}_{\mathrm{loc}}\coloneqq(z_{1},\ldots,z_{p+2-\ell})^{T}\in\mathbb{R}^{p+2-\ell}\) be defined by \[z_{j-i_{0}+p+2}:=\frac{p!}{\ell!}\frac{(\xi_{j+p+1}-\xi_{j})}{\prod_{k=j,\xi_{k}\neq\xi_{i_{0}}}^{j+p+1}(\xi_{k}-\xi_{i_{0}})},\qquad j=i_{0}-p-1,\ldots,i_{0}-\ell. 
\tag{26}\] Then, if \(s\in\mathcal{S}\) and \(s=\sum_{i=1}^{n}c_{i}B_{i}\), there holds \[\mathcal{J}_{\zeta_{j_{0}}}\big{(}D^{p-\ell}s\big{)}=\mathbf{j}_{\mathrm{loc}}^{T}\mathbf{c}_{\mathrm{loc}},\] where \(\mathbf{c}_{\mathrm{loc}}=(c_{i_{0}-p-1},\ldots,c_{i_{0}-\ell})^{T}\). Notice that \(\mathbf{j}_{\mathrm{loc}}\) depends only on \(\boldsymbol{\xi}_{\mathrm{loc}}\) in (20) and the polynomial degree \(p\). Proof.: Let \(\{\xi_{j},\ldots,\xi_{j+p+1}\}\) be the local knot vector of the \(j\)-th B-spline \(B_{j}\), for \(j=1,\ldots,n\). Since \(\ell+1\) is the multiplicity of \(\xi_{i_{0}}\) in \(\boldsymbol{\xi}\), we have that \(\xi_{i_{0}-\ell-1}<\xi_{i_{0}-\ell}=\cdots=\xi_{i_{0}}<\xi_{i_{0}+1}\). Thus, \(\xi_{i_{0}}\) appears \(m=\ell+1\) times in the local knot vector of \(B_{j}\) provided \(j=i_{0}-p-1,\ldots,i_{0}-\ell\). Otherwise, the multiplicity of \(\xi_{i_{0}}\) among the knots \(\xi_{j},\ldots,\xi_{j+p+1}\) is at most \(\ell\), and in consequence, \(D^{p-\ell}B_{j}\) is continuous at \(\xi_{i_{0}}\). Now, if \(s=\sum_{j=1}^{n}c_{j}B_{j}\in\mathcal{S}\), using (24) we have that \[\mathcal{J}_{\zeta_{j_{0}}}\big{(}D^{p-\ell}s\big{)}=\mathcal{J}_{\xi_{i_{0}}}\big{(}D^{p-\ell}s\big{)}=\sum_{j=i_{0}-p-1}^{i_{0}-\ell}c_{j}\mathcal{J}_{\xi_{i_{0}}}\big{(}D^{p-\ell}B_{j}\big{)}=\sum_{j=i_{0}-p-1}^{i_{0}-\ell}\frac{p!}{\ell!}\frac{(\xi_{j+p+1}-\xi_{j})}{\prod_{k=j,\xi_{k}\neq\xi_{i_{0}}}^{j+p+1}(\xi_{k}-\xi_{i_{0}})}c_{j},\] which is the desired assertion. ### Relationship between the error and the jump Using the results from the two previous subsections, we now state the precise connection between the error \(\mathbb{E}_{\boldsymbol{\xi},j_{0}}(s)\) defined in (13) and the jump \(\mathcal{J}_{\zeta_{j_{0}}}(D^{p-\ell}s)\). **Theorem 3.10** (Main result II).: Let \(\mathcal{S}\) be a spline space and let \(\boldsymbol{\xi}:=\{\xi_{1},\ldots,\xi_{n+p+1}\}\) be an associated \((p+1)\)-basic knot vector. Let \(\zeta_{j_{0}}\) be an interior breakpoint and \(i_{0}\) be the index such that \(\zeta_{j_{0}}=\xi_{i_{0}}\) and \(\xi_{i_{0}-1}\leq\xi_{i_{0}}<\xi_{i_{0}+1}\). Let \(\ell+1\) denote the multiplicity of \(\xi_{i_{0}}\) in the knot vector \(\boldsymbol{\xi}\). If \(s\in\mathcal{S}\), \[\mathbb{E}_{\boldsymbol{\xi},j_{0}}(s)=C_{\mathrm{loc}}|\mathcal{J}_{\zeta_{j_{0}}}(D^{p-\ell}s)|, \tag{27}\] where \(C_{\mathrm{loc}}\) is a positive constant which depends only on \(\boldsymbol{\xi}_{\mathrm{loc}}\) in (20), defined explicitly by \[C_{\mathrm{loc}}:=r_{p+2-\ell}\frac{\ell!}{p!}\prod_{k=i_{0}+1}^{i_{0}+p-\ell}(\xi_{k}-\xi_{i_{0}}), \tag{28}\] where \(r_{p+2-\ell}\) is the constant from the statement of Theorem 3.5. Proof.: In view of Theorems 3.5 and 3.9, to establish the equality in (27) it is enough to prove that \[\mathbf{r}_{\mathrm{loc}}^{T}=C_{\mathrm{loc}}\mathbf{j}_{\mathrm{loc}}^{T}.\] On the one hand, using the recursive formula for \(r_{j}\) in (22) we have that \[r_{j}=r_{p+2-\ell}(-1)^{p-\ell-j}\prod_{i=j}^{p+1-\ell}\mu_{i}, \tag{29}\] for \(j=1,\ldots,p+2-\ell\), and taking into account (21) and (9), \[\prod_{i=j}^{p+1-\ell}\mu_{i}=\frac{1}{\alpha_{j}}\prod_{i=j+1}^{p+1-\ell}\frac{1-\alpha_{i}}{\alpha_{i}}=\frac{1}{\alpha_{j}}\prod_{i=j+1}^{p+1-\ell}\frac{\xi_{i_{0}-1+i}-\xi_{i_{0}}}{\xi_{i_{0}}-\xi_{i_{0}-p-2+i}}=(-1)^{p-\ell-j+1}\frac{1}{\alpha_{j}}\frac{\prod_{k=i_{0}+j}^{i_{0}+p-\ell}(\xi_{k}-\xi_{i_{0}})}{\prod_{k=i_{0}-p-1+j}^{i_{0}-\ell-1}(\xi_{k}-\xi_{i_{0}})}. 
\tag{30}\] On the other hand, since \(\xi_{i_{0}-\ell-1}<\xi_{i_{0}-\ell}=\cdots=\xi_{i_{0}}<\xi_{i_{0}+1}\), from (26) we obtain \[\frac{1}{z_{j}}=-\alpha_{j}\frac{\ell!}{p!}\prod_{k=i_{0}-p-1+j,\,\xi_{k}\neq\xi_{i_{0}}}^{i_{0}-1+j}(\xi_{k}-\xi_{i_{0}})=-\alpha_{j}\frac{\ell!}{p!}\prod_{k=i_{0}-p-1+j}^{i_{0}-\ell-1}(\xi_{k}-\xi_{i_{0}})\prod_{k=i_{0}+1}^{i_{0}-1+j}(\xi_{k}-\xi_{i_{0}}). \tag{31}\] Finally, (29), (30) and (31) imply that \(\frac{r_{j}}{z_{j}}=C_{\mathrm{loc}}\), for \(j=1,\ldots,p+2-\ell\), which completes the proof. If we focus on the \(L^{2}\)-norm, using Theorems 3.1 and 2.1 together with the last theorem, we conclude the following result. **Corollary 3.11**.: Under the assumptions of Theorem 3.10, let \(\hat{\mathcal{S}}\) be the spline space associated to \(\hat{\boldsymbol{\xi}}:=\boldsymbol{\xi}\setminus\{\xi_{i_{0}}\}\). Then, for any \(s\in\mathcal{S}\), \[\min_{\hat{s}\in\hat{\mathcal{S}}}\|s-\hat{s}\|_{L^{2}}\leq C_{\mathrm{loc}}|\mathcal{J}_{\zeta_{j_{0}}}(D^{p-\ell}s)|, \tag{32}\] with \(C_{\mathrm{loc}}\) as in (28). We remark that although the right-hand side in (32) is fully computable, in general it significantly overestimates the \(L^{2}\)-error. ## 4 Algorithms for adaptive knot removal Theorem 3.5 provides a simple formula to compute the error when removing a single knot from a spline. In this section we use that formula as an error indicator and develop an adaptive algorithm for coarsening. More precisely, if \(s\) is a spline and \(\mathrm{TOL}>0\) is a prescribed tolerance, the algorithm computes a spline \(\hat{s}\), which belongs to a coarser spline space, such that \(\|s-\hat{s}\|\leq\mathrm{TOL}\). We consider some norms that are important in applications, such as the \(L^{2}\)-, the \(L^{\infty}\)- and the \(H^{1}\)-norm. ### Algorithm for the \(L^{2}\)- and the \(\boldsymbol{\xi}\)-norm Algorithm 1 is our proposed adaptive coarsening algorithm for knot removal up to a tolerance. It starts with a prescribed value \(\mathrm{TOL}>0\) and a spline \(s\in\mathcal{S}\), where \(\mathcal{S}\) denotes the spline space associated to a given \((p+1)\)-basic knot vector \(\boldsymbol{\xi}\). Its goal is to remove a large number of knots and find a new spline in this coarser spline space, such that the distance between the original spline and the final one, in the \(L^{2}\)- as well as in the \(\boldsymbol{\xi}\)-norm, is less than \(\mathrm{TOL}\). We now describe the main modules inside this algorithm; an illustrative sketch follows the list below. The super-index (\(k\)) refers to the \(k\)-th iteration of the while loop. Since the algorithm removes one knot per iteration, it also indicates that \(k\) knots have been removed up to that point. - In line 2, we compute the local indicators \(\varepsilon_{j}^{(0)}\) for each \(j\in\{2,\ldots,N-1\}\) defined by \[\varepsilon_{j}^{(0)}:=\mathbb{E}_{\boldsymbol{\xi},j}(s),\] where \(\mathbb{E}_{\boldsymbol{\xi},j}(s)\) is characterized by Theorem 3.5. - The goal of lines 7 to 9 is to compute the control points of a new spline belonging to the space that results from removing one knot at the \(j_{*}^{(k)}\)-th breakpoint of \(\boldsymbol{\xi}^{(k)}\). We have split this stage into three steps in order to emphasize the local nature of this update: * In line 7 we just select some components of the vector \(\mathbf{c}^{(k)}\). More specifically, let \(i_{*}=p+1+\sum_{r=2}^{j_{*}^{(k)}}m_{r}^{(k)}\), where \(m_{r}^{(k)}\) is the multiplicity of the \(r\)-th breakpoint in \(\boldsymbol{\xi}^{(k)}\), and let \(\ell_{*}=m_{j_{*}}^{(k)}-1\). 
Then \[\mathbf{c}^{(k)}_{\mathrm{loc}}=\left(c^{(k)}_{i_{*}-p-1},\ldots,c^{(k)}_{i_{*}-\ell_{*}}\right),\] where \(c_{i}^{(k)}\) is the \(i\)-th component of \(\mathbf{c}^{(k)}\). * In line 8, we compute the new local control points \(\mathbf{c}_{\mathrm{loc}}^{(k+1)}\) as the least squares solution of the following system \[E_{\mathrm{loc}}A_{\mathrm{loc}}\mathbf{c}_{\mathrm{loc}}^{(k+1)}=E_{\mathrm{loc}}\mathbf{c}_{\mathrm{loc}}^{(k)},\] where \(A_{\mathrm{loc}},E_{\mathrm{loc}}\) are defined in (8) and (18) with \(i_{0}=i_{*}\), \(\ell=\ell_{*}\) and considering the knots in \(\boldsymbol{\xi}^{(k)}\). * In line 9, we assemble the new vector of control points \(\mathbf{c}^{(k+1)}\) by replacing the subvector \(\mathbf{c}_{\mathrm{loc}}^{(k)}\) by \(\mathbf{c}_{\mathrm{loc}}^{(k+1)}\). Thus, we have \[\begin{cases}c_{i}^{(k+1)}=c_{i}^{(k)},&\quad i\leq i_{*}-p-2,\\ c_{i}^{(k+1)}=c_{i+1}^{(k)},&\quad i\geq i_{*}-\ell_{*}.\end{cases} \tag{33}\] - In line 10, we remove the knot \(\xi_{i_{*}}\) from the current knot vector, that is, we set \(\boldsymbol{\xi}^{(k+1)}=\boldsymbol{\xi}^{(k)}\setminus\{\xi_{i_{*}}\}\), so that \[\begin{cases}\xi_{i}^{(k+1)}=\xi_{i}^{(k)},&i=1,\ldots,i_{*}-1,\\ \xi_{i}^{(k+1)}=\xi_{i+1}^{(k)},&i=i_{*},\ldots,n+p,\end{cases} \tag{34}\] and we also let \(N^{(k+1)}\) be the number of breakpoints of \(\boldsymbol{\xi}^{(k+1)}\), given by \(N^{(k+1)}=N^{(k)}\) when \(\ell_{*}>0\), and \(N^{(k+1)}=N^{(k)}-1\) when \(\ell_{*}=0\). - In line 11, we compute the local indicators \(\{\varepsilon_{j}^{(k+1)}\}_{j=2}^{N^{(k+1)}-1}\) as explained below. Such indicators are defined by \[\varepsilon_{j}^{(k+1)}\coloneqq\mathbb{E}_{\boldsymbol{\xi}^{(k+1)},j}(s_{k+1}), \tag{35}\] where \(s_{k+1}\) is the spline function with control points \(\mathbf{c}^{(k+1)}\) in the spline space associated to the knot vector \(\boldsymbol{\xi}^{(k+1)}\). According to Theorem 3.5 and Remark 3.6, we have that \(\varepsilon_{j}^{(k+1)}\) depends only on the knots in \(\boldsymbol{\xi}^{(k+1)}_{\mathrm{loc},i_{j}}\coloneqq\{\xi_{i_{j}-p-1}^{(k+1)},\ldots,\xi_{i_{j}+p+1-\ell_{j}}^{(k+1)}\}\) and the coefficients in \(\mathbf{c}^{(k+1)}_{\mathrm{loc},i_{j}}\coloneqq\{c_{i_{j}-p-1}^{(k+1)},\ldots,c_{i_{j}-\ell_{j}}^{(k+1)}\}\). Here, \(i_{j}\coloneqq p+1+\sum_{r=2}^{j}m_{r}^{(k+1)}\), where \(m_{r}^{(k+1)}\) denotes the multiplicity of the \(r\)-th breakpoint in \(\boldsymbol{\xi}^{(k+1)}\) and \(\ell_{j}=m_{j}^{(k+1)}-1\). Thus, taking into account (33) and (34) we have that3 Footnote 3: Notice that if \(\ell_{*}\geq 1\), then \(m_{j}^{(k+1)}=m_{j}^{(k)}\), for all \(j\neq j_{*}\). When \(\ell_{*}=0\), we have that \(m_{j}^{(k+1)}=m_{j}^{(k)}\), for \(j<j_{*}\) and \(m_{j}^{(k+1)}=m_{j+1}^{(k)}\), for \(j\geq j_{*}\). * if \(i_{j}\leq i_{*}-p-2\), \(\boldsymbol{\xi}^{(k+1)}_{\mathrm{loc},i_{j}}=\boldsymbol{\xi}^{(k)}_{\mathrm{loc},i_{j}}\) and \(\mathbf{c}^{(k+1)}_{\mathrm{loc},i_{j}}=\mathbf{c}^{(k)}_{\mathrm{loc},i_{j}}\), whence \(\varepsilon_{j}^{(k+1)}=\varepsilon_{j}^{(k)}\). * if \(i_{j}\geq i_{*}+p+1\), \(\boldsymbol{\xi}^{(k+1)}_{\mathrm{loc},i_{j}}=\boldsymbol{\xi}^{(k)}_{\mathrm{loc},i_{j}+1}\) and \(\mathbf{c}^{(k+1)}_{\mathrm{loc},i_{j}}=\mathbf{c}^{(k)}_{\mathrm{loc},i_{j}+1}\), so that \[\varepsilon_{j}^{(k+1)}=\begin{cases}\varepsilon_{j+1}^{(k)},&\text{ if }\,\ell_{*}=0,\\ \varepsilon_{j}^{(k)},&\text{ if }\,\ell_{*}\geq 1.\end{cases}\] Therefore, we have to compute \(\varepsilon_{j}^{(k+1)}\) using (35) only for the few indices \(j\) such that \(i_{*}-p-1\leq i_{j}\leq i_{*}+p\). 
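A minimal sketch of this local update is given below, written for the simpler case \(E_{\mathrm{loc}}=I\) (i.e., the \(\|\cdot\|_{\mathrm{cp}}\) variant of Remark 3.7); in Algorithm 1 proper one would scale the rows of \(A_{\mathrm{loc}}\) and of \(\mathbf{c}_{\mathrm{loc}}^{(k)}\) by the diagonal matrix \(E_{\mathrm{loc}}\) from (18) before the solve. The bidiagonal structure of \(A_{\mathrm{loc}}\) is an assumption consistent with the identity \((\mathbf{r}^{T}A_{\mathrm{loc}})_{j}=\alpha_{j}r_{j}+(1-\alpha_{j+1})r_{j+1}\) used in the proof of Theorem 3.5.

```python
import numpy as np

def local_knot_removal_update(alpha, c_loc):
    # alpha : the p+2-ell weights alpha_1, ..., alpha_{p+2-ell}
    # c_loc : the p+2-ell affected control points (line 7 of Algorithm 1)
    n = len(c_loc)
    A = np.zeros((n, n - 1))                 # local knot-insertion matrix (8)
    for j in range(n - 1):
        A[j, j] = alpha[j]                   # alpha_{j+1} on the diagonal
        A[j + 1, j] = 1.0 - alpha[j + 1]     # 1 - alpha_{j+2} below it
    # least-squares solve of A x = c_loc (line 8); with E_loc != I, scale first
    new_c, *_ = np.linalg.lstsq(A, np.asarray(c_loc), rcond=None)
    return new_c                             # p+1-ell points spliced in via (33)
```

The caller then splices `new_c` into \(\mathbf{c}^{(k)}\) as in (33) and drops \(\xi_{i_{*}}\) from the knot vector as in (34).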
We conclude this section with the following bound for the discrepancy between the original spline \(s\) and the output of Algorithm 1. **Theorem 4.1**.: Let \(s\in\mathcal{S}\) and \(\mathrm{TOL}>0\). Then, Algorithm 1 finishes after a finite number of iterations and returns a spline \(\hat{s}\in\hat{\mathcal{S}}\) such that \[\left\|s-\hat{s}\right\|_{L^{2}}\leq\left\|s-\hat{s}\right\|_{\boldsymbol{\xi}}<\mathrm{TOL}\,.\] Proof.: It is clear that Algorithm 1 finishes after \(K\leq k_{\max}\) iterations. If \(s_{k}\) is the spline function with control points \(\mathbf{c}^{(k)}\) in the spline space associated to the knot vector \(\boldsymbol{\xi}^{(k)}\), we have that \[\varepsilon^{(k)}\coloneqq\varepsilon_{j_{*}^{(k)}}^{(k)}=\left\|s_{k}-s_{k+1}\right\|_{\boldsymbol{\xi}^{(k)}},\quad k=0,\ldots,K-1.\] Moreover, \(\hat{\boldsymbol{\xi}}=\boldsymbol{\xi}^{(K)}\) and \(\hat{s}=s_{K}\in\hat{\mathcal{S}}\), where \(\hat{\mathcal{S}}\) is the spline space associated to \(\hat{\boldsymbol{\xi}}\). Now, Theorem 2.1 yields \[\left\|s-\hat{s}\right\|_{L^{2}}\leq\left\|s-\hat{s}\right\|_{\boldsymbol{\xi}}=\left\|s_{0}-s_{K}\right\|_{\boldsymbol{\xi}^{(0)}}\leq\sum_{k=0}^{K-1}\left\|s_{k}-s_{k+1}\right\|_{\boldsymbol{\xi}^{(0)}}.\] Since \(\boldsymbol{\xi}^{(k)}\) is a subsequence of \(\boldsymbol{\xi}^{(0)}\), from [10, Proposition 5.2] we obtain that \(\|s_{k}-s_{k+1}\|_{\boldsymbol{\xi}^{(0)}}\leq\varepsilon^{(k)}\). Finally, noticing that the algorithm guarantees that \[\varepsilon^{(K-1)}<\mathrm{TOL}-(\varepsilon^{(0)}+\varepsilon^{(1)}+\ldots+\varepsilon^{(K-2)}),\] we conclude that \[\|s-\hat{s}\|_{L^{2}}\leq\|s-\hat{s}\|_{\boldsymbol{\xi}}\leq\sum_{k=0}^{K-1}\varepsilon^{(k)}<\mathrm{TOL},\] which completes the proof. ### Algorithm for the \(L^{\infty}\)-norm In this section we explain how Algorithm 1 can be easily modified in order to obtain an approximation of a given spline in a coarser space such that the distance in the \(L^{\infty}\)-norm is less than a prescribed tolerance. We denote by \(\|\cdot\|_{\mathrm{cp}\infty}\) the spline norm defined by \(\|s\|_{\mathrm{cp}\infty}:=\|\mathbf{c}\|_{\infty}=\max_{i=1,\ldots,n}|c_{i}|\), where \(c_{i}\) is the \(i\)-th B-spline coefficient of \(s\). Thus, the \(L^{\infty}\)-stability of the B-spline basis (cf. (5)) reads: \[K_{p}^{-1}\|s\|_{\mathrm{cp}\infty}\leq\|s\|_{L^{\infty}}\leq\|s\|_{\mathrm{cp}\infty},\qquad\forall s\in\mathcal{S}. \tag{36}\] Following the notation introduced at the beginning of Section 3 we consider \[\mathbb{E}_{\boldsymbol{\xi},j_{0}}^{\|\cdot\|_{\mathrm{cp}\infty}}(s)=\min_{g\in\hat{\mathcal{S}}}\|s-g\|_{\mathrm{cp}\infty}, \tag{37}\] and notice that the spline \(g\) achieving the minimum in (37) is, in general, not unique. Besides, it is easy to check that the minimum in \(\mathbb{E}_{\boldsymbol{\xi},j_{0}}^{\|\cdot\|_{\mathrm{cp}\infty}}(s)=\min_{\mathbf{z}\in\mathbb{R}^{p+1-\ell}}\|\mathbf{c}_{\mathrm{loc}}-A_{\mathrm{loc}}\mathbf{z}\|_{\infty}\) is achieved at a unique \(\mathbf{\hat{c}}_{\mathrm{loc}}\) given by \[\mathbf{\hat{c}}_{\mathrm{loc}}=\operatorname*{argmin}_{\mathbf{z}\in\mathbb{R}^{p+1-\ell}}\|\mathbf{c}_{\mathrm{loc}}-A_{\mathrm{loc}}\mathbf{z}\|_{\infty}. \tag{38}\] From the analysis in [10, Section 4], it follows that the square matrix \(M:=[A_{\mathrm{loc}}\mid\mathbf{s}]\) is non-singular, where \(\mathbf{s}=(s_{1},\ldots,s_{p+2-\ell})^{T}\) with \(s_{i}=(-1)^{p-\ell-i}\), for \(i=1,\ldots,p+2-\ell\). 
Moreover, if \(\mathbf{x}=(x_{1},\ldots,x_{p+2-\ell})^{T}\) is the solution of \(M\mathbf{x}=\mathbf{c}_{\mathrm{loc}}\), then \(\hat{\mathbf{c}}_{\mathrm{loc}}=(x_{1},\ldots,x_{p+1-\ell})^{T}\) and \(\mathbb{E}_{\boldsymbol{\xi},j_{0}}^{\|\cdot\|_{\mathrm{cp}\infty}}(s)=|x_{p+2-\ell}|\). Algorithm 2 is the coarsening algorithm for the \(L^{\infty}\)-norm. **Remark 4.2**.: Notice that the explanation of line 11 of Algorithm 1 also applies in this case, because \(\varepsilon_{j}^{(k+1)}\) depends only on the local knot insertion matrix and on the same few control points as before, see (38). We conclude this section with the following result. **Theorem 4.3**.: Let \(s\in\mathcal{S}\) and \(\mathrm{TOL}>0\). Then, Algorithm 2 finishes after a finite number of iterations and returns a spline \(\hat{s}\in\hat{\mathcal{S}}\) such that \[\|s-\hat{s}\|_{L^{\infty}}<\mathrm{TOL}\,.\] **Algorithm 2**\(L^{\infty}\)_adaptive_knot_removal Follow the same lines as in Algorithm 1, taking into account the slight modifications detailed below: - In line 2, we compute the local indicators \(\varepsilon_{j}^{(0)}\coloneqq\mathbb{E}_{\boldsymbol{\xi},j}^{\|\cdot\|_{\mathrm{cp}\infty}}(s)\) for each \(j\in\{2,\ldots,N-1\}\). - In line 8, we compute \(\mathbf{c}_{\mathrm{loc}}^{(k+1)}\) given by \[\mathbf{c}_{\mathrm{loc}}^{(k+1)}=\operatorname*{argmin}_{\mathbf{z}\in\mathbb{R}^{p+1-\ell_{*}}}\|\mathbf{c}_{\mathrm{loc}}^{(k)}-A_{\mathrm{loc}}\mathbf{z}\|_{\infty},\] as explained above, where \(A_{\mathrm{loc}}\) is defined in (8) with \(i_{0}=i_{*}\), \(\ell=\ell_{*}\) and considering the knots in \(\boldsymbol{\xi}^{(k)}\). - In line 11, we proceed as in line 11 of Algorithm 1 and compute only \(\varepsilon_{j}^{(k+1)}\coloneqq\mathbb{E}_{\boldsymbol{\xi}^{(k+1)},j}^{\|\cdot\|_{\mathrm{cp}\infty}}(s_{k+1})\), for \(i_{*}-p-1\leq i_{j}\leq i_{*}+p\). Proof.: Similarly to the proof of Theorem 4.1, Algorithm 2 finishes after \(K\leq k_{\max}\) iterations. If \(s_{k}\) is the spline function with control points \(\mathbf{c}^{(k)}\) in the spline space associated to the knot vector \(\boldsymbol{\xi}^{(k)}\), we have that \[\varepsilon^{(k)}\coloneqq\varepsilon_{j_{*}^{(k)}}^{(k)}=\mathbb{E}_{\boldsymbol{\xi}^{(k)},j_{*}^{(k)}}^{\|\cdot\|_{\mathrm{cp}\infty}}(s_{k})=\|s_{k}-s_{k+1}\|_{\mathrm{cp}\infty},\quad k=0,\ldots,K-1.\] Now, since \(\hat{\boldsymbol{\xi}}=\boldsymbol{\xi}^{(K)}\) and \(\hat{s}=s_{K}\in\hat{\mathcal{S}}\), where \(\hat{\mathcal{S}}\) is the spline space associated to \(\hat{\boldsymbol{\xi}}\), and taking into account (36), we conclude that \[\|s-\hat{s}\|_{L^{\infty}}=\|s_{0}-s_{K}\|_{L^{\infty}}\leq\sum_{k=0}^{K-1}\|s_{k}-s_{k+1}\|_{L^{\infty}}\leq\sum_{k=0}^{K-1}\|s_{k}-s_{k+1}\|_{\mathrm{cp}\infty}=\sum_{k=0}^{K-1}\varepsilon^{(k)}<\mathrm{TOL}.\] ### Algorithm for the \(H^{1}\)-norm We conclude this section by considering the case of the \(H^{1}\)-norm. Roughly speaking, given a tolerance \(\mathrm{TOL}>0\) and a \(C^{0}\)-spline function \(s\), we first apply Algorithm 1 to the right-derivative \(s^{\prime}\) of \(s\) to obtain a coarsened approximation \(\hat{s}^{\prime}\) satisfying \(\|s^{\prime}-\hat{s}^{\prime}\|_{L^{2}}<\mathrm{TOL}^{\prime}\), for a suitable value of \(\mathrm{TOL}^{\prime}>0\). We then integrate the spline \(\hat{s}^{\prime}\) to obtain a spline \(\hat{s}\) such that \(\|s-\hat{s}\|_{H^{1}}<\mathrm{TOL}\). Let \(\mathcal{S}\) denote the spline space associated to a \((p+1)\)-open4 knot vector \(\boldsymbol{\xi}=\{\xi_{i}\}_{i=1}^{n+p+1}\). 
In this section, we assume that the multiplicity of each breakpoint is at most \(p\), so that \(\mathcal{S}\subset C[a,b]\) and \(\mathcal{S}\subset H^{1}(a,b)\). The set of right-derivatives, defined by \(\mathcal{S}^{\prime}\coloneqq\{s^{\prime}\ |\ s\in\mathcal{S}\}\), can be characterized as the spline space associated to the \(p\)-open knot vector \(\boldsymbol{\xi}^{\prime}\coloneqq\{\xi_{i}\}_{i=2}^{n+p}\), cf. [12, Theorem 7]. Footnote 4: The \((p+1)\)-basic knot vector \(\boldsymbol{\xi}\) is called _open_ if \(\xi_{1}=\cdots=\xi_{p+1}\) and \(\xi_{n+1}=\cdots=\xi_{n+p+1}\). If \(\mathbf{c}=(c_{1},\ldots,c_{n})^{T}\) denotes the vector of B-spline coefficients of \(s\in\mathcal{S}\), it is well known that the vector \(\mathbf{c}^{\prime}=(c_{2}^{\prime},\ldots,c_{n}^{\prime})^{T}\) of B-spline coefficients of \(s^{\prime}\) is given by \[c_{i}^{\prime}=\frac{c_{i}-c_{i-1}}{\xi_{i}^{*}-\xi_{i-1}^{*}},\qquad i=2,\ldots,n, \tag{39}\] where \(\xi_{i}^{*}\coloneqq\frac{\xi_{i+1}+\ldots+\xi_{i+p}}{p}\) denotes the \(i\)-th Greville abscissa. ``` 0:\(\mathrm{TOL}>0\), \(\mathbf{c}\) and \(\boldsymbol{\xi}\) (Here, \(\mathbf{c}\) contains the B-spline coefficients of \(s\in\mathcal{S}\), where \(\mathcal{S}\subset C[a,b]\) is the spline space associated to a \((p+1)\)-open knot vector \(\boldsymbol{\xi}\)) 1:\([\mathbf{c}^{\prime},\boldsymbol{\xi}^{\prime}]\leftarrow\) compute_control_points_of_the_derivative\((\mathbf{c},\boldsymbol{\xi})\) 2:\(\mathrm{TOL}^{\prime}=\mathrm{TOL}/\sqrt{(b-a)^{2}+1}\) 3:\([\boldsymbol{\hat{c}}^{\prime},\boldsymbol{\hat{\xi}}^{\prime}]\gets L^{2}\)_adaptive_knot_removal\((\mathrm{TOL}^{\prime},\mathbf{c}^{\prime},\boldsymbol{\xi}^{\prime})\)\(\triangleright\) Algorithm 1 4:Build \(\boldsymbol{\hat{\xi}}\) from \(\boldsymbol{\hat{\xi}}^{\prime}\) 5:\(\hat{\mathbf{c}}\leftarrow\)compute_control_points_of_the_primitive\((\hat{\mathbf{c}}^{\prime},\boldsymbol{\hat{\xi}})\) Output:\(\hat{\mathbf{c}}\) and \(\boldsymbol{\hat{\xi}}\) (Now, \(\hat{s}\in\hat{\mathcal{S}}\), where \(\hat{\mathcal{S}}\) is the spline space associated to \(\boldsymbol{\hat{\xi}}\) and \(\hat{\mathbf{c}}\) contains the B-spline coefficients of \(\hat{s}\)) ``` **Algorithm 3**\(H^{1}\)_adaptive_knot_removal Algorithm 3 is the coarsening algorithm for the \(H^{1}\)-norm, the modules of which we explain in the following paragraphs. - In line 1 we compute the B-spline coefficients \(\mathbf{c}^{\prime}\) of \(s^{\prime}\) using (39), and build the corresponding \(p\)-open knot vector by removing one occurrence of the first and of the last knot from \(\boldsymbol{\xi}\). - In line 3 we apply Algorithm 1 to obtain a spline \(\hat{s}^{\prime}\) satisfying \(\|s^{\prime}-\hat{s}^{\prime}\|_{L^{2}}<\mathrm{TOL}^{\prime}\), with \(\mathrm{TOL}^{\prime}\) as defined in line 2. Notice that Algorithm 1 is applied in a space of splines of degree \(\leq p-1\). - In line 4 we build the \((p+1)\)-open knot vector \(\boldsymbol{\hat{\xi}}\) obtained from \(\boldsymbol{\hat{\xi}}^{\prime}\) by repeating the first and the last knot once more. 
- In line 5 we compute the B-spline coefficients \(\mathbf{\hat{c}}=(\hat{c}_{1},\ldots,\hat{c}_{\hat{n}})^{T}\) of \(\hat{s}\) as follows: \[\hat{c}_{1}=c_{1},\qquad\hat{c}_{i}=\hat{c}_{i}^{\prime}(\hat{\xi}_{i}^{*}-\hat{\xi}_{i-1}^{*})+\hat{c}_{i-1},\quad i=2,\ldots,\hat{n}, \tag{40}\] where \(\mathbf{\hat{c}}^{\prime}=(\hat{c}_{2}^{\prime},\ldots,\hat{c}_{\hat{n}}^{\prime})^{T}\) are the B-spline coefficients of \(\hat{s}^{\prime}\), and \(\hat{\xi}_{i}^{*}\) are the Greville abscissas associated to \(\boldsymbol{\hat{\xi}}\). At this point, it is important to emphasize that \(\boldsymbol{\hat{\xi}}\) can indeed be obtained from \(\boldsymbol{\xi}\) by removing some knots. Additionally, we remark that since \(\boldsymbol{\hat{\xi}}\) and \(\boldsymbol{\xi}\) are _open_, the first equality in (40) guarantees that \(\hat{s}(a)=s(a)\). Regarding Algorithm 3, we have the following result. **Theorem 4.4**.: Let \(s\in\mathcal{S}\) and \(\mathrm{TOL}>0\). Then, Algorithm 3 finishes and returns a spline \(\hat{s}\in\hat{\mathcal{S}}\) such that \[\|s-\hat{s}\|_{H^{1}}<\mathrm{TOL}\,.\] Proof.: Due to Theorem 4.1, we have that \(|s-\hat{s}|_{H^{1}}<\mathrm{TOL}^{\prime}=\frac{\mathrm{TOL}}{\sqrt{(b-a)^{2}+1}}\). Additionally, since \(\hat{s}(a)=s(a)\), the following Poincaré inequality holds: \[\|s-\hat{s}\|_{L^{2}}\leq(b-a)|s-\hat{s}|_{H^{1}}.\] Thus, \[\|s-\hat{s}\|_{H^{1}}^{2}=\|s-\hat{s}\|_{L^{2}}^{2}+|s-\hat{s}|_{H^{1}}^{2}\leq[(b-a)^{2}+1]\,|s-\hat{s}|_{H^{1}}^{2}<[(b-a)^{2}+1]\,\mathrm{TOL}^{\prime 2}=\mathrm{TOL}^{2},\] which concludes the proof. ## 5 Numerical experiments We finish this article with some numerical tests that show the performance of the algorithms proposed in the previous section and briefly illustrate some useful applications to data reduction and local coarsening in numerical methods for partial differential equations. _Example 5.1_ (Adaptive coarsening and adaptive refinement).: In this test we illustrate the fact that local coarsening can be regarded as the reverse procedure of local refinement. We consider the Runge function \(f_{1}(x)=\frac{1}{1+x^{2}}\), for \(-5\leq x\leq 5\), and perform adaptive refinement in order to approximate it, considering the \(L^{2}\)-projection onto the space of \(C^{0}\) splines of degree \(\leq p\) defined on the graded meshes. We have considered \(p=2\) and \(p=4\). Then, starting with the finest adaptive mesh and the corresponding spline that best approximates \(f_{1}\), we apply Algorithm 1 and compute the \(L^{2}\)-error after each knot removal. The results obtained with four strategies are presented in Figure 1. Strategy 1 consists in applying Algorithm 1 as stated in the previous section, whereas the other strategies consider different error indicators. Strategies 2, 3 and 4 consider \(\mathbb{E}_{\boldsymbol{\xi},j}^{\|\cdot\|_{\mathrm{cp}}}(s)\) given in (23), \(|D|\) (see Remark 3.7) and the jumps (cf. Theorems 3.9 and 3.10) as local indicators, respectively. Strategies 1 to 3 show optimal slopes for the error in terms of degrees of freedom. The behavior of Strategy 4 is very poor, and we thus disregard it in the subsequent experiments. Since Strategies 1 and 2 correspond to solving minimization problems in the _localized_ norms \(\|\cdot\|_{\boldsymbol{\xi}}\) and \(\|\cdot\|_{\mathrm{cp}}\), respectively, the choice of the control points vector \(\boldsymbol{\hat{c}}\) after each knot removal is clear, inexpensive, and explicitly stated in Algorithm 1. 
The fact that only a few control points change after each iteration implies that the update of the error indicators is also very cheap, because most of them remain unchanged. In this respect, the only drawback of Strategy 2 is that the norm \(\|\cdot\|_{\mathrm{cp}}\) is not equivalent to the \(L^{2}\)-norm. Regarding Strategies 3 and 4, it is not clear how the control points should be defined after each knot removal. In order to benefit these strategies, we computed \(\hat{s}\) as the \(L^{2}\)-projection of \(s\) after each iteration (by solving a global linear system). This is not so convenient computationally, as compared to the first two strategies. In Figure 2 we show the error curves for Strategies 1 to 3 applied to \(f_{2}(x)=\sqrt[5]{x}\) on the interval \([-1,1]\). It is noticeable in this example that Strategy 1 outperforms the other two. This is to be expected because the former is the only one that is guaranteed to keep an approximation below a given tolerance in \(L^{2}\). Strategies 2 and 3 perform well, but are sub-optimal, and no theory guarantees that they keep the \(L^{2}\)-error under control. Thus, we conclude that our algorithm automatically selects the most convenient knots to be sequentially removed. Figure 2: Adaptive coarsening for \(f_{2}(x)=\sqrt[5]{x}\), for \(-1\leq x\leq 1\). _Example 5.2_ (Data approximation in maximum-norm).: We consider an example inspired by [10, Example 6.3]. We sample the Runge function at \(101\) equally spaced points over the interval \([-5,5]\). We consider the continuous piecewise linear interpolant to the data, written as a linear combination of \(C^{0}\) cubic B-splines. We then apply Algorithm 2 to remove knots until only \(7\) and \(3\) interior knots remain, respectively. In Figure 3 we show these approximations, and the error with respect to the original sampling. The results obtained here by simply applying Algorithm 2 are similar to those from [10, Example 6.3], where a multistage data reduction technique has been used. _Example 5.3_ (Heat equation with local adaptive coarsening).: We consider the heat equation \(\frac{\partial u}{\partial t}-\frac{\partial^{2}u}{\partial x^{2}}=0\), for \(0<x<10\) and \(t>0\). We impose homogeneous Neumann boundary conditions for \(t>0\) and the initial value \(u(x,0)=u_{0}(x)\coloneqq 1+\sin(x^{\frac{7}{20}}\exp(\frac{11x}{50}))\), for \(0<x<10\), as shown in Figure 4 (left). For the spatial discretization we consider the space \(\mathcal{S}=\mathcal{S}_{p}\) of splines of degree \(\leq p\) of maximum smoothness defined on a partition \(Z\) of \(1001\) breakpoints equally distributed in the interval \([0,10]\). The approximation of the initial value is taken to be the \(L^{2}\)-projection \(s_{0}\in\mathcal{S}_{p}\) of \(u_{0}\). The initial error \(\|s_{0}-u_{0}\|_{L^{2}(0,10)}\) is around \(10^{-4}\) for the different values of \(p=2,3,4\) considered. We have used the assembly routines from [17]. For the time discretization we consider the Backward Euler method with time step size \(\Delta t=0.01\). For each fixed polynomial degree \(p\), we proceed iteratively in two different ways to get an approximation of the solution \(u(x,t)\) at time \(t=1\). 
On the one hand, we consider the same space \(\mathcal{S}_{p}\) at each time step \(t_{k}\coloneqq k\Delta t\), and proceed as usual: we solve a discrete elliptic equation to compute the approximation \(s_{k+1}\) of the solution at time \(t_{k+1}\), provided the approximation \(s_{k}\) at time \(t_{k}\) is known. On the other hand, as an alternative procedure, we perform a coarsening of \(s_{k}\) before computing the approximation \(s_{k+1}\). More specifically, assuming that \(s_{k}\) belongs to a spline space \(\mathcal{S}_{p}^{(k)}\), we apply Algorithm 3 to find a spline space \(\mathcal{S}_{p}^{(k+1)}\subset\mathcal{S}_{p}^{(k)}\) and \(\hat{s}_{k}\in\mathcal{S}_{p}^{(k+1)}\) such that \(\|\hat{s}_{k}-s_{k}\|_{H^{1}(0,10)}<10^{-3}\). Then, we use \(\hat{s}_{k}\) to find \(s_{k+1}\) belonging to \(\mathcal{S}_{p}^{(k+1)}\). Figure 3: Coarsening up to seven (left) and three (right) interior knots after starting with \(297\) interior knots approximating a sample of the Runge function. The solid line corresponds to the approximating spline and the dotted line to the error at the sampled points. The numerical results show that the quality of both approximations of the solution \(u(x,t)\) at time \(t=1\) is similar: the difference in the \(L^{2}\)-norm is around \(10^{-4}\) in all cases (\(p=2,3,4\)). In Figure 4 (right) we show the number of degrees of freedom as a function of time for the different polynomial degrees considered. In Figure 5 we plot the solution at time \(t=1\) obtained for the different polynomial degrees, showing also the breakpoints of the spline space at this final time.
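The alternative procedure of Example 5.3 can be outlined by the following minimal sketch; the helper names `h1_adaptive_knot_removal` (standing in for Algorithm 3) and `assemble_mass_and_stiffness` are hypothetical, and the homogeneous Neumann conditions require no modification of the Galerkin system.

```python
import numpy as np

def step_with_coarsening(c, xi, p, dt=0.01, tol=1e-3):
    # coarsen the current iterate in the H1-norm (Algorithm 3) ...
    c, xi = h1_adaptive_knot_removal(tol, c, xi)        # hypothetical helper
    # ... then one Backward Euler step for u_t - u_xx = 0 in the coarser space:
    # M dc/dt + K c = 0  ->  (M + dt K) c_new = M c
    M, K = assemble_mass_and_stiffness(xi, p)           # hypothetical helper
    return np.linalg.solve(M + dt * K, M @ c), xi
```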
In this paper, we analyze the error produced by removing a knot from a spline function; when the knot has multiplicity one or greater, its multiplicity is reduced by one unit. In particular, we derive a simple formula for computing this error in terms of the neighboring knots and the control points of the spline under consideration. Furthermore, we establish the relationship between this error and the jump of a derivative of the spline. Based on this theory, we propose efficient and low-cost local error indicators and adaptive coarsening algorithms. Finally, we illustrate their performance with numerical experiments and present some applications.
2309.15714
Integrating LLM, EEG, and Eye-Tracking Biomarker Analysis for Word-Level Neural State Classification in Semantic Inference Reading Comprehension
With the recent proliferation of large language models (LLMs), such as Generative Pre-trained Transformers (GPT), there has been a significant shift in exploring human and machine comprehension of semantic language meaning. This shift calls for interdisciplinary research that bridges cognitive science and natural language processing (NLP). This pilot study aims to provide insights into individuals' neural states during a semantic relation reading-comprehension task. We propose jointly analyzing LLMs, eye-gaze, and electroencephalographic (EEG) data to study how the brain processes words with varying degrees of relevance to a keyword during reading. We also use a feature engineering approach to improve the fixation-related EEG data classification while participants read words with high versus low relevance to the keyword. The best validation accuracy in this word-level classification is over 60\% across 12 subjects. Words of high relevance to the inference keyword had significantly more eye fixations per word: 1.0584 compared to 0.6576 when including no-fixation words, and 1.5126 compared to 1.4026 when excluding them. This study represents the first attempt to classify brain states at a word level using LLM knowledge. It provides valuable insights into human cognitive abilities and the realm of Artificial General Intelligence (AGI), and offers guidance for developing potential reading-assisted technologies.
Yuhong Zhang, Qin Li, Sujal Nahata, Tasnia Jamal, Shih-kuen Cheng, Gert Cauwenberghs, Tzyy-Ping Jung
2023-09-27T15:12:08
http://arxiv.org/abs/2309.15714v2
Integrating LLM, EEG, and Eye-Tracking Biomarker Analysis for Word-Level Neural State Classification in Semantic Inference Reading Comprehension ###### Abstract With the recent proliferation of large language models (LLMs), such as Generative Pre-trained Transformers (GPT), there has been a significant shift in exploring human and machine comprehension of semantic language meaning. This shift calls for interdisciplinary research that bridges cognitive science and natural language processing (NLP). This pilot study aims to provide insights into individuals' neural states during a semantic relation reading-comprehension task. We propose jointly analyzing LLMs, eye-gaze, and electroencephalographic (EEG) data to study how the brain processes words with varying degrees of relevance to a keyword during reading. We also use a feature engineering approach to improve the fixation-related EEG data classification while participants read words with high versus low relevance to the keyword. The best validation accuracy in this word-level classification is over 60% across 12 subjects. Words of high relevance to the inference keyword had significantly more eye fixations per word: 1.0584 compared to 0.6576 when including no-fixation words, and 1.5126 compared to 1.4026 when excluding them. This study represents the first attempt to classify brain states at a word level using LLM knowledge. It provides valuable insights into human cognitive abilities and the realm of Artificial General Intelligence (AGI), and offers guidance for developing potential reading-assisted technologies. Large Language Model, Brain-Computer Interface, Human-Computer Interface, EEG, Eye-fixation, Cognitive Computing, Pattern Recognition, Reading Comprehension, Computational Linguistics. ## I Introduction Recent advancements in LLMs and generative AI have significantly impacted various aspects of human society and industry. Notable examples include the GPT-X models developed by OpenAI and Midjourney, among others [1, 2, 3, 4]. As artificial agents improve their proficiency, it becomes increasingly crucial to deepen our understanding of machine learning, decision-making processes, and human cognitive functions [5]. For instance, both humans and machines employ strategies for semantic inference. Humans extract crucial information from texts via specific gaze patterns during reading [6, 7, 8], whereas language models predict subsequent words using contextual cues [9]. Therefore, this pilot study raises the question: Can we differentiate individuals' mental states when their gaze fixates on words of varying significance within a sentence, particularly at a word level, during tasks involving semantic inference and reading comprehension? The success of such prediction tasks could have significant implications for current machine learning applications and for science and technology more broadly, such as human-in-the-loop machine learning [10], Brain-Computer Interfaces (BCI) for text communication [11], and personalized, real-time learning and accessibility tools [12]. Previous studies have identified biomarkers that reveal consistent patterns in subjects during reading comprehension tasks. For example, several neurobiological markers linked to reading comprehension, including the P300 and N400, were first identified in the 1980s [13]. In a groundbreaking reading comprehension study, distinct N400 patterns were revealed for "semantically moderate" and "semantically strong" words [14]. 
Furthermore, numerous classical theories within the cognitive science community aim to elucidate and delineate the processes through which humans comprehend text and make inferences. Kintsch [15] introduced the Construction-Integration (CI) model, which posits text comprehension as a two-stage process: initially constructing a textbase (comprehending the text at the surface and propositional level) and subsequently integrating it with prior knowledge to form a situation model (a mental representation of the text's content). Evans [16] suggests that cognition comprises two types of processes: automatic (Type 1) and deliberative (Type 2). The automatic process operates swiftly and relies on heuristics, whereas the deliberative process is slower, conscious, and grounded in logical reasoning. Rumelhart [17] suggests that all knowledge is organized into units called schemas, representing generic concepts stored in our memory. According to this theory, reading comprehension consists of activating the appropriate schemas that match the text's information [18]. Other established theories of text comprehension include Mental Models [19] and the Landscape Model [19]. While these theories in cognitive science offer valuable insights into text comprehension and inference, they often oversimplify cognitive processes and do not fully account for individual differences and context variability [20]. For instance, [21] attempted to analyze how the two brain hemispheres comprehend expository texts, which are reportedly more complex, versus narrative texts. However, their approach was limited to time-domain analysis of EEG signals, and the statistical evidence they provided was not robust enough to substantiate their conclusions [22]. With the advancement of machine learning (ML) algorithms, BCI technologies [23], and NLP techniques [24], conducting studies on reading comprehension in natural settings has become increasingly feasible. BCI systems establish a direct link between the human brain and the external environment, using the user's brain activity signals as a communication medium and translating them into usable data. Various signal modalities are employed in cognitive studies to investigate subjects' mental states, including Electroencephalography (EEG) [25], Functional Magnetic Resonance Imaging (fMRI) [26], Magnetoencephalography (MEG) [27], Positron Emission Tomography (PET) [28], and Eye-tracking methods [29]. For our study, we specifically employ high-density EEG because of its high temporal resolution, non-invasive nature, and the improved spatial coverage afforded by dense electrode arrays. In particular, Hollenstein et al. [30] recorded simultaneous EEG and eye-tracking data while subjects engaged in sentence-reading tasks, suggesting that integrating these technologies with NLP tools holds significant potential. This integration enables us to delve deeply into the natural reading process, potentially paving the way for developing real-time reading monitors and converting everyday reading materials into computationally analyzable formats [31, 32]. This study uses the Zurich Cognitive Language Processing Corpus (ZuCo) dataset [30] to explore potential patterns distinguishing two specific mental states: those triggered when subjects fixate on semantically salient words (High-Relevance Words, HRW) versus less significant words (Low-Relevance Words, LRW) during ZuCo's Task 3, which is centered on semantic inference. The main contribution of this study lies in the unique integration of NLP, EEG, and eye-tracking biomarker analysis across multiple disciplines. 
Prior work by [24] used seven NLP methods to build a comprehensive model for extracting keywords from sentences, employing deep neural networks for binary classification. However, the inflexibility of the embedded NLP model and the extreme data imbalance between the two classes resulted in significant over-fitting during the training of the classification model. As an improvement, this study uses advanced LLMs, such as GPT-4, to generate robust ground truths for HRWs and LRWs with respect to the keyword. These ground truths are the foundation for extracting EEG time series data at the word level for 12 subjects. Given the exploratory nature of this pilot study, the overall classification results exceeding 60% indicate that the joint utilization of EEG and eye-tracking data provides a viable biomarker for classifying whether subjects detect words of significant meaning in inference tasks. This study represents the first attempt to integrate the GPT model with EEG signal analysis to explain potential patterns in human comprehension and inference-making, specifically concerning words with substantial meaning. The remainder of this study is organized as follows: Section 2 presents the dataset used in our study, including subject information, experiment paradigms, and the data collection process and equipment. Section 3 explains our data processing pipeline methods, involving the EEG feature extraction pipeline and classification algorithms. Section 4 exhibits our LLM comparison, eye-fixation statistics, fixation-related potentials, classification results for 12 subjects across eight word relations, and the corresponding analysis. Lastly, in Section 5, we juxtapose our findings with existing literature, deliberate on the limitations of our study, and propose potential avenues for future research. ## II Dataset The ZuCo dataset includes high-density 128-channel EEG and eye-tracking data from 12 native English speakers, covering 21,629 words, 1,107 sentences, and 154,173 fixations over 4-6 hours of natural text reading. The ZuCo dataset offers preprocessed EEG segments corresponding to each word, aligned with eye fixations within word boundaries. These segments exhibit variable time steps, averaging around 150ms in duration. This study focused on Task 3 of the ZuCo dataset. This task, which achieved the highest mean accuracy score of 93.16% among the participants, involves reading sentences from the Wikipedia corpus that emphasize specific word relations. Eight of the nine word relations in Task 3 were selected for analysis, excluding the "VISITED" relation due to its ambiguous interpretability. In this subset, 356 out of 407 sentences were used. Subject-specific omissions were also noted: ZGW missed "JOB," ZKB missed "WIFE," and ZPH missed "POLITICAL AFFILIATION" and "WIFE." Figure 1 is a visual representation of Task 3. This study analyzed numerous eye-fixation and EEG data features, specifically examining five eye-fixation features for both HRWs and LRWs. These features are gaze duration (GD), total reading time (TRT), first fixation duration (FFD), single fixation duration (SFD), and go-past time (GPT). For the eye-fixation features, we used the data directly from ZuCo; for the EEG data, we extracted our features based on its preprocessed data. The original data were collected in a controlled environment. EEG data were recorded using a 128-channel EEG Geodesic Hydrocel system with a sampling rate of 500 Hz and a bandpass of 0.1 to 100 Hz. 
The original recording reference was at Cz; we re-referenced the channels to the average of the mastoids. Eye position and pupil size were captured using an EyeLink 1000 Plus eye tracker, also with a sampling rate of 500 Hz. For additional details on the data collection methodology and protocols, readers are referred to the original ZuCo study [30]. Fig. 1: **Task 3: Experiment Paradigm and Sample LLM Outputs.** (a) Experimental setup: In Task 3 of the ZuCo study, participants read 407 sentences featuring nine relationships (keywords) on a computer screen. Simultaneously, we recorded both eye-gaze tracking data and EEG signals. Subsequently, participants were tasked with determining if the sentence contained the relation mentioned in a subsequent question. (b) Sample Language Model Output: A sample output from the language model is presented here. The top row displays a sentence with the "AWARD" relation. The language model identifies high- and low-relevance words to the keyword and highlights them in red and blue font colors in the following two rows. ## III Method ### _LLM and word extraction_ ```
Input:  SentenceTable, WdEEGSegment
Output: WdsGps, Mistakes, EEGGps
 1: Initialize: Mistakes, TempWds, WdsGps, EEGGps
 2: Models <- ['GPT-3.5 Turbo', 'GPT-4', 'LLaMA', 'Phind']
 3: Relations <- ['AWARD', 'EDUCATION', ..., 'WIFE']
 4: NaturalPrompt <- ['prompt 1']
 5: ForcedPrompt <- ['prompt 2']
 6: for model in Models do
 7:   CurrentModel <- LLM_API(model)
 8:   for relation in Relations do
 9:     InputRel <- ExtractRelation(relation)
10:     for idx in 1:length(SentenceTable) do
11:       InputAnswer, InputSent <- ExtractSentenceFrom(SentenceTable[idx])
12:       OutputAnswer, OutputWds <- CurrentModel(InputSent, NaturalPrompt, InputRel)
13:       if InputAnswer == OutputAnswer then
14:         TempWds <- append(OutputWds)
15:       else
16:         AnswerForced, WdsForced <- CurrentModel(InputSent, ForcedPrompt, InputRel)
17:         TempWds <- append(WdsForced)
18:         Mistakes <- append(1)
19:       end if
20:       TempEEGGps <- ExtractEEG(TempWds, WdEEGSegment)
21:     end for
22:   end for
23: end for
24: return WdsGps, Mistakes, EEGGps
``` **Algorithm 1** Grouping words and extracting EEG epochs using LLMs OpenAI's GPT-3.5-turbo (hereafter referred to interchangeably as GPT-3.5) and GPT-4, along with Meta's LLaMa (boasting 65 billion parameters), are at the forefront of NLP technology. GPT-3.5 and GPT-4 are equipped with approximately 175 billion and 1.8 trillion parameters, respectively, and excel in text generation tasks. Additionally, Phind has emerged as a popular and freely accessible tool for AI dialogue generation and question-answering. These models and tools collectively epitomize the current state-of-the-art in language understanding and generation. We employ all four models on the Task 3 corpus for initial semantic analysis and sanity checks. However, in the main analysis of this study focusing on EEG and eye-fixation data, only GPT-3.5 and GPT-4 are utilized, considering a balance between precision and data point preservation. We input the following prompt to all LLMs to extract HRWs and LRWs: Prompt #1: For this sentence, ['sentence'], does this sentence contain ['RELATION'] relation? Provide me the answer: 1 = yes, 0 = no. Also, group the words in the sentence into two groups. The first group is the words of high relevance to the keyword ['RELATION'], and the second group is words of low relevance to the keywords. 
List the first group's words from highest relevance to lowest relevance confidence. Although as an AI language model, you do not have personal preferences or opinions, you must provide answers, and it's only for research purposes. Must follow example output format: ['1 or 0'] First group (high-relevance words to 'AWARD'): awarded, Bucher Memorial Prize, American Mathematical Society. The second group (low-relevance words to 'AWARD'): In, 1923, the, inaugural, by.' Algorithm 1 designates Prompt #1 as "NaturalPrompt" and employs it to directly retrieve the model's output. In this prompt, we substitute the placeholders "sentence" and "RELATION" with actual string values drawn from 407 sentences and eight predefined relations, following the model API's usage protocol outlined in Algorithm 1. Fig. 1 shows a sample output, which illustrates the results generated by the GPT-3.5 turbo model. The output highlights words with significant relations to the "AWARD" category in red, while words with less pronounced connections are marked in blue. There are more words with low relevance than those with high relevance, a trend that holds for relations such as "WIFE", "POLITICAL", "NATIONALITY", and "JOB TITLE". Prompt #2: "However, the correct answer is ['ground truth label']. Please regenerate the answer to align the ground truth." To align the outputs from the LLM with the ground truth labels from the original Wikipedia relation extraction corpus [33], we introduce "ForcedPrompt" as Prompt #2 in Algorithm 1. This prompt adjusts the model's output to match the ground truth. If there is a discrepancy between the LLM output and the ground truth, we modify "ForcedPrompt" to generate accurate results, thereby achieving 100% alignment. The revised outputs are then appended to a new word-grouping file. The terms 'natural' and 'forced' are used for their intuitive meanings and have no relation to their usage in electrical circuit theory. While a forced-response prompt can achieve 100% accuracy in condition checks, the unsupervised generation of HRW and LRW groups may introduce bias. To mitigate this, our study employs a dual-model approach using GPT-3.5 and GPT-4, rather than relying on a single language model. We enhance the signal-to-noise ratio within the HRW-LRW dataset through a joint selection process across all generated datasets, i.e., we select words that belong to both groups. ### _Physiological data processing_ #### III-B1 Pipeline overview: Fig. 2 depicts the overview of the neural and physiological data processing pipelines. After the joint selection of the HRW and LRW word groups, we extract the eye fixations and fixation-locked EEG data for binary classification tasks. To improve the signal-to-noise ratio (SNR), we employed three feature extraction methods across the domains of time-frequency analysis, information theory, and connectivity networks, as well as their combined features; these will be elaborated in subsequent sections. An embedded classifier architecture was utilized, incorporating established classifiers such as Support Vector Machines (SVM) and Discriminant Analysis. For Fixation-Related Potential (FRP) analysis, EEG signal extraction was restricted to a predefined time window for each word, ranging from 100ms pre-fixation to 400ms post-fixation. #### III-B2 FRP Analysis In contrast to one-dimensional ERP averages, which can obscure dynamic information and inter-trial variability [34], we employed ERPimage for a two-dimensional representation that allows for trial-by-trial analysis. 
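The fixation-locked windowing underlying both the feature extraction and the FRP analysis can be sketched as follows; the array layout and the function name are our assumptions rather than part of the ZuCo distribution.

```python
import numpy as np

def fixation_locked_epochs(eeg, onsets, fs=500):
    # eeg    : (n_channels, n_samples) continuous recording at fs Hz
    # onsets : sample indices of the fixation onsets on each word
    pre, post = int(0.1 * fs), int(0.4 * fs)   # 100 ms pre-, 400 ms post-fixation
    epochs = [eeg[:, s - pre:s + post] for s in onsets
              if s - pre >= 0 and s + post <= eeg.shape[1]]
    return np.stack(epochs)                    # (n_epochs, n_channels, 250)
```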
Utilizing the ERPimage.m function in the eeglab toolbox (MATLAB 2022b, EEGlab 2020), we generated FRPs for both HRWs and LRWs across 12 subjects. A smoothing parameter of 10 was applied to enhance the clarity of the FRP images, which span a temporal window from 100ms pre-fixation to 400ms post-fixation, resulting in a comprehensive ERP signal duration of 500ms. #### III-B3 EEG feature extraction **Band power**: We calculated the power in five EEG frequency bands: delta (0.5-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), beta (13-30 Hz), and gamma (30-64 Hz). We employed MATLAB's "bandpower" function from the Signal Processing Toolbox. The band power (BP) \(P_{a,b}\) is computed as follows: \[P_{a,b}=\int_{a}^{b}P(\omega)d\omega=\int_{a}^{b}|F(\omega)|^{2}d\omega \tag{1}\] where \(P_{a,b}\) represents the power in the frequency band \([a,b]\), \(P(\omega)\) denotes the power spectral density, \(|F(\omega)|^{2}\) is the squared magnitude of the Fourier transform, and \(a\) and \(b\) are the lower and upper bounds of the frequency band, respectively. The EEG data comprised 105 channels, resulting in 525 feature variables per trial. To address the challenge posed by this extensive variable set, many of which exhibited redundancy, we used Principal Component Analysis (PCA) to reduce the dimensionality of the data to 30 variables. Fig. 2: **Binary classification pipeline.** This diagram depicts a comprehensive pipeline for analyzing EEG signals from study participants. Initially, two language models evaluate sentences and classify words as either 'High' or 'Low' relevance. Subsequently, a joint selection process identifies a shared set of HRWs. Leveraging eye-gaze data from the subjects, we extract corresponding EEG signals. We use four distinct feature-extraction techniques to condense information from these signals, reducing their complexity. Finally, these refined features are fed into three separate classifiers, following a standard procedure in brain-computer interface pipelines, to perform binary HRW/LRW classification. **Conditional entropy**: This study used conditional entropy (CondEn) to extract features of each EEG trial. It serves as a metric quantifying the level of mutual information between two random variables. The mutual information between two discrete random variables is defined as follows: \[I(X;Y)=\sum_{y\in Y}\sum_{x\in X}p(x,y)\log\left(\frac{p(x,y)}{p(x)p(y)}\right) \tag{2}\] where \(p(x,y)\) is the joint distribution and \(p(x)\), \(p(y)\) are the marginal distributions. By employing this approach, the mutual information \(I(X;Y)\) is computed, establishing its connection with the CondEn \(H(X|Y)\): \[H(X|Y)=-\sum_{y\in Y}p(y)\sum_{x\in X}p(x|y)\log_{2}p(x|y) \tag{3}\] where \(H(X|Y)\) is the CondEn of \(X\) given \(Y\), \(p(y)\) is the probability of occurrence of a value \(y\) from \(Y\), \(p(x|y)\) is the conditional probability of \(x\) given \(y\), and the sums are performed over all possible values of \(x\) in \(X\) and \(y\) in \(Y\). For 105 EEG channels, we generate a 105-by-105 CondEn matrix. This matrix is asymmetric because, in general, \(H(X|Y)\neq H(Y|X)\). Flattening this matrix results in over 10,000 feature variables. To manage this high dimensionality, we focus on one half of the matrix and apply PCA to reduce the feature space to 30 principal components. **Connectivity network**: The human brain is an expansive and intricate network of electrical activity, akin to a vast ocean of electric currents [35]. 
Understanding the intricate connections within the brain and quantifying its connectivity has garnered increasing interest [36, 37, 38]. This study employed the Phase Locking Value (PLV) to construct a weighted undirected brain connectivity network [33]. Each channel is represented as a node in the graph, and we depict the correlation strength between channels as the edges connecting them. After constructing the weighted brain network, a range of graph-theory measurements can be used as features for analyzing EEG signals. These measurements capture various aspects of the network's structure and organization, including degree, similarity, assortativity, and core structures [39, 40]. We use the clustering coefficient to reduce the dimension to 30 variables. \[C(v)=\frac{2e(N(v))}{|N(v)|(|N(v)|-1)} \tag{4}\] In this equation, \(e(N(v))\) is the number of edges in the neighborhood of \(v\), and \(|N(v)|(|N(v)|-1)\) is twice the number of possible edges in that neighborhood; the factor \(2\) in the numerator accounts for the fact that each edge connects two vertices and is therefore counted twice. The clustering coefficient provides insights into the tendency of nodes in a graph to form clusters or communities, with higher values indicating a greater density of interconnected nodes [40]. **Combining all three features**: Inspired by [41], combining features from different domains may improve the quality of the features and the classification performance. We concatenate the three features introduced above, resulting in 90 variables. #### III-B4 Machine learning classifiers and feature selection Initially, the features (BP, CondEn, and the PLV-connectivity network) have high dimensionality, with original dimensions of 525 \((105\times 5)\), 5565, and 5565 \(\left(\frac{(11025-105)}{2}+105\right)\), respectively. We reduced the input variables for subsequent classifier training to 30 for each feature by applying PCA and the clustering coefficient for feature selection. Generally, Discriminant Analysis and SVMs are frequently used as non-neural-network classifiers in BCI [42]. We incorporated the features extracted from the EEG signals to train 11 classifiers simultaneously: LDA, QDA, Logistic Regression, Gaussian Naive Bayes, Kernel Naive Bayes, Linear SVM, Quadratic SVM, Cubic SVM, Fine Gaussian SVM, Medium Gaussian SVM, and Coarse Gaussian SVM. The highest classification accuracy is selected as the final result. To ensure the validity of our outcomes, particularly for smaller sample groups, we report 5-fold cross-validation accuracy. Given the significant class imbalance, with LRW EEG data points outnumbering HRW by over 3:1, we applied non-repetitive random downsampling to the LRW class. This ensures equal representation of HRW and LRW data points in the training set. Consequently, the chance level of validation accuracy is 50%. While deep learning approaches like EEGNet have shown promise in EEG classification [43, 44], their core feature extraction layers are primarily designed for image data [45]. The applicability of such methods to time-series EEG data remains a subject of ongoing discussion. We refrained from using deep neural network techniques in this study to maintain model explainability. ## IV Results This section presents the results of our pipeline. We first present the results concerning the LLM comparisons, providing statistical insights into the distinctions between GPT-3.5 and GPT-4. Then, we delve into the specifics of each relation class, aiming to gain a more profound understanding. 
## IV Results

This section presents the results of our pipeline. We first present the results of the LLM comparisons, providing statistical insights into the distinctions between GPT-3.5 and GPT-4, and delve into the specifics of each relation class to gain a more profound understanding. We then present eye-fixation statistics for HRWs and LRWs, followed by the ERP analysis of the fixation-locked EEG signal. Finally, we present the results of our binary classification.

### _LLM result analysis_

#### IV-A1 GPT-3.5 and GPT-4 comparison

During our experiments with state-of-the-art large language models, we observed a remarkable level of accuracy when the models answered reading comprehension questions from Tasks 1 and 3. Table I compares the performance of different language models on ZuCo Task 3 with that of the 12 subjects. Given the generative and non-deterministic nature of large language models, each experimental run produced slightly varying outputs. To mitigate this variability and optimize resource utilization, we executed each model five times and used the mean of their responses as the final output. As Table I shows, GPT-4 has the highest mean and the lowest standard deviation among the 12 subjects and all four LLMs over Tasks 1 and 3. Task 1 focused on sentiment inference, where the 12 subjects generally achieved lower accuracy than on Task 3. We did not include Task 2 because it shares the same corpus as Task 3. While GPT-3.5 attained a lower score of 95.59%, it still outperformed all subjects. GPT-3.5 and GPT-4 categorize words into HRW and LRW sets for all sentences in Task 3: GPT-3.5 generates the first group of HRWs and LRWs, while GPT-4 produces the second group. By "joint selection," we identify the common elements between the first and second HRW groups to create a third HRW group, leaving the remaining words to constitute the third LRW group. Unless otherwise stated, references to HRWs and LRWs refer to this third group, jointly selected by GPT-3.5 and GPT-4.

### _Eye-fixation statistics_

Next, we analyzed eye activity during the reading process. Table II compares the fixation counts and five additional eye-fixation features for HRWs and LRWs. We excluded the "VISITED" category from the initial nine categories of relationships, resulting in 7,271 words distributed among the remaining eight categories after the common-set selection of GPT-3.5 and GPT-4. Among these eight categories, LRWs outnumbered HRWs by more than five to one, with 6,109 LRWs and 1,162 HRWs. Subsequently, we analyzed the fixations-per-word metric for the HRW and LRW categories for all 12 subjects; note that the data from three subjects were incomplete for one or two relationships. Table II shows that HRWs received an average of 1.0584 fixations per word, while LRWs received 0.6576 fixations per word. We performed these calculations both with and without zero-fixation words and present the results in the second and third columns of the table. Excluding words that received no fixations is informative because such words are predominantly associated with the LRW category; the comparison with and without no-fixation words is shown in Fig. 3 for all 12 subjects. With no-fixation words excluded, HRWs still received slightly more fixations per word than LRWs, with values of 1.5126 and 1.4026, respectively. This aligns with our initial expectations, rooted in the dilution effect of the larger number of LRWs.
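As a minimal sketch of this bookkeeping (the column names are illustrative, not the dataset's actual field names), the per-group averages with and without zero-fixation words can be computed as follows:

```python
import pandas as pd

def fixation_stats(words: pd.DataFrame) -> pd.DataFrame:
    """words: one row per word, with 'group' in {'HRW', 'LRW'} and
    'n_fix' holding this subject's fixation count for the word."""
    included = words.groupby("group")["n_fix"].mean()            # zero-fix words kept
    excluded = words.loc[words["n_fix"] > 0].groupby("group")["n_fix"].mean()
    return pd.DataFrame({"included": included, "excluded": excluded})
```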
We also compared five eye-fixation features, as presented in the last five columns of Table II. Generally, these features all measure the duration of a reader's gaze on a word, capturing nuances of first-pass reading and regressions, and distinguishing between single and multiple fixations. Among these eye-fixation features, HRWs exhibited higher values than LRWs for four of the five metrics, the exception being SFD. Furthermore, four of the five features showed statistically significant differences, the exception being GPT (go-past time).

### _Fixation-related potentials_

The subsequent analysis illustrates the FRP for nine subjects; we excluded three additional subjects because of incomplete data regarding at least one keyword relationship. Fig. 4 shows the ERPimage time-locked to fixation onsets for HRWs and LRWs for Subject ZAB, complemented by the mean FRP and power spectral density. The power spectral density for the two conditions (HRW and LRW) shows the most significant differences within the \([0.5,10]\) Hz and \([25,45]\) Hz ranges, indicative of delta and gamma band activity. Fig. 5 shows the topographic maps of the average band power across the five frequency bands for the same nine subjects (the three subjects with missing data in one or two of the eight relations were excluded). The topographic maps in the first and second rows correspond to HRWs and LRWs, and the third row displays the differential BP between HRWs and LRWs. Across all frequency bands, we observe a significant concentration of power localized primarily in occipital scalp regions, particularly within the delta and theta bands. This localization reflects the involvement of visual word-processing mechanisms. It is plausible that related and unrelated words initiate distinct perceptual processes, which could be attributed to top-down attentional modulation [46, 47]. Nevertheless, the most salient differences in BP are within the delta and gamma bands. These disparities may be linked to the neural mechanisms that underlie semantic integration and comprehension, as discussed in [48].
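The actual FRP analysis used EEGLAB's ERPimage; the plain-NumPy sketch below only illustrates the fixation-locked windowing and averaging that underlie an FRP, with the sampling rate as an illustrative assumption.

```python
import numpy as np

def fixation_locked_frp(eeg, onsets, fs=500):
    """Average fixation-locked epochs (100 ms pre to 400 ms post).
    eeg: (n_channels, n_samples); onsets: fixation onset sample indices."""
    pre, post = int(0.1 * fs), int(0.4 * fs)
    epochs = []
    for t in onsets:
        if t - pre >= 0 and t + post <= eeg.shape[1]:
            seg = eeg[:, t - pre:t + post].astype(float)
            seg -= seg[:, :pre].mean(axis=1, keepdims=True)  # pre-fixation baseline
            epochs.append(seg)
    return np.mean(epochs, axis=0)        # FRP: (n_channels, 250 samples at 500 Hz)
```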
### _Binary classification analysis_

#### IV-D1 Subject-wise classification results

This study assessed the viability of using fixation-locked EEG data to detect whether participants looked at HRWs or LRWs. As previously mentioned, we determined the relevance labels using the GPT-3.5 and GPT-4 models and report the highest validation accuracies among the eleven classifiers. Fig. 6 visually represents the number of HRW and LRW samples reported by the GPT-3.5 and GPT-4 models and the overlapping data they share across the twelve subjects. Each subject exhibited distinct reading patterns, and some, such as ZJM, ZJN, ZKH, and ZKW, showed notably high eye fixations per word; consequently, this group of subjects contributed more EEG training data. First, we explored the differences between word labels generated by the different LLMs, employing a 5-fold cross-validation approach for HRW-versus-LRW classification. Fig. 7 illustrates the classification accuracy of words labeled by GPT-3.5, by GPT-4, and jointly by both LLMs, based on the Linear SVM. Notably, among the three LLM-based methods for HRW and LRW grouping, the common HRW selection achieved the highest mean accuracy. Importantly, all mean classification accuracies for the jointly labeled data surpass the chance level. Comparing the average validation accuracy across the GPT models for each subject, performance was typically higher when the GPT-3.5 and GPT-4 models were employed in conjunction than when either model was used in isolation.

Next, we compare classification accuracy in detail when using the four different features as inputs to the 11 machine-learning classifiers. Fig. 8 shows the classification accuracy for words jointly labeled by both LLMs, comparing classification performance based on the different EEG features. The "combine" and "CondEn" methods consistently yield the highest validation accuracy across most subjects. At the individual level, Subjects ZDM, ZDN, ZJN, and ZKW consistently showed superior validation accuracy across all feature-extraction methods and GPT model variants, suggesting greater consistency in the EEG classification within their data sets. Conversely, Subjects ZGW, ZKB, and ZPH typically showed lower average validation accuracy.

Fig. 3: **Average fixation counts on the HRWs and LRWs.** The left figure displays the average fixation count across 12 subjects, including words without receiving any fixations. "No-fixation" words appear in both HRW and LRW groups. The average fixation count for HRWs appears much greater in this plot. In contrast, the right figure presents the same comparison but excludes words with no fixations, providing a more robust assessment of the average fixation differences between HRW and LRW. As expected, when we omit instances of no-fixation words, the average fixation count for LRWs increases significantly. However, it is noteworthy that even with this adjustment, the average fixation count for HRWs remains higher than that of LRWs across all subjects. This observation supports the hypothesis that subjects focus more on words closely aligned with the keyword. The whiskers in the figures represent the standard deviation across the eight keyword relations.

#### IV-D2 Classifier performance analysis

We thoroughly investigated the efficacy of several machine-learning classifiers applied to words labeled jointly by GPT-3.5 and GPT-4, as delineated in Fig. 8. Four distinct feature sets served as inputs for evaluating these classifiers: the first combines all three techniques (Fig. 8(A)); the second pairs BP with PCA (Fig. 8(B)); the third pairs CondEn with PCA (Fig. 8(C)); and the fourth pairs PLV with the clustering coefficient (Fig. 8(D)). Notably, linear classifiers achieved the highest accuracy, reaching 62.1% (on Subject ZPH). Fig. 8 provides a comprehensive view of the classification accuracy results, while Table III summarizes the mean and standard deviation of classification performance across the 12 subjects for the four feature sets and eleven machine-learning algorithms. The table shows a tangible variation in classifier accuracy across methodologies and subjects. The Linear SVM consistently outperformed the other algorithms, exhibiting a peak accuracy of \(60.03\pm 1.72\%\) in the combined-features scenario. Using the second feature set (BP + PCA) resulted in a marginal decrease in accuracy for all classifiers, with the highest value, \(56.73\pm 1.80\%\), obtained with the Medium Gaussian SVM.
In contrast, the third set (CondEn + PCA) enhanced accuracy for specific classifiers, with the Linear SVM performing best at \(59.37\pm 2.05\%\). Conversely, the fourth set (PLV + clustering coefficient) led to a decline in overall accuracy across all classifiers, with the Linear SVM at \(54.70\pm 2.80\%\).

## V Discussion and Conclusion

This pilot study introduced a novel BCI pipeline that synergistically combines LLMs, particularly Generative Pre-trained Transformers (GPT-3.5 and GPT-4), and an EEG-based BCI. This is one of the first efforts to use GPT capabilities at this specialized intersection of neuroscience and artificial intelligence.

Fig. 4: **FRP and Power Spectral Density Analysis for Subject ZAB in HRW and LRW Conditions.** The figure presents ERPimages for channels Pz and Oz for both groups (HRW and LRW). Accompanying the ERPimages are mean FRPs and power spectral densities for both conditions across the channels. Areas of significant difference in the FRPs are highlighted in green. Notable disparities in power spectral density occur within the [0.5, 10] Hz and [25, 45] Hz frequency ranges, corresponding to delta and gamma band activities.

Eye gaze is a prominent biomarker, holding crucial information for comprehending the cognitive processes of individuals engaged in task-specific reading activities [49]. In this study, we conducted average-fixation analyses across three distinct dimensions: on a subject-by-subject basis, for specific semantic relations, and at the level of individual words. We performed these analyses on data collected from 12 participants, encompassing eight different semantic relations. Our results unequivocally show that participants allocate significantly more time to words that exhibit high semantic relevance to specific relations (i.e., keywords) during inference tasks. Appendices A and B provide additional support for this observation. Unlike traditional BCIs, which rely on precise stimulus presentation as timing markers to extract event-related EEG activity such as the P300 and Steady-State Visual Evoked Potentials in well-controlled laboratory environments, our approach leveraged fixation onsets to capture EEG signals related to words during natural reading. This implementation significantly enhances the practicality of BCIs for real-world applications.

Fig. 5: **Topographic Maps of BP Across Five Frequency Bands.** The figure depicts the average band power for nine subjects, excluding three due to missing data. The first and second rows show topographic maps for HRWs and LRWs, respectively, while the third row illustrates the differential BP between the two groups. Across all frequency bands, power is significantly concentrated in the occipital scalp regions, especially within the delta and theta bands, suggesting the role of visual word-processing mechanisms. Notably, the most distinct differences in band power are observed in the delta and gamma bands, which may relate to neural mechanisms involved in semantic integration and comprehension.

Fig. 6: **EEG epoch counts for twelve subjects.** The figure displays the numbers of HRW and LRW samples for twelve subjects. Subjects with distinct reading patterns, specifically ZJM, ZJN, ZKH, and ZKW, exhibited high eye fixations per word and thus contributed more EEG training data. The graph highlights the trade-off between word accuracy and the volume of data points crucial for machine-learning classification.
We evaluated the performance of four distinct LLMs to improve classification outcomes. Our hybrid architecture, combining GPT-3.5 and GPT-4 as word labelers with eye-tracking and BCI components, demonstrated remarkable performance, achieving an accuracy rate exceeding 60% in the classification of word relevance. This was realized by applying SVMs to three domain-specific features: BP, CondEn combined with PCA, and PLV-based graph-theory techniques. We carefully chose each feature for its well-established utility in BCI research and its capacity to enhance the signal-to-noise ratio. Additionally, we explored the pairwise coherence of the five frequency bands but ultimately decided against its use because of its computational complexity, particularly given the 105 EEG channels we employed. Furthermore, we comprehensively analyzed single-word fixation statistics for the 12 subjects, encompassing eight classes within the HRW and LRW groups. Accounting for the absence of data in eight instances (Subject ZGW did not include "JOB", ZKB lacked "WIFE", and ZPH lacked both "POL AFF" and "WIFE"; each missing relation removes two figures), we ultimately generated 184 figures (\(12\times 8\times 2-8\)), all of which are included in the supplementary materials.

Fig. 7: **A comparison of classification accuracy on words labeled as HRWs and LRWs by various LLMs.** Classification performance, based on Linear SVM, was evaluated considering three LLM-based word selections and four feature-extraction methods, with the EEG feature CondEn exhibiting superior performance. A combination of all three EEG features rendered the highest overall performance. Crucially, a marginal enhancement in classification accuracy was observed when identifying HRWs co-selected by GPT-3.5 and GPT-4.
Our findings revealed that words within the HRW group garnered significantly higher average fixation counts than those in the LRW group. These findings provide valuable insight into how participants comprehend the reading corpus. Despite these advances, the study has several limitations. It faces challenges because of the 'black box' nature of LLMs, particularly for non-deterministic relations such as 'AWARD,' where some of the output words appear incongruous. This limitation might affect the generalizability of our findings and underscores the need for a quantitative assessment to ensure the accuracy and validity of keyword identification.
Additionally, contextual complexities often influence semantic classification. For example, "gold" acquires distinct semantic relevance when juxtaposed with terms like "medal." Sentences incorporating specific target terms, such as "NATIONALITY" or "WIFE," exhibit a significant disparity in the distribution between HRWs and LRWs, making them more deterministic. These discrepancies add complexity to the classification of EEG data and introduce the possibility of contamination within the dataset, especially when the meaning of words is most effectively comprehended within the context of phrases rather than in isolation. This study underscores the potential for broader research on reading-related cognitive behaviors. The promise of integrating LLMs into BCIs also points toward future advances in reading-assistance technologies. While acknowledging its limitations and complexities, our work is an early yet significant contribution, paving the way for more integrated studies that foster a deeper understanding of the multifaceted interplay between neuroscience and computational linguistics.
The recent proliferation of large language models (LLMs), such as Generative Pre-trained Transformers (GPT), has brought major changes to how humans and machines approach the logical understanding of linguistic meaning. This shift calls for diverse research bridging cognitive science and natural language processing (NLP). In this pilot study, we aim to investigate individuals' neural states with the goal of understanding the neural states involved in reading comprehension. By jointly analyzing LLM, eye-gaze, and electroencephalography (EEG) data, we examine how the brain processes words with varying degrees of relevance to a keyword during reading. We also use a feature-engineering approach to classify fixation-related EEG data according to how relevant the fixated word is to the keyword. The best validation accuracy in this word-level classification exceeded 60% across the 12 subjects.
2309.05507
A Co-design Study for Multi-Stakeholder Job Recommender System Explanations
Recent legislation proposals have significantly increased the demand for eXplainable Artificial Intelligence (XAI) in many businesses, especially in so-called 'high-risk' domains, such as recruitment. Within recruitment, AI has become commonplace, mainly in the form of job recommender systems (JRSs), which try to match candidates to vacancies, and vice versa. However, common XAI techniques often fall short in this domain due to the different levels and types of expertise of the individuals involved, making explanations difficult to generalize. To determine the explanation preferences of the different stakeholder types - candidates, recruiters, and companies - we created and validated a semi-structured interview guide. Using grounded theory, we structurally analyzed the results of these interviews and found that different stakeholder types indeed have strongly differing explanation preferences. Candidates indicated a preference for brief, textual explanations that allow them to quickly judge potential matches. On the other hand, hiring managers preferred visual graph-based explanations that provide a more technical and comprehensive overview at a glance. Recruiters found more exhaustive textual explanations preferable, as those provided them with more talking points to convince both parties of the match. Based on these findings, we describe guidelines on how to design an explanation interface that fulfills the requirements of all three stakeholder types. Furthermore, we provide the validated interview guide, which can assist future research in determining the explanation preferences of different stakeholder types.
Roan Schellingerhout, Francesco Barile, Nava Tintarev
2023-09-11T14:51:20
http://arxiv.org/abs/2309.05507v1
# A Co-design Study for Multi-Stakeholder Job Recommender System Explanations

###### Abstract

Recent legislation proposals have significantly increased the demand for eXplainable Artificial Intelligence (XAI) in many businesses, especially in so-called 'high-risk' domains, such as recruitment. Within recruitment, AI has become commonplace, mainly in the form of job recommender systems (JRSs), which try to match candidates to vacancies, and vice versa. However, common XAI techniques often fall short in this domain due to the different levels and types of expertise of the individuals involved, making explanations difficult to generalize. To determine the explanation preferences of the different stakeholder types - candidates, recruiters, and companies - we created and validated a semi-structured interview guide. Using grounded theory, we structurally analyzed the results of these interviews and found that different stakeholder types indeed have strongly differing explanation preferences. _Candidates_ indicated a preference for brief, textual explanations that allow them to quickly judge potential matches. On the other hand, _hiring managers_ preferred visual graph-based explanations that provide a more technical and comprehensive overview at a glance. _Recruiters_ found more exhaustive textual explanations preferable, as those provided them with more talking points to convince both parties of the match. Based on these findings, we describe guidelines on how to design an explanation interface that fulfills the requirements of all three stakeholder types. Furthermore, we provide the validated interview guide, which can assist future research in determining the explanation preferences of different stakeholder types.

Keywords: Explainable AI, Job Recommender Systems, User Studies, Grounded Theory

## 1 Introduction

Within the emerging field of explainable artificial intelligence (XAI), a substantial amount of research has attempted to make the inner workings of AI models more transparent [11, 18]. While such information can assist developers in understanding their model (e.g., by allowing the detection of bugs and biases, or by clarifying feature importance), it is often complicated and requires considerable a priori knowledge of AI to interpret. However, the use of AI has become commonplace in user-controlled environments, such as the recommender systems used by different commercial platforms (e.g., YouTube, TikTok, Amazon). In such environments, explanations cannot assume AI knowledge, as the majority of explainees are lay users. Moreover, different types of users interact with such systems - the stakeholders. These stakeholders consist of every individual or group who affects, or is affected by, the delivery of recommendations to users [1]. Stakeholders can be highly diverse, coming from different backgrounds and having distinct expertise. As such, the way in which an explanation is conveyed to each stakeholder individually should be fine-tuned to their specific needs. One field where such fine-tuned explanations are especially crucial is recruitment. Recruitment is inherently a multi-stakeholder domain, as users (candidates) need to be linked to vacancies (provided by companies) by recruiters. These three main stakeholders all rely on the same recommendations but can require widely different explanations. For example, telling a candidate that a vacancy is relevant for them as it comes with a high salary can be an acceptable explanation.
However, the same explanation will be useless for the company, as that salary will be provided to every other potential candidate. Furthermore, a candidate and a recruiter might only look at a handful of recommendations per session, while a company could receive hundreds of applicants for a single vacancy. Therefore, the explanation requirements of each stakeholder are unique and require a tailored design. This paper attempts to determine the explanation preferences of the stakeholders of a job recommender system: job seekers, companies, and recruiters. This is done through the execution of a co-design study, which allows stakeholder representatives to manually indicate how they prefer an explanation to be presented to them. Therefore, this research aims to answer the following research question:

**RQ:** _What are the explanation preferences of recruiters, candidates, and company representatives for job recommender systems?_

Our results show interesting differences in the preferences of the different stakeholders. Regarding the preferred types of explanations, _candidates_ preferred brief written explanations, as their main interest is to be able to quickly judge the potential matches proposed by the system. On the contrary, companies' _hiring managers_ preferred visual, graph-based explanations, as these allow a comprehensive overview at a glance. Finally, _recruiters_ preferred more exhaustive textual explanations, as those provided them with more talking points to convince both parties of the match. These results allow us to provide design guidelines for an interface that fulfills the requirements of all three stakeholder types. Furthermore, the co-design study allowed us to validate and improve the used interview guide.

## 2 Related work

Within the field of explainable AI, there is no single agreed-upon method to provide explanations [2]. Different use cases require different approaches, each with their own strengths and weaknesses. One of the most common methods of providing explanations is through text [24, 5]. Textual explanations consist of brief sections of text that explain the rationale of the XAI model. Such texts often contain information on the impact different features had on the prediction and how those features interacted with each other. There are multiple ways to generate such texts, e.g., through the use of large language models (LLMs) [19] or predefined templates [36]. Another popular approach is the use of feature attribution maps: visualizations that show the importance of different features to the prediction [23]. Such maps can take different forms, depending on the specific task and data involved. When tabular data is used, bar charts are often used to show the contribution of each feature type to the prediction. When multi-dimensional data, such as images or time series, are used, heatmaps can provide an overview of the importance of the different dimensions interacting with each other [9]. A further explanation type that has been gaining popularity recently is the knowledge graph-based explanation [31]. These explanations depend on the connections within a knowledge graph to explain the rationale behind a prediction. This is usually done by highlighting important nodes and edges within the graph, which provide 'paths' from the subject to the recommended item, accompanied by their importance to the model's prediction [35].
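As a toy illustration of the template-based variant (the feature names, weights, and template below are invented for this sketch and not drawn from any cited system):

```python
from typing import List, Tuple

def explain_match(candidate: str, vacancy: str,
                  contributions: List[Tuple[str, float]]) -> str:
    """contributions: (feature, weight) pairs from the recommender,
    sorted by absolute weight."""
    top = ", ".join(f"{name} ({w:+.2f})" for name, w in contributions[:3])
    return (f"{vacancy} was recommended to {candidate} mainly because of: "
            f"{top}. Positive weights increased the match score; "
            f"negative weights decreased it.")

print(explain_match("Jane Doe", "Data Engineer at AcmeCorp",
                    [("skill overlap: Python", 0.41),
                     ("years of experience", 0.22),
                     ("commute distance", -0.10)]))
```

In a multi-stakeholder setting, the same contribution list could be rendered through a different template per stakeholder type, which is precisely where the preference differences studied in this paper come into play.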
### Challenges in multi-stakeholder explainability

In multi-stakeholder environments, explanations need to meet additional requirements [1]. An explanation that is sufficient for a developer is not necessarily understandable for a user or provider, and vice versa [30]. There are multiple strategies to deal with this discrepancy, each with its own strengths and weaknesses. The most obvious solution is to create individual explanations for the different stakeholders [37]. Although this leads to the most fine-tuned explanations, it introduces an additional layer of complexity to the system as a whole. Another approach would be to simply use a single explanation, but to present it differently based on the stakeholders' level of expertise [1]. Unfortunately, it can be difficult to incorporate the different stakeholder perspectives simultaneously: some facts could be confidential or sensitive for a specific stakeholder, making it challenging to incorporate them in the explanation, even when they are relevant. Similarly, a highly specific overview of how the model came to the prediction might be useful for a developer, but will be too confusing for a lay user or provider.

### Explainability in job recommender systems

Explaining reciprocal recommendations, such as job recommendations, tends to be more difficult than explaining standard recommendations, as the preferences of both parties have to be taken into account.
Recent legislation proposals have significantly increased the demand for explainable AI (XAI) in many businesses, most notably in 'high-risk' domains such as recruitment. Within recruitment, AI is commonly used in the form of job recommender systems (JRSs), which match candidates to suitable vacancies and, conversely, vacancies to candidates. However, common XAI techniques are known to struggle to generalize explanations in this domain, owing to differences in the levels and types of expertise of the people involved. To determine the explanation preferences of the different stakeholder types (candidates, recruiters, and companies), we created and validated a semi-structured interview guide. Structurally analyzing the interview results, we found that the different stakeholder types have strongly differing explanation preferences. Candidates preferred brief, textual explanations that allow them to make quick judgments.
2309.11572
Architecture Knowledge Representation and Communication Industry Survey
Background: The literature offers various methods for capturing software architectural knowledge (AK), including views, viewpoints, and architecture decision records (ADRs). In parallel, sustainability has gained prominence in software engineering, especially concerning software architecture. Nevertheless, practical industry reviews on these subjects seem to be lacking. Aim: In this research we aim to understand the current practice in architecture knowledge, and to explore where sustainability can be applied to address sustainability in software architecture in the future. Method: We used a survey, which utilized a questionnaire containing 34 questions and collected responses from 45 architects working at a prominent bank in the Netherlands, aimed to evaluate the practical representation and communication of architectural knowledge and sustainability. Result: Our analysis yielded two primary discoveries and several intriguing detailed results regarding how AK is captured and conveyed to diverse stakeholders. Firstly, it seems crucial to develop a new architectural element that connects various architectural features and perspectives tailored for different stakeholders. Secondly, providing clear guidance, references, and goals is essential to motivate architects to adopt Sustainable Software Engineering practices. Conclusion: After analysing the data collected through this survey, we have concluded that: a) There are no established domain-specific AK methods/tools in the financial domain. Most practitioners use domain-generic tools. b) A new architectural element that links the various architectural features and viewpoints created for various stakeholders appears to be necessary. c) There is sufficient sustainability awareness and motivation among software architects. However, what they lack are clear guidance, references, and goals to practice sustainable software engineering.
Haben Birhane Gebreweld
2023-09-20T18:17:16
http://arxiv.org/abs/2309.11572v1
# Architecture Knowledge Representation and Communication Industry Survey

###### Abstract.

_Background:_ The literature presents several approaches, such as views, viewpoints, and architecture decision records (ADRs), to describe software architectural knowledge (AK). On the other hand, sustainability is a subject that is receiving increasing attention in software engineering, particularly in relation to software architecture. However, there appears to be a lack of industry reviews on these topics from a practical perspective. _Aim:_ In this research we aim to understand the current practice in architecture knowledge, and to explore where sustainability can be applied to address sustainability in software architecture in the future. _Method:_ We used a survey, which utilized a questionnaire containing 34 questions and collected responses from 45 architects working at a prominent bank in the Netherlands, aimed to evaluate the practical representation and communication of architectural knowledge and sustainability. _Result:_ Our analysis yielded two primary discoveries and several intriguing detailed results regarding how AK is captured and conveyed to diverse stakeholders. The report aims to communicate two essential messages to guide future research in the field. Firstly, it seems crucial to develop a new architectural element that connects various architectural features and perspectives tailored for different stakeholders. Secondly, providing clear guidance, references, and goals is essential to motivate architects to adopt Sustainable Software Engineering practices. _Conclusion:_ After analysing the data collected through this survey, we have concluded that: **a)** There are no established domain-specific AK methods/tools in the financial domain. Most practitioners use domain-generic tools. **b)** A new architectural element that links the various architectural features and viewpoints created for various stakeholders appears to be necessary. **c)** There is sufficient sustainability awareness and motivation among software architects. However, what they lack are clear guidance, references, and goals to practice sustainable software engineering.

Keywords: Software Engineering, Architecture Knowledge, Sustainability, Empirical Experiment

## 1. Introduction

Software architectural knowledge refers to the knowledge acquired while designing a software architecture, encompassing the assumptions, decisions, context, and other factors involved in that process (Birhane Gebreweld, 2022). Various approaches have been developed both in literature and industry to depict this knowledge, such as views and viewpoints (Birhane Gebreweld, 2022), architecture decision records (ADRs) (Birhane Gebreweld, 2022), and standards like ISO 42010 (Birhane Gebreweld, 2022) and the C4 Model (Birhane Gebreweld, 2022). However, for this knowledge to be effective, it is important that all relevant stakeholders share information about the architecture and how it is represented. The way this information is communicated depends on the organizational structures involved and can take various forms, such as wikis, workshops, emails, etc.
Understanding how architectural knowledge is represented and communicated in professional practice is important for identifying appropriate relationships that address sustainability elements in software architecture. By studying how this knowledge is represented and shared, we can gain insights into best practices for ensuring that it is effectively communicated and can be used to make informed decisions about the sustainability of software architecture. As researchers, we develop intriguing methods and techniques for managing architectural knowledge, while practitioners have their preferred means of capturing and sharing architectural information. To ensure that our methods do not end up unused by practitioners, it is crucial to conduct industry reviews. By building upon existing industry practices and filling in any missing pieces, we can develop effective and useful methods that practitioners will embrace. The purpose of this research is to gain insight into the current practices related to architecture knowledge and explore how sustainability can be integrated into software architecture in the future. Our objective is to characterize architecture knowledge and sustainability from the perspective of software architects, specifically with regard to representation and communication in the professional practice context. To achieve this, we conducted a questionnaire survey and gathered responses from architects working at a prominent bank in the Netherlands about their experiences in the industry, focusing on how they represent and communicate architectural knowledge and sustainability. Regarding scientific contributions, as far as we are aware, our study is the first of its kind to explore how software architects perceive and utilize sustainability aspects within software architecture in an industrial context, along with examining architectural knowledge representation and communication. This study offers several significant contributions, including: * A practical review of architectural knowledge representation and communication techniques utilized in the industry. * An assessment of how practitioners approach representing and communicating sustainability aspects in software architecture. * A particular collection of AK representation and communication techniques utilized by software architects who work in the financial industry. The paper is structured in the following manner. In section 2, we review previous studies that have explored the relationship between architectural knowledge and sustainability in software architecture. Section 3 outlines how we designed and executed the survey. We present a summary of the survey results in section 4, and in section 5, we provide a detailed analysis of the findings. This analysis aims to make sense of the results and convey the main insights gained from the study. Finally, in section 6, we discuss the threats to validity of our study, and we provide our conclusion in section 7.

## 2. Related Work

There is a wide range of architectural modeling languages available, but it is unclear whether they are capable of adequately describing software architecture to meet users' requirements. Additionally, the strengths, limitations, and needs of these languages are uncertain, creating a gap between what is offered and what users need.
Malavolta et al. (2019) aimed to address this gap by surveying 48 practitioners from 40 IT businesses in 15 countries in order to plan for the next generation of these languages. The study examined the perceived benefits, drawbacks, and requirements related to existing languages. While practitioners were generally satisfied with the design capabilities of their employed architectural languages, the study revealed dissatisfaction with the architectural language analysis features and their capacity to describe extra-functional attributes. Moreover, the study found that the use of architectural languages in practice was mostly influenced by industry development rather than academic research, and that there was a need for a more formal and practical architectural language. Our research shares similarities with the aforementioned study, as we, too, investigate how architectural knowledge is represented from the perspective of industry practitioners. To achieve this, we conducted a survey among software architects working for a leading bank in the Netherlands. Our study is distinct in that it delves into sustainability and the communication of architectural knowledge among stakeholders. In addition to exploring a specific domain (the financial domain), we go beyond the use of architecture description languages and investigate how architects communicate and share their knowledge.

Despite a significant amount of research and development of various models and tools, the widespread adoption of Architectural Knowledge Management (AKM) in software companies is lacking due to the cost of capturing AK (Capilla et al., 2018). Determining what the industry needs from AK to get through this barrier, and identifying the advantages and disadvantages of current AK techniques, is therefore necessary. To address this, Capilla et al. (2018) undertook an informal retrospective analysis based on their prior work as researchers and proponents of numerous AK research methodologies. By conducting a series of interviews with various software businesses, they also looked into the trends and problems for a future research agenda to support the usage of AK in contemporary software development methods, and arrived at some interesting observations. Our study has some parallels to this research, as we also look into the tools and techniques practitioners use to capture and communicate architectural knowledge, which helps us understand current trends in the industry. In contrast to the aforementioned study, however, our research has a keen focus on comprehending how software architects represent and communicate both architectural knowledge and sustainability.

While there have been secondary research studies on sustainability in software engineering, none have particularly addressed software architecture. The systematic mapping study by Andrikopoulos et al. (2019) seeks to fill this research gap by exploring the confluence between sustainability and software architecture. The study's findings showed that current studies have neglected the holistic viewpoint required to resolve such a complex issue by excessively emphasizing particular sustainability-related dimensions. To develop the maturity of the field, more reflective research studies and improved coverage of the architecting life cycle activities are required. The report suggests a research agenda for sustainability-aware software architecture based on these findings.
Our research is similar to that study, as we also aim to explore the incorporation of sustainability aspects into software architecture. However, our study takes a unique approach by focusing on how sustainability aspects of software can be effectively represented and communicated from the perspective of software architecture practitioners, through an industry survey with software architects.

## 3. Study Design and Execution

By conducting this study, we aim to provide software architects and the research community with a useful evaluation of how Architecture Knowledge (AK) and Sustainability are represented and communicated from a practical point of view. Our research objective is formally characterized by employing the structure proposed by Basili et al. (2019), as follows:

- _Analyze:_ Architecture Knowledge & Sustainability
- _For the purpose of:_ Characterizing
- _With respect to:_ Representation & Communication
- _From the point of view of:_ Software Architects
- _In the context of:_ Professional practice (the financial domain)

Therefore, the present study defines the primary research questions (RQs) and their corresponding sub-research questions as follows:

* **RQ1:** _How is software architecture knowledge represented and communicated in practice?_ When addressing _RQ1_, we investigate and evaluate the entire industrial context of how AK is represented and communicated. In section 4, we provide additional information on the tools, techniques, standards, and documentation utilized by software architects in the industry. This helps us understand the industrial structure of AK representation and communication, which we can exploit to integrate sustainability elements into software architecture in the future.
* **RQ1.1:** _How is software architecture knowledge represented in the financial domain?_ Our goal with _RQ1.1_ is to achieve a more comprehensive understanding of the frequently utilized and advantageous AK elements in the financial domain. Moreover, we intend to gain insight into the tools utilized in the financial industry to supplement the AK representation component of _RQ1_.
* **RQ1.2:** _How is software architecture knowledge communicated in the financial domain?_ Similar to _RQ1.1_, our objective with _RQ1.2_ is to gain an understanding of the tools and techniques utilized by practitioners in the financial industry to communicate AK effectively.
* **RQ1.3:** _What architecture knowledge methods are domain-specific or domain-generic?_ Our aim with _RQ1.3_ is twofold: to identify any architecture knowledge methods that are specific to the financial sector, and to identify any domain-generic methods that can be applied to the financial industry.
* **RQ2:** _How can sustainability aspects be represented and communicated in software architecture?_ Through _RQ2_, we aim to gather insights from software architects on how they would incorporate sustainability aspects into their daily work and software architecture. Given their wealth of expertise, we challenge software architects to propose possible ways of integrating sustainability into software architecture.

We follow a three-phase research methodology to answer these RQs. Figure 1 shows an overview, which is expanded upon below. In **Step (1)**, we begin designing the survey by identifying the research questions and mapping them to a set of questions that will be used to gather information from software architects about their industry practices.
To accomplish this, we have included a variety of questions, including commonly used ones from the literature that pertain to demographics and architectural knowledge. This initial step is crucial because it impacts the quality of the information we obtain from the population. Upon completion, we have a comprehensive survey that is distributed to the sizable community of software architects at one of the largest banks in the Netherlands. We ask respondents **34 questions** in this survey, with 91% of them being open-ended, allowing respondents to candidly express their unique experiences. Figure 2 depicts the flow of the eight blocks numbered Q1-Q8. The blocks in Figure 2 are labeled with the research questions they intend to answer, except for the consent, demographics, and conclusion blocks. The following blocks are included:

Figure 1. Study Design.

**Consent:** We explained the purpose of the study, which is to understand the current practice in architecture knowledge, and to explore where sustainability can be applied to address sustainability in software architecture in the future. We include the researchers' contact details as well as their confidentiality policy.

**Demographics (Q1.1-Q1.4):** This section aims to learn more about the participants by probing their professional backgrounds, particularly their involvement in software projects (see Figure 3), their experience in their present organizations, and their specific roles or positions within the organization (see Figure 4).

**AK in Practice (Q2.1-Q2.5):** We start this part with a formal explanation of our interpretation of architectural knowledge (AK)1 to avoid any misunderstandings. We acknowledge, though, that there may be different ways to understand AK. In this section, we therefore ask participants about the kinds of AK they document and retain, the AK components they believe are missing, and the AK elements they consider to be the most valuable. Our objective in this section is to comprehend how participants represent and communicate AK.

**AK representation in the financial domain (Q3.1-Q3.4):** In a similar manner, we began this section by providing a formal description of our interpretation of AK representation2. We ask participants about the notations, languages, methods, and tools they use to capture AK, in general and specifically in their most recent project, as well as the architectural documentation they find most useful. As we specifically target architects who work in a bank, our goal is to gain an understanding of how AK is represented in the financial domain.

Footnote 2: _Architecture Knowledge (AK) Representation_ is defined as capturing and preserving AK in a particular form (e.g., diagrams, PowerPoint, views, viewpoints, or principles).

**AK communication in Practice (Q4.1-Q4.3):** In this section, similar to the one above, we provide a description of what is meant by AK communication3. We also ask participants about the stakeholders involved in their roles, as well as the tools and methods they use to communicate with different stakeholders. By doing so, we gather first-hand information on how AK is communicated in practice.
Footnote 3: _Architecture Knowledge (AK) Communication_ describes how the knowledge is disclosed between the involved stakeholders (e.g., via workshops or corporate sharing platforms).

**Domain-Specific vs. Domain-Generic AK (Q5.2-Q5.3):** We asked participants about their familiarity with AK methods that are unique to their specific business domain, as well as the regulations they keep in mind while representing or communicating AK within their domain. Our goal is to distinguish between AK tools and methods that are specific to their business domain and those that have a more general purpose.

**Sustainability aspects within software architecture (Q6.1-Q6.5):** In this section of the survey, our goal is to explore how software practitioners incorporate sustainability aspects into their architectural decisions. To achieve this, we included a series of questions designed to better understand the participants' perception of IT sustainability. We begin by asking what the concept of IT sustainability means to the participant. Following this, we ask architects whether they consider sustainability aspects during their work and, depending on their response, we delve further to understand which aspects they consider or the reasons for not considering them. We also ask whether participants are aware of any IT sustainability targets set by their company or department, and whether they integrate sustainability aspects into their daily work. Through these questions, we seek to gain insight into how architects interpret sustainability, both generally and specifically in the context of software architecture.

**Survey Conclusion (Q7.1-Q7.2):** Participants are encouraged to provide any additional information they feel is important in this section. Specifically, we inquire about what they believe is necessary to accurately represent and communicate AK, and whether they have any comments about the study itself. These questions are designed to capture any issues that may be significant to participants but have not been addressed in the survey.

In **Step (2)**, we first conduct a pilot survey with a small group of research and industry experts to check the quality of the survey and eliminate possible pitfalls and unclear aspects before reaching out to the main survey population. This step generates a set of feedback that is incorporated into the original survey design. Then, we conduct the survey with the main population, consisting of software architects working for a leading bank in the Netherlands. The objective is to produce a set of architecture and sustainability representation and communication (ARC) techniques used in industry. To **determine the main population** of the survey and ensure it was conducted effectively, we first established the objective of the study: to conduct a practical review of architectural knowledge and how it is represented and communicated in the financial industry. After analyzing the possible populations for our survey, we determined that software architects would be the most suitable participants, because they possess extensive knowledge about AK and its application in the industry (see Figure 3), and because the organization already has architects in different software architecture roles with decades of experience (see Figure 4). Next, we reached out to the identified population using a two-fold approach.
The first approach involved obtaining a mailing list of software architects within the organization, while the second involved asking team leads of different structures within the organization to provide us with a list of architects in their department. By consolidating the information from these two sources, we generated a draft list of 145 architects. After eliminating redundant names and removing the architects needed for an extensive interview, we arrived at a set of 124 architects. Using these two steps to the best of our abilities, we tried to reach all software architects working in the bank; however, we did not conduct any further steps to verify that we indeed reached all of them.

Figure 2. Survey Questionnaire Flow

The survey was conducted using **Qualtrics**4 and designed to be anonymous to alleviate concerns about judgment or consequences. We reached out to the 124 architects via email and received 45 (39%) survey responses, with eight indicating they no longer worked at the company and 69 not replying.

Footnote 4: [https://www.qualtrics.com/](https://www.qualtrics.com/)

We compiled all recorded responses into a spreadsheet format where each column represents a specific question and each row represents a participant's response. We made all questions optional, except for the demographic and consent questions, as the consent questions are crucial to obtain legal permission from participants to record their responses, and the demographic data are essential for interpreting the remaining optional questions. As a result, there are variations in the responses we received from each participant: some attempted to answer all questions, while others chose to skip questions they did not want to answer. We analyzed the responses given to each question to find trends and prevailing viewpoints about the specific topic raised by the question. To facilitate analysis, we divided the questions into two categories: closed-ended and open-ended. For closed-ended questions, we simply counted the number of occurrences. For open-ended questions, we categorized responses by interpreting each response's intent (we present the complete analysis of the responses in the replication package5). We created these categories using information from various sources, including the literature (such as the dimensions of sustainability), the study question, and similarities in the responses' intended purposes (see Table 6). Our main objective was to encode open-ended responses so that quantitative and conventional statistical analyses (Bradley et al., 2015) could be applied to the data.

Footnote 5: [https://docs.google.com/spreadsheets/d/1KIMtIwXGCASJXzWJIRGAGwDrIntbyrH_p/edit/tusp-share_link:kouid-117224618040701995271&rtpfpf-true&sd-true](https://docs.google.com/spreadsheets/d/1KIMtIwXGCASJXzWJIRGAGwDrIntbyrH_p/edit/tusp-share_link:kouid-117224618040701995271&rtpfpf-true&sd-true)

In **Step (3)**, the final step, we reflect on the results to address our core research questions. Our objective is to provide a summary of the methods for architectural representation and communication that are currently used, as well as of how architects address sustainability issues when designing software. We end by summarizing the takeaways from this practical review.
Based on the data presented in Figure 3, it is evident that **94%** of the survey participants have engaged in software projects for a minimum of **10 years**, with their experience ranging from 10 to 41 years. This suggests that the results obtained in our study were derived from experts who possess extensive and valuable experience gained from a long and successful industrial career. Only **two** of the respondents reported having less than 10 years of experience, with 7 and 9 years respectively. We were fortunate to have received participation from a diverse group of architects in the bank, encompassing a broad range of roles. This enabled us to gain insights into software architecture from various levels of abstraction, as well as the experiences of different stakeholders. As illustrated in Figure 4, our participants spanned 7 different architectural roles, with the majority of them being Domain Architects (37%) and IT Architects (26%). The next most frequent roles were Solution Architects (11%) and Enterprise Architects (8%). ## 4. Results This section presents the main results we inferred from the data6 we gathered, in accordance with the five blocks from **Q3** to **Q7** and the survey structure specified in Section 3. Footnote 6: Raw Data of Survey: [https://bit.ly/3EWC2H3](https://bit.ly/3EWC2H3) **Table 1** displays the responses to questions in the **Q3 block**, which focuses on the practical application of AK. We initiated the block by asking participants, "What does AK mean to you?" (Cheng et al., 2017). Most participants provided definitions that complemented the formal definition we provided. For instance, one participant explained that _"AK is a group of elements, such as problem statements, triggers to have a solution, reasons for creating a new solution, scope, requirements, assumptions, current state architecture, transition state, and target state. It involves defining and registering exceptions/deviations to deliver a solution building block."_ However, a few participants shared unique perspectives. One stated that _"AK is also about knowing what kind of architectural artifacts (e.g., future state architectures, current state architecture, guidelines, standards) exist in the organization and identifying any missing artifacts. But most importantly, it involves interpreting and using them correctly."_ We present the results for the **Q4 block**, which pertains to the practical representation of AK, in **Table 2**. **Table 3** displays the results of the **Q5 block**, which examines the implementation of AK communication. We began by asking participants about the stakeholders with whom they need to communicate AK in their present position. The stakeholders that participants engage with differ depending on their current role, in addition to their peers. For example, business architects communicate with business and IT management, business change teams, and IT delivery teams. Domain architects, on the other hand, engage with product owners, enterprise architects, principal architects, developers, business analysts, IT leads, Security CISO, Design Authority, and external vendors. IT architects communicate with Grid Owner, Product Owner, Business Analyst, Enterprise Architects, and Domain Architects. The results for the **Q6 block**, which pertain to domain-specific versus domain-generic _AK_, are presented in **Table 4**. 
Finally, we summarize the results for the **Q7 block**, which concerns questions related to sustainability aspects within software architecture, in **Table 5**. ## 5. Discussion We conducted a survey with software architects in one of the leading banks in the Netherlands and identified two main findings, along with a list of detailed results from analyzing each cluster of questions designed to address our research questions as discussed in Section 3. Overall, the survey yielded valuable insights into the representation and communication of software architecture knowledge in practice. The findings are discussed below. During the survey, we asked participants about the architectural elements they felt were missing in their work. This question proved revealing, as it elicited many interesting responses. We categorized and analyzed these responses, which can be seen in Table 6, to facilitate quantitative data analysis. Most participants identified the need for architectural elements that can facilitate communication and bridge different viewpoints and features for various stakeholders. For example, one participant stated, _"As architecture is about communication, with different views, we tend to develop views that are understood by architects (which gives grip on the architecture), but less meaningful for (other) stakeholders. Linking architecture views to (non-architectural) views of different stakeholders is now lacking. We tend to speak our own language and not the language of different stakeholders."_ This view is not isolated, as shown in Figure 5, where 8 out of 31 respondents (26%) shared similar views. Another participant expressed the desire for "more linkage to use non-architecture viewpoints" to better represent and communicate AK.

Figure 4. Current Roles of Participants within the bank. _Where DA stands for domain architect, IA for IT architect, SA for solution architect, EA for enterprise architect, SWA for software architect, BA for business architect, and HA for hybrid cloud architect._

Figure 3. Survey Population Experience in Years

\begin{table} \begin{tabular}{|p{34.1pt}|p{34.1pt}|} \hline Q2.2 & **Question:** What type of AK do you document and keep? Capilla et al.[5] \\ \hline \multicolumn{2}{|p{34.1pt}|}{**Answer:** 26 out of 32 respondents, which represents 81% of the participants, mentioned Solution Intent as the AK to keep and document. Meanwhile, Current State Architecture was mentioned by 10 participants (31%), and Design Decisions were mentioned by 25% of respondents. Future State Architecture was mentioned by 22% of the participants. Five participants mentioned both Guidelines and Standards as the AK to capture and retain. Additionally, there were several other AK documents mentioned by the participants that are worth noting. These include High Level Design, Information Architecture, Architectural Landscape, Problem Statements, and Architectural Review Comments. \\ \hline Q2.3 & **Question:** Do you capture AK consistently in all projects? Capilla et al.[5] \\ \hline \multicolumn{2}{|p{34.1pt}|}{**Answer:** Of the 31 respondents to this question, 18 participants (58%) answered ”Yes” while the remaining 13 individuals (42%) answered ”No”. Among the 13 who answered ”No”, some specified their reasons, including 31% who said that every project is different, 31% who stated that AK is not required, 15% who believed that it can sometimes be an overkill, and others mentioned that it is labor-intensive and that they don’t have enough time. 
\\ \hline Q2.4 & **Question:** In your experience, what are the AK elements that you miss? \\ \hline \multicolumn{2}{|p{34.1pt}|}{**Answer:** Of the 31 participants who answered this question, 14% of them reported being satisfied with the existing AK elements. However, 26% (8 individuals) identified AK elements that act as a means of communication and bridge between diverse stakeholders as the ones they miss the most in their work. On the other hand, 23% pointed out that AK elements that give the context and status of the architecture were the ones they miss in their work. Additionally, 13% of participants mentioned missing AK elements related to Clarity, Detail, and Guidance, and 10% mentioned missing elements related to Design Decisions. Finally, 6% of participants missed AK elements related to Sustainability Aspects. \\ \hline Q2.5 & **Question:** In your experience, what are the AK elements that you find particularly useful? \\ \hline \multicolumn{2}{|p{34.1pt}|}{**Answer:** Out of 31 participants who responded to the question, 12 participants (39%) found AK elements related to Standards, References, and Guidelines to be the most beneficial. Furthermore, 20% of participants each chose Architecture Model and Business Architecture as the most useful. Thirteen percent of participants found Solution Intent[7] to be the most beneficial, while 10% each chose Design Decisions and context \& status of the architecture as the most useful AK elements. \\ \hline \end{tabular} \end{table} Table 1. Architecture Knowledge (AK) representation in financial domain \begin{table} \begin{tabular}{|p{34.1pt}|p{34.1pt}|} \hline Q3.1 & **Question:** Do you know any standard notation or language to capture **AK?** Capilla et al.[5] \\ \hline \multicolumn{2}{|p{34.1pt}|}{**Answer:** Of the 31 participants who responded to the question, 90% (28 participants) indicated ”Yes”. Among these 28 participants, 82% (23 individuals) specified that they use ArchiMate to capture AK, while 5 participants (18%) specified using UML. Additionally, 2 participants each (totaling 7%) mentioned using Sparx EA, Draw.io, PowerPoint, and BPMN as their preferred languages and notations for capturing **AK**. \\ \hline Q3.2 & **Question:** In your experience, what is the most useful architectural documentation? Capilla et al.[5] \\ \hline \multicolumn{2}{|p{34.1pt}|}{**Answer:** Of the 31 participants who responded to this question, 9 individuals (29%) identified ArchiMate as the most useful architecture documentation. Additionally, 5 participants each (totaling 16% of the total) mentioned Current State Architecture and Diagrams as being useful. Three participants each (10% of the total) identified Solution Intent, Views/Viewpoints, and PowerPoint as the most useful. Finally, 2 participants (7% of the total) each mentioned Design Decision Documents and Visio as being useful. \\ \hline Q3.3 & **Question:** If you think about your last project, in what format was the knowledge stored? Capilla et al. [5] \\ \hline \multicolumn{2}{|p{34.1pt}|}{**Answer:** 31 participants responded to the question on how they store AK, mentioning various formats and methods. ArchiMate, Word documents, and PowerPoint were equally popular among 9 participants (representing 29%) as the formats used to store AK in their last project. Solution Intent was mentioned by 8 participants (26%), while Confluence was used by 6 participants (19%). \\ \hline Q3.4 & **Question:** What tools or methods do you use to capture and represent AK? 
\\ \hline \multicolumn{2}{|p{34.1pt}|}{**Answer:** Of the 31 respondents, 17 individuals (55%) use ArchiMate, 15 (48%) use PowerPoint, 14 (45%) use Sparx EA, and 11 (35%) use Confluence to capture and represent AK. Additionally, 10 participants use Microsoft Word and 9 use Visio for this purpose. \\ \hline \end{tabular} \end{table} Table 2. Architecture Knowledge (AK) representation in financial domain

A notable share of participants demonstrated a comprehensive understanding of IT sustainability, referring to at least two sustainability dimensions or software engineering principles, as depicted in Figure 6. Moreover, other participants referred to various sustainability dimensions, indicating a high level of awareness on the subject. Subsequently, we inquired whether participants were aware of their organization's or department's sustainability targets, and the majority responded negatively, highlighting a lack of awareness in this regard. Nevertheless, when we asked whether they integrate sustainability aspects into their work, 75% of participants responded affirmatively, which appears to contradict their lack of awareness of sustainability targets. However, this discrepancy may be attributed to their advanced level of understanding of the concept. Upon further investigation, most participants reported that they were not incorporating sustainability aspects into their daily work due to a dearth of clear guidelines, references, criteria, and goals. Additionally, they expressed a need for more support in bridging the knowledge gap on how to implement sustainability aspects in their work. Overall, our study underscores the importance of the organization providing clear guidelines, references, criteria, and goals on sustainability aspects, to leverage the motivation and high level of sustainability awareness of architects. \begin{table} \begin{tabular}{|p{28.5pt}|p{28.5pt}|} \hline Q5.1 & **Question:** Do you know certain methods for **AK** which are exclusively valid or applied to your business domain? \\ \cline{2-3} & **Answer:** Out of 31 respondents to this question, only 5 (16\%) responded “Yes”, and specified BPMN, TOGAF, SAFe, and Target State Design as AK methods exclusive to the finance area. \\ \cline{2-3} Q5.2 & **Question:** Can you think of any other AK methods that are general-purpose and that have not already been mentioned? \\ \cline{2-3} & **Answer:** In response to the question, 20 participants provided feedback. Of these, 11 participants (representing 55\% of the respondents) answered with “No.” Among the participants who provided an answer to the specific inquiry, they mentioned the following frameworks: TOGAF, Agile Design Thinking, BPMN, Service Oriented Architecture, and Standardized Architecture Decision Records (ADRs). \\ \cline{2-3} Q5.3 & **Question:** In general, do you have to keep certain regulations in mind (e.g., GDPR, sustainability targets, etc.) while representing or communicating **AK** in your business domain? \\ \cline{2-3} & **Answer:** Out of 30 respondents to this question, 87\% (26 participants) disclosed that they consider certain regulations while representing and communicating AK. \\ \hline \end{tabular} \end{table} Table 4. Domain-specific vs. domain-generic AK \begin{table} \begin{tabular}{|p{28.5pt}|p{28.5pt}|} \hline Q6.1 & **Question:** What is IT sustainability for you? \\ \cline{2-3} & **Answer:** Among the 23 individuals who responded to the question, 30\% (7 participants) demonstrated a comprehensive understanding by referring to at least two aspects of sustainability or software engineering principles. 
Specifically, 39\% (9 participants) associated IT sustainability with the environmental dimension, while 22\% (5 participants) focused on the technical dimensions. The remaining two participants understood IT sustainability in terms of its economic dimension and its ability to cope with changing environments across all domains. \\ \hline Q6.2 & **Question:** Are you aware of any IT related sustainability targets or measures in your organization/department? \\ \cline{2-3} & **Answer:** Out of 28 respondents to this question, 18 (representing 64\% of the total) reported not being aware of any sustainability targets. The remaining 10 participants (36\%) reported having knowledge of some targets, with the most commonly mentioned targets being cloud-related policies and targets related to energy consumption, both at 33\%. \\ \hline Q6.3 & **Question:** Do you consider sustainability aspects in your current role? \\ \cline{2-3} & **Answer:** Of the 28 participants who responded to the question, 75\% (21 participants) said “Yes”. Out of these 21 participants, 48\% (10 participants) considered the technical aspect of sustainability, while 14\% (3 participants) considered the environmental aspect, and 9\% (2 participants) considered the economic aspect. Additionally, 14\% (3 participants) considered other aspects such as business and quality requirements, and adapting to changes. \\ \hline Q6.4 & **Question:** If it were necessary, how and where would you incorporate sustainability aspects into your daily work? \\ \cline{2-3} & **Answer:** Some participants suggested including sustainability as a quality attribute in the Solution Intent or providing guidance on what to assess. Others suggested integrating it into the business architecture, domain architecture, intake phase, data center, and patterns and designs. \\ \hline \end{tabular} \end{table} Table 5. Sustainability aspects of software architecture 
Table 3. Architecture Knowledge (AK) communication in practice

Two main research questions and three sub-questions served as the framework for the survey that is the subject of this report. The following provides a summary of the findings. During our research, we discovered that architecture knowledge (AK) is communicated and represented through various documentation and artifacts such as Solution Intent, Current State Architecture, Design Decisions, and Guidelines. However, not all projects consistently capture AK, and some participants mentioned missing AK elements related to communication, context and status, design decisions, and sustainability aspects. On the other hand, participants found AK elements related to standards, references, guidelines, architecture models, and business architecture to be particularly useful.

**Q2.4 In your experience, what are the AK elements that you miss?**

| Category | Description | Response Example | No | (%) |
| --- | --- | --- | --- | --- |
| Communication & Bridge | AK elements that serve as a bridge between various views/viewpoints intended for different stakeholders and enable effective communication to provide a comprehensive understanding of the architecture. | _"As architecture is about communication, with different views, we tend to develop views that are understood by architects (which gives grip on the architecture), but less meaningful for (other) stakeholders. Linking architecture views to (non-architectural) views of different stakeholders is now lacking. We tend to speak our own language and not the language of different stakeholders."_ | 8 | 26 |

The remaining categories, with their shares of the 31 respondents, were Context & Status of the architecture (23%), Clarity, Detail & Guidance (13%), Design Decisions (10%), and Sustainability Aspects (6%). A further response example recorded in this table reads: _"Enterprise level Architectural Fitness Functions, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines."_

Table 6. Categorization of the responses to Q2.4
RQ1.1 _How is software architecture knowledge represented in the financial domain?_ Based on the responses, it can be concluded that standard notations or languages, such as ArchiMate and UML, are commonly used to capture and represent software architecture knowledge in the financial domain. ArchiMate was reported as the most commonly used notation. Various tools, including PowerPoint, Sparx EA, Confluence, Word documents, and Visio, were identified as useful for capturing and representing architecture knowledge. The most useful architectural documentation included ArchiMate, current state architecture and diagrams, solution intent, views/viewpoints, and PowerPoint. RQ1.2 _How is software architecture knowledge communicated in the financial domain?_ The findings suggest that software architecture knowledge in the financial domain is communicated through various methods and tools, including written documents, meetings, presentations, email, and workshops. The choice of communication method may depend on the stakeholder, the level of involvement, and the complexity of the information. Overall, PowerPoint is the most commonly used tool for sharing and communicating AK, followed by email and Confluence. Meetings and documents are also frequently used, with some participants reporting the use of workshops. However, it is important to note that the specific methods and tools for communicating software architecture knowledge may differ depending on the industry or domain. RQ1.3 _What architecture knowledge methods are domain-specific or domain-generic?_ Architecture knowledge (AK) methods are employed in practice through a combination of domain-specific and domain-agnostic approaches. While most respondents did not identify any AK elements specific to the finance domain, a few mentioned techniques such as BPMN, TOGAF, SAFe, and Target State Design as being exclusive to finance. However, it is worth noting that TOGAF and BPMN are not solely utilized in finance. Only a few general-purpose AK frameworks such as Agile Design Thinking, Service Oriented Architecture, and Standardized Architecture Decision Records (ADRs) were mentioned. This suggests that there may be a lack of awareness of domain-specific AKs or that many methods are considered applicable across different domains. 
Notably, 84% of respondents did not mention any domain-specific AKs, highlighting the need for further exploration of the AK methods unique to specific domains. RQ2 _How can sustainability aspects be represented and communicated in software architecture?_ To incorporate sustainability aspects into software architecture, clear guidelines, references, and goals are required to capture these aspects in daily work. Participants suggest integrating sustainability into the business and domain architecture, intake phase, data center, and patterns and designs. Quality attributes in the Solution Intent can also be used to represent sustainability aspects. However, a comprehensive understanding of the various dimensions and principles of sustainable software engineering is necessary for effective representation and communication. Providing clear guidance, references, and goals can motivate architects to practice sustainable software engineering and integrate sustainability into their daily work.

Figure 5. Architectural Elements missed by Architects

Figure 6. Architects' understanding of Sustainable IT

## 6. Threats to Validity While we made every effort to conduct a rigorous study, there are several limitations that must be acknowledged. The _external validity_ of our study may be limited by the fact that we only targeted participants from a single organization, even though we were able to attract a substantial number of participants with significant experience in software architecture. Although the organization we targeted was large and likely representative of the industry as a whole, the lack of diversity in our population may limit the generalizability of our findings. As a result, it may be necessary to replicate our study with a more diverse sample in order to confirm the external validity of our results. The _internal validity_ of a study can be threatened by various factors that impact the accuracy and reliability of the study's results. One potential threat to internal validity in a survey is the use of non-mandatory questions. In our study, we designed most of the questions to be non-mandatory to avoid obliging participants to answer questions they may not be qualified to answer or may have distaste towards. However, this design choice can impact the overall quality of responses received, as participants may choose not to answer certain questions, resulting in missing data and potentially biased results. To address this internal validity threat, we took a careful approach to analyzing the survey responses. Rather than using the total recorded response for each question, we only considered the total number of respondents who answered each specific question. By doing this, we were able to account for missing data and ensure that the responses analyzed for each question were only from participants who chose to answer that particular question. This approach allowed us to mitigate the potential impact of non-mandatory questions on the study's internal validity and ensure that our results were as accurate and reliable as possible. _Construct validity_ is a critical aspect of any research study that seeks to examine and measure theoretical concepts or constructs. In our study, we aimed to explore the perception of software architecture and architectural knowledge related to sustainability aspects, and we focused on software architects with extensive experience to gather insights. 
While software architects may be the ideal candidates to respond to questions related to software architecture, it can be challenging to determine the best approach for measuring and analyzing sustainability aspects in software architecture due to the lack of an established view on the combination of these two areas. As researchers, we made every effort to define the theoretical concepts and constructs we wished to study and to determine how to measure them in a valid and reliable way. However, the lack of consensus on the combination of sustainability and software architecture posed a significant challenge in this regard. Therefore, we opted to investigate how architects perceive sustainability concepts and where they might apply them in software architecture. This approach allowed us to explore the perceptions and perspectives of experienced software architects, even in the absence of a well-established theoretical framework for the combination of sustainability and software architecture. However, this construct validity threat must be considered when interpreting our study's findings, and further research is needed to establish a more robust theoretical foundation for the study of sustainability in software architecture. ## 7. Conclusion and Future Work This paper presents the findings of a survey we conducted on the representation and communication of architectural knowledge (AK) in practice. Our study targeted software architects working for a leading bank in the Netherlands with extensive industry experience in various architectural roles. We identified two main findings through our analysis of the survey results: the need for a new architectural element that links different features and viewpoints created for various stakeholders, and the need for clear guidance, references, and goals to motivate architects to practice sustainable software engineering. These findings offer valuable insights for future research in the field. We recommend further investigation into the development of this new architectural element and how it can be integrated into existing practices. Additionally, we suggest exploring ways to promote sustainable software engineering practices among architects through the establishment of clear guidance and goals. Our study highlights the importance of effective AK representation and communication in the software industry and the potential benefits of incorporating sustainable practices into architectural decision-making.
Background: Various methods for capturing software architecture knowledge (AK) have been presented in the literature, including views, viewpoints, and architecture decision records (ADRs). At the same time, sustainability has been attracting interest in software engineering, particularly with regard to software architecture. However, a practical industry review of these topics appears to be lacking. Aim: This study aims to understand current architecture knowledge practices and to explore where sustainability can be applied in software architecture in the future. Method: We conducted a questionnaire survey of 45 practitioners and assessed how architectural knowledge and sustainability are represented and communicated to various stakeholders in practice. Results: Regarding the various aspects of capturing and communicating architectural knowledge, the analysis of the survey yielded two main findings and several
2306.17786
Tailoring quantum error correction to spin qubits
Spin qubits in semiconductor structures bring the promise of large-scale 2D integration, with the possibility to incorporate the control electronics on the same chip. In order to perform error correction on this platform, the characteristic features of spin qubits need to be accounted for. E.g., qubit readout involves an additional qubit which necessitates careful reconsideration of the qubit layout. The noise affecting spin qubits has further peculiarities such as the strong bias towards dephasing. In this work we consider state-of-the-art error correction codes that require only nearest-neighbour connectivity and are amenable to fast decoding via minimum-weight perfect matching. Compared to the surface code, the XZZX code, the reduced-connectivity surface code, the XYZ$^2$ matching code, and the Floquet code all bring different advantages in terms of error threshold, connectivity, or logical qubit encoding. We present the spin-qubit layout required for each of these error correction codes, accounting for reference qubits required for spin readout. The performance of these codes is studied under circuit-level noise accounting for distinct error rates for gates, readout and qubit decoherence during idling stages.
Bence Hetényi, James R. Wootton
2023-06-30T16:40:37
http://arxiv.org/abs/2306.17786v2
# Tailoring quantum error correction to spin qubits ###### Abstract Spin qubits in semiconductor structures bring the promise of large-scale 2D integration, with the possibility to incorporate the control electronics on the same chip. In order to perform error correction on this platform, the characteristic features of spin qubits need to be accounted for. E.g., qubit readout involves an additional qubit which necessitates careful reconsideration of the qubit layout. The noise affecting spin qubits has further peculiarities such as the strong bias towards dephasing. In this work we consider state-of-the-art error correction codes that require only nearest-neighbour connectivity and are amenable to fast decoding via minimum-weight perfect matching. Compared to the surface code, the XZZX code, the reduced-connectivity surface code, the XYZ\({}^{2}\) matching code, and the Floquet code all bring different advantages in terms of error threshold, connectivity, or logical qubit encoding. We present the spin-qubit layout required for each of these error correction codes, accounting for reference qubits required for spin readout. The performance of these codes is studied under circuit-level noise accounting for distinct error rates for gates, readout and qubit decoherence during idling stages. ## I Introduction Fault-tolerant quantum computation requires large-scale quantum processors, where quantum states can be encoded in a noise-free subspace of a myriad of noisy qubits. Spin qubits in semiconductor quantum dots offer critical features compatible with having millions of qubits locally connected in a 2D lattice, making them auspicious candidates for fault-tolerant quantum computing [1, 2]. Additionally, the half-century-long experience of the semiconductor industry is believed to provide a key advantage over other quantum computing platforms [2]. Spin qubit platforms to date have demonstrated single- and two-qubit operations and spin readout above 99% fidelity [3, 4, 5] in devices with up to six qubits [6]. Provided that the error rates of the physical qubits are below a certain threshold value, quantum error correction (QEC) codes can suppress errors exponentially with the number of qubits. One of the most famous QEC codes is the surface code, which requires only nearest-neighbour interaction between qubits on a square grid. Alongside the modest connectivity requirements, the popularity of the surface code lies in the high threshold error rate and the availability of fast and high-performance classical decoding schemes such as minimum-weight perfect matching (MWPM). Furthermore, numerous schemes have been developed for the surface code that enable universal quantum computation in a fault-tolerant way. In recent years, several QEC codes have been proposed that fulfil the above-listed criteria [7, 8, 9, 10, 11, 12]. Some of them even surpass the surface code in terms of error threshold [9] and connectivity requirements [10, 11, 12]. The threshold error rate is an important target for qubit platforms; however, its value depends strongly on the noise model assumed to calculate it. In circuit-level error models, individual physical gate errors, qubit readout errors and decoherence during idling are all taken into account. To simplify the model, these errors are assumed to happen with the same probability, which makes the resulting numbers hard to compare with physical scenarios where some error mechanisms are more pronounced than others. 
Moreover, hardware-specific constraints can significantly change the quantum circuit, introducing additional noise channels in the model. Here we provide a detailed quantitative analysis for a wide range of gate and readout fidelities, decoherence and measurement times. Simple formulas are derived to help future experiments estimate their device-specific thresholds and the qubit overhead required for fault tolerance. Our work serves to determine important experimental details required to perform quantum memory experiments. In particular, we find fault-tolerant circuits for syndrome measurements and logical state preparation and readout, adapted to each QEC code considered and tailored to the needs of spin-qubit readout. In doing so, we also determine the different demands on qubit connectivity made by each code. This paper is organized as follows: in Sec. II the basic concepts of QEC are introduced, followed by the derivation of the noise model for spin qubits; simple formulas are presented relating the parameters of our error model to the experimental figures of merit. In the last subsection we derive the optimal measurement time for QEC applications, accounting for qubit decoherence during mid-circuit measurements. These concepts are applied to the surface code in Sec. III, revealing some of the hitherto unexplored parts of the noise-parameter space. Different QEC codes and the corresponding spin-qubit layouts are presented in Sec. IV, comparing the performance of different codes and supplying valuable information about the connectivity versus qubit-overhead trade-off for future experiments. The main results, from which the error threshold for a device-specific noise model can be deduced, are summarized in Tab. 1. An outlook for future work and concluding remarks are contained in Secs. V-VI. ## II Error correction with spin qubits ### A brief introduction to quantum error correction Our goal with quantum error correction is to encode logical qubits (i.e., qubits with arbitrarily suppressed error rates) in a low-dimensional subspace of several physical qubits, such that single- and two-qubit gates can be applied between the encoded logical qubits, which are read out at the end of the fault-tolerant quantum circuit. One of the prime candidates for such an encoding is the surface code, for which it has been shown that all the necessary ingredients can be implemented assuming entangling gates only between nearest-neighbor physical qubits. The rigorous description of some of these methods is beyond the scope of this paper; we therefore resort to quantum memory experiments. In a quantum memory experiment, a single logical qubit is prepared in an eigenstate of the logical \(X\) or \(Z\) operator and left idling while mutually commuting observables are repeatedly measured, and is then read out. Logical errors can be revealed indirectly from the collected observables without having measured the logical qubit. Since only one qubit of information needs to be preserved, taking a connected lattice of \(N\) physical qubits, \(N-1\) independent multi-qubit measurements can be performed, provided that the measurement statistics of the logical subspace remains unaffected. In _stabilizer codes_, a set of mutually commuting operators, called _stabilizer operators_, defines these measurements. Logical operators commute with all stabilizers and anti-commute with each other. Stabilizer operators are defined as Pauli strings acting on \(\geq 2\) physical qubits. Logical operators act on \(\geq d\) qubits, where \(d\) is called the _code distance_. 
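As a textbook illustration of these definitions (not specific to spin qubits), consider the three-qubit repetition code: the stabilizers are \(Z_{1}Z_{2}\) and \(Z_{2}Z_{3}\), and the logical operators can be chosen as \(\bar{X}=X_{1}X_{2}X_{3}\) and \(\bar{Z}=Z_{1}\), which commute with both stabilizers and anti-commute with each other. A single bit flip \(X_{2}\) anti-commutes with both stabilizers and therefore flips both syndrome bits, whereas \(X_{1}\) flips only the first, so single bit flips are detected and located. The smallest undetectable bit-flip error is the weight-3 operator \(\bar{X}\) itself, giving distance 3 against bit flips; the weight-1 \(\bar{Z}\), however, means that single phase flips go undetected, which is why a genuine quantum code such as the surface code must combine \(X\)- and \(Z\)-type stabilizers.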
The code distance defines the smallest number of independent errors that cannot be detected by stabilizer operators, since it is the smallest number of Pauli operators required to perform a logical gate. Stabilizer measurements ensure that in every cycle the system is projected back into the logical subspace. After several rounds of syndrome measurements, the logical qubit may be read out by measuring every qubit on which the logical operator has support in the appropriate basis. Since physical errors may or may not have changed the logical information, a _decoder_ is employed: a classical algorithm that uses all the stabilizer measurement outcomes (called _syndromes_) to guess whether the physical errors have changed the outcome of the logical qubit measurement compared to the encoded information. Fault tolerance can be achieved if the decoder has an increasing success rate as the number of qubits used for the encoding is increased. The conditions under which this is possible are formulated in the _threshold theorem_. The theorem can be formulated in multiple ways, using different assumptions on the noise processes [13; 14; 15]. Let us phrase the statement in the following way: _there exists a finite physical error probability \(p^{\text{th}}\) of local errors, below which a certain accuracy \(\epsilon\) of the quantum computation of depth \(D\) can be ensured by encoding logical qubits in \(\text{polylog}(D/\epsilon)\) physical qubits_. Summarizing the arguments above, the threshold error rate \(p^{\text{th}}\) of the physical qubits depends on three factors: _(i)_ the error correction code itself, _(ii)_ the decoding algorithm, and _(iii)_ the error model. Here we focus on a special class of stabilizer codes, defined by the topological properties of the error syndromes, which is easiest to understand in the Hamiltonian formalism. A Hamiltonian can be defined as the negative sum of all stabilizer operators. The ground state of this Hamiltonian (i.e., the mutual +1 eigenstate of all stabilizers) is two-fold degenerate. Logical operators act on this subspace as gauge operators. Single-qubit errors change some of the syndromes, creating point-like excitations above the ground state. If the possible excitations in the model are composed of two bosonic particles which are their own anti-particles and have non-trivial braiding statistics, the model is called a \(D(\mathbb{Z}_{2})\) anyon model [16]. Throughout this work we will refer to this set of codes as the _surface code family_, after its most prominent representative, the surface code. The choice of the decoder has a large impact on the error threshold. The maximum-likelihood decoder has been shown to result in optimal decoding, but has an exponential computational complexity, which makes it unfeasible for system sizes large enough for fault-tolerant quantum computation, especially if the syndrome information needs to be processed in real time [17]. For members of the surface code family, error syndromes always appear pairwise due to the general properties of the underlying anyon model. This crucial property makes these codes amenable to decoding by minimum-weight perfect matching (MWPM), which has almost linear computational complexity [18]. Together with the qualitative similarities to the well-established surface code, this motivated our choice to focus on the surface code family and use the same MWPM decoding scheme throughout our work. 
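As a minimal illustration of MWPM decoding, here is a sketch using the open-source PyMatching package applied to the three-qubit repetition code introduced above; this is an illustrative setup, not necessarily the exact decoder configuration used in this work.

```python
import numpy as np
import pymatching  # open-source MWPM decoder

# Check matrix of the three-qubit repetition code:
# rows are the stabilizers Z1Z2 and Z2Z3, columns are the data qubits.
H = np.array([[1, 1, 0],
              [0, 1, 1]])
matching = pymatching.Matching(H)

error = np.array([0, 1, 0])       # a bit flip on the middle qubit
syndrome = H @ error % 2          # both checks fire: [1, 1]
correction = matching.decode(syndrome)
print(correction)                 # [0 1 0] -- the matching recovers the error
```

For the codes studied here, the same procedure runs on much larger matching graphs that also include a time direction, so that faulty syndrome measurements can be matched across repeated stabilizer rounds.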
The error model incorporates the probabilities of different types of errors, as well as spatial and temporal correlations between error events. In the next subsections we will consider the physical origin of noise in spin-qubit devices and derive an error model that acknowledges the special properties of this platform. The error model needs to be efficiently simulable, such that one can make predictions about the resources required for fault-tolerant quantum computing. ### Noise model of spin qubits In this section we consider some common features and constraints of spin-qubit devices and use them to establish the error model to be used for the calculation of the threshold surface of the surface code and its comparison to other QEC codes of the same family. There are multiple ways to encode qubits in the spin and orbital degrees of freedom of semiconductor quantum dots [19]. Here we focus on the type of spin qubit where the qubit states correspond to the spin projections of a single electron or hole in a quantum dot, split by a magnetic field, according to the original proposal of Loss and DiVincenzo [1]. Arbitrary single-qubit rotations as well as the two-qubit CX gate can be implemented natively with spin qubits [6]. Gate errors are often characterized by a single number, the gate fidelity \(\mathcal{F}_{G}\). As opposed to gate tomography, the fidelity does not give detailed information about the noise channels. Since different spin-qubit platforms can have very different noise channels, we will focus on the gate fidelity and assume that single- and two-qubit gates are followed by single- and two-qubit depolarizing noise, i.e., the \(m\)-qubit density matrix becomes \(\rho_{m}\rightarrow(1-p_{Gm})\rho_{m}+2^{-m}p_{Gm}\mathbb{1}_{2^{m}}\), where the probabilities are determined by the fidelity as \[p_{G1}=1-\mathcal{F}_{G1}\equiv p_{G}\frac{2}{1+\eta_{G}}, \tag{1a}\] \[p_{G2}=1-\mathcal{F}_{G2}\equiv p_{G}\frac{2\eta_{G}}{1+\eta_{G}}, \tag{1b}\] with \(p_{G}\) being the average gate error probability, and the error bias between single- and two-qubit gates reads \[\eta_{G}=\frac{p_{G2}}{p_{G1}}. \tag{2}\] An idling spin qubit loses its phase coherence at a higher rate than the change in population of its basis states. Dephasing can come from the low-frequency fluctuations of the qubit splitting that change the precession frequency of the spin around the axis of the magnetic field. Such low-frequency noise often originates from the slow dynamics of nuclear spins, or from \(1/f\) charge noise coupling to the spin splitting via spin-orbit interaction. Relaxation, on the other hand, is dominated by phonon emission, which is typically suppressed by the low phonon density of states at the energy of the qubit splitting. Relaxation can be treated in the Bloch-Redfield approximation [20], which yields an exponential decay of the diagonal elements of the density matrix. This is equivalent to Pauli X and Y errors (i.e., \(\rho\rightarrow(1-p_{T1})\rho+\frac{p_{T1}}{2}(X\rho X+Y\rho Y)\)) with probability [21] \[p_{T_{1}}=\frac{1-e^{-\tau_{i}/T_{1}}}{4}\approx\frac{\tau_{i}}{4T_{1}}, \tag{3}\] with \(T_{1}\) being the relaxation time and \(\tau_{i}\) the idling time. On the other hand, dephasing due to low-frequency noise is better described in the filter-function formalism [19], leading to a Gaussian decay of the off-diagonal elements of the density matrix with a time scale \(T_{2}\). 
An idling spin qubit loses its phase coherence at a higher rate than the change in population of its basis states. Dephasing can come from low-frequency fluctuations of the qubit splitting that change the precession frequency of the spin around the axis of the magnetic field. Such low-frequency noise often originates from the slow dynamics of nuclear spins, or from \(1/f\) charge noise coupling to the spin splitting via spin-orbit interaction. Relaxation, on the other hand, is dominated by phonon emission, which is typically suppressed by the low phonon density of states at the energy of the qubit splitting. Relaxation can be treated in the Bloch-Redfield approximation [20], which yields an exponential decay of the diagonal elements of the density matrix. This is equivalent to Pauli X and Y errors (i.e., \(\rho\rightarrow(1-p_{T1})\rho+\frac{p_{T1}}{2}(X\rho X+Y\rho Y)\)) with probability [21] \[p_{T_{1}}=\frac{1-e^{-\tau_{i}/T_{1}}}{4}\approx\frac{\tau_{i}}{4T_{1}}, \tag{3}\] with \(T_{1}\) being the relaxation time and \(\tau_{i}\) the idling time. Dephasing due to low-frequency noise, on the other hand, is better described in the filter-function formalism [19], leading to a Gaussian decay of the off-diagonal elements of the density matrix with a time scale \(T_{2}\). This process corresponds to Pauli-Z errors (i.e., \(\rho\rightarrow(1-p_{T2})\rho+p_{T2}Z\rho Z\)) with probability \[p_{T_{2}}=\frac{1-e^{-\tau_{i}^{2}/T_{2}^{2}}}{2}-\frac{1-e^{-\tau_{i}/T_{1}} }{4}\approx\frac{\tau_{i}^{2}}{2T_{2}^{2}}-\frac{\tau_{i}}{4T_{1}}. \tag{4}\] Figure 1: (a) Qubit layout and connectivity of a bulk section of the surface code. Gray filled nodes represent qubits that are connected to ancillas (empty nodes) via single links representing connections via two-qubit gates. Ancillas can be read out pair-wise or swapped with each other (double link). \(X\) and \(Z\) stabilizer plaquettes are shown in red and blue, respectively. (b) Circuit representation of qubits \(q_{1}\) and \(q_{2}\) taking part in an \(X\) and a \(Z\) stabilizer measurement. The qubits shown are also part of two additional stabilizer measurements each (implied by faint gates). The CX schedule and the connectivity necessitate three (one) SWAP gates for the \(X\) (\(Z\)) stabilizer measurements. Red, cyan, and blue colored elements in the circuit represent single- and two-qubit depolarizing noise (or bit flip on classical lines) for readout, single- and two-qubit gates, respectively. During the measurement qubits experience \(X\)-\(Y\) (dark green) and \(Z\) errors (light green) with different rates. The difference in decoherence rates can be quantified with the noise bias \[\eta_{T}=\frac{p_{T_{2}}}{p_{T_{1}}}\approx\frac{2\tau_{i}T_{1}}{T_{2}^{2}}-1. \tag{5}\] However, we note that \(\eta_{T}\ll T_{1}/T_{2}\) (since \(\tau_{i}\ll T_{1},T_{2}\)), meaning that the noise bias can be substantially lower than the naive expectation of \(\eta_{T}\sim T_{1}/T_{2}\). Furthermore, one may apply a dynamical decoupling pulse sequence during idling, further improving the \(T_{2}\) dephasing time that enters the above equations. Using these considerations we expect \(\eta_{T}\sim\mathcal{O}(10)\) [4]. 
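The following sketch evaluates Eqs. (3)-(5) numerically; the parameter values are illustrative assumptions (times in microseconds), not measured device data.

```python
import math

def idle_error_rates(tau_i: float, t1: float, t2: float):
    """Idling Pauli error probabilities for a wait of length tau_i:
    p_t1 is the X-Y (relaxation) probability of Eq. (3),
    p_t2 the Z (dephasing) probability of Eq. (4)."""
    p_t1 = (1.0 - math.exp(-tau_i / t1)) / 4.0
    p_t2 = (1.0 - math.exp(-(tau_i / t2) ** 2)) / 2.0 - p_t1
    return p_t1, p_t2

# Noise bias of Eq. (5) for tau_i = 10 us, T1 = 1 ms, T2 = 50 us:
p_t1, p_t2 = idle_error_rates(tau_i=10.0, t1=1000.0, t2=50.0)
print(p_t2 / p_t1)   # eta_T ~ 6.9, well below the naive T1/T2 = 20
```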
Readout of spin qubits is typically carried out via spin-to-charge conversion and charge sensing. Here we only consider conversion schemes that do not require a reservoir connected to the quantum dot accommodating the qubit. The conversion of spin to charge involves two spin qubits in close proximity, one in a known state and another one to be measured. Reducing the tunnel barrier between the two quantum dots gives rise to spin-selective tunnelling, i.e., Pauli spin blockade, after which the (change in) charge state of the quantum dot can be detected. Alternatively, one can exploit the strong spin-photon coupling in a setup where a single particle with strong spin-orbit interaction is situated in a double quantum dot coupled to a resonator. From an architecture point of view, both of these approaches require twice the space of a single qubit. Therefore, in the following we assume that readout involves a pair of qubits in the qubit layouts to be presented. An important aspect of the readout is the compromise to be made between measurement time \(\tau_{R}\) and fidelity. The fidelity of charge sensing improves exponentially with increasing measurement time [22]. This does not mean, however, that the longer one measures, the better the readout gets: the measurement time is limited by relaxation processes (deep in the Pauli blockade for charge sensing) or Landau-Zener tunneling (in the case of a dispersive readout). The readout error can be described (qualitatively) as \[p_{R}(\tau_{R})=1-\mathcal{F}_{R}(\tau_{R})=1-(1-e^{-\tau_{R}/\tau_{\min}})e^{ -\tau_{R}/T_{1R}} \tag{6}\] where \(T_{1R}\) is the relaxation time of the qubit being read out and \(\tau_{\min}\) is the minimum integration time, i.e., the time needed to achieve a signal-to-noise ratio of 2 [22]. The maximum readout fidelity is then achieved at the measurement time \[\tau_{R}^{*}=\tau_{\min}\log\left(1+\frac{T_{1R}}{\tau_{\min}}\right). \tag{7}\] In our error model readout errors are two-qubit depolarizing errors followed by a classical bit flip on the measurement outcome (e.g., infidelity of the charge detector). Both of these lead to a faulty syndrome bit with a joint probability \(p_{R}\). Finally, we note that within the Pauli noise model, reset errors during the stabilizer measurements generate the same syndrome as readout errors. Therefore, we merge these error rates into a single error parameter, the reset-readout error rate \[p_{RR}=p_{\rm res}(1-p_{R}(\tau_{R}))+p_{R}(\tau_{R})(1-p_{\rm res}), \tag{8}\] where \(p_{\rm res}\) is the probability that a faulty initialization flips the measurement outcome (e.g., for depolarizing noise \(p_{\rm res}=\frac{8}{15}(1-\mathcal{F}_{\rm init})\)). ### Error-threshold surface and resource estimation If the ratio of the different probabilities is kept constant, one obtains a single-parameter error model where \(p=0\) corresponds to no errors. Therefore, according to the threshold theorem a finite error threshold \(p^{\rm th}\) can be found, the value of which will depend on the ratio of the error probabilities. Asymptotically, for \(p\ll p^{\rm th}\) the logical failure rate can be approximated as \[P_{L}\sim w\left(\frac{p}{p^{\rm th}}\right)^{d/2}, \tag{9}\] where \(w\) is a prefactor that depends on how many length-\(d/2\) paths can lead to logical failure. Considering different ratios of the error parameters, i.e., different directions in the error-parameter space, and calculating the corresponding threshold values maps out a threshold surface in the parameter space \(\mathbf{p}\) that encloses the origin (\(\mathbf{p}=0\)). Inside the threshold surface the logical error rate can be decreased arbitrarily by increasing the number of qubits. Fixing the two error bias parameters \(\eta_{G}\) and \(\eta_{T}\), we are left with a three-dimensional space of \((p_{G},p_{T},p_{RR})\). In the simplest case, when the threshold surface is a plane determined by the three points \((p_{G}^{\rm th},0,0)\), \((0,p_{T}^{\rm th},0)\), and \((0,0,p_{RR}^{\rm th})\), it is straightforward to show that \[\frac{p}{p^{\rm th}}=\frac{p_{G}}{p_{G}^{\rm th}}+\frac{p_{T}}{p_{T}^{\rm th}} +\frac{p_{RR}}{p_{RR}^{\rm th}}, \tag{10}\] where \(p_{G}^{\rm th}=p_{G}^{\rm th}(\eta_{G})\) and \(p_{T}^{\rm th}=p_{T}^{\rm th}(\eta_{T})\). This formula is in correspondence with Ref. [23], where \(p/p^{\rm th}\propto\Lambda^{-1}\), and the observed nonlinearities of the threshold surface are consistent with the effects observed in the experiment (see Supplementary Information of [23]). In the linear case Eq. (10) implies that the isotropic circuit-level noise (\(p_{G}=p_{T}=p_{RR}=p\)) threshold can be recovered as \((p_{\rm ic}^{\rm th})^{-1}=(p_{G}^{\rm th})^{-1}+(p_{T}^{\rm th})^{-1}+(p_{RR}^{\rm th})^{-1}\), whereas the threshold for the phenomenological noise model (\(p_{G}=0\), \(p_{T}=p_{RR}=p\)) is given by \((p_{\rm ph}^{\rm th})^{-1}=(p_{T}^{\rm th})^{-1}+(p_{RR}^{\rm th})^{-1}\). 
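A compact sketch of Eqs. (6), (7), and (10); the corner thresholds would be taken from Tab. 1, and all function names here are our own.

```python
import math

def readout_error(tau_r: float, tau_min: float, t1r: float) -> float:
    """Readout infidelity of Eq. (6): too short an integration time hurts
    the signal-to-noise ratio, too long a one lets the qubit relax."""
    return 1.0 - (1.0 - math.exp(-tau_r / tau_min)) * math.exp(-tau_r / t1r)

def max_fidelity_time(tau_min: float, t1r: float) -> float:
    """Integration time maximizing the readout fidelity, Eq. (7)."""
    return tau_min * math.log(1.0 + t1r / tau_min)

def distance_to_threshold(p, p_th) -> float:
    """p / p^th for a linearized threshold surface, Eq. (10); values
    below 1 lie inside the fault-tolerant region."""
    return sum(pi / pi_th for pi, pi_th in zip(p, p_th))

print(max_fidelity_time(tau_min=1.5, t1r=1000.0))           # ~9.8 (us)
print(distance_to_threshold((0.1, 1.0, 2.0), (0.82, 3.94, 14.5)))
```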
In Tab. 1 we summarize the parameters of the linearized threshold surface introduced in Eq. (10) for the six QEC codes studied in the upcoming sections. Assuming that a given error configuration lies below the error threshold according to Eq. (10), one can estimate the number of qubits required for a single logical qubit with a practical logical error rate. Let us fix a target logical error rate, i.e., \(P_{L}/w=10^{-12}\). The required code distance then becomes \[d=\left\lceil-\frac{24}{\log_{10}(p/p^{\rm th})}\right\rceil, \tag{11}\] implying a total qubit count of \(N_{\rm tot}=(\nu_{q}+\nu_{a})d^{2}\) for a single logical qubit, where \(\nu_{q(a)}\) is the number of qubits (ancillas) in a unit cell of the QEC code. E.g., for \(p/p^{\rm th}=0.5\) one needs \(N_{\rm tot}=6400(\nu_{q}+\nu_{a})\sim\mathcal{O}(10^{4})\) qubits. Fewer qubits are required as the noise decreases: for example, \(N_{\rm tot}=576(\nu_{q}+\nu_{a})\sim\mathcal{O}(10^{3})\) qubits suffice for \(p/p^{\rm th}=0.1\). ### Trade-off between measurement time and fidelity The maximal-fidelity readout time \(\tau_{R}^{*}\) in Eq. (7) gives the optimal readout error rate if every qubit is measured simultaneously, e.g., at the end of the circuit. However, in quantum error correction data qubits are idling during the measurement of the ancillas; therefore one could expect that it is worthwhile to move away from the maximal readout fidelity point (by reducing the measurement time) in order to improve the idling error rates. We have seen in Eq. (6) how the readout error rate depends on the integration time \(\tau_{R}\). Furthermore, neglecting relaxation for simplicity, the idling error rate during readout and reset reads \[p_{T_{2}}(\tau_{R})\approx\frac{1-e^{-(\tau_{R}+\tau_{\rm res})^{2}/T_{2}^{2}} }{2}, \tag{12}\] where \(\tau_{\rm res}\) is the reset timescale. Using \(p_{RR}(\tau_{R})\) and \(p_{T_{2}}(\tau_{R})\) one can derive the readout time that takes the readout-reset and idling error rates furthest away from the threshold surface. Sticking to our example of a linear threshold surface characterized by \(p_{G}^{\rm th}\), \(p_{T}^{\rm th}\), and \(p_{RR}^{\rm th}\), the distance from the threshold surface is given by \[\frac{p}{p^{\rm th}}=\frac{p_{G}}{p_{G}^{\rm th}}+\frac{p_{T}(\tau_{R})}{p_{T} ^{\rm th}}+\frac{p_{RR}(\tau_{R})}{p_{RR}^{\rm th}}. \tag{13}\] In order to reduce the qubit overhead, we need to minimize this ratio with respect to \(\tau_{R}\). Assuming that \(\tau_{R}\ll T_{2},T_{1R}\), the readout time that is optimal for QEC applications becomes \[\tau_{\rm QEC}^{*}=\tau_{R}^{*}-\tau_{\rm min}\log\left(1+\frac{p_{RR}^{\rm th }}{p_{T}^{\rm th}}\frac{(\tau_{\rm QEC}^{*}+\tau_{\rm res})T_{1R}}{(1-2p_{\rm res })T_{2}^{2}}\right). \tag{14}\] This equation can easily be solved recursively, starting from \(\tau_{\rm QEC}^{*}=\tau_{R}^{*}\) on the right-hand side. As we will see later on the example of the surface code, it is even possible that the maximal-fidelity readout time falls outside the threshold surface, while \(\tau_{\rm QEC}^{*}\) is deep inside it. 
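A sketch of the resource estimate of Eq. (11) and the fixed-point iteration for Eq. (14); the parameters below reproduce the example of Fig. 2(b) with the surface-code corner thresholds of Tab. 1 (times in microseconds).

```python
import math

def code_distance(p_over_pth: float) -> int:
    """Code distance for a target P_L / w = 1e-12, Eq. (11)."""
    return math.ceil(-24.0 / math.log10(p_over_pth))

def tau_qec(tau_r_star, tau_min, tau_res, t1r, t2, p_res,
            p_rr_th, p_t_th, iterations: int = 20) -> float:
    """QEC-optimal readout time, Eq. (14), solved recursively
    starting from tau_QEC = tau_R* on the right-hand side."""
    tau = tau_r_star
    for _ in range(iterations):
        tau = tau_r_star - tau_min * math.log(
            1.0 + (p_rr_th / p_t_th)
            * (tau + tau_res) * t1r / ((1.0 - 2.0 * p_res) * t2 ** 2))
    return tau

print(code_distance(0.5))  # d = 80, i.e. N_tot = 6400 (nu_q + nu_a)
print(tau_qec(tau_r_star=9.8, tau_min=1.5, tau_res=3.0, t1r=1000.0,
              t2=50.0, p_res=0.005, p_rr_th=14.5, p_t_th=3.94))  # ~5.8 us
```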
### Surface code with circuit-level noise Let us consider Kitaev's surface code with rotated boundaries [24]. In this QEC code, \(d^{2}\) data qubits are arranged in a square grid and the group of stabilizer operators contains \((d-1)^{2}\) plaquettes with products of Pauli-X and Pauli-Z operators acting on four nearest-neighbor qubits. At the boundaries of the lattice one needs \(2d-2\) additional stabilizers with support on only two qubits each, yielding \(d^{2}-1\) constraints for the \(d^{2}\) degrees of freedom. Measuring all the stabilizers is then equivalent to completing four- and two-body parity measurements on neighbouring qubits. In what follows, we consider the standard circuit representation of the surface code, discuss individual errors during a quantum memory experiment that can lead to non-local error chains, and show a way to neutralize them. Afterwards, a possible experimental protocol for initializing and reading out a logical qubit is discussed, assuming noisy ingredients throughout the entire process. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & connectivity & \(\nu_{q}/\nu_{a}\) & \(p_{G}^{\rm th}[\%]\) & \(p_{T}^{\rm th}[\%]\) & \(p_{R}^{\rm th}[\%]\) & \(\sqrt{\left(\frac{\delta p^{\rm th}}{p^{\rm th}}\right)^{2}}\) \\ \hline Surface code & \(3\frac{1}{3}\) & 1/2 & 0.82 & 3.94 & 14.5 & 0.08 \\ \hline XZZX code & \(3\frac{1}{3}\) & 1/2 & 0.37 & 15.1 & 12.9 & 0.11 \\ \hline 3-CX surface code & \(2\frac{2}{3}\) & 1/2 & 0.65 & 4.1 & 8.7 & 0.1 \\ \hline XYZ\({}^{2}\) code & 4 & 2/2 & 0.465 & 4.2 & 16.5 & 0.12 \\ \hline Floquet color code* & \(2\frac{1}{4}\) & 1.5/4.5 & 0.48 & 0.7 & 1.41 & 0.005 \\ \hline Honeycomb Floquet code* & \(2\frac{1}{4}\) & 1.5/4.5 & 0.43 & 1.16 & 0.99 & 0.015 \\ \hline \end{tabular} \end{table} Table 1: Spin-qubit connectivity (i.e., average number of two-qubit connections in a 2D lattice) and the parameters of the linearized threshold surface appearing in Eq. (10). The total number of qubits and ancillas are given by \(N_{q,a}=\nu_{q,a}d^{2}\) for a code distance \(d\). The overall performance of the fitting is further characterized by the maximum and the mean deviation from the numerical value, i.e., \(\delta p^{\rm th}=p^{\rm th}-p^{\rm num,th}\). Thresholds are given at \(\eta_{T}=20\) and \(\eta_{G}=1\). Asterisk denotes syndrome measurement via Bell-state preparation. #### Stabilizer measurements and hook errors Provided that every qubit can be read out individually, the four-body stabilizer measurements of the surface code can be achieved by adding ancilla qubits at the center of every plaquette, leading to a denser square grid of qubits [25]. Ancillas are reset to the state \(\ket{0}\) before every stabilizer round and therefore do not add degrees of freedom to the system. The Z-plaquette measurements are then implemented via four controlled-not (CX) gates controlled on the corresponding data qubits and targeted on the ancilla. The controlled gates flip the state of the ancilla an even or an odd number of times depending on the parity of the four data qubits. Measuring the ancilla then yields the eigenvalue of the Z-stabilizer. Assuming any gate in the stabilizer measurement circuit can induce an error in the corresponding part of the circuit, errors can propagate from the ancilla qubit to the data qubits via the CX gates (e.g., X errors spread from the control to the target, and Z errors from the target to the control). The worst-case-scenario errors (called _hook errors_ in the literature) are the X (Z) errors in the X-stabilizer (Z-stabilizer) circuits that occur before the third CX gate, because such an error spreads to two of the data qubits [25]. Four (three) data-qubit errors make up a full stabilizer (a full stabilizer and a single error), which is less harmful than two data-qubit errors. In the rotated surface code, the X (Z) logical operator is a Pauli string that acts on a column (row) of qubits. Depending on the schedule of CX gates, hook errors can be a distance-2 substring of a logical operator, meaning that half as many of them are required to induce an undetectable logical error, i.e., reducing the effective code distance to \(\lceil d/2\rceil\). With careful scheduling, however, it can be ensured that no single error event in the circuit induces errors that reduce the code distance of the surface code [25]. 
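A toy sketch of this propagation (our own, not the simulator used in this work): in an X-stabilizer circuit the ancilla is the control of the CX gates, so an X error on the ancilla before the third CX is copied to the two data qubits coupled afterwards, producing the hook error.

```python
def propagate_x_through_cx(support: set, control, target) -> set:
    """X on the control spreads a copy of X to the target (mod 2)."""
    out = set(support)
    if control in out:
        out ^= {target}
    return out

error = {"ancilla"}                  # X error injected before CX #3
for data_qubit in ("q3", "q4"):      # the two remaining CX gates
    error = propagate_x_through_cx(error, "ancilla", data_qubit)
print(error)                         # {'ancilla', 'q3', 'q4'}: two data errors
```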
#### Noisy logical initialization and readout A quantum memory experiment consists of the initialization of the logical qubit in the Z (X) basis, followed by several rounds of stabilizer measurements, and the final readout of the logical qubit in the same basis. Afterwards a decoder is used to infer from the syndrome data whether a logical X (Z) error happened. Comparing the initialized logical eigenvalue to the final corrected logical readout of the simulation allows one to calculate logical error rates for different physical error rates and code distances and thereby determine the error threshold. 
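Such memory experiments follow the standard stim + pymatching pipeline [36; 18]. The sketch below is a generic reconstruction using stim's built-in rotated surface code and its generic circuit-level noise knobs; it assumes recent versions of both packages and is not the spin-qubit error model or the SWAP-based circuits of this work.

```python
import numpy as np
import stim        # Clifford-circuit simulator [36]
import pymatching  # MWPM decoder [18]

def logical_error_rate(d: int, p: float, shots: int = 100_000) -> float:
    # Memory experiment: d rounds of noisy stabilizer measurements.
    circuit = stim.Circuit.generated(
        "surface_code:rotated_memory_z",
        distance=d,
        rounds=d,
        after_clifford_depolarization=p,      # gate errors
        before_round_data_depolarization=p,   # idling errors
        before_measure_flip_probability=p,    # readout errors
        after_reset_flip_probability=p,       # reset errors
    )
    dem = circuit.detector_error_model(decompose_errors=True)
    matching = pymatching.Matching.from_detector_error_model(dem)
    syndromes, observables = circuit.compile_detector_sampler().sample(
        shots, separate_observables=True)
    predictions = matching.decode_batch(syndromes)
    return np.mean(predictions[:, 0] != observables[:, 0])

# Curves P_L(p) for several distances cross near the threshold.
for d in (3, 5, 7):
    print(d, logical_error_rate(d, p=0.005, shots=20_000))
```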
The initialization of the logical qubit is often carried out assuming noise-free initial and final stabilizer measurements as well as perfect initialization and measurement of the data qubits. Here we briefly review this protocol to show that it leads to an unphysical solution for the threshold surface, and compare this result with the fault-tolerant protocol we used in our work, where every qubit and quantum gate is subject to the noise model introduced in Sec. II.2. In the case of the ideal logical initialization and readout protocols one prepares every qubit in the \(\ket{0}\) (\(\ket{+}\)) state, which is an eigenstate of the logical \(Z\) (\(X\)) operator. Performing the first round of stabilizer measurements, the \(X\) (\(Z\)) stabilizer outcomes will be random even in the absence of errors, since the physical qubit resets did not prepare a surface code state. Since stabilizers commute with the logical operator, the logical eigenvalue is still intact after the first round of stabilizers, but the system is projected into a subspace of some given \(X\) (\(Z\)) stabilizer eigenvalues. From the second round on, every stabilizer measurement would return the same syndrome as in the first one, so one can start inserting errors and the decoder will have enough information to deal with the logical correction. The final measurement of the logical qubit follows a similar logic: performing a noise-free round of stabilizer measurements before reading out the physical qubits that reveal the encoded logical eigenvalue. Figure 2: (a) Threshold curves for ideal logical state preparation (magenta) and noisy logical state preparation in a quantum memory simulation. At low idling error rates the ideal logical preparation and measurement significantly overestimates the threshold. Threshold values are obtained from code distances \(d\in\{11,13,15,17\}\) and \(T=d\) stabilizer cycles. (b) Readout error rate (6) for varying integration time (solid black), threshold curve (solid blue) for \(p_{G}=0.1\%\), and the linearized threshold curve (dashed blue). The integration time corresponding to the maximum fidelity (\(\tau_{R}^{*}\)) falls outside the threshold, while the integration time set to the QEC optimum (\(\tau_{\text{QEC}}^{*}\)) remains deep inside it. Numbers used in this exemplary plot are \(\tau_{\text{min}}=1.5\mu\)s, \(T_{1R}=1\)ms, \(\tau_{\text{res}}=3\mu\)s, \(p_{\text{res}}=0.5\%\), and \(T_{2}=50\mu\)s. It is easy to see that the logical eigenvalue cannot be corrupted in this protocol if only readout or reset errors happen during the noisy stabilizer rounds. In a real experiment, however, the initialization and final readout are noisy processes as well. If the errors are strongly polarized towards readout errors, the logical readout will be heavily affected by them, implying a finite threshold error value against readout errors. The preparation of the initial state can also be performed fault-tolerantly using noisy operations only. If a code contains pure Pauli-X and pure Pauli-Z stabilizers and logical operators, only the Z-stabilizer (X-stabilizer) outcomes are needed to determine whether a logical Z (X) error happened, since X (Z) errors do not affect the X-stabilizers (Z-stabilizers). We can, therefore, prepare every qubit in the \(\ket{0}\) (\(\ket{+}\)) state initially and read out every qubit in the Z (X) basis in the end. The initial data qubit resets ensure that all Z (X) stabilizer eigenvalues are known before the first round, and the final measurements can be used to infer the relevant syndromes after the last round of stabilizer measurements, thereby allowing for errors to be injected at any point in the process [26]. ## III Surface code with spin qubits In the previous section we have seen that the surface code can be conveniently embedded in a square lattice of qubits. However, since spin-qubit readout requires two qubits, the square-grid qubit layout needs to be modified. An example of such a modified lattice was provided by Ref. [2] and is shown in Fig. 1(a). Data qubits are connected to four neighbouring ancillas via two-qubit gates but do not participate in a readout pair. Ancillas, on the other hand, always come in pairs where they can be swapped, reset, or read out in a single step. A pair of ancillas can be initialized directly in the \(\ket{00}\) state with a fidelity \(\mathcal{F}_{\mathrm{init}}\) and their Z-parity can be read out in a partially destructive process [27; 28] we denote as a \(ZZ^{*}\) measurement box. Keeping one of the ancillas in the \(\ket{0}\) state as a reference, the Z-stabilizer measurement circuits can be realized using a single SWAP gate, as shown for the upper ancilla pair in Fig. 1(b). The X-stabilizer measurements, on the other hand, require in total three SWAPs (see the lower ancilla pair of Fig. 1(b)). Without the additional SWAP gates, both X and Z plaquettes are bound to have the same (or equivalent) CX schedules. Consequently, X-error pairs and Z-error pairs are injected in the same direction, reducing the code distance for one of the two logical operators regardless of the choice of boundary conditions. If we restrict our attention to memory experiments, it is tempting to propose a qubit connectivity where the ancilla pairs are placed such that the number of SWAP gates enforced by the layout is minimized. At the same time, moving towards fault-tolerant quantum computing with multiple logical qubits, such a hard-wiring of the CX schedule in the physical qubit layout is not possible; e.g., twist defects require an on-demand change in the checkerboard pattern of plaquette operators [29]. Fault-tolerant logical-state preparation and readout requires data qubits to be initialized and read out. This can be done by adding a single pair of ancillas to the surface code lattice, allowing one to assign one ancilla pair to every code qubit. 
Initialization is done by resetting the ancillas, swapping the data qubit with the ancilla it is connected to, and performing a second reset on the ancillas. Similarly, the final measurement can be carried out using a single layer of SWAP gates. Consequently, data-qubit measurement does not require additional hardware. Calculating the logical failure rate as a function of \(p=(p_{T}^{2}+p_{RR}^{2})^{1/2}\) for different code distances, the intersection of the failure rates yields the threshold for a given ratio \(p_{RR}/p_{T}\). Repeating this for several distinct ratios one obtains a threshold curve in the parameter space \((0,p_{T},p_{RR})\). In addition, it is important to consider the logical qubit initialized both along the \(X\) and the \(Z\) axis (taking the smaller threshold), since the biased idling noise can strongly favour one type of logical operator. In Fig. 2(a) we compare the threshold curves with the logical initialization and readout performed using ideal stabilizer circuits and with the fault-tolerant protocol, for \(p_{G}=0\). As expected, only the fault-tolerant protocol leads to a finite threshold for pure readout errors, showcasing the importance of the logical initialization protocol in memory-experiment simulations. Using the linear approximation of the threshold curve, we show that in certain cases the maximal readout fidelity is far from the optimal choice for error correction. For the parameter values in Fig. 2(b) the maximum fidelity (\(\mathcal{F}_{R}\approx 99\%\)) is achieved at \(\tau_{R}^{*}=9.8\,\mu\)s, whereas the optimal choice is \(\tau_{\mathrm{QEC}}^{*}=5.8\,\mu\)s for \(T_{2}=50\,\mu\)s. Using the optimal readout time, we are deep inside the fault-tolerant regime, while the integration time for the maximal readout fidelity leaves so much time for decoherence on the data qubits that the rate of success decreases with increasing system size. The analysis so far considered an optimistic fixed value for the gate errors, although gate errors make a substantial contribution to the overall performance of error correction. In order to gain a more complete picture, we calculated the threshold surface in the three-dimensional parameter space \((p_{G},p_{T},p_{RR})\) for error biases \(\eta_{G}=1\) and \(\eta_{T}=20\). The results are presented in Fig. 3(b). The error threshold for a gate-error-dominated scenario [i.e., \(\mathbf{p}\approx(p_{G},0,0)\)] is substantially lower than the threshold values of the opposite limit. From the complete threshold surface it can be seen that a linear approximation may be sufficient, unless the error budget is dominated by idling errors, in which case the nonlinear tail of the plot needs to be explored more carefully. Due to the monotonicity of the threshold surface, one expects the strongest bias dependencies in the corners, i.e., for \((0,p_{T}^{\text{th}}(\eta_{T}),0)\) and \((p_{G}^{\text{th}}(\eta_{G}),0,0)\). Therefore we focused our error-bias analysis on these two points. From Fig. 3(c) it is apparent that the idling threshold \(p_{T}^{\text{th}}\) is peaked at \(\eta_{T}=1/2\), which corresponds to depolarizing noise. This can be understood from the fact that for \(\eta_{T}\ll 1\), \(X\) and \(Y\) errors occur with a probability \(p_{X\text{-}Y}=p\), both contributing to logical \(X\) errors. Similarly, for the \(Z\)-biased case (\(\eta_{T}\gg 1\)) the probability-\(p\) errors contribute to logical-\(Z\) errors. 
On the other hand, for depolarizing noise only two of the three Paulis affect a given logical operator, with a joint probability of \(2p/3\). Indeed we observe a roughly \(1.5\times\) increase in the error threshold. We note that noise-bias-tailored decoders could take advantage of the peculiar syndrome pattern of biased noise, leading to potentially higher thresholds [30], but such an advantage has not been demonstrated yet for circuit-level error models. Furthermore, there is a significant increase in the gate threshold \(p_{G}^{\text{th}}\) for errors biased towards single-qubit gate errors in Fig. 3(c). This can be understood from the stabilizer measurement circuit in Fig. 1(b): only X-stabilizer circuits include single-qubit gates, and these only contribute to faulty syndromes. Faulty syndromes, being equivalent to readout errors, have a high threshold. ## IV Error correction beyond Kitaev's surface code Now we turn our discussion towards other members of the surface code family. In recent years several new candidates have appeared that challenge the surface code in terms of error threshold [9], required connectivity [10; 11; 12], and prospects for fault-tolerant logical gates [7; 8]. Among the codes considered here, there are codes that can be obtained from the Calderbank-Shor-Steane (CSS) construction [31] using only pure \(X\)- and \(Z\)-type Pauli operators, while some other codes include mixed stabilizers and logicals. As we will show, candidates from the latter class tend to perform better against biased idling errors than their CSS counterparts. Furthermore, we analyzed two Floquet codes, where the stabilizer operators change periodically from one stabilizer round to the next [10; 11]. Such a scheme facilitates the measurement of six-body stabilizer operators with very low connectivity (i.e., \(2\frac{1}{4}\) two-qubit links per qubit on average). ### The closest relatives: the XZZX and the 3-CX surface code The XZZX code can be simply derived from the rotated surface code by exchanging \(X\) and \(Z\) Pauli operators along one of the diagonals of every plaquette, such that every plaquette operator, from left to right and top to bottom, consists of the Pauli operators \(X\), \(Z\), \(Z\), and \(X\) [9]. Logical operators need analogous adjustments to maintain the necessary commutation relations. Figure 3: (a) Qubit layout and connectivity map for a bulk section of the rotated surface code (b) error-threshold surface in the parameter space of gate- (\(p_{G}\)), idling- (\(p_{T}\)), and readout error rates (\(p_{R}\)) with biases set to \(\eta_{T}=20\) and \(\eta_{G}=1\). Each point is calculated from code distances \(d\in\{11,13,15,17\}\) and \(T=d\) stabilizer cycles. Green and blue arrows indicate the change of \((p_{G},0,0)\) and \((0,p_{T},0)\) corner points of the surface for a wide range of noise biases. (c) Dependence of \(p_{G}^{\text{th}}\) on the gate error bias \(\eta_{G}\) and \(p_{T}^{\text{th}}\) on the idling error bias \(\eta_{T}\). The XZZX code requires the same connectivity as the rotated surface code (see Fig. 4(a)), but the local basis transformations necessitate extra single-qubit gates on the data qubits, which increase the circuit depth and the gate error budget. On the other hand, the XZZX code brings significant improvement in terms of idling threshold compared to the surface code, as can be seen from the scale of the \(p_{T}\) axis in Fig. 4(b). From the gate-bias dependence of the respective corner point (shown in Fig. 4(c)) it is clear that the XZZX code presents a real advantage over the surface code only if two-qubit gate errors are more likely. 
The most remarkable property of the threshold surface is the dependence of the idling threshold on \(\eta_{T}\). Since the logicals are not pure Pauli \(X\) and \(Z\) operators as for the surface code, the maximum is not achieved for depolarizing noise (\(\eta_{T}=1/2\)) but in the limit \(\eta_{T}\gg 1\), which we believe to be the relevant limit for spin qubits (i.e., \(T_{2}\ll T_{1}\)). The second candidate, the 3-CX surface code, presented recently in Ref. [12], measures surface-code stabilizers in a two-round stabilizer measurement cycle such that stabilizers are measured once per round, but the state of the data qubits only returns to the original state every second round. In this peculiar measurement sequence, only three of the four connections are used for every data qubit (see Fig. 4(d)), reducing the required connectivity to an effective hexagonal grid. The measurement sequence of the 3-CX surface code requires a modified spatial boundary compared to the rotated surface code [12], which we discuss in the supplementary information. Moreover, the fault-tolerant logical readout suggested in Ref. [12] involves the simultaneous measurement of all data and ancilla qubits, whereas in our spin-qubit lattice only half of the qubits can be read out. In order to overcome this issue, we developed an improved final readout for which the ancilla-only readout pairs of Fig. 1(b) are sufficient. Figure 4: (a) Qubit layout and connectivity map for a bulk section of the XZZX code, (b) threshold surface for \(\eta_{T}=20\) and \(\eta_{G}=1\), (c) dependence of \(p_{G}^{\mathrm{th}}\) on the gate error bias \(\eta_{G}\) and \(p_{T}^{\mathrm{th}}\) on the idling error bias \(\eta_{T}\) for the XZZX code. Each point is calculated from code distances \(d\in\{11,13,15,17\}\) and \(T=d\) stabilizer cycles. (d)-(f) Similarly, qubit layout, threshold surface, and bias dependencies of the 3-CX surface code. In QEC codes, reduced connectivity often comes at the expense of a significantly reduced threshold [32]; however, Figs. 4(e)-(f) show (in correspondence with the findings of Ref. [12]) that no significant compromise was made by adapting the stabilizer measurement sequence of the surface code to a lower-connectivity qubit lattice. ### XYZ\({}^{2}\) matching code Hexagonal matching codes are a special class of \(D(\mathbb{Z}_{2})\) anyon models where the hexagonal plaquette stabilizers are bicolorable and host different anyon species with the required braiding statistics. Fermionic quasiparticles, combined from two anyons of different species, are confined to string stabilizers that connect same-coloured plaquettes [7]. In larger-scale lattices the confined-fermion property of matching codes allows for twist-defect-based logical operators without introducing 5-body stabilizers as for the surface code. The XYZ\({}^{2}\) code is a variant of the hexagonal matching codes where the string stabilizers are parallel links on the hexagonal lattice [8]. Suitable boundary conditions of the code can be found from the concatenation of a two-qubit repetition code (stabilized by \(ZZ\)) and the XZZX code. In total we get six-body stabilizer operators with \(X\), \(Y\), \(Z\), \(X\), \(Y\), and \(Z\) Paulis in clockwise direction around the hexagonal plaquettes, and \(ZZ\) links along the \(XY\) edges of the hexagonal plaquettes. 
Stabilizer measurements in a spin-qubit architecture can be achieved in two rounds such that there is a pair of qubits in the face of each hexagon, as shown in Fig. 5(a). The scheduling of CX gates needs to follow considerations similar to the surface code case in order to prevent the injection of hook errors. In the supplementary information we show that such a choice of CX schedule exists for the XYZ\({}^{2}\) code. The price to be paid for the dense encoding is that there are fewer ancilla pairs than stabilizers, meaning that for the final readout of the logical qubit and the inference of the relevant stabilizers, the code qubits also need to be part of readout pairs. However, in terms of connectivity the layout still remains a regular grid of qubits, realizable e.g. with \(2\times N\) arrays and long-range links. The threshold surface of the XYZ\({}^{2}\) code is closer to a plane than that of the previous surface code variants (see Fig. 5(b)). Interestingly, the readout threshold remains comparable to that of the surface code even though the stabilizers are read out in separate measurement rounds. Since the XYZ\({}^{2}\) code is not a CSS code, the idling threshold is not peaked around the depolarizing limit, and due to the different \(X\) and \(Y\) basis conversions in the plaquette measurements, single-qubit gate errors contribute significantly to the error budget, i.e., the error threshold for noise biased towards single-qubit gate errors is not significantly higher than the one for noise biased towards two-qubit gate errors. These results are presented in Fig. 5(c). ### Floquet codes A new type of stabilizer code has been proposed recently by Hastings and Haah [10], which does not have a static stabilizer group and logical operators; instead, these change periodically over six rounds. Their proposal was based on Kitaev's honeycomb model (which, as a static stabilizer code, has no logical qubits). An even more recent example is the Floquet color code. This is a CSS code that can be obtained from a color code using the anyon condensation picture of Ref. [11]. Since the results for both codes are quantitatively very similar, we present only those for the latter in the main text and defer a full comparison of the two to the supplementary information. A brief summary of the anyon condensation picture can be found in the supplementary information; here we only consider the stabilizer group after the different measurement rounds. The code can be realized on a hexagonal lattice of data qubits, placing an ancilla pair on every edge of the lattice. From an architecture standpoint Floquet codes require a very sparse connectivity, only \(2\frac{1}{4}\) links per data qubit, leaving valuable space for wire routing and other elements of the control circuitry. The hexagonal lattice is tricolorable such that every hexagon has neighbours of a different color; we label the colors R, G, and B. In every round one measures a set of links matching same-labelled hexagons in some basis. E.g., starting from the stabilizer group depicted in Fig. 6 (where R plaquettes host only a pure Pauli-X operator, G plaquettes only Pauli-Z, and B plaquettes both types), one measures green links (matching G plaquettes) in the Z basis to effectively exchange the roles of R and B while leaving G intact. Such exchanges of roles can be performed in a cyclic manner, using only differently colored X and Z links, arriving back at the same state after six rounds. 
We point out that leaving one species of plaquettes with the same stabilizers in each round is crucial for the preservation of the logical information. In the SWAP-based syndrome measurement scheme we have used so far, one would need two CX gates and a SWAP in between to read out the link stabilizers. However, another stabilizer measurement protocol can also be employed that avoids using SWAP gates, thereby reducing the gate-error budget of the noise model. If the ancilla state is prepared in the Bell state \((|00\rangle+|11\rangle)/\sqrt{2}\), the CX gates can target the two ancillas independently, such that the measurement of the ancilla pair remains deterministic. For the results presented in Fig. 6(b)-(c) we employed this Bell-state protocol. The threshold surface of the Floquet color code is very well approximated by a plane (see Fig. 6(b)). This property can be attributed to _(i)_ the fact that during a link measurement only a single data-qubit error can be injected, and _(ii)_ the lack of fault-tolerant logical initialization and readout. The bias dependencies of the gate and idling thresholds in Fig. 6(c) show the behaviour expected for CSS codes. Even though proposals exist for the spatial boundaries of both types of Floquet codes [11; 33], here we only studied these codes with toric spatial boundary conditions for simplicity. Likewise, we omitted the problem of fault-tolerant initialization and readout of the logical qubit, since the multi-round stabilizer measurement protocol gives a finite (and relatively low) readout-reset threshold. ### Comparison of QEC codes As discussed previously, a linearized threshold surface can, through simple formulas, yield valuable insight into the device-specific threshold and the optimal measurement time for mid-circuit measurements. Here we focus on the corner points of the linearized threshold surface obtained by fitting a plane to the simulated data. The characterization of the surface with only three numbers aids simple estimates for present and future experiments, determining optimal readout parameters for error correction as well as estimating the device-specific error threshold and qubit overhead for different spin-qubit architectures from high to low connectivity. Our results are summarized in Tab. 1. The three threshold parameters allow us to quantify some trends for spin-qubit devices expected from the QEC literature. E.g., error thresholds rapidly decrease with reduced connectivity, the limiting case being linear qubit chains (connectivity \(=2\)), which have been shown to have thresholds of \(p^{\text{th}}=10^{-5}\)-\(10^{-4}\) for multiple linear QEC codes [32]. Further, we see that gate thresholds are up to an order of magnitude smaller than the other two threshold parameters, in agreement with the results on the surface code, where thresholds for circuit-level noise are significantly lower than those of simplified noise models. Finally, we make some quantitative comparisons to values found in the literature for the different QEC codes. Disregarding the finite idling bias, for the rotated surface code we get \(p_{\text{ic}}^{\text{th}}=0.65\%\) and \(p_{\text{ph}}^{\text{th}}=3.1\%\), which are in line with the expectations [30; 34]. For the XZZX code we obtain \(p_{\text{ic}}^{\text{th}}=0.35\%\) and \(p_{\text{ph}}^{\text{th}}=6.9\%\), where the latter is in good agreement with Ref. [9]. Some deviations are to be expected for these threshold values due to the differences in the stabilizer measurement protocol, e.g., the use of pair-wise ancilla readout and the additional SWAP gates. 
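These combined thresholds follow directly from Eq. (10) and the corner values in Tab. 1; the short check below (our own helper) reproduces the numbers quoted above.

```python
def combined_threshold(*corners: float) -> float:
    """Threshold along an equal-rate direction of the linearized surface,
    Eq. (10): the reciprocal of the sum of reciprocal corner values."""
    return 1.0 / sum(1.0 / c for c in corners)

# Rotated surface code corners from Tab. 1 (in %): p_G, p_T, p_RR
print(combined_threshold(0.82, 3.94, 14.5))  # ~0.65, isotropic circuit level
print(combined_threshold(3.94, 14.5))        # ~3.1,  phenomenological
# XZZX code
print(combined_threshold(0.37, 15.1, 12.9))  # ~0.35
print(combined_threshold(15.1, 12.9))        # ~6.9
```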
## V Discussion Some important characteristic properties of spin-qubit architectures are taken into account in our model, and the threshold surface analysis allows one to tailor the results to system-specific features. However, some assumptions of the noise model need further improvement for more accurate device-level threshold estimates. In particular, gate errors as well as readout errors were modelled here as depolarizing noise, whereas in specific hardware they may tend more towards bit-flip or phase errors. Also, correlated noise due to cross-talk can be taken into account within the Pauli-error model, but this requires a high-level characterization of the envisioned quantum processor. Finally, some types of errors, like coherent errors (a systematic rotation of every qubit by the same angle), are not captured in our model and need to be analyzed with separate methods [35]. Figure 5: (a) Qubit layout and connectivity map for a bulk section of the XYZ\({}^{2}\) matching code. Plaquette stabilizers are \(XYZXYZ\) Pauli products acting on the six data qubits around each ancilla pair, and \(ZZ\) link stabilizers are shown with thick blue lines. A data-qubit readout system is required for the final logical readout (see supplementary information for further details). (b)-(c) Threshold surface and bias dependencies of the XYZ\({}^{2}\) matching code for \(\eta_{T}=20\) and \(\eta_{G}=1\). Each point is calculated from code distances \(d\in\{11,13,15,17\}\) and \(T=d\) stabilizer cycles. Restricting the focus regarding the qubit layout and the expected noise model would help to develop noise-tailored decoders [30] with improved performance compared to the one utilized in our work. Towards fault-tolerant quantum computation, scalable architectures will also require real-time decoding, the performance of which is likely to be compromised by the available time budget. Since the Bell-state protocol introduced for the link measurements of Floquet codes leads to a reduced number of SWAP gates, it is tempting to try a similar strategy for the plaquette-stabilizer measurements of all the other codes considered. However, note that some of the SWAP gates on the ancillas are enforced by the connectivity and the CX schedule, and would be required even in the Bell-state protocol. Furthermore, in the Bell-state protocol it is the reset errors of the ancillas that would propagate to the data qubits, as opposed to gate errors in the SWAP-based method, reducing the readout-reset threshold in the former case. Our findings can aid the development of new QEC codes tailored to spin-qubit architectures. One possibility is to combine XZZX stabilizers with the measurement sequence of the 3-CX surface code to obtain a lower-connectivity error correction code with a high idling threshold error rate at strongly biased noise. Floquet codes also provide promising prospects for future codes with low connectivity. 
Finally, the best error correction code will be the one that provides the lowest qubit overhead en route to fault-tolerant quantum computing. Schemes for logical operations using twist defects or lattice surgery [29] need to be revisited with the concrete noise models to find the lowest space-time overhead for a given device design. For the Clifford-circuit simulations we used stim [36]. For each code distance and physical error rate we took up to 300000 shots, unless 30000 logical failures were encountered before that. For the decoding we used pymatching [18]. All the scripts used for the threshold calculations, as well as the plotted data, will be made available at [37]. ## VI Conclusion Taking into account the common features of spin-qubit platforms, we have derived an error model accounting for different error rates for gate and readout errors as well as for decoherence during mid-circuit measurements. This allowed us to quantify the trade-off between fast and accurate qubit measurements for error correction applications. Considering state-of-the-art error-correction codes that are compatible with locally connected 2D architectures, we proposed four different qubit layouts required for quantum memory experiments. Furthermore, we analyzed the threshold surface in a multi-dimensional parameter space, facilitating back-of-the-envelope estimates of the error threshold and qubit overhead based on the gate and readout fidelities as well as the decoherence rates of a given experimental setup. Figure 6: Qubit layout and connectivity map for a bulk section of the Floquet color code. After measuring the red-highlighted XX-links, additional plaquette stabilizers are 6-qubit Pauli-X stabilizers on R-labelled plaquettes, Pauli-Z stabilizers on G-labelled plaquettes, and both X- and Z-stabilizers on B-labelled plaquettes (the notation comes from the parent color code). In each time step a set of links is measured such that the 6-qubit stabilizer types are exchanged between two plaquette types, indicated by yellow arrows. (b)-(c) Threshold surface and bias dependencies of the Floquet color code for \(\eta_{T}=20\) and \(\eta_{G}=1\). Each point is calculated from code distances \(d\in\{10,12,14,16\}\) and \(T=d/2\) stabilizer cycles (6 rounds per cycle). ## VII Acknowledgements We thank B. Srivastava and the spin qubit team at IBM Research - Zurich for useful discussions, B.J. Brown for providing access to the script on the Floquet color code, and J. Asboth for pointing out some relevant references. The authors acknowledge support from the NCCR SPIN, a National Centre of Competence in Research, funded by the Swiss National Science Foundation (grant number 51NF40-180604).
Spin qubits in semiconductor structures offer the potential for large-scale 2D integration, possibly with the control electronics incorporated on the same chip. Performing error correction on this platform requires taking the specific properties of spin qubits into account. For example, qubit readout involves an additional qubit, which calls for a reconsideration of the usual qubit layouts. The noise affecting spin qubits also has characteristic features, such as the relative strength of dephasing. In this work we consider state-of-the-art error correction codes that require only nearest-neighbour connectivity and allow fast decoding via minimum-weight perfect matching. Compared to the surface code, the XZZX code, the reduced-connectivity surface code, the XYZ$^2$ matching code, and the Floquet codes each, in terms of error threshold, connectivity, and logical
2309.07310
CRIL: A Concurrent Reversible Intermediate Language
We present a reversible intermediate language with concurrency for translating a high-level concurrent programming language to another lower-level concurrent programming language, keeping reversibility. Intermediate languages are commonly used in compiling a source program to an object code program closer to the machine code, where an intermediate language enables behavioral analysis and optimization to be decomposed in steps. We propose CRIL (Concurrent Reversible Intermediate Language) as an extension of RIL used by Mogensen for a functional reversible language, incorporating a multi-thread process invocation and the synchronization primitives based on the P-V operations. We show that the operational semantics of CRIL enjoy the properties of reversibility, including the causal safety and causal liveness proposed by Lanese et al., checking the axiomatic properties. The operational semantics is defined by composing the bidirectional control flow with the dependency information on updating the memory, called annotation DAG. We show a simple example of 'airline ticketing' to illustrate how CRIL preserves the causality for reversibility in imperative programs with concurrency.
Shunya Oguchi, Shoji Yuen
2023-09-13T20:52:54
http://arxiv.org/abs/2309.07310v1
# CRIL: A Concurrent Reversible Intermediate Language ###### Abstract We present a reversible intermediate language with concurrency for translating a high-level concurrent programming language to another lower-level concurrent programming language, keeping reversibility. Intermediate languages are commonly used in compiling a source program to an object code program closer to the machine code, where an intermediate language enables behavioral analysis and optimization to be decomposed in steps. We propose CRIL (Concurrent Reversible Intermediate Language) as an extension of RIL used by Mogensen for a functional reversible language, incorporating a multi-thread process invocation and the synchronization primitives based on the P-V operations. We show that the operational semantics of CRIL enjoy the properties of reversibility, including the causal safety and causal liveness proposed by Lanese et al., checking the axiomatic properties. The operational semantics is defined by composing the bidirectional control flow with the dependency information on updating the memory, called _annotation DAG_. We show a simple example of 'airline ticketing' to illustrate how CRIL preserves the causality for reversibility in imperative programs with concurrency. ## 1 Introduction Reversible programming languages have been proposed to describe reversible computation, where the control flows both forward and backward [25, 5, 24, 7]. They directly describe reversible computation and open new aspects of software development, since reversibility retains all information at any point of execution. In forward-only execution, a computation may, for efficiency, overwrite parts of its intermediate history that are not used in the rest of the computation. In analyzing the behavior, such as in debugging, it is common to replay the execution to the point in focus to recreate the lost part of the history. For a concurrent program, replaying the execution is usually difficult, since updating shared resources among multiple control threads depends on the runtime environment. Intermediate languages mediate the translation from the source language to a low-level machine language for execution. Step-by-step translation via intermediate languages is a common technique for optimization in compilers. The intermediate language in LLVM [15] is often used as a behavioral model for program analysis. Mogensen uses RIL [17] as an intermediate language with reversibility for a functional reversible language in memory usage analysis. RSSA [18], based on RIL, is used for compiling and optimizing Janus programs [10, 4]. Reversibility with concurrency has been studied in process calculi [3, 21, 12, 11], in event structures [19, 20, 22, 16], and recently in programming languages such as Erlang [13] and a simple imperative programming language [7, 9]. We propose a reversible intermediate language CRIL by extending RIL. CRIL extends RIL by allowing multiple blocks to run concurrently and by adding synchronization primitives based on the P-V operations. In CRIL, concurrent blocks interact with each other via shared variables. To establish reversibility for concurrent programs, the causality among shared variables has to be preserved. Unlike for sequential reversible programs, even if each step of a program is reversible, the whole program is not reversible in general, since shared variables may not be reversed correctly. 
To make a program of CRIL reversible, we give the operational semantics as the labeled transition system, \(\mathit{LTSI}_{CRIL}\), as the composition of the operational semantics with one-step reversibility and a data structure called 'annotation DAG'. An annotation DAG accumulates the causality of updating memory in a forward execution and rolls back the causality to control the reversed flow in the backward execution. We show that \(\mathit{LTSI}_{CRIL}\) has the basic properties for reversibility proposed in [14]. Using the approach of [14], it is shown that \(\mathit{LTSI}_{CRIL}\) enjoys the _Causal Safety_ and the _Causal Liveness_, which are important in analyzing CRIL programs compositionally. By translating a high-level programming language to CRIL, \(\mathit{LTSI}_{CRIL}\) works as a virtual machine, and its behavior is guaranteed to be reversible. CRIL enables fine-grained behavioral analysis such as optimization and reversible debugging. In section 4, we present a simple example of airline ticketing given in [6] to enable reversible debugging. The paper is organized as follows. Section 2 presents the syntax of CRIL and the operational semantics for control flow. Section 3 introduces annotation DAG as a data structure to store the causality of updating memory. We define \(\mathit{LTSI}_{CRIL}\) as the operational semantics for CRIL and show the reversibility of \(\mathit{LTSI}_{CRIL}\), which is followed by the airline ticketing example in section 4. Section 5 presents concluding remarks. ## 2 CRIL The syntax of CRIL is defined in figure 1. Following RIL [17], a CRIL program consists of an unordered set of basic blocks. Given a set of labels \(\mathcal{L}\), a block has an entry point followed by a block body and an exit point with labels. A block body is either a basic instruction or a call statement. ### Basic block We assume all references to variables have a global scope and that there exists a heap memory M indexed by integers, where M[x] denotes the \(x\)-th element of M. An expression \(e\) is either an arithmetic expression or a boolean expression with the usual operators +, -, ^, ==, !=, <, <=, >, >=, !, &&, and || of the C language, where ^ is the bitwise exclusive OR operation. The boolean operators and logical connectives treat 0 as false and any non-0 value as true. An expression can contain integer constants, which are denoted by \(k\). Entry/exit point. An entry/exit point of a basic block has one of the following forms: \begin{tabular}{c c|c c} \multicolumn{2}{c|}{Entry point} & \multicolumn{2}{c}{Exit point} \\ \hline (1) & \(l\) <- & (1') & -> \(l\) \\ (2) & \(l_{1};l_{2}\) <- \(e\) & (2') & \(e\) -> \(l_{1};l_{2}\) \\ (3) & begin \(l\) & (3') & end \(l\) \\ \end{tabular} where \(l,l_{1},l_{2}\in\mathscr{L}\). We write \(\mathsf{entry}(b)\) for the entry point of a basic block \(b\), and \(\mathsf{exit}(b)\) for the exit point of a basic block \(b\). The informal meaning of each item is explained as follows: (1) and (1'): \(l\) <- receives the control at \(l\) unconditionally in a forward execution. In a backward execution, it sends the control to the block that receives the control at \(l\). -> \(l\) dually works in the reversed way of \(l\) <-. (2) and (2'): \(l_{1};l_{2}\) <- \(e\) receives the control at \(l_{1}\) when \(e\) is evaluated to a non-0 value and at \(l_{2}\) otherwise in a forward execution. In a backward execution, it returns the control to the block that receives the control at \(l_{1}\) when \(e\) is evaluated to non-0 and at \(l_{2}\) otherwise. \(e\) -> \(l_{1};l_{2}\) dually works in the reversed way of \(l_{1};l_{2}\) <- \(e\). 
(3) and (3'): begin \(l\) receives the control from the call statement labeled by \(l\) in a forward execution. In a backward execution, it returns the control to the statement labeled by \(l\). end \(l\) dually works in the reversed way of begin \(l\). A basic block is either an instruction block or a call statement. Instruction block. A basic instruction is in one of the forms: \begin{tabular}{l l l l l l} (1) & _left_ \(\oplus\)= \(e\) & (3) & V \(x\) & (5) & assert \(e\) \\ (2) & _left_\({}_{1}\) <-> _left_\({}_{2}\) & (4) & P \(x\) & (6) & skip \\ \end{tabular} We write \(\mathsf{inst}(b)\) for the basic instruction in \(b\). The informal semantics is explained as follows: (1): _left_ \(\oplus\)= \(e\) is an _update_ statement where _left_ is a left-value and \(\oplus\in\{+,-,\verb|^|\}\). _left_ is relatively updated by \(e\), where +=, -=, and ^= have the same semantics as in the C language. If _left_ \(=x\), then \(x\) must not appear in \(e\). If _left_ \(=\) M[\(x\)], heap references must not appear in \(e\). (2): _left_\({}_{1}\) <-> _left_\({}_{2}\) is an _exchange_ where _left_\({}_{1}\) and _left_\({}_{2}\) are left-values. It swaps the values specified by _left_\({}_{1}\) and _left_\({}_{2}\). The same variable must not appear on both sides of <->. (3) and (4): V \(x\) and P \(x\) are the V and P operations for synchronization, corresponding to those commonly used in operating systems. We assume variables in P and V instructions only appear as the parameters of P and V. In a forward execution, V \(x\) is defined when \(x\) is 0 and terminates with \(x\) set to 1, and P \(x\) is defined when \(x\) is 1 and terminates with \(x\) set to 0. In a backward execution, V \(x\) and P \(x\) work as P \(x\) and V \(x\) of the forward execution, respectively. (5): assert \(e\) aborts the execution if \(e\) evaluates to 0, and does nothing otherwise. (6): skip does nothing in either direction. 
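The restriction that the updated left-value must not occur on the right-hand side is exactly what makes updates invertible: \(e\) evaluates to the same value before and after the update, so applying the inverse operator undoes it. A minimal sketch with a toy variable environment (our own names, not part of CRIL):

```python
# Each forward update "left op= e" has a syntactic inverse; "^=" (bitwise
# exclusive OR) is its own inverse.
INVERSE = {"+=": "-=", "-=": "+=", "^=": "^="}

def update(env: dict, var: str, op: str, value: int, forward: bool = True):
    op = op if forward else INVERSE[op]
    if op == "+=":
        env[var] += value
    elif op == "-=":
        env[var] -= value
    else:                       # "^="
        env[var] ^= value

env = {"x": 5, "y": 3}
update(env, "x", "+=", env["y"])                  # forward:  x += y -> x == 8
update(env, "x", "+=", env["y"], forward=False)   # backward: x -= y -> x == 5
assert env["x"] == 5
```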
\[\begin{array}{ll}\mathsf{read}(b)=\mathsf{Var}(\mathsf{entry}(b))\\ \cup\mathsf{Var}(\mathsf{inst}(b))\\ \cup\mathsf{Var}(\mathsf{exit}(b))\end{array}\qquad\mathsf{write}(b)=\begin{cases} \{x\}&\quad\text{If }\mathsf{inst}(b)=x\oplus\mathsf{=}e\\ \{\mathbb{M}\}&\quad\text{If }\mathsf{inst}(b)=\mathbb{M}[x]\oplus\mathsf{=}e\\ \{x,y\}&\quad\text{If }\mathsf{inst}(b)=x<\!\!-\!y\\ \{x,\mathbb{M}\}&\quad\text{If }\mathsf{inst}(b)\in\{x<\!\!-\!>\!M[y],\mathbb{M}[y]<\!\!-\!>x\}\\ \{\mathbb{M}\}&\quad\text{If }\mathsf{inst}(b)=\mathbb{M}[x]<\!\!-\!>\!M[y]\\ \{x\}&\quad\text{If }\mathsf{inst}(b)\in\{\mathbb{P}\ x,\mathbb{V}\ x\}\\ \varnothing&\quad\text{Otherwise.}\end{array}\] Call statementA _call statement_ is a basic block in the following form: \[l\mathrel{\hbox{\hbox to 0.0pt{\hbox{\kern 2.0pt<}\hbox{\kern-2.0pt\lower 4.0pt \hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt \sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt \lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt \sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt \lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt \sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt \lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt \sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt \lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt \sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt \lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt \sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt \lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt \sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt \lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt \sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt \lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt \sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt \lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt \sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt \lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt \sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt \lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt \sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt \lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt \sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt \lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt \sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt \lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt \sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt \lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt \lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt \sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt \lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 2.0pt\lower 4.0pt\hbox{\kern 2.0pt\sim}}\hbox{\kern 
### Basic operational semantics

The set of process identifiers \(\mathsf{PID}\) is \((\mathbb{N}_{+})^{*}\), where \(\mathbb{N}_{+}\) is the set of positive integers. \(p\in\mathsf{PID}\) denotes an identifier uniquely assigned to a process. When \(p\) executes a process block \(\mathsf{PB}(b,Pg)\), we also write \(\mathsf{PB}(p)\). If \(p\) is labeled by \(l\), \(\mathsf{PB}(p)=\mathsf{PB}(b,Pg)\) where \(\mathsf{entry}(b)=\mathsf{begin}\ l\). A special _root_ process has the identifier \(\varepsilon\). The runtime invokes the root process and sends the control to a process block labeled by \(\mathtt{main}\) to start an execution of a CRIL program. For a process \(p\), \(p\cdot i\) is assigned to the \(i\)-th subprocess invoked by a call statement of process \(p\). \(\preceq\) is the prefix relation. A process set \(PS\) is a set of process identifiers satisfying (1) \(\varepsilon\in PS\); (2) \(p\in PS\) implies \(p^{\prime}\in PS\) for \(p^{\prime}\preceq p\); and (3) \(p\cdot i\in PS\) implies \(p\cdot j\in PS\) for \(j<i\). For a process set \(PS\) and a process id \(p\), \(\mathsf{isleaf}(PS,p)\) holds if for all \(p^{\prime}\in PS\), \(p\preceq p^{\prime}\) implies \(p=p^{\prime}\).

A _process configuration_ is \((l,stage)\), where \(l\in\mathcal{L}\) and \(stage\in\{\mathsf{begin},\mathsf{run},\mathsf{end}\}\) give the location of the control in a process block. If \(stage=\mathsf{begin}\), the process is before executing the process block; if \(stage=\mathsf{run}\), it is executing the process block; and if \(stage=\mathsf{end}\), it has terminated the process block. \(\mathsf{PC}\) is the set of process configurations.

A _program configuration_ is \((Pg,\rho,\sigma,Pr)\), where \(Pg\) is the program (which never changes), \(\rho:\mathit{Vars}\rightarrow\mathbb{Z}\) maps a variable to its value, and \(\sigma:\mathbb{N}\rightarrow\mathbb{Z}\) maps a heap memory address to its value. A _process map_ \(Pr:\mathsf{PID}\rightarrow\mathsf{PC}\cup\{\bot\}\) maps a process to a process configuration. We assume \(Pr_{act}\) is a process set, where \(Pr_{act}=\{p\in\mathsf{PID}\mid Pr(p)\in\mathsf{PC}\}\). \(\mathcal{C}\) is the set of all program configurations.

A transition relation over program configurations

\[(Pg,\rho,\sigma,Pr)\xrightleftharpoons[\mathrm{prog}]{p,Rd,Wt}(Pg,\rho^{\prime},\sigma^{\prime},Pr^{\prime})\]

is defined in figure 2. \((Pg,\rho,\sigma,Pr)\) steps forward to \((Pg,\rho^{\prime},\sigma^{\prime},Pr^{\prime})\) by the process \(p\), reading the memory resources \(Rd\) and updating the memory resources \(Wt\); and \((Pg,\rho^{\prime},\sigma^{\prime},Pr^{\prime})\) steps backward to \((Pg,\rho,\sigma,Pr)\) in the same way.

We explain the SOS rules in figure 2. **AssVar** and **AssArr** present the update behavior. The exchange behavior is presented by **SwapVarVar**, **SwapVarArr**, **SwapArrVar**, and **SwapArrArr**. **SwapVarArr** and **SwapArrVar** are reversible since \(y\) is evaluated to the same value on both sides of the relation. **SwapVarVar** and **SwapArrArr** are clearly reversible. **Skip** presents the skip behavior. **Assert** presents the assertion behavior, which stops when \(e\) is evaluated to \(0\). **V-op** and **P-op** present the behavior of \(\mathbb{V}\ x\) and \(\mathbb{P}\ x\) for synchronization via \(x\) shared among concurrent processes.
In forward execution, \(\mathbb{V}\ x\) sets \(x=1\) when \(x=0\), and waits otherwise. In backward execution, \(\mathbb{V}\ x\) sets \(x=0\) when \(x=1\), and waits otherwise. \(\mathbb{P}\) behaves in a symmetrical fashion. By the pair of \(\mathbb{V}\ x\) and \(\mathbb{P}\ x\), \(x\) can be used as a semaphore to implement mutual exclusion for both directions of execution. **Inst** presents the one-step behavior of a basic block. The instruction updates \(\rho\) and \(\sigma\), and the entry and exit points give the status of the process: the process is running if \(stage\) is \(\mathsf{run}\); the process is at the initial or the final block if \(stage\) is \(\mathsf{begin}\) or \(\mathsf{end}\), respectively. The transition label \(Rd\) is \(\mathsf{read}(b)\), and the transition label \(Wt\) is \(\mathsf{write}(b)\). **CallFork** presents that a call statement forks subprocesses. When \(p\) executes a call statement \(\mathsf{call}\ l_{1},\cdots,l_{n}\) forwards, it forks subprocesses labeled by \(l_{1},\cdots,l_{n}\), and \(p\) stores the label for returning the control in \(Pr\). Note that the process map is changed to \(Pr^{\prime}\), containing the subprocesses, after forking. Since \(\mathsf{isleaf}(Pr^{\prime}_{act},p)\) does not hold, \(p\) does not pass the control to the next block until all the subprocesses are merged. **CallMerge** works dually to **CallFork**. In a forward execution, when all subprocesses reach the \(\mathsf{end}\) stage, all subprocesses are set to inactive, and \(p\) resumes, passing the control to the next basic block. In a backward execution, **CallFork** behaves as **CallMerge** of forward execution, and vice versa for **CallMerge**.

Figure 2: The basic operational semantics

In a program configuration of CRIL, there is no stack as in RIL to store the return label for subroutine calls. Instead, the process map stores the return label, which is not used until \(\mathsf{isleaf}(Pr_{act},p)\) holds; this plays the role of checking that the label is on the top of the stack in RIL.

Figure 3 shows an example of a CRIL program \(Pg\). There are four process blocks: \(\{b_{1},b_{2},b_{3}\}\), \(\{b_{4},b_{5}\}\), \(\{b_{6}\}\), and \(\{b_{7}\}\). A process map assigns \(\varepsilon\) to \(\{b_{1},b_{2},b_{3}\}\). In the following execution, it assigns \(1\) to \(\{b_{4},b_{5}\}\), \(2\) to \(\{b_{6}\}\), and \(3\) to \(\{b_{7}\}\).
An example of the transitions for \(Pg\) is as follows:

\[\begin{array}{l}(Pg,\rho_{0},\sigma_{0},[\varepsilon\mapsto(\mathtt{main},\mathsf{begin})])\\ \xrightleftharpoons[\mathrm{prog}]{\varepsilon,\varnothing,\varnothing}\ (Pg,\rho_{0},\sigma_{0},[\varepsilon\mapsto(\mathtt{l1},\mathsf{run})])\\ \xrightleftharpoons[\mathrm{prog}]{\varepsilon,\varnothing,\varnothing}\ (Pg,\rho_{0},\sigma_{0},[\varepsilon\mapsto(\mathtt{l1},\mathsf{run}),1\mapsto(\mathtt{sub0},\mathsf{begin}),2\mapsto(\mathtt{sub1},\mathsf{begin}),3\mapsto(\mathtt{sub2},\mathsf{begin})])\\ \xrightleftharpoons[\mathrm{prog}]{1,\{\mathtt{x}\},\{\mathtt{x}\}}\ (Pg,\rho_{1},\sigma_{0},\cdots)\quad\text{where }\rho_{1}=\rho_{0}[\mathtt{x}\mapsto 1]\\ \xrightleftharpoons[\mathrm{prog}]{2,\{\mathtt{x},\mathtt{y}\},\{\mathtt{y}\}}\ (Pg,\rho_{2},\sigma_{0},\cdots)\quad\text{where }\rho_{2}=\rho_{1}[\mathtt{y}\mapsto 1]\\ \xrightleftharpoons[\mathrm{prog}]{3,\{\mathtt{x},\mathtt{z}\},\{\mathtt{z}\}}\ (Pg,\rho_{3},\sigma_{0},\cdots)\quad\text{where }\rho_{3}=\rho_{2}[\mathtt{z}\mapsto 1]\\ \xrightleftharpoons[\mathrm{prog}]{1,\{\mathtt{x}\},\{\mathtt{x}\}}\ (Pg,\rho_{4},\sigma_{0},[\varepsilon\mapsto(\mathtt{l1},\mathsf{run}),1\mapsto(\mathtt{sub0},\mathsf{end}),2\mapsto(\mathtt{sub1},\mathsf{end}),3\mapsto(\mathtt{sub2},\mathsf{end})])\quad\text{where }\rho_{4}=\rho_{3}[\mathtt{x}\mapsto 2]\\ \xrightleftharpoons[\mathrm{prog}]{\varepsilon,\varnothing,\varnothing}\ (Pg,\rho_{4},\sigma_{0},[\varepsilon\mapsto(\mathtt{l2},\mathsf{run})])\\ \xrightleftharpoons[\mathrm{prog}]{\varepsilon,\varnothing,\varnothing}\ (Pg,\rho_{4},\sigma_{0},[\varepsilon\mapsto(\mathtt{main},\mathsf{end})])\end{array}\]

This forward execution ends with \(\mathtt{x}=2,\mathtt{y}=1,\mathtt{z}=1\). The operational semantics show that the computation may be reversed to \((Pg,\rho_{0},\sigma_{0},[\varepsilon\mapsto(\mathtt{main},\mathsf{begin})])\). However, it is possible to reverse to a different configuration, such as \(\mathtt{x}=0,\mathtt{y}=-1,\mathtt{z}=-1\), if the call statement is reversed in a different order. Thus, this operational semantics is not reversible. In the next section, we will combine it with an annotation of the dependency information, kept as a DAG, to establish the basic properties for reversibility as well as Causal Safety and Causal Liveness.

## 3 Reversibility of CRIL

Table 1 (a) shows the transitions of the store \(\rho\) caused by the sequence of basic blocks in the forward computation of the example in the previous section. Process \(p\) makes the forward (left-to-right) transition of \(\xrightleftharpoons[\mathrm{prog}]{p,Rd,Wt}\). The program configuration at the end is \((Pg,[\mathtt{x}\mapsto 2,\mathtt{y}\mapsto 1,\mathtt{z}\mapsto 1],\sigma_{0},[\varepsilon\mapsto(\mathtt{main},\mathsf{end})])\). This configuration may lead to a different store by the backward (right-to-left) transitions of \(\xrightleftharpoons[\mathrm{prog}]{p,Rd,Wt}\), as shown in table 1 (b). Although each step of the operational semantics keeps the local reversibility, it does not preserve the causality of the shared memory. The forward step of \(\xrightleftharpoons[\mathrm{prog}]{p,Rd,Wt}\) updates \(Wt\) reading \(Rd\), creating the causality from \(Rd\) to \(Wt\). Our idea is to control processes so as to keep the causality, by observing \(Rd\) and \(Wt\) in combination with the operational semantics.

Figure 3: A CRIL program \(Pg\)

The order between \(b_{4}\) and \(b_{6}\), and the order between \(b_{5}\) and \(b_{6}\), affect the causality. We say \(b_{i}\) _conflicts_ with \(b_{j}\), where \(i\neq j\), if \(\mathsf{read}(b_{i})\cap\mathsf{write}(b_{j})\neq\varnothing\) or \(\mathsf{read}(b_{j})\cap\mathsf{write}(b_{i})\neq\varnothing\). Since \(b_{6}\) and \(b_{7}\) do not conflict with each other, the order between \(b_{6}\) and \(b_{7}\) does not affect the causality.
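Using the `read_set`/`write_set` sketch from Section 2, the conflict relation is a one-line check (again only a sketch over our assumed `Block` representation):

```python
def conflicts(b1: Block, b2: Block) -> bool:
    """b_i conflicts with b_j iff one reads a resource the other writes."""
    return bool(read_set(b1) & write_set(b2)) or bool(read_set(b2) & write_set(b1))
```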
Thus, for the forward execution in table 1 (a), the reversed execution \(b_{3}b_{2}b_{5}b_{6}b_{7}b_{4}b_{2}b_{1}\) reaches \(\rho_{0}\) as a legitimate reversed computation.

### Annotation DAG

We shall present a data structure called 'annotation DAG' (Directed Acyclic Graph) that keeps the conflict information of the forward execution and controls the backward execution by matching the causality, observing the memory \(Wt\) updated by reading the memory \(Rd\).

**Definition 1**.: _An annotation DAG is \(A=(V,E_{R},E_{W})\) satisfying the following conditions:_

1. \(V\subseteq(\mathsf{PID}\times\mathbb{N})\cup\{\bot\}\)_, where_ \(\mathbb{N}\) _is the set of natural numbers,_ \(\bot\in V\)_, and if_ \((p,n)\in V\) _then for all_ \(n^{\prime}\leq n\)_,_ \((p,n^{\prime})\in V\)_;_
2. \(E_{R},E_{W}\subseteq V\times\mathcal{R}\times V\)_, where_ \((v^{\prime},r,v),(v^{\prime\prime},r,v)\in E_{R}\cup E_{W}\) _implies_ \(v^{\prime}=v^{\prime\prime}\)_;_
3. \(E_{R}\cap E_{W}=\varnothing\) _and_ \((V,E_{R}\uplus E_{W})\) _is a DAG with the finite set of nodes_ \(V\)_;_
4. \((v^{\prime},r,v)\in E_{W}\) _and_ \(v^{\prime}\neq\bot\) _imply_ \((v^{\prime\prime},r,v^{\prime})\in E_{W}\) _for some_ \(v^{\prime\prime}\)_; and_
5. \((v,r,v^{\prime}),(v,r,v^{\prime\prime})\in E_{W}\) _implies_ \(v^{\prime}=v^{\prime\prime}\)_._

\(\mathcal{A}\) _is the set of all annotation DAGs, and_ \(A_{\mathrm{init}}\) _is_ \((\{\bot\},\varnothing,\varnothing)\)_._

We write \(v\overset{r}{\rightarrow}v^{\prime}\) for \((v,r,v^{\prime})\in E_{W}\) and \(v\overset{r}{\dashrightarrow}v^{\prime}\) for \((v,r,v^{\prime})\in E_{R}\). Condition 5, together with conditions 2 and 3, ensures that when \(v^{\prime}\overset{r}{\rightarrow}v\), there is a unique sequence of \(E_{W}\) edges with the label \(r\) from \(\bot\) to \(v\): \(\bot\overset{r}{\rightarrow}v_{1}\overset{r}{\rightarrow}\cdots\overset{r}{\rightarrow}v_{n}=v\). \(\mathsf{last}(r,E_{W})\) denotes the last node \(v\) of such a sequence. When \(\mathsf{last}(r,E_{W})=v\neq\bot\), \(v^{\prime}\overset{r}{\rightarrow}v\) for a unique \(v^{\prime}\), and there is no \(v^{\prime\prime}\) with \(v\overset{r}{\rightarrow}v^{\prime\prime}\). \(\mathsf{last}(r,\varnothing)=\bot\) for all \(r\in\mathcal{R}\). Since \(V\) is finite, for \((p,n)\in V\) there is a maximum number \(n\) for process \(p\) if such \((p,n)\) exists. Given \(V\subseteq(\mathsf{PID}\times\mathbb{N})\cup\{\bot\}\), we write \(\mathsf{max}_{p}(V)\) for \(\max\{n\mid(p,n)\in V\}\) if some \((p,n)\in V\) exists, and \(\mathsf{max}_{p}(V)=-1\) when \((p,n)\notin V\) for all \(n\).
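Definition 1 can be pictured as a small data structure. The following Python sketch (with pids as tuples of integers and resources as strings, both our own encoding choices) keeps the node set and the two edge sets, and derives \(\mathsf{last}\) and \(\mathsf{max}_{p}\) as above:

```python
BOT = "bot"  # the distinguished node ⊥

class AnnotationDAG:
    """A direct, unoptimized rendering of Definition 1."""
    def __init__(self):
        self.V = {BOT}
        self.E_R = set()  # read edges  (v', r, v), drawn dashed
        self.E_W = set()  # write edges (v', r, v), drawn solid

    def max_p(self, p):
        ns = [n for (q, n) in self.V - {BOT} if q == p]
        return max(ns) if ns else -1          # max_p(V) = -1 if p has no node

    def last(self, r):
        """End of the unique r-labelled E_W chain starting at ⊥."""
        v = BOT
        while True:
            succ = [w for (u, lbl, w) in self.E_W if u == v and lbl == r]
            if not succ:
                return v                      # in particular last(r, ∅) = ⊥
            v = succ[0]                       # unique by conditions 2–5
```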
**Definition 2**.: _For \(A_{1},A_{2}\in\mathcal{A}\) with \(A_{1}=(V_{1},E_{R1},E_{W1})\) and \(A_{2}=(V_{2},E_{R2},E_{W2})\), the relation \(A_{1}\xrightleftharpoons[\mathrm{ann}]{p,Rd,Wt}A_{2}\) holds if, for the node \(v=(p,\mathsf{max}_{p}(V_{1})+1)\),_

* \(V_{2}=V_{1}\cup\{v\}\)_;_
* \(E_{W2}=E_{W1}\cup\{(\mathsf{last}(r,E_{W1}),r,v)\mid r\in Wt\}\)_; and_
* \(E_{R2}=E_{R1}\cup\{(\mathsf{last}(r,E_{W1}),r,v)\mid r\in Rd\setminus Wt\}\)_._

Forwards, \(\xrightleftharpoons[\mathrm{ann}]{p,Rd,Wt}\) adds the fresh node \(v\) together with its incoming edges; backwards, it removes them, which is possible only when \(v\) has no outgoing edges in \(A_{2}\) (otherwise the edges from \(v\) would dangle). The relations \(\xrightleftharpoons[\mathrm{prog}]{p,Rd,Wt}\) and \(\xrightleftharpoons[\mathrm{ann}]{p,Rd,Wt}\) are combined, by the rule **ProgAnn**, into a relation \((C,A)\xRightarrow{p,Rd,Wt}(C^{\prime},A^{\prime})\) over pairs of a program configuration and an annotation DAG, synchronizing the two relations on the same label.

We illustrate the behavior controlled by the annotation DAG for the simple example of the previous section. Starting from the initial configuration \((C_{0},A_{0})=((Pg,\rho_{0},\sigma_{0},[\varepsilon\mapsto(\mathtt{main},\mathsf{begin})]),(\{\bot\},\varnothing,\varnothing))\), it ends up with \((C_{8},A_{8})\), where \(C_{8}=(Pg,[\mathtt{x},\mathtt{y},\mathtt{z}\mapsto 2,1,1],\sigma_{0},[\varepsilon\mapsto(\mathtt{main},\mathsf{end})])\).

**Forward accumulation of causality.** We present the construction of the annotation DAGs as follows:

1. After process \(\varepsilon\) executes \(b_{1}\) and \(b_{2}\), \(A_{2}=(\{\bot,(\varepsilon,0),(\varepsilon,1)\},\varnothing,\varnothing)\).
2. The call statement in \(b_{2}\) forks three subprocesses. Then, process \(1\) executes \(b_{4}\): \((1,0)\) is added to \(V\), and \(\bot\overset{\mathtt{x}}{\rightarrow}(1,0)\) is added since \(\mathsf{read}(b_{4})=\mathsf{write}(b_{4})=\{\mathtt{x}\}\), forming \(A_{3}\). This means \(\mathtt{x}\) is updated using the initial \(\mathtt{x}\); the store becomes \([\mathtt{x},\mathtt{y},\mathtt{z}\mapsto 1,0,0]\).
3. Next, process \(2\) executes \(b_{6}\), where \(\mathsf{read}(b_{6})=\{\mathtt{x},\mathtt{y}\}\) and \(\mathsf{write}(b_{6})=\{\mathtt{y}\}\). \(\xrightleftharpoons[\mathrm{ann}]{2,\{\mathtt{x},\mathtt{y}\},\{\mathtt{y}\}}\) adds a fresh node \((2,0)\), \(\bot\overset{\mathtt{y}}{\rightarrow}(2,0)\), and \((1,0)\overset{\mathtt{x}}{\dashrightarrow}(2,0)\). The causality of \((2,0)\) means \(\mathtt{y}\) is updated using the initial \(\mathtt{y}\) and the \(\mathtt{x}\) of \((1,0)\), forming \(A_{4}\).
4. Then, process \(3\) executes \(b_{7}\), where \(\mathsf{read}(b_{7})=\{\mathtt{x},\mathtt{z}\}\) and \(\mathsf{write}(b_{7})=\{\mathtt{z}\}\). \(\xrightleftharpoons[\mathrm{ann}]{3,\{\mathtt{x},\mathtt{z}\},\{\mathtt{z}\}}\) adds \((3,0)\), \(\bot\overset{\mathtt{z}}{\rightarrow}(3,0)\), and \((1,0)\overset{\mathtt{x}}{\dashrightarrow}(3,0)\), forming \(A_{5}\) shown in figure 4 (a): the causality at \((3,0)\) updates the initial \(\mathtt{z}\) using the initial \(\mathtt{z}\) and the \(\mathtt{x}\) of \((1,0)\).
5. At last, process \(1\) executes \(b_{5}\), where \(\mathsf{read}(b_{5})=\mathsf{write}(b_{5})=\{\mathtt{x}\}\). \(\xrightleftharpoons[\mathrm{ann}]{1,\{\mathtt{x}\},\{\mathtt{x}\}}\) just adds \((1,1)\) and \((1,0)\overset{\mathtt{x}}{\rightarrow}(1,1)\), forming \(A_{6}\) shown in figure 4 (b): \(\mathtt{x}\) is updated using the \(\mathtt{x}\) of \((1,0)\).
6. No more causality is created after merging the subprocesses. The relation just adds \((\varepsilon,2)\) and \((\varepsilon,3)\) with no edges, forming \(A_{8}\) shown in figure 4 (c).

**Backward rollback of causality.** The following is a summary of the corresponding backward execution:

* (B1) The removable nodes of \(A_{8}\) are \(\{(\varepsilon,3),(1,1)\}\). Here, \(C_{8}\) specifies \(\varepsilon\) to remove \((\varepsilon,3)\), followed by removing \((\varepsilon,2)\), back to \((C_{6},A_{6})\), where \(C_{6}=(Pg,[\mathtt{x},\mathtt{y},\mathtt{z}\mapsto 2,1,1],\sigma_{0},[\varepsilon\mapsto(\mathtt{l2},\mathsf{run}),1\mapsto(\mathtt{sub0},\mathsf{end}),2\mapsto(\mathtt{sub1},\mathsf{end}),3\mapsto(\mathtt{sub2},\mathsf{end})])\).
* (B2) \(C_{6}\) may reverse any subprocess, but \(A_{6}\) allows only the removal of \((1,1)\) by \(\xrightleftharpoons[\mathrm{ann}]{p,Rd,Wt}\), to obtain \(A_{5}\).
* (B3) After removing \((1,1)\) and \((1,0)\overset{\mathtt{x}}{\rightarrow}(1,1)\) from \(A_{6}\), we obtain \(A_{5}\), whose removable nodes are \((2,0)\) and \((3,0)\). \((1,0)\) is not removable since it has two outgoing edges, although \((1,0)=\mathsf{last}(\mathtt{x},E_{W})\).

Figure 4: Annotation DAGs along with forward execution

* (B4) \(C_{5}\) may reverse either process 2 or process 3; let process 2 reverse, reaching \(C_{4}^{\prime}\). Then, \(\bot\overset{\mathtt{y}}{\rightarrow}(2,0)\) and \((1,0)\overset{\mathtt{x}}{\dashrightarrow}(2,0)\) are removed to obtain \(A_{4}^{\prime}\), with \([\mathtt{x},\mathtt{y},\mathtt{z}\mapsto 1,0,1]\) as the store \(\rho\). Note that \((C_{4}^{\prime},A_{4}^{\prime})\) did not appear in the forward execution.
* (B5) From \((C_{4}^{\prime},A_{4}^{\prime})\), process 3 is reversed, removing \((3,0)\), \(\bot\overset{\mathtt{z}}{\rightarrow}(3,0)\), and \((1,0)\overset{\mathtt{x}}{\dashrightarrow}(3,0)\), to obtain \(A_{3}\) and \([\mathtt{x},\mathtt{y},\mathtt{z}\mapsto 1,0,0]\).
* (B6) Then, process 1 is reversed by removing \((1,0)\) and \(\bot\overset{\mathtt{x}}{\rightarrow}(1,0)\), to obtain \(A_{2}=(\{\bot,(\varepsilon,0),(\varepsilon,1)\},\varnothing,\varnothing)\).
* (B7) At last, process \(\varepsilon\) reverses \(b_{2}\) and \(b_{1}\) to obtain \((C_{\mathrm{init}},A_{\mathrm{init}})\).

In step (B4), there are two possibilities: reversing process 3 or process 2. In the above, \(A_{5}\) is reversed by process 2 to \(A_{4}^{\prime}\), followed by process 3.

For a CRIL program \(Pg\), let \(B\) be the set of basic blocks in \(Pg\), and let \(\mathcal{O}=\mathsf{PID}\times\bigcup_{b\in B}\mathsf{read}(b)\times\bigcup_{b\in B}\mathsf{write}(b)\). Proposition 2 ensures that there is always a removable node along with removable edges.

### Properties for reversibility

We show that the operational semantics controlled by annotation DAGs has the proper properties for reversibility. We focus on the following two properties, which are considered fundamental for reversibility [14]:

**Causal Safety (CS):** An action cannot be reversed until any actions caused by it have been reversed.

**Causal Liveness (CL):** We should allow actions to reverse in any order compatible with Causal Safety, not necessarily the exact inverse of the forward order.

[14] shows that these properties hold in an LTSI (LTS with Independence) provided that a small number of axioms are valid in the LTSI. We shall follow this approach by defining an LTS from \(\xRightarrow{p,Rd,Wt}\) and adding an independence relation to obtain the LTSI for the CRIL behavior. We will then show that the axioms for **CS** and **CL** hold.
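Before the formal development, the annotation-DAG mechanics of the example above can be summarized by continuing the earlier `AnnotationDAG` sketch. The function names are ours; the forward step follows Definition 2, and `removable` mirrors the rollback steps (B1)–(B7):

```python
def forward_step(A: AnnotationDAG, p, Rd, Wt):
    """Forward annotation step: add a fresh node for p and record its causality."""
    v = (p, A.max_p(p) + 1)
    A.V.add(v)
    for r in Wt:                      # solid edge: r is (re)written at v
        A.E_W.add((A.last(r), r, v))
    for r in set(Rd) - set(Wt):       # dashed edge: r is only read at v
        A.E_R.add((A.last(r), r, v))
    return v

def removable(A: AnnotationDAG, p):
    """p's newest node can be rolled back only if nothing depends on it,
    i.e. it has no outgoing read or write edges (cf. step (B3))."""
    if A.max_p(p) < 0:
        return None
    v = (p, A.max_p(p))
    blocked = any(u == v for (u, _, _) in A.E_R | A.E_W)
    return None if blocked else v

def backward_step(A: AnnotationDAG, p):
    """Backward annotation step: drop the removable node and its incoming edges."""
    v = removable(A, p)
    assert v is not None, "blocked: an action caused by this one is not yet undone"
    A.V.remove(v)
    A.E_R = {e for e in A.E_R if e[2] != v}
    A.E_W = {e for e in A.E_W if e[2] != v}
```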
**Definition 4**.: \((\mathcal{C}\times\mathcal{A},\mathsf{Lab},\rightharpoonup)\) _is the forward LTS for CRIL, where:_

* \(\mathsf{Lab}=\mathsf{PID}\times 2^{\mathcal{R}}\times 2^{\mathcal{R}}\)_; and_
* \((C,A)\overset{(p,Rd,Wt)}{\rightharpoonup}(C^{\prime},A^{\prime})\) _if_ \((C,A)\xRightarrow{p,Rd,Wt}(C^{\prime},A^{\prime})\)_._

**Definition 5**.: _The (combined) LTS for CRIL is \((\mathcal{C}\times\mathcal{A},\mathsf{Lab}\uplus\underline{\mathsf{Lab}},\rightarrow)\), where:_

* \(\underline{\mathsf{Lab}}=\{\underline{(p,Rd,Wt)}\mid(p,Rd,Wt)\in\mathsf{Lab}\}\)_; and_
* _for_ \(a\in\mathsf{Lab}\)_,_ \((C,A)\overset{a}{\rightarrow}(C^{\prime},A^{\prime})\) _iff_ \((C,A)\overset{a}{\rightharpoonup}(C^{\prime},A^{\prime})\)_, and_ \((C,A)\overset{\underline{a}}{\rightarrow}(C^{\prime},A^{\prime})\) _iff_ \((C^{\prime},A^{\prime})\overset{a}{\rightharpoonup}(C,A)\)_._

Figure 5: Annotation DAGs in backward execution

\(\mathsf{Lab}\uplus\underline{\mathsf{Lab}}\) is ranged over by \(\alpha,\beta,\cdots\), and \(\mathsf{Lab}\) by \(a,b,\cdots\). \(\mathsf{und}:\mathsf{Lab}\uplus\underline{\mathsf{Lab}}\rightarrow\mathsf{Lab}\), where \(\mathsf{und}(a)=a\) and \(\mathsf{und}(\underline{a})=a\); \(\underline{\underline{a}}=a\). Given \(t:P\overset{\alpha}{\rightarrow}Q\), \(\underline{t}\) denotes \(Q\overset{\underline{\alpha}}{\rightarrow}P\).

For CRIL, the independence of transitions is defined as independent memory updates among concurrent processes. The processes running concurrently are not in the subprocess relation. Note that the pids \(p\cdot 1\), \(p\cdot 2\), \(\cdots\) are assigned to the subprocesses of the process with pid \(p\). The process with pid \(p\) is concurrent to the process with pid \(q\) if \(p\not\preceq q\) and \(q\not\preceq p\). Hence, we give the independence relation for labels as follows.

**Definition 6**.: _For \(\alpha,\beta\in\mathsf{Lab}\uplus\underline{\mathsf{Lab}}\) such that \(\mathsf{und}(\alpha)=(p_{1},Rd_{1},Wt_{1})\) and \(\mathsf{und}(\beta)=(p_{2},Rd_{2},Wt_{2})\), \(\alpha\ \iota_{\mathrm{lab}}\ \beta\) iff_

\[p_{1}\not\preceq p_{2}\ \wedge\ p_{2}\not\preceq p_{1}\ \wedge\ Rd_{1}\cap Wt_{2}=\varnothing\ \wedge\ Rd_{2}\cap Wt_{1}=\varnothing.\]

The independence of transitions in the LTS is defined as transitions with independent labels. We define the labeled transition system with independence as the operational semantics of CRIL.

**Definition 7**.: _For \(t:(C_{1},A_{1})\overset{\alpha}{\rightarrow}(C^{\prime}_{1},A^{\prime}_{1})\) and \(u:(C_{2},A_{2})\overset{\beta}{\rightarrow}(C^{\prime}_{2},A^{\prime}_{2})\) in the combined LTS for CRIL, \(t\) and \(u\) are independent of each other, written \(t\ \iota\ u\), iff \(\alpha\ \iota_{\mathrm{lab}}\ \beta\). \((\mathcal{C}\times\mathcal{A},\mathsf{Lab}\uplus\underline{\mathsf{Lab}},\rightarrow,\iota)\) is the LTS of CRIL with independence._

In the sequel, we write '\(\mathit{LTSI}_{\mathit{CRIL}}\)' for the LTS of CRIL with independence.

#### 3.3.1 Basic properties for reversibility

We take the axiomatic approach of [14], where the combination of the basic properties gives the proper reversibility. The first step is to show that \(\mathit{LTSI}_{\mathit{CRIL}}\) is _pre-reversible_. For this purpose, we show that \(\mathit{LTSI}_{\mathit{CRIL}}\) satisfies the following axioms: "**Square Property (SP)**", "**Backward Transitions are Independent (BTI)**", "**Well-Foundedness (WF)**", and "**Coinitial Propagation of Independence (CPI)**".
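The label independence of Definition 6 translates directly into a small check, used implicitly throughout the proofs below; a minimal sketch, again with pids as integer tuples and labels as \((p,Rd,Wt)\) triples (the underline marking of backward labels plays no role here, since \(\iota_{\mathrm{lab}}\) only looks at \(\mathsf{und}(\alpha)\)):

```python
def is_prefix(p, q):
    """p ⪯ q on pid tuples."""
    return q[:len(p)] == p

def indep_lab(l1, l2):
    """α ι_lab β of Definition 6, applied to und(α) and und(β)."""
    (p1, rd1, wt1), (p2, rd2, wt2) = l1, l2
    return (not is_prefix(p1, p2) and not is_prefix(p2, p1)
            and not (rd1 & wt2) and not (rd2 & wt1))
```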
**Square Property (SP).** For \(a\in\mathsf{Lab}\), when \(C\xrightleftharpoons[\mathrm{prog}]{a}C^{\prime}\), we write \(C\overset{a}{\rightarrow}_{\mathrm{prog}}C^{\prime}\) and \(C^{\prime}\overset{\underline{a}}{\rightarrow}_{\mathrm{prog}}C\). Similarly, when \(A\xrightleftharpoons[\mathrm{ann}]{a}A^{\prime}\), we write \(A\overset{a}{\rightarrow}_{\mathrm{ann}}A^{\prime}\) and \(A^{\prime}\overset{\underline{a}}{\rightarrow}_{\mathrm{ann}}A\). By the definition of independent transitions, the square property of \(\overset{\alpha}{\rightarrow}_{\mathrm{prog}}\) is immediately shown.

**Proposition 3**.: _Suppose \(C\overset{\alpha}{\rightarrow}_{\mathrm{prog}}C^{\prime}\), \(C\overset{\beta}{\rightarrow}_{\mathrm{prog}}C^{\prime\prime}\), and \(\alpha\ \iota_{\mathrm{lab}}\ \beta\). Then there are the cofinal transitions \(C^{\prime}\overset{\beta}{\rightarrow}_{\mathrm{prog}}C^{\prime\prime\prime}\) and \(C^{\prime\prime}\overset{\alpha}{\rightarrow}_{\mathrm{prog}}C^{\prime\prime\prime}\)._

For annotation DAGs, we need to trace the difference of nodes and edges added or deleted by \(\overset{\alpha}{\rightarrow}_{\mathrm{ann}}\) to show the square property. We use the following notation to present differences in annotation DAGs:

\[\text{For }o:(V,E_{R},E_{W})\overset{\alpha}{\rightarrow}_{\mathrm{ann}}(V^{\prime},E^{\prime}_{R},E^{\prime}_{W}),\quad\mathrm{diff}(o)=\begin{cases}(V^{\prime}-V,E^{\prime}_{R}-E_{R},E^{\prime}_{W}-E_{W})&\text{if }\alpha\in\mathsf{Lab},\\ (V-V^{\prime},E_{R}-E^{\prime}_{R},E_{W}-E^{\prime}_{W})&\text{if }\alpha\in\underline{\mathsf{Lab}}\end{cases}\]

\[(V,E_{R},E_{W})\odot^{\alpha}(\Delta V,\Delta E_{R},\Delta E_{W})=\begin{cases}(V\cup\Delta V,E_{R}\cup\Delta E_{R},E_{W}\cup\Delta E_{W})&\text{if }\alpha\in\mathsf{Lab},\\ (V-\Delta V,E_{R}-\Delta E_{R},E_{W}-\Delta E_{W})&\text{if }\alpha\in\underline{\mathsf{Lab}}\end{cases}\]

**Proposition 4**.: _Let \(\mathrm{diff}(A\overset{\alpha}{\rightarrow}_{\mathrm{ann}}A^{\prime})=(\Delta V^{\alpha},\Delta E^{\alpha}_{R},\Delta E^{\alpha}_{W})\) and \(\mathrm{diff}(A\overset{\beta}{\rightarrow}_{\mathrm{ann}}A^{\prime\prime})=(\Delta V^{\beta},\Delta E^{\beta}_{R},\Delta E^{\beta}_{W})\) with \(\alpha\ \iota_{\mathrm{lab}}\ \beta\). Then, \(\Delta V^{\alpha}\cap\Delta V^{\beta}=\Delta E^{\alpha}_{R}\cap\Delta E^{\beta}_{R}=\Delta E^{\alpha}_{W}\cap\Delta E^{\beta}_{W}=\varnothing\)._

Proof.: For some \(v_{\alpha}\) and \(v_{\beta}\), \(\Delta V^{\alpha}=\{v_{\alpha}\}\) and \(\Delta V^{\beta}=\{v_{\beta}\}\). \(\alpha\ \iota_{\mathrm{lab}}\ \beta\) implies \(v_{\alpha}\neq v_{\beta}\). All the edges of \(\Delta E^{\alpha}_{R}\uplus\Delta E^{\alpha}_{W}\) come into \(v_{\alpha}\), and all the edges of \(\Delta E^{\beta}_{R}\uplus\Delta E^{\beta}_{W}\) come into \(v_{\beta}\). Therefore, \(\Delta V^{\alpha}\cap\Delta V^{\beta}=\Delta E^{\alpha}_{R}\cap\Delta E^{\beta}_{R}=\Delta E^{\alpha}_{W}\cap\Delta E^{\beta}_{W}=\varnothing\).

**Proposition 5**.: _Suppose \(A\overset{a}{\rightarrow}_{\mathrm{ann}}A^{\prime}\) and \(A\overset{\beta}{\rightarrow}_{\mathrm{ann}}A^{\prime\prime}\) with \(a\ \iota_{\mathrm{lab}}\ \beta\)._
_Then there is \(A^{\prime\prime\prime}\) such that \(A^{\prime\prime}\overset{a}{\rightarrow}_{\mathrm{ann}}A^{\prime\prime\prime}\) and \(\mathrm{diff}(A\overset{a}{\rightarrow}_{\mathrm{ann}}A^{\prime})=\mathrm{diff}(A^{\prime\prime}\overset{a}{\rightarrow}_{\mathrm{ann}}A^{\prime\prime\prime})\)._

Proof.: Assume \(A=(V,E_{R},E_{W})\), \(A^{\prime\prime}=(V^{\prime\prime},E^{\prime\prime}_{R},E^{\prime\prime}_{W})\), and \(a=(p_{a},Rd_{a},Wt_{a})\). \(A^{\prime\prime}\overset{a}{\rightarrow}_{\mathrm{ann}}A^{\prime\prime\prime}\) for some \(A^{\prime\prime\prime}\) since \(a\in\mathsf{Lab}\). \(a\ \iota_{\mathrm{lab}}\ \beta\) implies that \(\mathsf{max}_{p_{a}}(V)=\mathsf{max}_{p_{a}}(V^{\prime\prime})\) and \(\mathsf{last}(r,E_{W})=\mathsf{last}(r,E^{\prime\prime}_{W})\) for \(r\in Rd_{a}\). Therefore, \(\mathrm{diff}(A\overset{a}{\rightarrow}_{\mathrm{ann}}A^{\prime})=\mathrm{diff}(A^{\prime\prime}\overset{a}{\rightarrow}_{\mathrm{ann}}A^{\prime\prime\prime})\).

**Proposition 6**.: _Suppose \(A\overset{\underline{a}}{\rightarrow}_{\mathrm{ann}}A^{\prime}\) and \(A\overset{\beta}{\rightarrow}_{\mathrm{ann}}A^{\prime\prime}\) with \(\underline{a}\ \iota_{\mathrm{lab}}\ \beta\). Then there is \(A^{\prime\prime\prime}\) such that \(A^{\prime\prime}\overset{\underline{a}}{\rightarrow}_{\mathrm{ann}}A^{\prime\prime\prime}\) and \(\mathrm{diff}(A\overset{\underline{a}}{\rightarrow}_{\mathrm{ann}}A^{\prime})=\mathrm{diff}(A^{\prime\prime}\overset{\underline{a}}{\rightarrow}_{\mathrm{ann}}A^{\prime\prime\prime})\)._

Proof.: Assume \(\mathrm{diff}(A\overset{\beta}{\rightarrow}_{\mathrm{ann}}A^{\prime\prime})=(\Delta V^{\beta},\Delta E^{\beta}_{R},\Delta E^{\beta}_{W})\) and \(a=(p_{a},Rd_{a},Wt_{a})\). Let \(v=(p_{a},\mathsf{max}_{p_{a}}(V))\). \(\underline{a}\ \iota_{\mathrm{lab}}\ \beta\) implies that no edges in \(\Delta E^{\beta}_{R}\uplus\Delta E^{\beta}_{W}\) go out from \(v\) or from the \(v^{\prime}\) such that \(v^{\prime}\overset{r}{\rightarrow}v\) in \(A\). Therefore, \(A^{\prime\prime}\overset{\underline{a}}{\rightarrow}_{\mathrm{ann}}A^{\prime\prime\prime}\) for some \(A^{\prime\prime\prime}\). \(\underline{a}\ \iota_{\mathrm{lab}}\ \beta\) and \(\underline{a}\in\underline{\mathsf{Lab}}\) derive \(\mathrm{diff}(A\overset{\underline{a}}{\rightarrow}_{\mathrm{ann}}A^{\prime})=\mathrm{diff}(A^{\prime\prime}\overset{\underline{a}}{\rightarrow}_{\mathrm{ann}}A^{\prime\prime\prime})\).

**Proposition 7**.: _Suppose \(A\overset{\alpha}{\rightarrow}_{\mathrm{ann}}A^{\prime}\) and \(A\overset{\beta}{\rightarrow}_{\mathrm{ann}}A^{\prime\prime}\) with \(\alpha\ \iota_{\mathrm{lab}}\ \beta\). Then \(A^{\prime\prime}\overset{\alpha}{\rightarrow}_{\mathrm{ann}}A^{\prime\prime\prime}\), where \(A^{\prime\prime\prime}=A^{\prime\prime}\odot^{\alpha}\mathrm{diff}(A\overset{\alpha}{\rightarrow}_{\mathrm{ann}}A^{\prime})\)._

Proof.: Propositions 5 and 6 derive \(A^{\prime\prime}\overset{\alpha}{\rightarrow}_{\mathrm{ann}}A^{\prime\prime\prime}\).

**Proposition 8**.: _Suppose \(A\overset{\alpha}{\rightarrow}_{\mathrm{ann}}A^{\prime}\), \(A\overset{\beta}{\rightarrow}_{\mathrm{ann}}A^{\prime\prime}\), and \(\alpha\ \iota_{\mathrm{lab}}\ \beta\)._
_Then there are the cofinal transitions \(A^{\prime}\overset{\beta}{\rightarrow}_{\mathrm{ann}}A^{\prime\prime\prime}\) and \(A^{\prime\prime}\overset{\alpha}{\rightarrow}_{\mathrm{ann}}A^{\prime\prime\prime}\)._

Proof.: By proposition 4, \(\mathrm{diff}(A\overset{\alpha}{\rightarrow}_{\mathrm{ann}}A^{\prime})\) and \(\mathrm{diff}(A\overset{\beta}{\rightarrow}_{\mathrm{ann}}A^{\prime\prime})\) are disjoint if \(\alpha\ \iota_{\mathrm{lab}}\ \beta\). Hence, the order of addition and deletion to/from \(A\) does not affect the result. Therefore, \((A\odot^{\alpha}\mathrm{diff}(A\overset{\alpha}{\rightarrow}_{\mathrm{ann}}A^{\prime}))\odot^{\beta}\mathrm{diff}(A\overset{\beta}{\rightarrow}_{\mathrm{ann}}A^{\prime\prime})=(A\odot^{\beta}\mathrm{diff}(A\overset{\beta}{\rightarrow}_{\mathrm{ann}}A^{\prime\prime}))\odot^{\alpha}\mathrm{diff}(A\overset{\alpha}{\rightarrow}_{\mathrm{ann}}A^{\prime})=A^{\prime\prime\prime}\). By proposition 7, \(A^{\prime}\overset{\beta}{\rightarrow}_{\mathrm{ann}}A^{\prime\prime\prime}\) and \(A^{\prime\prime}\overset{\alpha}{\rightarrow}_{\mathrm{ann}}A^{\prime\prime\prime}\) hold for such \(A^{\prime\prime\prime}\).

Combining proposition 3 with proposition 8 by **ProgAnn**, the square property holds.

**Lemma 1** (Square Property).: _Whenever \(t:(C_{P},A_{P})\overset{\alpha}{\rightarrow}(C_{Q},A_{Q})\), \(u:(C_{P},A_{P})\overset{\beta}{\rightarrow}(C_{R},A_{R})\), and \(t\ \iota\ u\), then there are cofinal transitions \(u^{\prime}:(C_{Q},A_{Q})\overset{\beta}{\rightarrow}(C_{S},A_{S})\) and \(t^{\prime}:(C_{R},A_{R})\overset{\alpha}{\rightarrow}(C_{S},A_{S})\)._

**Backward Transitions are Independent (BTI).** BTI is useful for reversibility because an available backward transition does not depend on any other backward transition. In CRIL, a label of \(\mathit{LTSI}_{\mathit{CRIL}}\) gives the information to establish BTI.

**Lemma 2** (Backward Transitions are Independent).: _Whenever \(t:(C_{P},A_{P})\overset{\underline{a}}{\rightarrow}(C_{Q},A_{Q})\), \(u:(C_{P},A_{P})\overset{\underline{b}}{\rightarrow}(C_{R},A_{R})\), and \(t\neq u\), then \(t\ \iota\ u\)._

Proof.: Assume \(A_{P}=(V,E_{R},E_{W})\), \(a=(p_{a},Rd_{a},Wt_{a})\), and \(b=(p_{b},Rd_{b},Wt_{b})\). Let \(v_{a}=(p_{a},\mathsf{max}_{p_{a}}(V))\) and \(v_{b}=(p_{b},\mathsf{max}_{p_{b}}(V))\). Assume \(p_{a}\preceq p_{b}\). Then \(p_{a}=p_{b}\) holds from the operational semantics, and \(p_{a}=p_{b}\) derives \(t=u\), which contradicts \(t\neq u\). Therefore, \(p_{a}\not\preceq p_{b}\) holds; similarly, \(p_{b}\not\preceq p_{a}\) also holds. Assume \(Rd_{a}\cap Wt_{b}\neq\varnothing\). Then there exists \(r\in Rd_{a}\cap Wt_{b}\). If \(r\in Wt_{a}\), then \(\mathsf{last}(r,E_{W})=v_{a}\) and \(\mathsf{last}(r,E_{W})=v_{b}\); therefore \(p_{a}=p_{b}\), which contradicts \(p_{a}\not\preceq p_{b}\). If \(r\not\in Wt_{a}\), then \(\mathsf{last}(r,E_{W})\overset{r}{\dashrightarrow}v_{a}\in E_{R}\), and \(r\in Wt_{b}\) derives \(\mathsf{last}(r,E_{W})=v_{b}\). Therefore \(v_{b}\overset{r}{\dashrightarrow}v_{a}\in E_{R}\); however, this contradicts that no edges go out from \(v_{b}\), which is derived from \(u\). Therefore \(Rd_{a}\cap Wt_{b}=\varnothing\); similarly, \(Rd_{b}\cap Wt_{a}=\varnothing\) also holds.

**Well-Foundedness (WF).** For a backward transition \((C,A)\overset{\underline{a}}{\rightarrow}(C^{\prime},A^{\prime})\), the number of nodes of \(A^{\prime}\) is strictly less than that of \(A\).
Since the number of nodes of an annotation DAG is finite, it is not possible to remove nodes infinitely.

**Coinitial Propagation of Independence (CPI).** Given a commuting square with independence at one corner, CPI allows us to deduce independence between coinitial transitions at the other three corners.

**Lemma 3** (Coinitial Propagation of Independence).: _Suppose \(t:(C_{P},A_{P})\overset{\alpha}{\rightarrow}(C_{Q},A_{Q})\), \(u:(C_{P},A_{P})\overset{\beta}{\rightarrow}(C_{R},A_{R})\), \(u^{\prime}:(C_{Q},A_{Q})\overset{\beta}{\rightarrow}(C_{S},A_{S})\), \(t^{\prime}:(C_{R},A_{R})\overset{\alpha}{\rightarrow}(C_{S},A_{S})\), and \(t\ \iota\ u\). Then \(u^{\prime}\ \iota\ t\)._

Proof.: \(t\ \iota\ u\) implies \(\alpha\ \iota_{\mathrm{lab}}\ \beta\). Since \(\beta\ \iota_{\mathrm{lab}}\ \alpha\), \(u^{\prime}\ \iota\ t\).

#### 3.3.2 Events

The properties above make \(\mathit{LTSI}_{\mathit{CRIL}}\) pre-reversible. Next, we check whether \(\mathit{LTSI}_{\mathit{CRIL}}\) can derive events for establishing reversibility. Following [14], events in \(\mathit{LTSI}_{\mathit{CRIL}}\) are derived as an equivalence over transitions.

**Definition 8**.: _Let \(\sim\) be the smallest equivalence relation on transitions satisfying: if \(t:(C_{P},A_{P})\overset{\alpha}{\rightarrow}(C_{Q},A_{Q})\), \(u:(C_{P},A_{P})\overset{\beta}{\rightarrow}(C_{R},A_{R})\), \(u^{\prime}:(C_{Q},A_{Q})\overset{\beta}{\rightarrow}(C_{S},A_{S})\), \(t^{\prime}:(C_{R},A_{R})\overset{\alpha}{\rightarrow}(C_{S},A_{S})\), and \(t\ \iota\ u\), then \(t\sim t^{\prime}\). The equivalence classes of forward transitions, \([(C_{P},A_{P})\overset{a}{\rightarrow}(C_{Q},A_{Q})]\), are the events. The equivalence classes of backward transitions, \([(C_{P},A_{P})\overset{\underline{a}}{\rightarrow}(C_{Q},A_{Q})]\), are the reverse events._

Given \(\gamma=\alpha_{1}\cdots\alpha_{n}\in(\mathsf{Lab}\uplus\underline{\mathsf{Lab}})^{*}\), a sequence of transitions \((C_{0},A_{0})\overset{\alpha_{1}}{\rightarrow}\cdots\overset{\alpha_{n}}{\rightarrow}(C_{n},A_{n})\) is written as \(s:(C_{0},A_{0})\overset{\gamma}{\rightarrow}_{*}(C_{n},A_{n})\). Since the transitions of program configurations \(\overset{\alpha}{\rightarrow}_{\mathrm{prog}}\) in \(\mathit{LTSI}_{\mathit{CRIL}}\) have no control for reversibility, events are substantially derived from the operations of annotation DAGs.

**Definition 9**.: _Let \(\sim_{\mathrm{ann}}\) be the smallest equivalence relation over operations of annotation DAGs satisfying: if \(o_{1}:A_{P}\overset{\alpha}{\rightarrow}_{\mathrm{ann}}A_{Q}\), \(o_{2}:A_{P}\overset{\beta}{\rightarrow}_{\mathrm{ann}}A_{R}\), \(o^{\prime}_{2}:A_{Q}\overset{\beta}{\rightarrow}_{\mathrm{ann}}A_{S}\), \(o^{\prime}_{1}:A_{R}\overset{\alpha}{\rightarrow}_{\mathrm{ann}}A_{S}\), and \(\alpha\ \iota_{\mathrm{lab}}\ \beta\), then \(o_{1}\sim_{\mathrm{ann}}o^{\prime}_{1}\). \([A\overset{a}{\rightarrow}_{\mathrm{ann}}A^{\prime}]_{\mathrm{ann}}\) and \([A\overset{\underline{a}}{\rightarrow}_{\mathrm{ann}}A^{\prime}]_{\mathrm{ann}}\) are the forward and backward equivalence classes by \(\sim_{\mathrm{ann}}\)._

**Proposition 9**.: _For \(t:(C_{P},A_{P})\overset{\alpha}{\rightarrow}(C_{Q},A_{Q})\) and \(t^{\prime}:(C_{R},A_{R})\overset{\alpha}{\rightarrow}(C_{S},A_{S})\), the following holds:_
_\(t\sim t^{\prime}\) iff \(o\sim_{\mathrm{ann}}o^{\prime}\) and \(\exists\gamma.\ (C_{P},A_{P})\overset{\gamma}{\rightarrow}_{*}(C_{R},A_{R})\), where \(o:A_{P}\overset{\alpha}{\rightarrow}_{\mathrm{ann}}A_{Q}\) and \(o^{\prime}:A_{R}\overset{\alpha}{\rightarrow}_{\mathrm{ann}}A_{S}\)._

Intuitively, operations on annotation DAGs are independent if they add or remove nodes and edges at unrelated places. If \(o_{1}\sim_{\mathrm{ann}}o_{2}\), then \(o_{1}\) and \(o_{2}\) add or remove the same fragment of annotation DAGs to or from nodes of the same causality. In \(\mathit{LTSI}_{\mathit{CRIL}}\), this equivalence over operations of annotation DAGs is considered as an _event_. This shows that events for reversibility are consistently defined over \(\mathit{LTSI}_{\mathit{CRIL}}\), meaning the operational semantics is detailed enough to give the **IRE** property below, which is necessary for our objectives.

**Independence Respects Events (IRE)**

**Lemma 4** (Independence Respects Events).: _Suppose \(t\sim t^{\prime}\ \iota\ u\). Then \(t\ \iota\ u\)._

Proof.: If \(t\sim t^{\prime}\), then \(t\) has the same label as \(t^{\prime}\). Hence, \(t\ \iota\ u\).

#### 3.3.3 Causal Safety and Causal Liveness

Let \(\sharp(s,[A\overset{a}{\rightarrow}A^{\prime}]_{\mathrm{ann}})\) be the number of occurrences of transitions \(t\) in \(s\) such that \(t\in[(C,A)\overset{a}{\rightarrow}(C^{\prime},A^{\prime})]\), minus the number of occurrences of transitions \(t\) in \(s\) such that \(t\in[(C,A)\overset{\underline{a}}{\rightarrow}(C^{\prime},A^{\prime})]\). Using the result of [14], the properties **SP** (Lemma 1), **BTI** (Lemma 2), **WF**, **CPI** (Lemma 3), and **IRE** (Lemma 4) make **Causal Safety (CS)** and **Causal Liveness (CL)** hold. Since the causality is stored in the annotation DAGs, the properties can be stated in \(\mathit{LTSI}_{\mathit{CRIL}}\) as below.

**Theorem 1** (Causal Safety).: _Whenever \((C_{P},A_{P})\overset{a}{\rightarrow}(C_{Q},A_{Q})\), \(s:(C_{Q},A_{Q})\overset{\gamma}{\rightarrow}_{*}(C_{R},A_{R})\) with \(\sharp(s,[A_{P}\overset{a}{\rightarrow}A_{Q}]_{\mathrm{ann}})=0\), and \((C_{S},A_{S})\overset{a}{\rightarrow}(C_{R},A_{R})\), then \((C_{P},A_{P})\overset{a}{\rightarrow}(C_{Q},A_{Q})\ \iota\ u\) for all \(u\) in \(s\) such that \(\sharp(s,[u]_{\mathrm{ann}})>0\)._

**Theorem 2** (Causal Liveness).: _Whenever \((C_{P},A_{P})\overset{a}{\rightarrow}(C_{Q},A_{Q})\), \(s:(C_{Q},A_{Q})\overset{\gamma}{\rightarrow}_{*}(C_{R},A_{R})\) with \(\sharp(s,[A_{P}\overset{a}{\rightarrow}A_{Q}]_{\mathrm{ann}})=0\), and \((C_{P},A_{P})\overset{a}{\rightarrow}(C_{Q},A_{Q})\ \iota\ t\) for all \(t:(C,A)\overset{b}{\rightarrow}(C^{\prime},A^{\prime})\) in \(s\) such that \(\sharp(s,[A\overset{b}{\rightarrow}A^{\prime}]_{\mathrm{ann}})>0\), then we have \((C_{S},A_{S})\overset{a}{\rightarrow}(C_{R},A_{R})\) with \((C_{P},A_{P})\overset{a}{\rightarrow}(C_{Q},A_{Q})\sim(C_{S},A_{S})\overset{a}{\rightarrow}(C_{R},A_{R})\)._

Based on these properties, \(\mathit{LTSI}_{\mathit{CRIL}}\) can be implemented correctly, with the pointers for processes managed by a process map along with annotation DAGs, as the operational semantics of CRIL.

## 4 Example: Airline ticketing

We show a version of the airline ticketing program [6] in CRIL in figure 6.
Two agents attempt to sell three seats of an airline. This program has a data race on the variable \(\mathtt{seats}\), which holds the number of remaining seats, because the two agents may check the remaining seats simultaneously before making sales. Since the data race does not always happen, it is useful to be able to roll back to the point where the check of the remaining seats was insufficient. Here, \(\mathtt{agent1}\) and \(\mathtt{agent2}\) record the number of tickets sold by each agent.

Figure 6: An airline ticketing program in CRIL

| node | basic block | seats | agent1 | agent2 |
|---|---|---|---|---|
| \((\varepsilon,0)\) | \(b_{1}\) | 3 | 0 | 0 |
| \((\varepsilon,1)\) | \(b_{2}\) | 3 | 0 | 0 |
| \((1,0)\) | \(b_{4}\) | 3 | 0 | 0 |
| \((2,0)\) | \(b_{9}\) | 3 | 0 | 0 |
| \((1,1)\) | \(b_{5}\) | 3 | 0 | 0 |
| \((1,2)\) | \(b_{6}\) | 2 | 0 | 0 |
| \((1,3)\) | \(b_{7}\) | 2 | 1 | 0 |
| \((2,1)\) | \(b_{10}\) | 2 | 1 | 0 |
| \((2,2)\) | \(\mathbf{b_{11}}\) | 1 | 1 | 0 |
| \((2,3)\) | \(b_{12}\) | 1 | 1 | 1 |
| \((2,4)\) | \(\mathbf{b_{10}}\) | 1 | 1 | 1 |
| \((1,4)\) | \(\mathbf{b_{5}}\) | 1 | 1 | 1 |
| \((2,5)\) | \(b_{11}\) | 0 | 1 | 1 |
| \((1,5)\) | \(b_{6}\) | -1 | 1 | 1 |
| \((2,6)\) | \(b_{12}\) | -1 | 1 | 2 |
| \((2,7)\) | \(b_{10}\) | -1 | 1 | 2 |
| \((2,8)\) | \(b_{13}\) | -1 | 1 | 2 |
| \((1,6)\) | \(b_{7}\) | -1 | 2 | 2 |
| \((1,7)\) | \(b_{5}\) | -1 | 2 | 2 |
| \((1,8)\) | \(b_{8}\) | -1 | 2 | 2 |
| \((\varepsilon,2)\) | \(b_{2}\) | -1 | 2 | 2 |
| \((\varepsilon,3)\) | \(b_{3}\) | -1 | 2 | 2 |

Table 3: A faulty execution

Table 3 shows a forward execution that ends with \(\mathtt{seats}=-1\). Figure 7 is the annotation DAG when the execution terminates at 'end main' in \(b_{3}\). To investigate the cause of the data race, we focus on the edges labeled with \(\mathtt{seats}\). The solid edges indicate that \(\mathtt{seats}\) is written at \((\varepsilon,0)\), \((1,2)\), \((2,2)\), \((2,5)\), and \((1,5)\). In particular, the \(\mathtt{seats}\) defined at \((2,2)\) is used for updates by both processes 1 and 2, causing the data race. (The steps in bold are involved in the problem.) To resolve the data race, each value of \(\mathtt{seats}\) should be checked exactly once, except for the last value of \(\mathtt{seats}\).

Figure 8 shows the airline program where \(\mathtt{sub1}\) and \(\mathtt{sub2}\) are replaced by versions with the V-P operations. The parameter of the V-P operations works as a semaphore, making the check and update of \(\mathtt{seats}\) a critical region. Figure 9 is the annotation DAG of the forward execution with \(\mathtt{sub1}\) run first once and then \(\mathtt{sub2}\) run twice. Process 1 executes \(b^{\prime}_{5}\), setting \(\mathtt{semaphore}=1\) at \((1,1)\), first. (\(\mathtt{sem}\) stands for \(\mathtt{semaphore}\) in the figure.) This prevents process 2 from executing \(b^{\prime}_{10}\) at \((2,1)\), since \(\mathtt{semaphore}\) must be 0. Backwards, \(b^{\prime}_{14}\) and \(b^{\prime}_{15}\) work as V \(\mathtt{semaphore}\). In the backward execution, the order of basic blocks is stored in the annotation DAG. It works as follows:

* The sequence of \(\overset{\mathtt{sem}}{\longrightarrow}\) edges alternates between V and P operations in the forward execution: \(\bot\overset{\mathtt{sem}}{\longrightarrow}(1,1)\) is by \(b^{\prime}_{5}\) and \((1,1)\overset{\mathtt{sem}}{\longrightarrow}(1,3)\) by \(b^{\prime}_{14}\), \(\cdots\), \((1,3)\overset{\mathtt{sem}}{\longrightarrow}(2,1)\) by \(b^{\prime}_{10}\), \((2,1)\overset{\mathtt{sem}}{\longrightarrow}(2,3)\) by \(b^{\prime}_{15}\), \(\cdots\).
* When \(\mathtt{seats}=0\), \(\mathtt{semaphore}\) is released with no operation: \((2,7)\overset{\mathtt{sem}}{\longrightarrow}(1,5)\overset{\mathtt{sem}}{\longrightarrow}(1,6)\) by \(b^{\prime}_{5}\) and \(b^{\prime}_{8}\), and \((1,6)\overset{\mathtt{sem}}{\longrightarrow}(2,9)\overset{\mathtt{sem}}{\longrightarrow}(2,10)\) by \(b^{\prime}_{10}\) and \(b^{\prime}_{13}\).
* In backward execution, \(\mathtt{sub2}\) is ready to reverse, since \((2,10)\) is \(\mathsf{last}(\mathtt{sem},E_{W})\).
* Then, \(\mathtt{sub1}\) is done with no operation, and \((2,7)\) is the P in \(\mathtt{sub2}\). The order of V and P is kept until reaching \(\bot\).

Figure 8: An airline ticketing program with a semaphore

Figure 9: The annotation DAG after the forward execution with the semaphore

## 5 Concluding remarks

We have proposed CRIL as a reversible concurrent intermediate language. CRIL is an extension of RIL [17] that enables running multiple subroutines as processes in parallel. CRIL is intended to be fairly low-level, in that each instruction is at a level similar to three-address code, so as to mediate the translation from a high-level program to machine-oriented code. The operational semantics of CRIL, defined as \(\mathit{LTSI}_{\mathit{CRIL}}\), is shown to have the properties of Causal Safety and Causal Liveness under the independence of concurrent processes and shared memory updates. By the result of [14], \(\mathit{LTSI}_{\mathit{CRIL}}\) also satisfies other properties: the Parabolic Lemma, Causal Consistency, Unique Transition, and Independence of Diamonds.

As related work, [2] provides a compiler from ROOPL++ to PISA [23] with no intermediate language, where the translation from an object-oriented source program to the low-level PISA code is a big task. [7] proposes an annotation for a concurrent imperative program while executing forward, where the annotation is attached directly to the source program for reversing the execution; [8] investigates its properties of reversibility. CRIL uses a similar idea to Hoey's, but CRIL is at a rather lower level, providing a finer granularity for detailed analysis in translation, such as optimization. [9] presents a collection of simple stack machines with a fork and merge mechanism, where the causality is embedded in the runtime.

As for future work, we have so far focused only on the fundamental properties. We will investigate further how more properties of reversibility contribute to behavioral analysis for concurrent programs. Currently, the dependency on the heap memory \(\mathbb{M}\) is treated as one memory resource; more fine-grained dependency tracking is necessary for practical use. Deriving optimization techniques in the front-end part of compilers via a reversible version of SSA, such as RSSA [18], for concurrent imperative programs is also future work. CRIL is based on the shared memory model; incorporating channel-based communication is further future work, to target message-passing models such as Erlang [13].

AcknowledgementWe thank Dr. Irek Ulidowski of the University of Leicester for giving valuable suggestions on the draft. We also thank Prof. Nobuko Yoshida of the University of Oxford, and Prof. Hiroyuki Seki, Prof. Koji Nakazawa, and Prof. Yuichi Kaji of Nagoya University for fruitful discussions. We thank the anonymous reviewers for providing fruitful comments. This work is supported by JSPS Kakenhi 21H03415.
We propose a method that uses a reversible intermediate language for translating a concurrent programming language into another low-level concurrent programming language. The aim is to translate a high-level concurrent programming language into a low-level one while preserving reversibility. This approach is common, and is used when compiling a source program into an object-code program, because an intermediate language enables the generation of object code close to machine code. Intermediate languages are also particularly useful because they allow behavioral analysis and optimization to be decomposed into stages. We propose CRIL (Concurrent Reversible Intermediate Language) as an extension of the RIL used by Mogensen: CRIL extends RIL, a reversible language, with multi-threaded process invocation and synchronization primitives based on P-V operations.
2309.08023
USM-SCD: Multilingual Speaker Change Detection Based on Large Pretrained Foundation Models
We introduce a multilingual speaker change detection model (USM-SCD) that can simultaneously detect speaker turns and perform ASR for 96 languages. This model is adapted from a speech foundation model trained on a large quantity of supervised and unsupervised data, demonstrating the utility of fine-tuning from a large generic foundation model for a downstream task. We analyze the performance of this multilingual speaker change detection model through a series of ablation studies. We show that the USM-SCD model can achieve more than 75% average speaker change detection F1 score across a test set that consists of data from 96 languages. On American English, the USM-SCD model can achieve an 85.8% speaker change detection F1 score across various public and internal test sets, beating the previous monolingual baseline model by 21% relative. We also show that we only need to fine-tune one-quarter of the trainable model parameters to achieve the best model performance. The USM-SCD model exhibits state-of-the-art ASR quality compared with a strong public ASR baseline, making it suitable to handle both tasks with negligible additional computational cost.
Guanlong Zhao, Yongqiang Wang, Jason Pelecanos, Yu Zhang, Hank Liao, Yiling Huang, Han Lu, Quan Wang
2023-09-14T20:46:49
http://arxiv.org/abs/2309.08023v3
# USM-SCD: Multilingual Speaker Change Detection Based on Large Pretrained Foundation Models

###### Abstract

We introduce a multilingual speaker change detection model (USM-SCD) that can simultaneously detect speaker turns and perform ASR for 96 languages. This model is adapted from a speech foundation model trained on a large quantity of supervised and unsupervised data, demonstrating the utility of fine-tuning from a large generic foundation model for a downstream task. We analyze the performance of this multilingual speaker change detection model through a series of ablation studies. We show that the USM-SCD model can achieve more than 75% average speaker change detection F1 score across a test set that consists of data from 96 languages. On American English, the USM-SCD model can achieve an 85.8% speaker change detection F1 score across various public and internal test sets, beating the previous monolingual baseline model by 21% relative. We also show that we only need to fine-tune one-quarter of the trainable model parameters to achieve the best model performance. The USM-SCD model exhibits state-of-the-art ASR quality compared with a strong public ASR baseline, making it suitable to handle both tasks with negligible additional computational cost.

Guanlong Zhao, Yongqiang Wang, Jason Pelecanos, Yu Zhang, Hank Liao, Yiling Huang, Han Lu, Quan Wang

Google LLC, USA

{guanlongzhao,yqw,pelecanos,ngyuzh,hankliao,yilinghuang,luha,quanw}@google.com

_Index Terms_: Speaker change detection, foundation model

## 1 Introduction

Speaker change detection (SCD) [1] is the process of identifying the speaker turn points in a multi-speaker audio stream. SCD has broad applications in enhancing speaker diarization accuracy [2, 3], improving Automatic Speech Recognition (ASR) quality [4], generating line breaks in captions to boost readability and accessibility [5], and augmenting textual prompts for multi-modal large language models (LLMs) [6].

Conventionally, SCD is achieved by using a neural network to map acoustic features or speaker embeddings [7, 8, 9] to a frame- or segment-level yes/no speaker change prediction. The neural network is generally trained by minimizing the binary cross entropy loss between the ground-truth SCD labels and the predictions. These conventional approaches have various limitations. First, they require accurate timing information of the speaker change point, which is difficult to obtain since marking speaker change timestamps is a highly subjective process for human annotators. Second, the methods that use purely acoustic information ignore rich semantic information in the audio. Third, the methods that use speaker embeddings utilize sensitive biometric information that can be exploited for unintended purposes and are sub-optimal from a privacy point of view [10].

A few recent studies [2, 11, 12] explore using ASR-based approaches to detect word-level speaker changes to mitigate the aforementioned issues with conventional models. Xia _et al._ [2] propose an SCD model using a Transformer-Transducer (T-T). Specifically, they augment the text transcription of the spoken utterance with a special speaker turn token <st>, and then train the model to output both regular text tokens and the special speaker turn token. This model does not need accurate timestamps for training since the T-T model is trained in a seq2seq fashion and does not need forced alignment to provide training targets. The model also utilizes both acoustic and linguistic information in the input audio.
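As an illustration of this target construction (our own minimal sketch, not the authors' data pipeline), the augmentation amounts to joining consecutive per-speaker segments with the turn token:

```python
def build_scd_target(turns, st_token="<st>"):
    """Concatenate (speaker, transcript) segments, inserting the speaker
    turn token wherever the speaker changes between adjacent segments."""
    pieces, prev = [], None
    for speaker, text in turns:
        if prev is not None and speaker != prev:
            pieces.append(st_token)
        pieces.append(text.strip())
        prev = speaker
    return " ".join(pieces)

assert build_scd_target(
    [("A", "hello how are you"), ("B", "I am good")]
) == "hello how are you <st> I am good"
```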
As a follow-up of that work, in [11] we propose a training loss that penalizes speaker change false acceptance and false rejection errors in the N-best hypotheses to further enhance performance. Wu _et al._ [12] add an additional SCD module on top of an existing T-T ASR network to optimize the SCD and ASR tasks separately.

Recent advances in self-supervised learning have ushered in a new era for speech tasks. Large pretrained foundation models [13] have led to significant performance improvements in various downstream speech tasks, including emotion recognition [14], language identification [15], voice activity detection [16], and mispronunciation detection [17]. In this work we take advantage of the recent Google Universal Speech Model (USM) [18] framework to build an SCD model that is capable of recognizing speaker changes in 96 languages. In addition, prior ASR-based models are limited by the quantity of supervised SCD data available for individual languages, which lowers their performance; we explore the benefit of using a large quantity of unsupervised and supervised multilingual ASR data for model pretraining. The major contributions of this paper include (1) a 96-language SCD model that significantly outperforms the previous monolingual baseline, and (2) detailed ablation studies of the proposed multilingual SCD model.

## 2 Method

First, we build a pretrained model as the foundation model. We then fine-tune the foundation model with data annotated with speaker changes.

### 2.1 Backbone model

At a high level, the backbone model architecture used in this work consists of a Conformer encoder [19] and a Connectionist Temporal Classification (CTC) [20] decoder. The inputs are mel-spectra features and a one-hot vector representing the language of the utterance. We pass the input features through mean-variance normalization, SpecAugment [21] (only for training), and multiple 2D-convolution layers (denoted as the _feature encoder_) to reduce the input frame rate, similar to the setup in wav2vec 2.0 [22]. We then append a one-hot language embedding to the features. The concatenated features are then projected by a linear input projection layer to match the dimension of the Conformer encoder, which consumes the input projection layer outputs. The Conformer encoder is trained with chunk-wise attention [18]. The output of the Conformer encoder is passed to a linear projection layer, outputting logits that correspond to WordPiece tokens. The model is trained with the CTC loss. We do not use the RNN-T paradigm [23] in this work due to its slow training speed as a result of its auto-regressive nature, which is especially prevalent when training large models with billions of parameters.

### 2.2 Pretraining

There are various pretraining techniques. In this work, we explore both supervised and unsupervised pretraining methods.

#### 2.2.1 BEST-RQ pretraining

We select BEST-RQ [24] as the unsupervised method to pretrain our networks. BEST-RQ provides a simple framework with a small number of hyperparameters for unsupervised training on large-scale unlabeled audio data. BEST-RQ applies a random-projection quantizer to map speech signals to discrete labels, enabling BERT-style pretraining for ASR encoders. The quantizer randomly initializes a matrix and a codebook; it uses the matrix to project the input speech signals and the codebook to find the nearest vector, whose index serves as the label.
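The random-projection quantizer can be sketched in a few lines of NumPy. The dimensions and normalization details here are illustrative assumptions, not the exact BEST-RQ configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_PROJ, CODEBOOK_SIZE = 128, 16, 8192      # illustrative sizes

# Both the projection matrix and the codebook are initialized randomly
# once and then frozen for the whole pretraining run.
proj = rng.normal(size=(D_IN, D_PROJ))
codebook = rng.normal(size=(CODEBOOK_SIZE, D_PROJ))
codebook /= np.linalg.norm(codebook, axis=-1, keepdims=True)

def quantize(frames):                            # frames: [T, D_IN]
    z = frames @ proj                            # random dimension reduction
    z /= np.linalg.norm(z, axis=-1, keepdims=True) + 1e-8
    dists = np.linalg.norm(z[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=-1)                 # [T] discrete pretraining labels
```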
The pretraining process masks the speech signals and feeds them to the ASR encoder, which learns to predict the labels of the masked segments. The random projection performs dimension reduction for the speech signals, while the random codebook provides an approximated discrete representation of the data distribution. Both the randomly initialized matrix and the codebook are fixed during the pretraining process. In this study, the encoder in the BEST-RQ system employs the same model architecture as the Conformer encoder described in Sec. 2.1.

#### 2.2.2 ASR pretraining

For supervised pretraining, we warm-start the backbone model's Conformer encoder from the BEST-RQ model's encoder and fine-tune it on ASR data.

### 2.3 SCD fine-tuning

For the SCD task, we fine-tune the pretrained model with speaker change data, and we refer to this type of model as **USM-SCD**. We warm-start the backbone model's Conformer encoder from a pretrained model's encoder. The decoder projection layer is always randomly initialized. The training targets are WordPiece tokens augmented with speaker change annotations. To create training targets, we add a special speaker change token <st> between two different speakers' transcripts (e.g., "hello how are you <st> I am good <st>") to model speaker changes during training. Compared with audio-only SCD models [8], this model may more directly utilize the language semantics as a signal for speaker segmentation. For inference, we perform ASR decoding with the SCD model and identify the speaker change tokens. We use the timestamps of the predicted speaker turn tokens in the evaluation.

### 2.4 Speaker change token posterior scaling

The speaker change tokens are relatively scarce in the training data. To encourage the model to output speaker change tokens, we can apply a scaling factor to the posterior probability of the speaker change token \(p(\texttt{<st>}|\mathbf{X})\) during decoding, where \(\mathbf{X}\) is the model input. Assuming _greedy_ decoding during inference, this can be achieved by multiplying \(p(\texttt{<st>}|\mathbf{X})\) by a constant factor \(\lambda>1\), i.e., \(p^{\prime}(\texttt{<st>}|\mathbf{X})=\lambda\cdot p(\texttt{<st>}|\mathbf{X})\). Effectively, this increases the posterior probability of the <st> token. _Greedy_ decoding simplifies the process since we do not need to redistribute the rest of the probability mass as a result of the scaling. In practice, we operate on the log posterior probability rather than on the raw posterior probability to avoid numerical issues; hence we have \(\log(p^{\prime}(\texttt{<st>}|\mathbf{X}))=\log(\lambda)+\log(p(\texttt{<st>}|\mathbf{X}))\).

### 2.5 SCD evaluation metrics

For SCD evaluation, we compute precision (the percentage of model predictions that are true speaker changes), recall (the percentage of ground-truth speaker changes that are correctly predicted by the model), and the F1 score (the harmonic mean of precision and recall). We treat the F1 score as a more comprehensive quality metric than the precision or recall rate alone. To compute these metrics, we align predicted and ground-truth speaker changes based on their timestamps, i.e., correct predictions should overlap with the ground-truth labels. Please refer to Fig. 1 for an example. For a detailed description of these metrics, please see Sec. 3 of [11].

## 3 Experimental Setup

### 3.1 Data

We use various supervised and unsupervised short- and long-form data across model training and evaluation.
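As a concrete illustration of the scaling in Sec. 2.4 before detailing the datasets: under greedy decoding, the scaling reduces to a one-line adjustment of the log-posteriors. The following NumPy sketch is ours, with the value of \(\lambda\) purely illustrative:

```python
import numpy as np

def greedy_decode_with_st_scaling(log_probs, st_id, lam=1.5):
    """Greedy decoding with the <st> log-posterior boosted by log(lam),
    as in Sec. 2.4: log p'(<st>|X) = log(lam) + log p(<st>|X)."""
    scaled = log_probs.copy()              # [num_frames, vocab_size]
    scaled[:, st_id] += np.log(lam)        # lam > 1 encourages <st> outputs
    return scaled.argmax(axis=-1)          # per-frame best tokens (before CTC collapse)
```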
All internal datasets are collected according to Google's Privacy Principles [25] and abide by Google AI Principles [26].

#### 3.1.1 Train

**YT-56-U**: This dataset is built by first randomly collecting three million hours of audio from "speech-heavy" user-uploaded YouTube videos, filtered by user-provided language tags. The three million hours of audio is then further segmented by a Voice Activity Detection (VAD) model and non-speech segments are removed. This yields approximately one million hours of unlabeled audio data. Later, we use a language identification model to select data that corresponds to 56 languages from that unlabeled audio data. _We use this dataset to pretrain the BEST-RQ model_.

**VS-SUP**: We use a Voice Search dataset consisting of 85 language locales to pretrain the ASR model. There are a total of 1.2 billion short utterances (average duration 4 seconds) from Voice Search traffic. The data is anonymized and human transcribed. _No_ speaker change information is available for this dataset. _We use this dataset for ASR pretraining_ because short-form supervised ASR training data is significantly larger in volume than long-form data with speaker change labels.

Figure 1: Illustration of the SCD scoring mechanism for computing the precision, recall, and F1. "Spk A-C" stands for speaker annotations on a conversational utterance. "Ref SC" is the reference speaker change intervals. "Hyp SC" is the predicted speaker change. "Score" shows the scoring decision of each prediction and reference.

**YT-SUP**: This is a dataset with audio from YouTube videos that has text transcripts and speaker change labels from 96 languages. We group consecutive segments into a longer unit similar to [27]. The maximum sequence length for training is 30 seconds. The total quantity of training data is 108k hours, ranging from three hours (Paraguayan Guarani) to 4k hours (Brazilian Portuguese) across locales. _We use this dataset to fine-tune the USM-SCD model_.

#### 3.1.2 Evaluation

**YT-96-Eval**: For all languages, we have in total 1,400 hours of internal YouTube long-form evaluation data (no overlap with **YT-56-U** or **YT-SUP**) annotated with text transcriptions and speaker changes. On average, we have 15.2 (std: 4.5) hours of evaluation data per language and 5 speaker changes per minute of audio in this test set.

**En-US-Eval**: For American English (En-US), we have additional internal and public test sets, see Table 1. For the first DIHARD challenge evaluation subset (DIHARD1), we remove all YouTube-derived utterances to avoid evaluating on utterances that might have appeared during training. For Fisher, we randomly sample a subset of 172 utterances for testing1. "Outbound" and "Inbound" are vendor-provided call center telephone conversations between call center operators and customers, initiated by the call center and by customers, respectively. "Outbound" and "Inbound" were previously used in [2, 3, 11].

Footnote 1: [https://github.com/google/speaker-id/blob/master/publications/Scdloss/eval/fisher.txt](https://github.com/google/speaker-id/blob/master/publications/Scdloss/eval/fisher.txt)

### Modeling details

We extract 128-dim log-mel filter-bank energies from a 32ms window with a 10ms frame shift as the raw input feature to the model. We use a WordPiece model that has a vocabulary size of 16,384.
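As a rough illustration of this front end, the 128-dim log-mel extraction can be sketched with torchaudio; a 16 kHz sample rate is an assumption (it is not stated in the text), under which the 32ms window and 10ms shift become 512 and 160 samples:

```python
import torch
import torchaudio

# 128 mel bins, 32ms window, 10ms hop (assuming 16 kHz audio)
mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=16000, n_fft=512, win_length=512, hop_length=160, n_mels=128)

waveform = torch.randn(1, 16000)            # one second of (dummy) audio
features = torch.log(mel(waveform) + 1e-6)  # log-mel filter-bank energies
# features: (1, 128, num_frames); transpose to frames x 128 for the model
```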
The feature encoder contains two 2D-convolution layers of shape \(3\times 3\times 1\times 128\) and \(3\times 3\times 128\times 32\) (time\(\times\)frequency\(\times\)input-channel\(\times\)output-channel), respectively. The stride size of both convolution layers is 2 on both the time and frequency dimensions. The feature encoder thus reduces the frame rate 4-fold, from one frame every 10ms to one every 40ms, resulting in a 1,024-dim feature vector. The multi-headed self-attention in the Conformer layers has 8 attention heads. The chunk-wise attention in the Conformer encoder has an 8s context. The convolution kernel size is 5. We run experiments on a model with 1.84 billion parameters, where we have 32 Conformer layers and each layer has 1,536 dimensions. We use the Adafactor optimizer [33] with a transformer learning rate schedule. For fine-tuning tasks, we optimize the encoder and decoder with separate optimizers and learning rate schedules, given that the encoder alone has been pretrained. For the encoder, we use a peak learning rate of \(3\times 10^{-4}\) with 6k warm-up steps, while for the decoder projection layer we use a peak learning rate of \(5\times 10^{-4}\) and 2k warm-up steps. Training is done with a global batch size of 4,096 on TPUs [34]. We monitor the training process on a held-out development set. We train all models for around 40k steps. Empirically [35], fine-tuning from a well-trained foundation model only requires a small fraction of the training steps needed when training from scratch. In this study, we observe that the model can converge to a reasonable state with as few as 5k training steps, which takes about 6.5 hours of training time with the aforementioned setup.

## 4 Results

We compute the WER (for ASR) and the SCD precision, recall, and F1 rates as quality metrics. For all evaluations, unless otherwise specified, we use greedy search and aggregate the evaluation data from all 96 languages (**YT-96-Eval**) to compute the final scores. For WER, we remove speaker change tokens from the scoring.

### Overall system comparisons on **YT-96-Eval**

We first study the choice of the pretrained model. The results are summarized in the first two columns of Table 2. The SCD models fine-tuned from the two pretrained models yield comparable SCD F1 scores (0.3% relative difference), suggesting that they are comparable in terms of detecting speaker change events. The SCD model fine-tuned from the ASR model has significantly better WER (30.1 vs 34.3 across 96 languages; a 12.2% relative reduction), demonstrating the benefit of ASR pretraining on the word-level SCD task. Next, we study the trade-off between ASR and SCD. We fine-tune from the ASR-pretrained checkpoint to construct _ASR Pretrain w/o SCD_, which does not have the speaker change token in the training target, resulting in a WER of 28.8%. Therefore, we trade a 4.5% relative WER regression to add the SCD capability to the ASR model. To provide additional context, we compare the WER of the USM-SCD model with a strong publicly available ASR model, Whisper [36] (large-v2, 1.55B parameters), which was trained on more than 400k hours of transcribed ASR data. We select the 21 top-performing languages from Whisper (those achieving WER lower than 40% on **YT-96-Eval**), and the results are shown in the last column of Table 2. We observe that although adding the SCD capability to an ASR model hurts the WER, the resulting USM-SCD model still has better ASR performance on YouTube data than Whisper.
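As an implementation note, the WER convention above (speaker change tokens removed before scoring) is easy to reproduce; a minimal sketch using the jiwer package (one common open-source WER implementation, not necessarily the one used here) could read:

```python
import jiwer  # pip install jiwer

def wer_without_scd(ref: str, hyp: str, st_token: str = "<st>") -> float:
    """WER with speaker change tokens stripped from both sides before scoring."""
    strip = lambda s: " ".join(w for w in s.split() if w != st_token)
    return jiwer.wer(strip(ref), strip(hyp))

print(wer_without_scd("hello how are you <st> I am good",
                      "hello how are you <st> I am god"))
```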
### Effect of sub-components to fine-tune

We now study which model parameters to fine-tune. For this experiment, we always fine-tune from the ASR pretrained model given the results in Sec. 4.1. Given that we are using a different data source (i.e., **YT-SUP**) for SCD training and we modify the training targets, we always fine-tune the _feature encoder_, input projection, and decoder projection layers, which consist of 27M trainable parameters.

\begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Test set} & \multirow{2}{*}{Domain} & \multirow{2}{*}{Dur. (h)} & \multicolumn{2}{c}{Average} \\ \cline{4-5} & & & Turns/min & Duration/Ut. (min) \\ \hline AMI [28] & Meeting & 9.1 & 10 & 34 \\ Callhome [29] & Telephone & 1.7 & 19 & 5 \\ DIHARD1 [30] & Mixed & 16.2 & 12 & 9 \\ Fisher [31] & Telephone & 28.7 & 13 & 10 \\ ICSI [32] & Meeting & 2.8 & 13 & 55 \\ Inbound & Telephone & 21.0 & 9 & 5 \\ Outbound & Telephone & 45.6 & 13 & 6 \\ \hline \hline \end{tabular} \end{table}

Table 1: Statistics of additional internal and public En-US test sets.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline & & BEST-RQ Pretrain & ASR Pretrain & ASR Pretrain & Whisper \\ & & w/ SCD & w/ SCD & w/o SCD & large-v2 \\ \hline \multirow{3}{*}{WER} & En-US & 17.1 & **12.6** & **12.6** & 16.2 \\ & 21-lang. & 21.1 & **16.6** & **16.6** & 30.1 \\ & 96-lang. & 34.3 & 30.1 & **28.8** & - \\ \hline \multirow{3}{*}{SCD} & Precision & 80.0 & **82.4** & - & - \\ & Recall & **52.6** & 51.9 & - & - \\ & F1 & 63.5 & **63.7** & - & - \\ \hline \hline \end{tabular} \end{table}

Table 2: Overall system comparisons on **YT-96-Eval**. The _w/ SCD_ systems are fine-tuned from the corresponding pretrained models with speaker change tokens in the training target; the _w/o SCD_ system is trained to perform only ASR.

A preliminary experiment suggests that _only_ fine-tuning the feature encoder, input projection, and decoder projection layers does not converge well. Therefore, we selectively fine-tune certain layers of the Conformer encoder and freeze the rest of the parameters. All models are trained for 40k steps. The results are in Table 3. We observe that optimizing the last 4 layers is significantly better than optimizing the first 4 layers, both in terms of WER and the SCD metrics. Interestingly, optimizing both the first 4 and the last 4 layers (i.e., 8 of 32 layers) gives the best ASR and SCD performance, while only accounting for \(\sim\)26% of the trainable parameters.

### Effect of the speaker change token posterior scaling

Next, we study the effect of the speaker change token posterior scaling factor (cf. Sec. 2.4). Based on the results in Sec. 4.2, we use the model where only the first 4 and last 4 layers of the Conformer encoder are fine-tuned (480M trainable parameters). We run experiments (see Fig. 2) by setting the factor \(\lambda\) from 1.0 to 9.0, with a step size of 1.0. Note that this experiment does not require retraining the model, since the posterior scaling happens during inference. We observe that the posterior scaling does not significantly affect ASR quality, with the maximum WER difference being less than 0.7% _relative_ (i.e., from 30.1% to 30.3%). More importantly, the scaling factor brings large gains in SCD quality. Compared with the baseline configuration without SCD posterior scaling (i.e., a scaling factor of 1.0), the best posterior scaling factor of 5.0 increases the SCD F1 score from 64.6% to 75.3%, a 16.6% relative improvement.
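For concreteness, a minimal sketch of this inference-time scaling under greedy decoding (cf. Sec. 2.4) is given below; the token index is illustrative, and CTC blank removal and repeat collapsing are omitted:

```python
import math
import torch

def greedy_decode_with_scd_scaling(log_probs: torch.Tensor,
                                   st_id: int, lam: float = 5.0) -> torch.Tensor:
    """Greedy decoding with <st> posterior scaling.

    log_probs: (num_frames, vocab_size) log-posteriors from the CTC model.
    st_id: vocabulary index of the <st> token (illustrative).
    lam: scaling factor lambda; log p' = log(lam) + log p for <st> only.
    """
    boosted = log_probs.clone()
    boosted[:, st_id] += math.log(lam)   # boost only the <st> token
    return boosted.argmax(dim=-1)        # CTC collapsing omitted for brevity

# e.g. tokens = greedy_decode_with_scd_scaling(model_log_probs, st_id=16383)
```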
### En-US quality analysis

For En-US, there are additional internal and public datasets that have speaker change labels (Table 1). We evaluate the _USM-SCD_ model fine-tuned from ASR on these datasets. We only fine-tune the first 4 and last 4 Conformer encoder layers, and the SCD posterior scaling factor is set to 5.0 during inference. The per-testset results are summarized in Table 4. We also include the best-performing system from [11] (denoted as _SCD loss_, a 27M-parameter monolingual En-US model) as a comparison. The _SCD loss_ system is trained with an SCD-optimized training loss on a super-set of the En-US portion of **YT-SUP**, with 2k hours of additional training data from other domains. We observe that the _USM-SCD_ system performs much better than the _SCD loss_ system, achieving a 21% relative F1 score improvement. The precision and recall rates increase by 17.0% and 24.8% relative, respectively.

## 5 Discussion and Conclusion

In this work we propose a multilingual SCD model that supports 96 languages. We take advantage of recent advances in large speech foundation models to construct this USM-SCD model and study its properties through a series of ablation studies. We find that ASR pretraining is crucial to model performance. We observe that we only need to fine-tune roughly one-quarter of the trainable parameters to achieve the best overall performance compared to fine-tuning all parameters. We also show that an inference-time SCD token posterior scaling that requires no additional computation can result in a 16.6% relative improvement in the SCD F1 score. Finally, compared with our previous monolingual En-US SCD model, the USM-SCD model outperforms it by 21% in terms of SCD F1 score. Based on benchmarks on TPU v5e [37], the USM-SCD model can run inference 60x faster than real time (batch size 1), demonstrating the application potential of this model. Possible future directions include replacing the CTC architecture with a fast RNN-T implementation and applying the token-level training loss proposed in [11] to further boost model quality. It is also interesting to explore multi-output RNN-T joint networks [38] to decouple the ASR and SCD tasks.

## 6 Acknowledgements

The authors would like to thank Wei Han for the Whisper model evaluation setup, and Olivier Siohan, Parisa Haghani, Ignacio Lopez Moreno, and Pedro Moreno Mengibar for reviewing this work.

\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline Metrics & System & AMI & CallHome & DIHARD1 & Fisher & ICSI & Inbound & Outbound & _Pooled data_ \\ \hline \multirow{2}{*}{WER} & SCD loss & 39.8 & 57.3 & - & 30.6 & 46.1 & - & - & 34.3 \\ & USM-SCD & 25.7 & 44.2 & - & 18.4 & 31.5 & - & - & **21.6** \\ \hline \multirow{2}{*}{Precision} & SCD loss & 79.4 & 82.0 & 78.8 & 82.6 & 77.8 & 72.8 & 75.1 & 77.6 \\ & USM-SCD & 91.6 & 84.6 & 92.9 & 94.7 & 90.2 & 94.4 & 91.9 & **90.8** \\ \hline \multirow{2}{*}{Recall} & SCD loss & 68.1 & 59.1 & 52.4 & 75.7 & 58.7 & 79.2 & 58.7 & 65.2 \\ & USM-SCD & 75.3 & 90.8 & 81.7 & 76.5 & 82.7 & 70.1 & 87.3 & **81.4** \\ \hline \multirow{2}{*}{F1} & SCD loss & 73.3 & 68.7 & 62.9 & 79.0 & 66.9 & 75.9 & 65.9 & 70.9 \\ & USM-SCD & 82.6 & 87.6 & 86.9 & 84.6 & 86.3 & 80.5 & 89.5 & **85.8** \\ \hline \hline \end{tabular} \end{table}

Table 4: En-US results based on **En-US-Eval**. DIHARD1 and In/Outbound do not have ground-truth text transcripts. The last column shows the evaluation metrics computed by pooling all test sets together.
Figure 2: SCD token <st> posterior probability scaling results on **YT-96-Eval**.
We introduce a multilingual speaker change detection model (USM-SCD). The model is derived from a speech foundation model trained on large amounts of supervised and unsupervised data, demonstrating the benefit of fine-tuning a large general-purpose foundation model for a specific downstream task. To analyze the behavior of this multilingual speaker change detection model, we conduct a series of ablation studies. The USM-SCD model achieves an average F1 score of 75% on a test set covering 96 languages. For American English, the USM-SCD model achieves an F1 score of 85.8% across various public and internal test sets, a 21% relative improvement over the monolingual baseline model. In addition, only about one-quarter of the model parameters need to be fine-tuned to achieve the best performance.
2301.13547
Machine learning of evolving physics-based material models for multiscale solid mechanics
In this work we present a hybrid physics-based and data-driven learning approach to construct surrogate models for concurrent multiscale simulations of complex material behavior. We start from robust but inflexible physics-based constitutive models and increase their expressivity by allowing a subset of their material parameters to change in time according to an evolution operator learned from data. This leads to a flexible hybrid model combining a data-driven encoder and a physics-based decoder. Apart from introducing physics-motivated bias to the resulting surrogate, the internal variables of the decoder act as a memory mechanism that allows path dependency to arise naturally. We demonstrate the capabilities of the approach by combining an FNN encoder with several plasticity decoders and training the model to reproduce the macroscopic behavior of fiber-reinforced composites. The hybrid models are able to provide reasonable predictions of unloading/reloading behavior while being trained exclusively on monotonic data. Furthermore, in contrast to traditional surrogates mapping strains to stresses, the specific architecture of the hybrid model allows for lossless dimensionality reduction and straightforward enforcement of frame invariance by using strain invariants as the feature space of the encoder.
I. B. C. M. Rocha, P. Kerfriden, F. P. van der Meer
2023-01-31T10:50:07
http://arxiv.org/abs/2301.13547v1
# Machine learning of evolving physics-based material models for multiscale solid mechanics

###### Abstract

In this work we present a hybrid physics-based and data-driven learning approach to construct surrogate models for concurrent multiscale simulations of complex material behavior. We start from robust but inflexible physics-based constitutive models and increase their expressivity by allowing a subset of their material parameters to change in time according to an evolution operator learned from data. This leads to a flexible hybrid model combining a data-driven encoder and a physics-based decoder. Apart from introducing physics-motivated bias to the resulting surrogate, the internal variables of the decoder act as a memory mechanism that allows path dependency to arise naturally. We demonstrate the capabilities of the approach by combining an FNN encoder with several plasticity decoders and training the model to reproduce the macroscopic behavior of fiber-reinforced composites. The hybrid models are able to provide reasonable predictions of unloading/reloading behavior while being trained exclusively on monotonic data. Furthermore, in contrast to traditional surrogates mapping strains to stresses, the specific architecture of the hybrid model allows for lossless dimensionality reduction and straightforward enforcement of frame invariance by using strain invariants as the feature space of the encoder.

**Keywords:** Concurrent multiscale (FE\({}^{2}\)) modeling, Surrogate modeling, Hybrid learning

## 1 Introduction

Recent advances in materials science and manufacturing techniques are paving the way for the design of materials with highly-tailored microstructures, including metamaterials [1, 2], novel composite material systems [3, 4], printed cementitious materials [5] and multifunctional living materials [6]. The common thread in these new developments is a shift from traditional design focused on tailoring structures to material constraints towards tailoring material microstructures to macroscopic constraints. This shift in turn requires the development of highly-detailed models of material behavior across spatial scales and a shift to virtual structural certification, as trial-and-error design becomes infeasible [7, 8, 9].

Scale bridging has been traditionally performed through a bottom-up approach: physics-based constitutive models at smaller scales are calibrated using experiments and used to perform numerical simulations (using _e.g._ the Finite Element (FE) method) on representative lower-scale domains from which higher-scale physics-based models can be calibrated [10, 11]. However, physics-based constitutive models come with _a priori_ assumptions that often fail to reproduce complex lower-scale behavior [10]. The alternative is to opt for an FE\({}^{2}\) (or Computational Homogenization) approach: lower-scale FE models are embedded at every Gauss point of a higher-scale model and material behavior is directly upscaled with no constitutive assumptions at the higher scale [12, 13, 14]. Yet, the computational cost associated with repeatedly solving a large number of micromodels quickly becomes a bottleneck, in particular for many-query procedures such as design exploration and optimization that require several higher-scale simulations to be performed.
Since the bottleneck of FE\({}^{2}\) lies in computing lower-scale models, a popular approach to reduce computational effort is to substitute the original FE micromodels with either structure-preserving reduced-order models [15, 16, 17, 18, 19, 20, 21] or purely data-driven surrogates [22, 23, 24, 25, 26, 27] trained offline. More recently, Recurrent Neural Networks (RNNs) have become the model of choice, especially for strain path-dependent materials, with a large body of literature dedicated to their use and tuning for different applications [28, 29, 30, 31, 32, 33, 34]. RNNs can reproduce complex long-term time dependencies in material behavior by learning latent representations of the material state, making them fast and flexible surrogates. However, these learned representations are not _a priori_ related to actual thermodynamic internal state variables and the model is therefore poorly interpretable (see [35] for an interesting discussion on the subject). Furthermore, training for path dependency requires sampling from a potentially infinite-dimensional space of arbitrarily-long strain paths. This means training RNNs to reproduce complex material behavior often requires an inordinate amount of data (the _curse of dimensionality_) and their purely data-driven nature limits their ability to extrapolate away from paths seen during training.

In order to address these drawbacks, a growing number of recent works are shifting focus to models with a fusion of data-driven and physics-based components. Inspired by physics-informed neural networks [36], the authors in [37] opt for data-driven models with physics-inspired bias by enforcing thermodynamic principles in a weak sense through an augmented loss function. In a similar vein, the model in [38] learns hyperelasticity by linking together several carefully crafted neural nets to represent quantities with clear physical meaning, improving the interpretability of the resulting model. In [39] the authors extend a similar hyperelastic surrogate with a network that learns the plastic flow direction and the evolution of a yield surface parametrized by a level set function, resulting in a hyperelastic-plastic model with superior extrapolation capabilities. A common thread in the aforementioned approaches, however, is that their learning architectures are heavily dependent on the type of model being learned (_e.g._ hyperelasticity, plasticity), making extensions to other models a convoluted task. In contrast, the authors in [40, 41] propose a surrogate for heterogeneous micromodels constructed by directly employing unmodified versions of the constitutive models used for the micro constituents and using a customized network architecture to infer from data a homogenization operator that combines their responses. Nevertheless, the method employs a highly-specialized iterative online prediction routine requiring extra implementation effort and incurring increased computational overhead compared to traditional surrogates mapping strains to stresses. Finally, in [42, 43, 44] a dictionary of candidate physics-based models is assumed and the role of machine learning shifts instead to that of performing model selection and/or design of experiments.

In this work we explore an alternative approach for constructing hybrid surrogate models for path-dependent multiscale simulations. We start from the premise that existing physics-based models -- _e.g._
the ones used to describe microscale constituents -- are not flexible enough to reproduce macroscale behavior but nonetheless encapsulate crucial physical features such as frame invariance and loading/unloading conditions. It is our aim to avoid learning these features directly from data, as that would require either an excessively large dataset or a highly-specialized learning architecture. We therefore opt for keeping the constitutive model as intact as possible and instead increasing flexibility by allowing some (or all) of its material parameters to evolve in time. The resulting model can be seen in Fig. 1: a data-driven encoder that learns the evolution of a set of material properties is linked to a physics-based material model decoder that maps strains to stresses. In contrast to other strategies in the literature, we keep the architecture as general as possible: a general feature extractor parses macroscopic strains into features for the encoder -- which can be as simple as the strains themselves or other derived quantities (_e.g._ strain invariants) -- and any type of constitutive model can in principle act as decoder (_e.g._ hyperelasticity, plasticity, damage). By relegating stress computations to the decoder, we effectively introduce physics-based bias to the model.1 Furthermore, by letting the material model handle the evolution of its own internal variables, the model benefits from a recurrent component with an interpretable memory structure that allows path dependency to arise naturally. The strategy we explore here is related to the one we propose in [46], but in that work we let an encoder learn local strain distributions for several virtual material points with fixed properties. We see the two approaches as being complementary, and therefore with potential for being used in combination to form a flexible range of hybrid surrogates.

Footnote 1: In purely data-driven surrogates, we accept some bias in exchange for reduced variance -- _e.g._ by employing regularization or adopting prior distributions for model parameters [45] -- in order to counter overfitting and improve generalization. But in that case the bias is merely a way to reduce complexity, with no physical interpretation and no _a priori_ impact on the extrapolation capabilities of the model.

The remainder of the work is organized as follows. Section 2 contains a primer on concurrent multiscale (FE\({}^{2}\)) modeling and discusses the difficulties of training purely data-driven surrogates. In Section 3, we particularize the model of Fig. 1 to the case of a feedforward neural network encoder and discuss aspects related to offline training and online numerical stabilization. In Section 4 we assess the performance of the hybrid model in reproducing the behavior of fiber-reinforced composites using different encoder features and decoder models. Finally, some concluding remarks and future research directions are discussed in Section 5.

## 2 Concurrent multiscale (FE\({}^{2}\)) modeling

In this section we present a short discussion on FE\({}^{2}\) modeling. The goal is not to be comprehensive -- the interested reader is referred to [13, 14] for detailed discussions on the subject -- but rather to expose the computational bottleneck associated with the method and pinpoint where surrogate models can be used to alleviate the issue.
We then demonstrate how a Recurrent Neural Network (RNN) can be used as a surrogate model and showcase some of the difficulties associated with its training and extrapolation capabilities.

Figure 1: The hybrid surrogate combining a data-driven encoder for material parameters and a physics-based material model decoder.

### Scale separation and coupling

In FE\({}^{2}\) we assume the problem being solved can be split into a homogeneous macroscopic domain \(\Omega\) and a heterogeneous microscopic domain \(\omega\ll\Omega\) where small-scale geometric features are resolved. Here we opt for a first-order homogenization approach assuming the displacements on both scales can be related by: \[\mathbf{u}^{\omega}=\boldsymbol{\varepsilon}^{\Omega}\mathbf{x}^{\omega}+\widetilde{\mathbf{u}} \tag{1}\] where microscopic displacements \(\mathbf{u}^{\omega}\) are split into a linear contribution proportional to the macroscopic strains \(\boldsymbol{\varepsilon}^{\Omega}\) and a fluctuation term \(\widetilde{\mathbf{u}}\) that accounts for microscopic heterogeneities. Since \(\boldsymbol{\varepsilon}^{\Omega}\) varies throughout the macroscopic domain, a micromodel for \(\omega\) is embedded at each Gauss point in \(\Omega\) and a microscopic boundary-value equilibrium problem assuming small displacements and strains is solved: \[\nabla\cdot\boldsymbol{\sigma}^{\omega}=\mathbf{0}\qquad\boldsymbol{\varepsilon}^{\omega}=\frac{1}{2}\left(\nabla\mathbf{u}^{\omega}+\left(\nabla\mathbf{u}^{\omega}\right)^{\mathrm{T}}\right) \tag{2}\] The microscopic stress \(\boldsymbol{\sigma}^{\omega}\) is related to the microscopic strain \(\boldsymbol{\varepsilon}^{\omega}\) through traditional physics-based constitutive models for each phase in the heterogeneous domain. In the general case where the material models feature internal variables \(\boldsymbol{\alpha}\), we can write the constitutive update for the microscale domain as: \[\mathcal{M}^{\omega}\begin{cases}\boldsymbol{\alpha}_{t}^{\omega}=\mathcal{A}\left(\boldsymbol{\varepsilon}_{t}^{\omega},\boldsymbol{\alpha}_{t-1}^{\omega},\boldsymbol{\theta}^{\omega}\right)\\ \boldsymbol{\sigma}_{t}^{\omega}=\mathcal{S}\left(\boldsymbol{\varepsilon}_{t}^{\omega},\boldsymbol{\alpha}_{t}^{\omega},\boldsymbol{\theta}^{\omega}\right)\end{cases} \tag{3}\] where \(\boldsymbol{\theta}^{\omega}\) are the material parameters of the microscopic constituents, the operators \(\mathcal{A}\) and \(\mathcal{S}\) can be split into an arbitrary number of blocks with different models (_e.g._ elasticity, elastoplasticity, damage) for the different material phases, and \(\boldsymbol{\alpha}^{\omega}\) is a concatenation of the internal variables of every microscopic Gauss point and therefore fully describes the path-dependent state of the microscopic problem. In order to determine the strains \(\boldsymbol{\varepsilon}^{\Omega}\) that serve as boundary conditions for the micromodels, a macroscopic small-strain equilibrium problem is solved: \[\nabla\cdot\boldsymbol{\sigma}^{\Omega}=\mathbf{0}\qquad\boldsymbol{\varepsilon}^{\Omega}=\frac{1}{2}\left(\nabla\mathbf{u}^{\Omega}+\left(\nabla\mathbf{u}^{\Omega}\right)^{\mathrm{T}}\right) \tag{4}\] but this time no constitutive assumptions are adopted. Macroscale stresses are instead directly homogenized from the microscopic response: \[\boldsymbol{\sigma}^{\Omega}=\frac{1}{|\omega|}\int_{\omega}\boldsymbol{\sigma}^{\omega}\,\mathrm{d}\omega \tag{5}\] which couples the macroscopic strain \(\boldsymbol{\varepsilon}^{\Omega}\) with the microscopic solution.
Since Eq. (1) also couples the solutions in the opposite direction, a bidirectional coupling is formed which requires the two-scale equilibrium problem to be solved iteratively. ### Data-driven surrogate modeling The coupled problem of Section 2.1 is extremely computationally demanding. The lower-scale domain \(\omega\) usually features complicated geometric features and must therefore be modeled with dense FE meshes in order to ensure accuracy. Worse yet, an independent microscopic problem must be solved at every integration point in \(\Omega\) for every iteration of every time step of the simulation. This nested nature quickly forms a computational bottleneck. Since the bulk of the computational effort lies in solving the micromodels, a popular approach to make multiscale analysis viable for practical applications is to substitute the microscopic FE models by data-driven surrogates. The idea is to perform a number of micromodel simulations under representative boundary conditions and use the resulting stress-strain pairs to train a machine learning model to be deployed when performing the actual two-scale simulations of interest. Naturally, the approach tacitly assumes that the number of offline micromodel computations required to train the model is much smaller than the number of times the microscopic behavior will be computed online. In the following, we use a simple example to demonstrate a number of difficulties associated with training such a model to reproduce path-dependent material behavior. ### Example: A one-dimensional RNN surrogate For this demonstration, we train a Long Short-term Memory (LSTM) network [47] to reproduce one-dimensional (single stress/strain component) elastoplasticity. The architecture of the model is shown in Fig. 2a and is implemented in PyTorch [48]. In order to minimize the risk of overfitting, a pragmatic model selection procedure is performed by first training the model with several non-monotonic strain paths and gradually increasing cell size until reasonable accuracy is obtained. This leads to a parsimonious model with a single LSTM cell with 5 latent units. At this point it is interesting to draw a parallel between the network and the micromodel whose behavior is being reproduced: the concatenation of the hidden state \(\mathbf{h}\) and cell state \(\mathbf{c}\) of the LSTM cell can be seen as a lower-dimensional surrogate for the set of microscopic internal variables \(\mathbf{\alpha}^{\omega}\) of Eq. (3). However, in contrast to the variables in \(\mathbf{\alpha}\), the latent variables \(\mathbf{h}\) and \(\mathbf{c}\) have no physical interpretation and evolve purely according to heuristic memory mechanisms that mimic patterns inferred during training. First, we train the LSTM using only monotonic data. Since only one strain component is being modeled, this initial dataset is composed simply of one strain path in tension and one in compression. The trained model is then used to predict a tension path with one unloading-reloading cycle. Having never seen unloading during training, the network reverses course and unloads on top of its loading path (Fig. 2b). This result is hardly surprising, but sheds light on the potentially deceiving nature of the training procedure: even though we are only concerned with a single strain component, predictions actually take place in an augmented space that describes strain paths in time which can be arbitrarily high-dimensional (as paths can be arbitrarily long). 
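For reference, a minimal PyTorch sketch of such a surrogate (a single LSTM cell with 5 latent units followed by a linear output layer, as in Fig. 2a; the training loop is omitted) could read:

```python
import torch
import torch.nn as nn

class StressLSTM(nn.Module):
    """1D surrogate: single LSTM cell with 5 latent units (cf. Fig. 2a)
    followed by a linear layer mapping the hidden state to stress."""
    def __init__(self, hidden: int = 5):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, strain_path: torch.Tensor) -> torch.Tensor:
        # strain_path: (batch, time, 1); the states h and c act as a learned,
        # non-physical stand-in for the internal variables alpha
        h_seq, _ = self.lstm(strain_path)
        return self.head(h_seq)  # stress at every time step

# e.g. stress = StressLSTM()(torch.randn(8, 48, 1))  # paths of 48 steps
```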
We can further demonstrate this manifestation of the _curse of dimensionality_ with the two additional examples of Fig. 3. In Fig. 3a we train the network with two unloading paths and it fails to predict a third one at an intermediate strain level. Here it can be deceiving to assume the third path can be interpolated from the other two: in the 48-dimensional space of strain paths (we use paths with 48 time steps each) the network is actually operating far away from training data. In Fig. 3b the network tries to reproduce a path seen during training, but we first let the material rest at zero strain for five time steps before loading starts and for another five time steps at the end of the path. With purely data-driven latent dynamics, the initial rest disturbs the memory structure of the network and causes large deviations for a large portion of the path. For the rest at the end of the path, we see that the surrogate fails to capture the basic characteristic that the stress should not change while the deformation is held constant. Training data-driven models to accurately reproduce path dependency is therefore not straightforward: their latent representations of material state are not interpretable and even phenomena as trivial as resting at zero strain must be learned from data. At the core of successful applications of RNNs to this task are either extensive datasets obtained with carefully crafted sampling strategies [33, 49] or highly tailored datasets for specific macroscopic problems [28]. Alternatively, active learning frameworks may be used to skip offline training altogether [50, 51], but at the cost of producing slower surrogates.

Figure 2: An LSTM recurrent neural network as surrogate for 1D path-dependent material behavior trained with only monotonic data.

Figure 3: 1D LSTM surrogate trained with unloading/reloading and used to predict unseen unloading paths.

## 3 A hybrid surrogate model

In this work we attempt to avoid the curse of dimensionality by relegating to a physics-based material model some of the tasks the RNN of Section 2.3 has to explicitly learn from data. In this section, we further formalize the hybrid approach of Fig. 1 by looking at the roles of each model component and their dependencies in time. We then particularize the model for the case of a feedforward neural network (FNN) encoder and discuss feature selection and numerical stabilization strategies.

### Evolving material parameters

Physics-based material models are traditionally formulated with a fixed set of parameters \(\mathbf{\theta}\) either directly computed from a specific set of (numerical) experiments or indirectly from stress-strain measurements in a Maximum Likelihood Estimation (MLE) approach2. Here we start from the premise that letting (part of) \(\mathbf{\theta}\) evolve in time increases flexibility and allows the model to capture more complex material behavior. Conversely, keeping the remainder of the model intact improves interpretability and provides physics-based bias to the data-driven model tasked to learn this evolution.

Footnote 2: The parameters \(\mathbf{\theta}\) can also be estimated through Bayesian inference and would therefore be described by a multivariate probability density instead of a fixed set of values. Regardless, that density would still be stationary in time.

In Fig. 4, the hybrid model of Fig. 1 is unrolled in time for a number of consecutive time steps and represented as a graph showing the dependencies between variables.
Filled and hollow nodes represent observed and latent variables, respectively, and are color coded to represent the different model components in Fig. 1. Similar to the microscale models of Eq. (3), we assume the constitutive behavior at the macroscale is given by a physics-based material model: \[\mathcal{M}^{\Omega}\begin{cases}\mathbf{\alpha}_{t}^{\Omega}=\mathcal{A}\left(\mathbf{\varepsilon}_{t}^{\Omega},\mathbf{\alpha}_{t-1}^{\Omega},\mathbf{\theta}_{t}^{\Omega}\right)\\ \mathbf{\sigma}_{t}^{\Omega}=\mathcal{S}\left(\mathbf{\varepsilon}_{t}^{\Omega},\mathbf{\alpha}_{t}^{\Omega},\mathbf{\theta}_{t}^{\Omega}\right)\end{cases} \tag{6}\] but now with time-dependent parameters \(\mathbf{\theta}_{t}\). Note that the model response at time \(t\) depends on the material state at time \(t-1\) through a set of internal variables \(\mathbf{\alpha}_{t-1}^{\Omega}\) (Fig. 4). This gives the model a recurrent nature not unlike that of the LSTM of Fig. 2a with its state variables \(\mathbf{c}\) and \(\mathbf{h}\). The advantage here is that \(\mathbf{\alpha}\) has a clear physical interpretation (plastic strains, damage variables, etc.) and its evolution is handled by the fixed operator \(\mathcal{A}\) composed of clearly interpretable algorithmic steps grounded in physics and/or classical material phenomenology (_e.g._ a return mapping algorithm).

Figure 4: Graph representation of the hybrid model architecture combining a data-driven encoder and a physics-based decoder. Filled circles represent observable variables and hollow circles represent latent variables.

On the encoder side, we let the material properties \(\mathbf{\theta}\) evolve according to an evolution operator \(\mathcal{D}\) whose shape is learned from data: \[\mathbf{\theta}_{t}=\mathcal{D}\left(\mathbf{\varphi}_{t}\right) \tag{7}\] as a function of a set of features \(\mathbf{\varphi}\) that are themselves obtained from the macroscopic strains through a feature extractor \(\mathcal{F}\): \[\mathbf{\varphi}_{t}=\mathcal{F}\left(\mathbf{\varepsilon}_{t}^{\Omega}\right) \tag{8}\] where \(\mathbf{\varphi}_{t}\) could be simply the strains themselves or other quantities derived from them. More importantly, note that \(\mathbf{\theta}_{t}\) depends only on the current features \(\mathbf{\varphi}_{t}\) and we therefore assume the encoder is not recurrent (Fig. 4). This choice effectively limits the flexibility of \(\mathcal{D}\) and makes the hybrid surrogate fully rely on the more robust model \(\mathcal{M}^{\Omega}\) to explain path-dependent phenomena, helping counter the curse of dimensionality associated with sampling strain paths. For instance, it opens up the possibility to train the surrogate exclusively with monotonic data, as we will demonstrate in the examples of Section 4. In the following sections, we particularize the model for the case of \(\mathcal{D}\) being a fully-connected neural network and for specific choices of \(\mathcal{F}\) and \(\mathcal{M}\). Nevertheless, the general architecture of Figs. 1 and 4 is meant to be as flexible as possible:

* The nature and dimensionality of \(\mathbf{\varphi}\) is not tied to that of \(\mathbf{\varepsilon}^{\Omega}\) since strains are also given directly to \(\mathcal{M}^{\Omega}\);
* Other machine learning models for regression can also be used as \(\mathcal{D}\), and it could in principle be split into different models handling the evolution of different subsets of \(\mathbf{\theta}\).
Any number of model parameters may also be left out of \(\mathbf{\theta}\) and either fixed as constants or optimized to constant values during training;
* No assumption is made on the form of \(\mathcal{M}^{\Omega}\) or the nature or dimensionality of \(\mathbf{\alpha}^{\Omega}\). Instead of a single model, it could also for instance be a mixture of physics-based models combined with analytical homogenization techniques.

### Feature extractors

A pragmatic choice for \(\mathcal{F}\) is to simply assume \(\mathbf{\varphi}\) is the macroscopic strain vector \(\mathbf{\varepsilon}^{\Omega}\) itself. It is also a familiar one, as we can then relate the resulting model to conventional surrogates mapping strains to stresses. However, since macroscopic strains are also directly passed on to the decoder, the architecture gives us the freedom to experiment with different features. Fig. 5 shows the two model architectures we explore in this work. For the two variants in Fig. 5a we either use \(\mathbf{\varepsilon}^{\Omega}\) itself or a set of small-strain invariants of the macroscopic strain tensor of increasing dimensionality: \[\textbf{I}_{\varepsilon}^{\Omega}=\begin{bmatrix}I_{1}^{\varepsilon}\end{bmatrix}\quad\mathrm{or}\quad\textbf{I}_{\varepsilon}^{\Omega}=\begin{bmatrix}I_{1}^{\varepsilon}&I_{2}^{\varepsilon}\end{bmatrix} \tag{9}\] where the invariants are given by the well-known expressions: \[I_{1}^{\varepsilon}=\mathrm{tr}\left(\mathbf{\varepsilon}\right),\quad I_{2}^{\varepsilon}=\frac{1}{2}\left(\mathrm{tr}\left(\mathbf{\varepsilon}\right)^{2}-\mathrm{tr}\left(\mathbf{\varepsilon}^{2}\right)\right) \tag{10}\] Additionally, since the current study focuses on elastoplasticity, it is also interesting to explore feature spaces including invariants of the deviatoric strain tensor: \[\textbf{I}_{\varepsilon}^{\Omega}=\begin{bmatrix}J_{2}^{\varepsilon}\end{bmatrix}\quad\mathrm{or}\quad\textbf{I}_{\varepsilon}^{\Omega}=\begin{bmatrix}I_{1}^{\varepsilon}&J_{2}^{\varepsilon}\end{bmatrix} \tag{11}\] where: \[J_{2}^{\varepsilon}=\frac{1}{3}\left(I_{1}^{\varepsilon}\right)^{2}-I_{2}^{\varepsilon} \tag{12}\] By using features based on invariants, and since the decoder material model is itself already frame invariant for small strains, it follows that the resulting surrogate will naturally inherit this beneficial characteristic. This stands in contrast with traditional black-box surrogates mapping strains to stresses. Furthermore, opting for invariant-based features can be seen as a physics-based dimensionality reduction operation that can potentially reduce the amount of data needed to train the hybrid model. We also investigate the possibility of extracting features from the outputs of a precalibrated physics-based material model \(\overline{\mathcal{M}}\) subjected to the same strain path seen at the macroscale (Fig. 5b). Note that this specific architecture introduces an additional recurrent component to the model through the set \(\overline{\mathbf{\alpha}}\) of internal variables of \(\overline{\mathcal{M}}\). From a machine learning perspective, the role of \(\overline{\mathcal{M}}\) would be analogous to that of a temporal convolution operator or an RNN cell appended to the encoder. The key difference, however, is that \(\overline{\mathcal{M}}\) is fixed _a priori_ and therefore should not require extra sampling effort with respect to the more straightforward extractor in Fig. 5a.
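As a brief aside, the invariant features of Eqs. (9)-(12) are inexpensive to evaluate from the strain tensor; a minimal sketch (the plane-strain assembly from engineering components is an assumption of the example, in line with the 2D setting of Section 4) could read:

```python
import numpy as np

def strain_invariants(eps: np.ndarray):
    """Small-strain invariants of Eqs. (10) and (12) for a 3x3 strain tensor."""
    i1 = np.trace(eps)                                   # I1 = tr(eps)
    i2 = 0.5 * (np.trace(eps) ** 2 - np.trace(eps @ eps))  # I2, Eq. (10)
    j2 = i1 ** 2 / 3.0 - i2                              # J2, Eq. (12)
    return i1, i2, j2

# e.g. plane strain assembled from engineering components (exx, eyy, gamma_xy):
exx, eyy, gxy = 0.01, -0.002, 0.005
eps = np.array([[exx, gxy / 2, 0.0], [gxy / 2, eyy, 0.0], [0.0, 0.0, 0.0]])
I1, I2, J2 = strain_invariants(eps)
```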
Naturally, different choices for \(\overline{\mathcal{M}}\) yield models with distinct learning capabilities, and we therefore assume \(\overline{\mathcal{M}}\) encapsulates relevant information about not only the current value of \(\boldsymbol{\varepsilon}^{\Omega}\) but also its history. In the present scenario, where the data comes from micromodel computations, we opt for the intuitive choice of having \(\overline{\mathcal{M}}\) be one of the known constitutive models used to describe the microscopic material phases. We can therefore conceptually see \(\overline{\mathcal{M}}\) as an imaginary representative material point at the microscale that is always subjected to the average micromodel strain. We then use either a subset of its internal variables \(\overline{\boldsymbol{\alpha}}\) or a set of invariants \(\mathbf{\Gamma}_{\boldsymbol{\sigma}}\) of its stress outputs as features.

Figure 5: The two types of FNN-based model architectures explored in this work, with different feature extraction steps.

### Neural network encoder

For simplicity, we opt for modeling the evolution of \(\boldsymbol{\theta}\) using classical feedforward neural networks with fully-connected layers. As both architectures in Fig. 5 ultimately compute macroscopic stresses given macroscopic strains, we can use supervised learning to train the model with a straightforward Maximum Likelihood approach. Gathering the complete set of network weights in a vector \(\mathbf{w}\) and seeing the complete surrogate as a monolithic model that computes an approximation \(\widehat{\boldsymbol{\sigma}}\) for stresses, we adopt the following observation model for the snapshot stresses \(\boldsymbol{\sigma}\): \[\boldsymbol{\sigma}=\widehat{\boldsymbol{\sigma}}\left(\boldsymbol{\varepsilon},\mathbf{w}\right)+\xi,\quad\xi\sim\mathcal{N}\left(\xi|\mathbf{0},\beta^{-1}\mathbf{I}\right) \tag{13}\] where the superscript \(\Omega\) is dropped for convenience, \(\mathbf{I}\) is an identity matrix, and \(\xi\) is an additive Gaussian noise3. Under the assumption of a squared loss, maximizing the likelihood of a training dataset with \(N\) observations amounts to minimizing the loss function [45]: \[L=\frac{1}{2}\sum_{n=1}^{N}\|\boldsymbol{\sigma}_{n}-\widehat{\boldsymbol{\sigma}}\left(\boldsymbol{\varepsilon}_{n},\mathbf{w}\right)\|^{2} \tag{14}\] with the precision of the noise that explains data misfit being simply \(\beta=N/(2L)\).

Footnote 3: Even though our observations come from a computer model and can be considered noiseless, the surrogate \(\widehat{\boldsymbol{\sigma}}\) is in general not arbitrarily flexible and the random variable \(\xi\) is therefore still necessary to explain why the model does not exactly fit every single observation in the dataset.

The resulting loss function is the same one used for conventional data-driven surrogates and is therefore straightforward to implement. Nevertheless, it is worth noting that since we cannot directly observe \(\boldsymbol{\theta}\), computing the gradients of \(L\) with respect to \(\mathbf{w}\) involves backpropagating derivatives through the decoder \(\mathcal{M}\). Furthermore, since \(\mathbf{w}\) affects the evolution of the internal variables \(\boldsymbol{\alpha}\), backpropagation in time becomes necessary.
Starting from Eq. (14) and walking back through the graph of Fig. 4, the gradient of the loss at time step \(t\) of a given strain path is given by: \[\frac{\partial L_{t}}{\partial\mathbf{w}}=\frac{\partial L}{\partial\widehat{\boldsymbol{\sigma}}_{t}}\left\{\frac{\partial\widehat{\boldsymbol{\sigma}}_{t}}{\partial\boldsymbol{\theta}_{t}}\frac{\partial\boldsymbol{\theta}_{t}}{\partial\mathbf{w}}+\frac{\partial\widehat{\boldsymbol{\sigma}}_{t}}{\partial\boldsymbol{\alpha}_{t}}\frac{\partial\boldsymbol{\alpha}_{t}}{\partial\boldsymbol{\theta}_{t}}\frac{\partial\boldsymbol{\theta}_{t}}{\partial\mathbf{w}}+\frac{\partial\widehat{\boldsymbol{\sigma}}_{t}}{\partial\boldsymbol{\alpha}_{t}}\sum_{i=t-1}^{1}\left[\left(\prod_{j=t}^{i+1}\frac{\partial\boldsymbol{\alpha}_{j}}{\partial\boldsymbol{\alpha}_{j-1}}\right)\frac{\partial\boldsymbol{\alpha}_{i}}{\partial\boldsymbol{\theta}_{i}}\frac{\partial\boldsymbol{\theta}_{i}}{\partial\mathbf{w}}\right]\right\} \tag{15}\] where the remaining gradient chain \(\partial\boldsymbol{\theta}/\partial\mathbf{w}\) is computed with conventional backpropagation through the network. If \(\mathcal{M}\) is implemented in a code base that allows for automatic differentiation (_e.g._ in PyTorch), these time dependencies are naturally taken into account as long as a persistent gradient tape is used within each strain path4. In this work we instead implement network training directly into an existing FE code, and therefore opt for the pragmatic approach of computing all partial derivatives of quantities derived from \(\mathcal{M}\) using finite differences.

Footnote 4: This is already the case for RNNs, so switching from RNNs to the present model should require little to no changes to the way training is performed.

Finally, in order to enforce upper and lower bounds for \(\boldsymbol{\theta}\) and avoid unphysical parameter values (_e.g._ negative elasticity moduli), we apply a sigmoid activation to the final layer of the network and scale the parameters back from the \([0,1]\) range using predefined bounds: \[\theta_{i}=\theta_{i}^{\mathrm{low}}+\theta_{i}^{\sigma}\left(\theta_{i}^{\mathrm{upp}}-\theta_{i}^{\mathrm{low}}\right) \tag{16}\] where \(\theta_{i}^{\sigma}\) is the sigmoid output of the network for parameter \(i\).

### Material decoders

As previously mentioned, any constitutive model can in principle be used as \(\mathcal{M}\). For the present study we focus on reproducing elastoplasticity and therefore narrow our choices down to the following set of potential decoders with increasing levels of complexity. The simplest one is a linear-elastic isotropic material with no internal variables: \[\sigma_{ij}=D_{ijkl}\varepsilon_{kl}\quad\mathrm{with}\quad D_{ijkl}=G\left(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\right)+\left(K-\frac{2}{3}G\right)\delta_{ij}\delta_{kl} \tag{17}\] where index notation is used for convenience. For this model, \(\boldsymbol{\theta}\) comprises only the bulk and shear moduli \(K\) and \(G\), or equivalently the Young's modulus \(E\) and the Poisson's ratio \(\nu\). The second decoder option is a simple plasticity model with \(J_{2}\) (von Mises) flow. The stress update in this case becomes: \[\sigma_{ij}=D_{ijkl}\left(\varepsilon_{kl}-\varepsilon_{kl}^{\mathrm{p}}\right) \tag{18}\] where the strain is additively decomposed into elastic and plastic (\(\varepsilon^{\mathrm{p}}\)) contributions.
The yield criterion and plastic flow rule are given by: \[\phi=\sqrt{3J_{2}^{\sigma}}-\sigma_{\mathrm{y}}\leq 0\quad\mathrm{and}\quad\Delta\varepsilon_{ij}^{\mathrm{p}}=\Delta\gamma\sqrt{\frac{3}{2}}\frac{S_{ij}}{\left\|S_{ij}\right\|_{\mathrm{F}}} \tag{19}\] where \(\mathbf{S}\) is the deviatoric part of the stresses, \(\gamma\) is a plastic multiplier, \(\sigma_{\mathrm{y}}\) is a yield stress parameter and we write the Frobenius norm as \(\left\|\cdot\right\|_{\mathrm{F}}\). In order to keep the model as simple as possible, we assume \(\sigma_{\mathrm{y}}\) is a material constant and therefore end up with a perfectly-plastic model with associative flow. The internal variables of this model are the components of the plastic strain vector \(\varepsilon^{\mathrm{p}}\) and the only new material parameter is the yield stress \(\sigma_{\mathrm{y}}\). Finally, we also consider the more complex pressure-dependent, non-associative plasticity model proposed by Melro _et al._ [52]. The stress update is the same as in Eq. (18), but the yield surface and plastic flow are given by: \[\phi=6J_{2}^{\sigma}+2I_{1}^{\sigma}\left(\sigma_{\mathrm{c}}-\sigma_{\mathrm{t}}\right)-2\sigma_{\mathrm{c}}\sigma_{\mathrm{t}}\leq 0\quad\mathrm{and}\quad\Delta\varepsilon_{ij}^{\mathrm{p}}=\Delta\gamma\left(3S_{ij}+\frac{1-2\nu_{\mathrm{p}}}{1+\nu_{\mathrm{p}}}I_{1}^{\sigma}\delta_{ij}\right) \tag{20}\] where \(\delta_{ij}\) is the Kronecker delta, \(\sigma_{\mathrm{t}}\) and \(\sigma_{\mathrm{c}}\) are yield stresses in tension and compression, respectively, and \(\nu_{\mathrm{p}}\) is a new parameter controlling plastic contraction and allowing for compressible plastic flow. Hardening can be described by making the yield stresses general functions of \(\varepsilon^{\mathrm{p}}\), but when the model is used as a decoder we assume \(\sigma_{\mathrm{t}}\) and \(\sigma_{\mathrm{c}}\) do not depend on \(\varepsilon^{\mathrm{p}}\) and instead let the encoder \(\mathcal{D}\) describe their evolution. The model by Melro _et al._ [52] is also the one used to describe the microscopic material phase responsible for the nonlinear behavior observed when homogenizing the micromodel response, and can therefore be seen as the natural choice for \(\mathcal{M}\). Nevertheless, the other two decoders can provide interesting insights on the effect of introducing different levels of bias to the hybrid model.

### Online predictions and inherited stability

The architecture of Fig. 1 is developed to be minimally intrusive and allow for existing material models to be used as decoders with minimum effort. We therefore implement the online routine of the model as a wrapper around an existing implementation of \(\mathcal{M}\). The basic structure of the wrapper can be seen in Algorithm 1. The hybrid nature of the model allows for a robust approach that ensures the numerical stability of the original model \(\mathcal{M}\) is inherited by the surrogate. This is achieved by only updating \(\boldsymbol{\theta}\) at the end of each time step, after the global implicit Newton-Raphson scheme converges. Material properties are therefore fixed while the global solver is iterating, and that means the tangent stiffness \(\mathbf{D}\) comes directly from \(\mathcal{M}\) and inherits its stability features.
```
Input: strain \(\boldsymbol{\varepsilon}_{\mathrm{new}}^{\Omega}\) at macroscopic Gauss point
Output: stress \(\boldsymbol{\sigma}^{\Omega}\) and stiffness \(\mathbf{D}^{\Omega}\) at macroscopic Gauss point
1  use nested model with converged parameters and internal state: \(\left(\boldsymbol{\sigma}^{\Omega},\mathbf{D}^{\Omega},\boldsymbol{\alpha}_{\mathrm{new}}\right)\leftarrow\mathcal{M}\left(\boldsymbol{\varepsilon}_{\mathrm{new}}^{\Omega},\boldsymbol{\alpha}_{\mathrm{old}},\boldsymbol{\theta}\right)\);
2  if global solver has converged:
3    store latest converged strain: \(\boldsymbol{\varepsilon}_{\mathrm{old}}\leftarrow\boldsymbol{\varepsilon}_{\mathrm{new}}\);
4    commit material history: \(\boldsymbol{\alpha}_{\mathrm{old}}\leftarrow\boldsymbol{\alpha}_{\mathrm{new}}\);
5    compute new features: \(\boldsymbol{\varphi}_{\mathrm{new}}\leftarrow\mathcal{F}\left(\boldsymbol{\varepsilon}_{\mathrm{new}}\right)\);
6    update model parameters for the upcoming time step: \(\boldsymbol{\theta}\leftarrow\mathcal{D}\left(\boldsymbol{\varphi}_{\mathrm{new}}\right)\);
7  if first global iteration of time step and Gauss point is unstable:
8    stabilize encoder: \(\mathcal{D}\leftarrow\mathtt{stabilizeNetwork}\left(\boldsymbol{\varepsilon}_{\mathrm{new}}^{\Omega}\right)\);
9    recompute features: \(\boldsymbol{\varphi}_{\mathrm{old}}\leftarrow\mathcal{F}\left(\boldsymbol{\varepsilon}_{\mathrm{old}}^{\Omega}\right)\);
10   recompute model parameters for the current time step: \(\boldsymbol{\theta}\leftarrow\mathcal{D}\left(\boldsymbol{\varphi}_{\mathrm{old}}\right)\);
11 return \(\boldsymbol{\sigma}^{\Omega},\mathbf{D}^{\Omega}\)
```

**Algorithm 1** Material wrapper implementing the online component of the hybrid surrogate.

As an example, the \(J_{2}\) plasticity model of Eq. (19) is unconditionally stable as long as its hardening modulus \(h\geq 0\) for any \(\left(\boldsymbol{\varepsilon}_{t}^{\Omega},\boldsymbol{\alpha}_{t}^{\Omega}\right)\), which is the case for the perfectly-plastic version we consider here. It then follows that any hybrid surrogate with a \(J_{2}\) decoder is also unconditionally stable. Note that this is only possible because strains are directly passed on to the decoder, and it would therefore not be an option for conventional surrogates (_e.g._ the RNN of Fig. 3). For those surrogates, the tangent stiffness would come directly from the jacobian of a highly-flexible data-driven model, often at the cost of numerical stability.

### Numerical stabilization

Nevertheless, the decoder \(\mathcal{M}\) may be inherently unstable even with fixed material constants. This is for instance the case for the model by Melro _et al._ [52]: the non-associative flow rule of Eq. (20) can cause the tangent stiffness \(\mathbf{D}^{\Omega}\) to lose positive definiteness under certain strain conditions and for certain combinations of model parameters. To accommodate such a scenario and open up the possibility for online model adaptivity in other contexts, we propose a scheme for updating the encoder \(\mathcal{D}\) on the fly in order to enforce extra constraints locally. Back in Algorithm 1, at the beginning of a new time step we keep \(\boldsymbol{\theta}\) fixed to the one obtained with converged strains from the previous step and let the solver make a first strain prediction. After this first iteration, a stability criterion is checked and used to define a new loss function that can be used to update the network weights in case instability is detected.
Here we employ the determinant of the acoustic tensor \(\mathbf{Q}\): \[\mathbf{Q}=\mathbf{n}_{d}^{\mathrm{T}}\mathbf{D}^{\Omega}\mathbf{n}_{d} \tag{21}\] where \(\mathbf{n}_{d}\) is the vector normal to the strain localization direction creating the instability, which we find through an angle sweep procedure as in [53]. We use \(\det\left(\mathbf{Q}\right)\) as a metric of stability and, in case a negative value is detected, trigger a retraining procedure based on a new loss function: \[L_{\mathrm{Q}}=-\frac{\left\langle\det\left(\mathbf{Q}\right)\right\rangle_{-}}{\det\left(\mathbf{Q}_{0}\right)} \tag{22}\] where \(\mathbf{Q}_{0}\) is the acoustic tensor at the start of the simulation. We minimize this new loss at every unstable point for a small number of epochs with a low learning rate, and to discourage significant drifts from the original model we finish the stabilization procedure by updating the network using the original loss of Eq. (14) for a single minibatch. Finally, \(\boldsymbol{\theta}\) is updated using the retrained model and is kept fixed for the remaining iterations5. Note that the local constraint of Eq. (22) is therefore only enforced in a soft way and remaining instabilities might still cause the global solver to diverge, in which case we cancel the current increment, go back to the beginning of the time step and allow for the procedure to be triggered again.

Footnote 5: Changing \(\mathcal{D}\) and therefore \(\boldsymbol{\theta}\) after every iteration would not work in favor of improving stability, but rather have the opposite effect.

## 4 Numerical examples

The proposed model was implemented in an in-house Finite Element code developed using the open-source C++ numerical analysis library Jem/Jive [54]. In order to allow for seamless online retraining, network training was also implemented within the same code. We start this section by describing the datasets and model selection strategies used to build the surrogates. We then investigate the performance of the approach under several choices of encoders and decoders. Finally, we use the model within an FE\({}^{2}\) simulation and demonstrate the online stabilization procedure of Section 3.6. All simulations are performed on cluster nodes equipped with Xeon E5-2630V4 processors and \(128\,\mathrm{GB}\) RAM running CentOS 7.

### Data sampling and model selection

Models are trained to reproduce the behavior of the fiber-reinforced composite micromodel shown in Fig. 6. Fibers are modeled as linear-elastic and the matrix is described by the pressure-dependent non-associative elastoplastic model by Melro _et al._ [52] (Section 3.4). Microscale material properties are adopted from [10]. The microscopic geometry shown in Fig. 6 results from an RVE study performed in [10] and is therefore considered representative. Following the discussion in Section 3, our aim is to investigate to what extent it is possible to circumvent the curse of dimensionality associated with path dependency by training surrogates exclusively on monotonic strain paths and having time-dependent behavior arise naturally from a physics-based decoder. We therefore limit ourselves to monotonic paths for training. For consistency, we also employ exclusively monotonic data to perform model selection. For efficiency, we limit the present investigation to 2D simulations (_i.e._ three strain components) in the plane perpendicular to the fibers, but nevertheless expect the discussion and conclusions to generalize to 3D simulations as long as appropriate orthotropic decoders are employed.
Datasets with \(2000\) monotonic strain paths are generated under both plane strain and plane stress assumptions. Fig. 7 shows the complete plane strain dataset, with a similar one also being generated for plane stress. Each path is generated with an FE\({}^{2}\) simulation of a single macroscopic element under displacement control along a fixed direction in strain space sampled from a uniform distribution. To circumvent convergence issues, we employ an adaptive time stepping technique that progressively reduces the time step size when the simulation does not converge and gradually increases it back for subsequent increments. The simulations are stopped once a strain norm of \(10\,\%\) is reached. As the adaptive scheme leads to paths with different numbers of time increments, we balance the dataset by ensuring every path is composed of \(30\) steps with strain norms as equally spaced as possible.

Figure 6: The micromodel used in the examples of this work.

Figure 7: The complete plane strain dataset used to train the surrogates, comprising \(2000\) monotonic strain-stress paths. A similar dataset is generated under plane stress conditions.

To keep model selection straightforward and avoid the need for cumbersome k-fold cross-validation or bootstrapping, we train a preliminary model with enough flexibility on an extensive training dataset and gradually increase the size of the validation set until the validation error converges to a good estimate of the expected prediction error [55]. This results in validation sets with \(500\) paths selected at random from the original datasets, leaving \(1500\) paths to be used for training. We then perform model selection by gradually increasing the complexity of our FNN encoders until the validation error stabilizes. From experimenting with different architectures, we find that encoders with 5 hidden layers of 50 units each with Scaled Exponential Linear Unit (SELU) [56] activation provide enough flexibility for all the examples treated here. To ensure enough regularization when computing learning curves with small datasets, we employ Bernoulli dropout layers with a rate of \(1\,\%\) after every hidden layer. Networks are trained for \(20\,000\) epochs and the model with the lowest historical validation error is kept after training, further reducing the risk of overfitting on small datasets.

To assess the capabilities of the trained surrogates, we compute an additional test dataset comprising \(50\) monotonic, \(50\) unloading-reloading and \(50\) slow cycling paths, examples of which are shown in Fig. 8. To keep the comparisons fair, none of these paths are used to perform model selection and are therefore only considered after the surrogates are trained. We will use example curves like those from Fig. 8 for visual inspection of the model performance, but also the complete sets of \(50\) curves each for more rigorous statistical analysis.

Figure 8: Examples from a test dataset with 50 paths of each type. They are not used to train any of the networks or perform model selection.

### Elastic decoder

It is interesting to first consider the simple linear-elastic decoder of Eq. (17), as it has no internal variables and therefore leads to a surrogate model comparable in nature to a conventional FNN trained on stress-strain pairs. As we will demonstrate, however, the limited physical bias provided by such a simple model already proves advantageous. Here we let both elastic properties be controlled by the learned encoder: \[\boldsymbol{\theta}=\begin{bmatrix}E&\nu\end{bmatrix} \tag{23}\] where the bounds \(10^{1}<E<10^{5}\) and \(0<\nu<0.5\) are enforced as described in Eq. (16).
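As a rough illustration of these ingredients, the following sketch combines the encoder architecture described above with the bounded elastic parameters of Eq. (23); the sigmoid squashing is one plausible reading of the bound enforcement of Eq. (16) (not shown here), and plane strain is assumed.

```python
import torch
import torch.nn as nn

class BoundedEncoder(nn.Module):
    """FNN encoder D: 5 hidden layers of 50 SELU units, 1 % Bernoulli dropout."""

    def __init__(self, n_features=2, bounds=((1e1, 1e5), (0.0, 0.5))):
        super().__init__()
        layers, n_in = [], n_features
        for _ in range(5):
            layers += [nn.Linear(n_in, 50), nn.SELU(), nn.Dropout(p=0.01)]
            n_in = 50
        layers.append(nn.Linear(n_in, len(bounds)))
        self.net = nn.Sequential(*layers)
        self.bounds = bounds

    def forward(self, features):
        raw = self.net(features)
        # Squash each raw output into its admissible range (stand-in for Eq. (16)).
        return torch.stack([lo + (hi - lo) * torch.sigmoid(raw[..., k])
                            for k, (lo, hi) in enumerate(self.bounds)], dim=-1)

def elastic_decoder(eps, theta):
    """Linear-elastic decoder with theta = [E, nu] (Eq. (23)), plane strain."""
    E, nu = float(theta[0]), float(theta[1])
    c = E / ((1.0 + nu) * (1.0 - 2.0 * nu))
    D = c * torch.tensor([[1.0 - nu, nu, 0.0],
                          [nu, 1.0 - nu, 0.0],
                          [0.0, 0.0, 0.5 - nu]])
    return D @ eps, D  # stress and tangent stiffness
```

Bounding the outputs inside the network keeps the decoder inputs physically admissible by construction, regardless of how far the encoder extrapolates.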
We first perform a feature selection study and investigate how efficiently the model learns as the size of the dataset is increased. From the original plane strain training dataset of \(1500\) monotonic strain paths, we draw datasets with sizes ranging between \(1\) and \(150\) paths without replacement and use them to train networks with different encoder features. To get a reliable estimate of the expected prediction error, we repeat this process \(50\) times for each dataset size and encoder type, and for comparison we also do the same for conventional FNNs trained directly on stress targets (keeping the same architecture but going directly from the final hidden layer to stresses). This amounts to a total of \(3400\) trained networks, from which we compute an estimate of the prediction error by averaging \(\|\boldsymbol{\sigma}-\widehat{\boldsymbol{\sigma}}\|\) over the \(500\) paths left for validation.

Fig. 9a plots averages of the validation error over the \(50\) training datasets used for each size. Although the hybrid architecture does not show an advantage over the FNN when the encoder is trained on strain features, there is a clear gain in learning speed when using only the first two strain invariants as features. Apart from accelerating learning and resulting in lossless dimensionality reduction, using invariants also results in a surrogate which is frame invariant under small strains. For comparison, we also train a conventional FNN on the same set of features, but those are unsurprisingly not enough to describe general strain states and much of the material response is interpreted by the FNN as observation noise.

We zoom into the first part of the learning curves in Fig. 9b, this time also showing single standard deviation uncertainty bands coming from the variance among the \(50\) training datasets. The hybrid network outperforms conventional FNNs in the low-data regime and tends to be less sensitive to changes in the dataset starting from about \(20\) training paths. Nevertheless, the extra flexibility of conventional FNNs allows them to achieve lower validation errors if significantly more training paths are used. Training the invariant-based hybrid network with the complete dataset of \(1500\) curves leads to surrogates with validation errors of about \(4\,\mathrm{MPa}\), accurately representing the monotonic behavior of the original micromodel.

Figure 9: Learning curves of models with elastic decoders and conventional FNN models. Mean error over the 500 validation monotonic paths.

Fig. 10 shows representative predictions of this model for paths from the test set. As expected, this surrogate with no internal variables is not capable of predicting non-monotonic strain paths, and effectively behaves like a hyperelastic material model just as the conventional FNN would. Nevertheless, the flexible and interpretable encoder-decoder architecture of Fig. 1 allows for new creative approaches in feature selection.
As a demonstration, we keep the trained network of Fig. 10 intact and only modify its feature extractor to introduce a simple path-dependent mechanism: \[\boldsymbol{\varphi}_{T}\equiv\begin{bmatrix}\overline{I}_{1}^{\varepsilon}&\overline{I}_{2}^{\varepsilon}\end{bmatrix}_{T}=\operatorname*{argmax}_{0<t<T}\left(\left(I_{1}^{\varepsilon}\right)_{t}^{2}+\left(J_{2}^{\varepsilon}\right)_{t}\right) \tag{24}\] which freezes the evolution of \(\boldsymbol{\theta}\) if the path becomes non-monotonic. Note that the network does not need to be retrained and this modification can be employed exclusively when making online predictions, as the new features reduce to the original ones for the monotonic paths used for training. We plot two representative non-monotonic paths predicted by the modified model in Fig. 11. Compared to the hyperelastic behavior of Fig. 10, the modified surrogate now behaves as a damage model: the non-linear material behavior is explained by a loss of stiffness which is made persistent by the history-aware feature extractor.

Figure 10: Performance of the elastic decoder model for different test scenarios.

Figure 11: Predicting unloading with a linear-elastic decoder through history-aware feature extraction.
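For illustration, a minimal sketch of this modified feature extractor follows, assuming plane conditions with engineering shear strain and zero out-of-plane strain; only the running-maximum logic of Eq. (24) matters here.

```python
import numpy as np

def invariants(eps):
    """In-plane strain invariants of [eps_xx, eps_yy, gamma_xy]."""
    a, b, g = eps
    exy = 0.5 * g                        # tensorial shear from engineering shear
    i1 = a + b                           # eps_zz = 0 assumed
    i2 = a * b - exy ** 2
    j2 = i1 ** 2 / 3.0 - i2              # second invariant of the deviator
    return i1, i2, j2

class HistoryAwareExtractor:
    """Eq. (24): invariants frozen at the most critical point seen so far."""

    def __init__(self):
        self.score, self.features = -np.inf, None

    def __call__(self, eps):
        i1, i2, j2 = invariants(eps)
        if i1 ** 2 + j2 > self.score:    # path still monotonic: keep evolving
            self.score = i1 ** 2 + j2
            self.features = np.array([i1, i2])
        return self.features             # frozen once the path unloads
```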
Nevertheless, although an improvement over the original model, it is unreasonable to expect the physical bias introduced by a purely elastic model to reliably represent an elastoplastic micromodel. We therefore move to decoders with more relevant physics.

### \(J_{2}\) decoder

In this section we choose as decoder \(\mathcal{M}\) the elastoplastic model of Eq. (19) with \(J_{2}\) plastic flow. Standing on its own, the model is _a priori_ perfectly plastic (constant \(\sigma_{y}\))\({}^{6}\), but here we let its yield stress be controlled by the data-driven encoder: \[\boldsymbol{\theta}=\left[\sigma_{y}\right] \tag{25}\] while enforcing \(10^{1}<\sigma_{y}<10^{3}\) and keeping the Young's modulus and Poisson's ratio fixed to values obtained from a single linear micromodel simulation. In contrast to the model with the elastic decoder of the previous section, we now employ prior knowledge of the micromodel behavior: we assume that all non-linearity should be explained by plasticity and therefore do not let the elastic properties be dictated by the encoder. Still, the assumption of isotropic and incompressible plastic flow is a departure from the more complex pressure-dependent and non-associative behavior shown by the micromodel. Here we are therefore concerned with the effect of trading the flexibility of an elastic decoder for significantly more physical bias from a lower-fidelity representation of material behavior.

At this point it is interesting to compare the performance of the hybrid surrogate with predictions from the state-of-the-art mesoscale material model for polymer composites proposed by Vogler _et al._ [57]. It is an orthotropic elastoplastic model with pressure-dependent non-associative flow, precalibrated with a small number of monotonic uniaxial and biaxial stress-strain curves obtained from simulations on the exact same micromodel of Fig. 6 (see [10] for details on the calibration procedure). For this section, we switch to a dataset in plane stress, allowing the \(J_{2}\) model to describe richer nonlinear behavior under biaxial strain states. Fig. 12 shows the evolution of the validation set loss when training the \(J_{2}\)-decoded model with \(1500\) plane stress training paths. The error quickly stabilizes at around \(20\,\mathrm{MPa}\), significantly lower than the \(44\,\mathrm{MPa}\) average prediction error obtained with the precalibrated mesomodel. The added flexibility with respect to the original perfectly-plastic \(J_{2}\) model can be seen in the test set curves plotted in Fig. 13: the data-driven encoder leads to correct predictions of nonlinear hardening (Fig. 13a) and pressure-dependent plastic flow (Fig. 13b). The figures also highlight the inability of the mesomodel to predict the behavior in certain regions of the strain space, particularly under compression-dominated scenarios.

The minimum validation error attained by the model is, however, significantly higher than the \(4\,\mathrm{MPa}\) obtained with the elastic decoder of the previous section. This result is not entirely surprising, as the elastic decoder introduces much less bias into the model and therefore allows for a greater degree of flexibility when fitting monotonic data. On the other hand, what cannot be directly gleaned from Fig. 12 is that the \(J_{2}\) decoder benefits from physics-based memory coming from its internal variables. This memory allows it to make predictions of non-monotonic behavior based solely on our assumption that nonlinearities come from plastic strain, and therefore without ever having to see such behavior during training. In Fig. 14 we plot predictions of the \(J_{2}\) surrogate for two different unloading-reloading paths from the test dataset. The model predicts unloading very well without being trained for it. Nevertheless, as Fig. 12 suggests, the model struggles to predict monotonic behavior under a number of different scenarios, from which it follows that any non-monotonic predictions along the same directions will also be inaccurate. Fig. 15 shows three examples of this behavior.

The choice of decoder therefore involves a tradeoff between bias and flexibility that can be deceptive if judged solely on validation error computed on monotonic data. Indeed, the decoder used in the next section outperforms \(J_{2}\)-based decoders in most situations, but a choice for the simpler decoder might still be justified -- _e.g._ if the unconditional numerical stability of a \(J_{2}\) decoder is desirable.

Figure 12: Evolution of the mean validation loss for the first 200 training epochs of a network with \(J_{2}\) decoder. Single dataset with 1500 monotonic paths.

Figure 13: Predictions from the network with \(J_{2}\) decoder. Letting the yield stress evolve extends the model to more complex plasticity behavior.

Figure 14: Network predictions with \(J_{2}\) decoder for unloading paths after being trained exclusively with monotonic paths.

Figure 15: Examples of strain paths not well predicted by the \(J_{2}\) decoded model.

### Non-associative pressure-dependent elastoplastic decoder

As one final exploration of model selection, we use as decoder the same elastoplastic model by Melro _et al._ used to describe the matrix material at the microscale [52]. As mentioned in Section 3.4, this model is the natural choice for \(\mathcal{M}\), as it attempts to explain the observed microscopic non-linear behavior with the same model from which that behavior arises.
As before we keep the elastic properties of the model intact and let only the yield stresses and the plastic Poisson's ratio change in time: \[\boldsymbol{\theta}=\begin{bmatrix}\sigma_{\text{t}}&\frac{\sigma_{\text{c}}}{\sigma_{\text{t}}}&\nu_{\text{p}}\end{bmatrix} \tag{26}\] where \(10^{1}<\sigma_{\text{t}}<10^{4}\), \(0<\nu_{\text{p}}<0.5\) and \(1<\frac{\sigma_{\text{c}}}{\sigma_{\text{t}}}<100\). We opt for the ratio \(\frac{\sigma_{\text{c}}}{\sigma_{\text{t}}}\) instead of simply \(\sigma_{\text{c}}\) in order to also enforce \(\sigma_{\text{c}}>\sigma_{\text{t}}\).

We expand upon the feature selection study of Fig. 9 by looking at several feature extractors coming both directly from strains and from the output of a precalibrated Melro model \(\overline{\mathcal{M}}\) with the same properties used at the microscale (Fig. 5b). Aside from the familiar choice of strain features (\([\varepsilon_{xx}~\varepsilon_{yy}~\gamma_{xy}]\to\textit{Melro}\)), we look into invariants of the strain tensor (\([I_{1}^{\varepsilon}~I_{2}^{\varepsilon}]\to\textit{Melro}\)), combinations including invariants of the deviatoric strain tensor (\([J_{2}^{\varepsilon}]\to\textit{Melro}\), \([I_{1}^{\varepsilon}~J_{2}^{\varepsilon}]\to\textit{Melro}\)), plastic strain internal variables coming from the precalibrated feature extractor (\(\left[\bar{\varepsilon}_{xx}^{\mathrm{p}}~\bar{\varepsilon}_{yy}^{\mathrm{p}}~\bar{\varepsilon}_{xy}^{\mathrm{p}}\right]\to\textit{Melro}\)) and stress invariants coming from the extractor (\(\left[I_{1}^{\bar{\sigma}}~J_{2}^{\bar{\sigma}}\right]\to\textit{Melro}\)). We also include the precalibrated mesomodel by Vogler _et al._ [57] and selected curves from Fig. 9a for comparison purposes.

As before, we train \(50\) networks of each type for each size of dataset ranging from \(1\) to \(150\) paths drawn from the original dataset with \(1500\) paths. Each trained network is then used to compute the validation error over the \(500\) monotonic validation paths and the \(150\) test paths (\(50\) extra monotonic paths, \(50\) paths with unloading-reloading and \(50\) slow cycle paths). This results in an extensive study comprising \(6800\) trained networks and over one million test set simulations.

Results are summarized in Fig. 16, with each point in a curve being the average over \(50\) networks. Once again using invariants as features proves beneficial, leading to lossless dimensionality reduction and frame-invariant surrogates. All tested models perform better than the precalibrated mesomodel, with a gap of more than one order of magnitude for the best performing surrogates. Interestingly, models with Melro-based decoders seem to learn as fast and be as flexible as models with elastic decoders, already for the monotonic curves in the validation dataset. This suggests that the new decoder does not impose extra undesirable bias in learning the specific material behavior treated here, other than the assumptions that had already been introduced by elasticity (_e.g._ symmetries and couplings encoded by the elastic stiffness tensor). Any benefits reaped when extrapolating to non-monotonic paths, as we will see in the following, are therefore obtained at a negligible price in terms of monotonic behavior accuracy. This stands in contrast with the discussion on the \(J_{2}\) decoder of the previous section.

Figure 16: Expected validation errors for Melro-decoded surrogates with different feature extractors (averages over \(50\) datasets).
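As an illustration of the last option in the list above, the following sketch shows a feature extractor that marches a fixed-property model alongside the surrogate and returns its stress invariants; `precalibrated_melro` and its `init_history`/`update` interface are hypothetical stand-ins for \(\overline{\mathcal{M}}\), and plane stress is assumed.

```python
import numpy as np

class StressInvariantExtractor:
    """F for the stress-invariant -> Melro variant: physically recurrent features."""

    def __init__(self, precalibrated_melro):
        self.model = precalibrated_melro          # fixed microscale properties
        self.alpha = self.model.init_history()    # extractor's own internal state

    def __call__(self, eps):
        # The extractor carries history across calls, so its features encode
        # the loading path even though the encoder itself is non-recurrent.
        sigma, self.alpha = self.model.update(eps, self.alpha)
        sxx, syy, sxy = sigma
        i1 = sxx + syy                            # sigma_zz = 0 under plane stress
        j2 = i1 ** 2 / 3.0 - (sxx * syy - sxy ** 2)
        return np.array([i1, j2])
```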
Although Fig. 16 is not enough to discern between several of our encoder choices, it is interesting to take a closer look at the two clearly underperforming options. Fig. 17 shows predictions from \([J_{2}^{\varepsilon}]\to\mathit{Melro}\) and \(\left[\bar{\varepsilon}_{xx}^{\mathrm{p}}~\bar{\varepsilon}_{yy}^{\mathrm{p}}~\bar{\varepsilon}_{xy}^{\mathrm{p}}\right]\to\mathit{Melro}\) for the same monotonic test path. The model with a single feature struggles to predict the entirety of the path, indicating that further reducing the dimensionality of the feature space is not possible for this dataset. The oscillatory stress predictions make this model unsuitable for online stress evaluation in a multiscale setting. For the model with plastic strain features, the feature extractor shows no plastic strains until high stress levels, while in the micromodel plasticity starts much earlier, forcing the surrogate to remain in the elastic regime until a sudden jump brings it back to the expected path.

Figure 17: Monotonic test set predictions from feature-deficient Melro models (complete training dataset with 1500 paths).

Moving to unloading-reloading paths, we compare the performance of different feature sets by plotting the average test error over the \(50\) unloading-reloading paths in Fig. 18a. Here an interesting observation can be made: even the surrogate \([I_{1}^{\varepsilon}~I_{2}^{\varepsilon}]\to\mathit{Elastic}\) -- which cannot predict unloading at all -- attains a lower test error than the precalibrated mesomodel. This apparent contradiction can be explained by plotting in Fig. 18b the average error computed only at unloading or reloading time steps: use of an elastic decoder -- and therefore of a conventional FNN or an RNN trained with insufficient data -- excels at predicting monotonic response but is consistently inaccurate for non-monotonic paths and shows little improvement when more monotonic paths are added to the training dataset. In contrast, the best-performing Melro models are consistently more accurate than the precalibrated mesomodel even when trained on very little data.

Figure 18: Learning curves for unloading-reloading test errors of Melro-decoded surrogates (averages of \(50\) datasets).

We plot in Fig. 19 selected representative unloading paths from the test dataset for four of the surrogates. Unloading is once again well captured without having been seen during training, and since it emerges from a purely physical mechanism, it is reasonable to expect unloading at different points along the path to yield comparable results (_c.f._ Fig. 3a). Nevertheless, relatively small differences in unloading slope can still lead to large differences in stress at the end of the unloading branches. Furthermore, the model can struggle with tension-compression switches and predict spurious hysteresis loops. Indeed, we observe a consistent inability of the models to properly predict switches between tension and compression within the same path. This becomes clear when looking at slow cycling test paths composed of several of these switches (Fig. 8c).

Figure 19: Response of Melro-decoded surrogates with different features for selected unloading/reloading test paths (\(1500\) monotonic training paths).

We plot learning curves for the test error on slow cycling paths in Fig. 20, for complete paths as well as exclusively for the non-monotonic branches of the paths. In contrast with results up until now, here we see larger differences in performance for different feature sets.
As expected, elastic decoders are once again shown to be unsuitable for predicting non-monotonic paths, and the difference here is even more pronounced than for single-unloading paths (_c.f._ Fig. 18), as most of the path is composed of unloading/reloading branches. The model encoded with stress invariants coming from an elastoplastic feature extractor performs best among the models we test. But crucially, none of the surrogates manages to surpass the precalibrated mesomodel in this case.

As a demonstration, we select a representative path from the test dataset and plot predictions made with four different feature sets in Fig. 21. As expected, larger errors are observed for more pronounced tension-compression switches, as models either over- or undershoot the stress levels at compression-tension switch points. Interestingly, most models manage to converge back to the correct stress path after reloading, since hardening behavior is completely dictated by their non-recurrent data-driven encoders. The exception is the model with stress invariant features (\(\left[I_{1}^{\bar{\sigma}}~J_{2}^{\bar{\sigma}}\right]\to\textit{Melro}\)), which performs significantly better than the rest but shows a number of undesired oscillations in stress response due to the (physically) recurrent nature of its features forcing its neural network encoder to operate in extrapolation.

### FE\({}^{2}\) example

We conclude our discussion with an FE\({}^{2}\) demonstration using the proposed hybrid surrogate. We model the tapered macroscopic bar with geometry and boundary conditions shown in Fig. 22. The model is meshed with 1620 linear triangles with a single Gauss point each and is loaded in tension until plastic strain localization takes place. The combination of the tapered geometry with the several circular voids along the model results in a complex range of stress states throughout the model. In contrast to the cases considered so far, this example also covers non-proportional strain paths. To facilitate convergence, the substepping approach proposed in [58] is employed, and an adaptive stepping algorithm is used at the macroscale that automatically reduces the time step size and recomputes the current increment if either the micro- or macroscopic Newton-Raphson solver fails to converge.

We use the \(\left[I_{1}^{\bar{\sigma}}~J_{2}^{\bar{\sigma}}\right]\to\textit{Melro}\) model of the previous section as surrogate, trained on the complete set of \(1500\) monotonic training strain paths. The global load-displacement curve at the right edge of the model is plotted for the full-order FE\({}^{2}\) solution and for the hybrid surrogate in Fig. 23. Since we update decoder properties in an explicit fashion (_i.e._ once per time step, see Algorithm 1), we use a displacement increment \(\Delta u=3.5\times 10^{-3}\,\mathrm{mm}\) for the approximate model, \(10\) times smaller than the one used for the full-order model.

As mentioned in Section 3.5, the model by Melro _et al._ can suffer from numerical stability issues even with fixed material properties, and it is reasonable to expect these issues to become worse when letting properties evolve with time. Indeed, with no additional stabilization the model using the network fails to converge at the point marked in Fig. 23. In contrast, the stabilization procedure of Section 3.5 allows for a complete path to be obtained. For this first result, we stabilize the network for 5 epochs with a learning rate of \(1\times 10^{-5}\) for the stabilization loss (Eq.
(22)) and \(1\times 10^{-9}\) for retraining on a single monotonic training path selected at random. We also consider a model with an unloading/reloading switch after the onset of macroscopic plasticity. Results are shown in Fig. 23. The surrogate approximates the full-order behavior fairly accurately and several orders of magnitude faster than the full-order model.

Figure 20: Slow cycling test errors for Melro-decoded surrogates (averages of \(50\) datasets for each size).

Figure 21: Response of Melro-decoded surrogates with different features for selected slow cycling test paths (\(1500\) monotonic training paths).

Figure 22: FE\({}^{2}\) example: geometry, mesh and boundary conditions. Full-order (left) and surrogate-based (right) FE\({}^{2}\) simulations are compared.

Figure 23: FE\({}^{2}\) example: load-displacement curves with and without online stabilization, compared to the ground-truth solution.

Figure 24: Performance of the surrogate model for stabilization strategies of varying intensities with and without retraining after stabilization.

We now take a closer look at the performance of the proposed online stabilization approach. We empirically find that retraining the network until every violating material point is fully stabilized is not strictly necessary in order to achieve convergence, and therefore opting for a small number of stabilization epochs proves to be an efficient approach. It is nevertheless interesting to investigate the impact of the number of stabilization epochs and of the subsequent retraining minibatch on the original dataset. We solve the monotonic example of Fig. 23a with different numbers of stabilization/retraining epochs ranging from \(2\) to \(100\) and compute the validation loss (on the \(500\)-path validation set used for model selection) at the end of every macroscopic time increment in order to keep track of how much the stabilized network deviates from its original pretrained state.

Results are shown together with the corresponding load-displacement curves in Fig. 24. All curves remain stable at first, as stabilization is only triggered when the first unstable points are detected. From that point on, models which do not undergo retraining after stabilization lose accuracy at a rate proportional to the number of stabilization epochs. Counterintuitively, however, this does not lead to improved global stability: the loss of accuracy of the surrogate leads to spurious global softening (_c.f._ Fig. 24b), which in turn leads to further need for stabilization. Models stabilized for \(50\) and \(100\) epochs continuously fail to converge and we opt for terminating the simulation after \(100\) cancelled time increments. On the other hand, models retrained with as little as a single strain path (out of the original \(1500\)) after each stabilization epoch are able to maintain the original model accuracy while offering enough stability gains to allow the simulation to converge until the final step, with little change in global behavior for different stabilization regimes.

More insight into the different stabilization strategies can be obtained by plotting the cumulative execution time of the simulation and the cumulative number of detected unstable strain states against time increments for different numbers of stabilization epochs. Results can be seen in Fig. 25. In general, simulations without retraining tend to run faster and result in improved stability, although any gains are quickly overshadowed by losses in accuracy (_c.f._ Fig. 24). Stabilizing for more epochs results in a reduction in the total number of unstable points detected, but beyond 5 epochs this does not result in an overall reduction in the computational cost of the simulation, given the increased effort spent on individual stabilization operations.
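Putting the pieces together, a condensed sketch of the detection-plus-stabilization logic explored in this study follows; it assumes a 3×3 Voigt tangent under plane conditions with engineering shear, a PyTorch encoder, and hypothetical differentiable callables `stability_loss` and `original_loss` standing in for Eqs. (22) and (14).

```python
import numpy as np
import torch

def min_det_Q(D, n_angles=180):
    """Angle sweep over localization directions for Q = n^T D n of Eq. (21)."""
    best = np.inf
    for a in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        n1, n2 = np.cos(a), np.sin(a)
        N = np.array([[n1, 0.0], [0.0, n2], [n2, n1]])  # Voigt projection
        best = min(best, np.linalg.det(N.T @ D @ N))
    return best  # negative value flags an unstable Gauss point

def stabilize_network(encoder, unstable_strains, stability_loss, original_loss,
                      train_path, n_epochs=5, lr_stab=1e-5, lr_retrain=1e-9):
    """Soft enforcement of det(Q) > 0, followed by a drift-limiting retrain."""
    opt = torch.optim.Adam(encoder.parameters(), lr=lr_stab)
    for _ in range(n_epochs):
        opt.zero_grad()
        loss = sum(stability_loss(encoder, eps) for eps in unstable_strains)
        loss.backward()
        opt.step()
    # One minibatch of the original loss (Eq. (14)) to stay close to the fit.
    opt = torch.optim.Adam(encoder.parameters(), lr=lr_retrain)
    opt.zero_grad()
    original_loss(encoder, train_path).backward()
    opt.step()
    return encoder
```

In an actual FE\({}^{2}\) loop, `min_det_Q` would be evaluated at every macroscopic Gauss point at the first iteration of each time step, mirroring lines 7-10 of Algorithm 1.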
As one final result, we run the monotonic simulation with the hybrid surrogate for different time step sizes. As previously mentioned, the hybrid approach allows for an explicit update of \(\boldsymbol{\theta}\) within an implicit simulation by obtaining the tangent stiffness matrix directly from the decoder. This, however, introduces a time step size dependency whose impact merits investigation. We plot in Fig. 26 predictions with step sizes spanning four orders of magnitude, including the same one used to obtain the full-order response. The combination of the explicit property update with the online stabilization procedure indeed introduces an upper bound on the time step size for this specific problem. It stands to reason that the sensitivity to time step size also depends on the choice of decoder and on which material properties are included in \(\boldsymbol{\theta}\). Further investigation into the matter in future works is therefore warranted.

Figure 25: Impact of stabilization regime on execution time and number of unstable points throughout the simulation.

## 5 Conclusions

In this paper, we propose a hybrid surrogate modeling architecture for multiscale modeling of heterogeneous materials. The model is composed of a data-driven encoder for material properties and a physics-based decoder that computes stresses. In the resulting architecture, the encoder increases the flexibility of existing material models by letting their properties evolve in time, while the decoder provides beneficial bias and interpretability to the model. The model is conceived with flexibility in mind, allowing existing implementations of physics-based material models to be used with no extra modifications. Furthermore, by letting the decoder directly receive strain inputs, the encoder architecture is highly flexible and allows for preservation of frame independence. A semi-explicit online prediction algorithm is also proposed that allows for imposing extra constraints on model behavior in a semi-supervised way.

We demonstrate the architecture by reproducing pressure-dependent elastoplastic behavior coming from homogenized fiber-reinforced composite micromodels. The simple model with a linear-elastic decoder learned faster than conventional data-driven surrogates, allowed for lossless feature space dimensionality reduction through the use of strain invariants, and was able to approximate path-dependent behavior through a simple history-aware feature extractor. Models with perfectly-plastic \(J_{2}\) decoders were shown to successfully learn nonlinear hardening and pressure dependency and to predict unloading-reloading while being trained exclusively on monotonic data, outperforming a state-of-the-art mesomodel for composites in accuracy for arbitrary loading directions. Employing as decoder the same plasticity model used at the microscale led to highly accurate monotonic response and fairly accurate extrapolation to unloading/reloading behavior. Finally, the model was used to solve a complex FE\({}^{2}\) model and the benefit of the online stabilization procedure was demonstrated. We find the approach to be a promising new way to build hybrid surrogates, one which merits further research on a number of fronts.
The current architecture is not by construction concerned with enforcing unconditional thermodynamic consistency or other physical constraints of interest. Although we do find empirically that well-trained surrogates with thermodynamically consistent decoders tend to perform well, some constitutive models might not be suitable for having their properties evolve in time. Fortunately, the framework can cope with extra constraints without necessarily giving up on its flexibility, by enforcing them locally through online retraining.

Although training exclusively on monotonic paths already allows for path dependency to be fairly well captured, some decoders might perform better in extrapolation if trained with a (small) number of extra non-monotonic and non-proportional strain paths -- for instance when encoder and decoder can each explain the same phenomenon on their own (_e.g._ pressure dependency in the model by Melro _et al._). We also foresee combining the present approach with the one in [46] into a unified family of flexible hybrid surrogates with a range of possible combinations of feature extractors for physics-rich time convolution, fixed-property models with learned strain distributions and evolving material models.

Figure 26: FE\({}^{2}\) example: effect of time step size on surrogate predictions.

## Acknowledgements

The authors gratefully acknowledge the TU Delft AI Initiative for their support through the SLIMM AI Lab. FM also acknowledges financial support from the Netherlands Organization for Scientific Research (NWO) under Vidi grant nr. 16464.
In this work, we present a hybrid physics-based and data-driven learning approach for building surrogate models for concurrent multiscale simulations of complex material behavior. Starting from a highly robust but inflexible physics-based constitutive model, we extend its expressiveness with an evolution operator learned from data. This is achieved by allowing a subset of the material parameters to vary in time, leading to a flexible hybrid model that combines a data-driven encoder with a physics-based decoder and thereby yields a surrogate with physically motivated bias. The internal variables of the decoder act as a memory mechanism from which path dependency arises naturally. To demonstrate the capabilities of the approach, we combine an FNN encoder with several plasticity decoders
2309.07476
Causal inference in network experiments: regression-based analysis and design-based properties
Investigating interference or spillover effects among units is a central task in many social science problems. Network experiments are powerful tools for this task, which avoids endogeneity by randomly assigning treatments to units over networks. However, it is non-trivial to analyze network experiments properly without imposing strong modeling assumptions. Previously, many researchers have proposed sophisticated point estimators and standard errors for causal effects under network experiments. We further show that regression-based point estimators and standard errors can have strong theoretical guarantees if the regression functions and robust standard errors are carefully specified to accommodate the interference patterns under network experiments. We first recall a well-known result that the Hajek estimator is numerically identical to the coefficient from the weighted-least-squares fit based on the inverse probability of the exposure mapping. Moreover, we demonstrate that the regression-based approach offers three notable advantages: its ease of implementation, the ability to derive standard errors through the same weighted-least-squares fit, and the capacity to integrate covariates into the analysis, thereby enhancing estimation efficiency. Furthermore, we analyze the asymptotic bias of the regression-based network-robust standard errors. Recognizing that the covariance estimator can be anti-conservative, we propose an adjusted covariance estimator to improve the empirical coverage rates. Although we focus on regression-based point estimators and standard errors, our theory holds under the design-based framework, which assumes that the randomness comes solely from the design of network experiments and allows for arbitrary misspecification of the regression models.
Mengsi Gao, Peng Ding
2023-09-14T07:29:49
http://arxiv.org/abs/2309.07476v2
# Causal inference in network experiments: regression-based analysis and design-based properties

###### Abstract

Investigating interference or spillover effects among units is a central task in many social science problems. Network experiments are powerful tools for this task, which avoids endogeneity by randomly assigning treatments to units over networks. However, it is non-trivial to analyze network experiments properly without imposing strong modeling assumptions. Previously, many researchers have proposed sophisticated point estimators and standard errors for causal effects under network experiments. We further show that regression-based point estimators and standard errors can have strong theoretical guarantees if the regression functions and robust standard errors are carefully specified to accommodate the interference patterns under network experiments. We first recall a well-known result that the Hajek estimator is numerically identical to the coefficient from the weighted-least-squares fit based on the inverse probability of the exposure mapping. Moreover, we demonstrate that the regression-based approach offers three notable advantages: its ease of implementation, the ability to derive standard errors through the same weighted-least-squares fit, and the capacity to integrate covariates into the analysis, thereby enhancing estimation efficiency. Furthermore, we analyze the asymptotic bias of the regression-based network-robust standard errors. Recognizing that the covariance estimator can be anti-conservative, we propose an adjusted covariance estimator to improve the empirical coverage rates. Although we focus on regression-based point estimators and standard errors, our theory holds under the design-based framework, which assumes that the randomness comes solely from the design of network experiments and allows for arbitrary misspecification of the regression models.

**Keywords:** Covariate adjustment, exposure mapping, interference, model misspecification, network-robust standard error, weighted least squares.

JEL classification codes: C13, C21

## 1 Introduction

Network experiments have gained growing interest across various fields, including economics, social science, public health, and tech companies (Jackson, 2008; Valente, 2010; Blake and Coey, 2014; Angelucci and Di Maro, 2016; Aral, 2016; Breza, 2016; Athey and Imbens, 2017; Athey et al., 2018; Aronow et al., 2021). They present an exceptional avenue to delve into the intricacies of interactions among units. Important examples of such experiments include Sacerdote (2001), Miguel and Kremer (2004), Bandiera and Rasul (2006), Bakshy et al. (2012), Banerjee et al. (2013), Bursztyn et al. (2014), Cai et al. (2015), Paluck et al. (2016), Beaman and Dillon (2018), Haushofer and Shapiro (2018), and Carter et al. (2021). These experiments transcend the conventional framework of individual-level randomization by exploring the effects of treatments or interventions not only on the treated individuals but also on their peers. This introduces the concept of "interference," which challenges the "stable unit treatment value assumption" (SUTVA) that rules out interference in classical causal inference (Rubin, 1980; Imbens and Rubin, 2015).

Over the last decade, the study of social interactions and peer effects through structural models has gained considerable attention (Manski, 1993; Graham, 2008; Bramoulle et al., 2009; Goldsmith-Pinkham and Imbens, 2013).
Distinguishing between the influence of peers' outcomes (endogenous peer effects) and the influence of peers' characteristics (contextual peer effects) can become challenging due to the simultaneous behavior of interacting agents. This challenge is known as the "reflection problem" (Manski, 1993). Angrist (2014) criticized various econometric strategies for estimating peer effects.

An expanding volume of literature focuses on interference without imposing strong structural assumptions (Halloran and Struchiner, 1995; Tchetgen Tchetgen and VanderWeele, 2012; Manski, 2013; Hu et al., 2022; Viviano, 2023b). A significant portion of this literature explores scenarios with interference of arbitrary but known forms, which in turn requires researchers to make specific assumptions about the extent of interference. Many papers assume correctly specified _exposure mappings_ for inference (Aronow and Samii, 2017; Baird et al., 2018; Vazquez-Bare, 2022; Owusu, 2023). These mappings impose assumptions on the interference structure in the experiment, where the treatment assignment vector affects potential outcomes through a low-dimensional function (Manski, 2013; Aronow and Samii, 2017). Some other papers assume "partial interference" (Sobel, 2006; Hudgens and Halloran, 2008; Ugander et al., 2013; Kang and Imbens, 2016; Liu et al., 2016; Basse and Feller, 2018; Qu et al., 2021; Alzubaidi and Higgins, 2023), where units are partitioned into separate clusters, and interference is restricted to occur exclusively among units within the same cluster. Conversely, more recent literature further relaxes the partial interference assumption and studies interference of unknown and arbitrary forms (Savje et al., 2021; Viviano, 2023a).

Leung (2022) proposed to estimate exposure effects under a general model called "approximate neighborhood interference" (ANI) while allowing for misspecification of exposure mappings. ANI refers to the situation where treatments assigned to individuals further from the focal unit have a smaller, but potentially nonzero, effect on the focal unit's response. He considered the Horvitz-Thompson estimator and studied its consistency and asymptotic normality. For inference, he proposed a network Heteroskedasticity and Autocorrelation Consistent (HAC) covariance estimator, and studied its asymptotic bias for estimating the true covariance. However, he did not derive the point and covariance estimators directly from regression-based analysis, which is our focus below.

Our paper builds upon the framework of Leung (2022), which accommodates a single large network. We enrich the discussion of the regression estimators from the design-based perspective, with a special emphasis on network experiments. Design-based inference makes no assumptions about outcome models and relies solely on the randomization mechanism. We focus on the Hajek estimator, which is numerically identical to the coefficient from the weighted-least-squares (WLS) fit that weights unit data by the inverse probability of the exposure mapping (Aronow and Samii, 2017). The regression-based approach offers three notable advantages. First, it is easy to implement without too much additional programming. Second, it can provide standard errors through the same WLS fit. Third, it allows for incorporating covariates into the analysis, which can further increase the estimation precision if the covariates are predictive of the outcome.
Moreover, we examine the asymptotic performance of the regression-based network HAC estimator and prove results that justify the regression-based inference for network experiments from the design-based perspective. This constitutes our first contribution.

Unlike their spatial or time-series counterparts, network HAC estimators lack a theoretical guarantee of positive semi-definiteness (Kojevnikov, 2021). Also, HAC estimators are known to have poor finite sample properties (Matyas, 1999, Section 3.5). We emphasize that the asymptotic bias of the HAC estimator can be negative under interference, resulting in undercoverage of the associated confidence interval. To address these concerns, we propose a modified HAC estimator that ensures positive semi-definiteness and asymptotic conservativeness, which also performs well in finite sample simulation. This constitutes our second contribution.

Furthermore, we delve into the subject of covariate adjustment. Proper covariate adjustment can enhance the accuracy of estimators in randomized experiments by accounting for the imbalance in pretreatment covariates. Recall the results in the classical completely randomized treatment-control experiment: the regression framework offers a versatile approach to incorporating covariate information with the potential of enhancing asymptotic efficiency by including the full interaction of the treatment and covariates (Fisher, 1935; Lin, 2013; Negi and Wooldridge, 2021). An expanding body of literature explores the design-based justification of regression-based covariate adjustment with different types of experimental data (Fogarty, 2018; Su and Ding, 2021; Zhao and Ding, 2022; Wang et al., 2023). Our paper studies the theoretical properties of covariate adjustment in network experiments and demonstrates the potential efficiency gain in simulation and empirical application. This constitutes our third contribution.

**Organization of the paper.** Section 2 sets up the framework for the design-based inference in network experiments, reviews the Horvitz-Thompson and Hajek estimators, and introduces the main assumptions from Leung (2022). Section 3 reviews the Hajek estimator recovered from the WLS fit (Aronow and Samii, 2017), proposes the regression-based HAC covariance estimator, and analyzes its asymptotic bias. Because the covariance estimator can be anti-conservative, we propose a modified covariance estimator. Section 4 considers additive and fully-interacted covariate adjustment to the WLS fit, describes associated asymptotic properties, proposes modified covariance estimators, and studies their asymptotic properties. Section 5 studies the finite sample performance of our point and covariance estimators based on simulation and illustrates the practical relevance of our results by re-analyzing the network experiments in Paluck et al. (2016) and Cai et al. (2015). Section 6 provides the concluding remarks. The appendix includes all the proofs and intermediate results.

**Notation.** Let \(\mathbb{N}\) denote the set of all non-negative integers. Let \(I_{m}\) be an \(m\times m\) identity matrix and \(\iota_{m}\) be an \(m\times 1\) vector of ones. We suppress the dimension \(m\) when it is clear from the context. Unless stated otherwise, all vectors are column vectors. Let \(\otimes\) denote the Kronecker product of matrices. Let \(1(\cdot)\) be the indicator function. Let \(\|\cdot\|\) denote the Euclidean norm, i.e., \(\|w\|=\sqrt{w^{\top}w}\) for \(w\in\mathbb{R}^{v}\).
Let \(Y_{i}\sim x_{i}\) denote the least-squares regression of \(Y_{i}\) on \(x_{i}\) and focus on the associated HAC covariance estimator. The terms "regression" and "HAC covariance" refer to the numerical outputs of the WLS fit without any modeling assumptions; we evaluate their properties under the design-based framework. We use "IID" and "CLT" to denote "independent and identically distributed" and "central limit theorem," respectively. ## 2 Framework, estimators and assumptions ### Setup of network experiments We consider a finite population model, which conditions on the potential outcomes and views the treatment assignment as the only source of randomness, known as the design-based framework (Imbens and Rubin, 2015; Aronow and Samii, 2017; Abadie et al., 2020; Leung, 2022). Let \(\mathcal{N}_{n}=\{1,\ldots,n\}\) denote the set of units. The network structure is undirected, unweighted, has no self-links, and can be described using an adjacency matrix \(A=(A_{ij})_{i,j=1}^{n}\) with the \((i,j)\)th entry \(A_{ij}\in\{0,1\}\) indicating the connection between units \(i\) and \(j\). Let \(\mathcal{A}_{n}\) denote the set of all possible networks with \(n\) units. The assignment of treatments is represented by a binary vector \(D=(D_{i})_{i=1}^{n}\), where each \(D_{i}\) is a binary variable indicating whether unit \(i\) has been assigned to the treatment. We assume \(D_{i}\)'s are independent across units but not necessarily identically distributed. We define the potential outcome for each unit \(i\) as \(Y_{i}(d)\), which represents the outcome of unit \(i\) under the hypothetical scenario in which the units on the entire network are assigned the treatment vector \(d=(d_{i})_{i=1}^{n}\in\{0,1\}^{n}\). From the notation, \(Y_{i}(d)\) depends not only on \(d_{i}\), the treatment assignment of unit \(i\), but also on the treatment assignments of all other units. This results in "interference" or "spillover" between units, which is not accounted for in the standard potential outcomes model under SUTVA (Rubin, 1980; Imbens and Rubin, 2015). With binary treatments, we have \(2^{n}\) potential outcomes for each unit. We utilize the exposure mapping as defined by Aronow and Samii (2017) for dimensionality reduction and the definition of the parameter of interest, without requiring it to be correctly specified. Let \(\mathcal{T}\subseteq\mathbb{R}^{d_{T}}\) be a discrete set with \(|\mathcal{T}|\) being finite and fixed. For any \(n\), an exposure mapping is a function \(T:\mathcal{N}_{n}\times\{0,1\}^{n}\times\mathcal{A}_{n}\to\mathcal{T}\), which maps the units, the treatment assignment vector and the network structure to exposures received by a unit. Our theory requires exposure mappings with finite and fixed dimension. For continuous exposure mappings, it is conceptually straightforward by extending the propensity score to treatment density (Imbens, 2000). However, this is beyond the scope of this paper, and we leave it for future research. Define the unit \(i\)'s expected response under exposure mapping value \(t\) as \[\mu_{i}(t)=\sum_{d\in\{0,1\}^{n}}Y_{i}(d)\mathbb{P}\left(D=d\mid T_{i}=t\right), \tag{1}\] which equals the expected potential outcome of unit \(i\) over all possible treatment assignment vectors given the exposure mapping value at \(t\)(Hudgens and Halloran, 2008; Leung, 2022). 
Let \(\mu(t)=n^{-1}\sum_{i=1}^{n}\mu_{i}(t)\) be the finite-population average and \(\mu=(\mu(t):t\in\mathcal{T})\) be the \(|\mathcal{T}|\times 1\) vector containing all the \(\mu(t)\)'s corresponding to exposure mapping values \(t\in\mathcal{T}\). Define causal effects as linear combinations of the expected responses. We focus on inferring the general estimand \(\tau=G\mu\), where \(G\) is an arbitrary contrast matrix, and the key lies in estimating \(\mu\). To illustrate, consider a \(1\times|\mathcal{T}|\) vector \(G=(0,\ldots,1,\ldots,-1,\ldots,0)\), representing the contrast between two exposure mapping values \(t\) and \(t^{\prime}\). This vector has a value of \(1\) for the element corresponding to \(t\), a value of \(-1\) for the element corresponding to \(t^{\prime}\), and \(0\) for all other elements. Consequently, this contrast results in the exposure effect \(\tau(t,t^{\prime})=\mu(t)-\mu(t^{\prime})\). We will focus on regression-based point and covariance estimators because of their simplicity in implementation and their capacity to incorporate covariate adjustments. We will not assume the model is correctly specified, and evaluate the properties of the point and covariance estimators under the design-based framework. We focus on estimators of the form \(\hat{\tau}=G\hat{Y}\), where \(\hat{Y}\) is some regression estimator of \(\mu\). To end this subsection, we present three concrete examples of exposure mappings and discussion regarding the misspecification of exposure mappings. **Example 2.1**.: Setting \(T_{i}=T(i,D,A)=D_{i}\) is a special case of exposure mapping. With \(G=(-1,1)\), we can examine the effect of treatment itself. **Example 2.2**.: For researchers interested in the spillover effect of having at least one friend assigned to the treatment versus none such friends, they can employ the following one-dimensional exposure mapping: \(T_{i}=T(i,D,A)=1(\sum_{j=1}^{n}A_{ij}D_{j}>0)\in\{0,1\}\). With \(G=(-1,1)\), we can examine the spillover effect \(\tau(1,0)\). **Example 2.3**.: For researchers interested in both the direct effect of a treatment and the spillover effect of having at least one friend assigned to the treatment, they can employ the following two-dimensional exposure mapping: \[T_{i}=T(i,D,A)=\left(D_{i},1\left(\sum_{j=1}^{n}A_{ij}D_{j}>0\right)\right) \in\{(0,0),(0,1),(1,0),(1,1)\},\] where the first component captures the direct effect, and the second component captures the spillover effect. In this case, we have a \(2\times 2\) factorial exposure mapping. Define the contrast matrix as follows: \[G=2^{-1}\begin{pmatrix}-1&-1&1&1\\ -1&1&-1&1\\ 1&-1&1&-1\end{pmatrix}.\] In this matrix, the first row captures the direct effect of treatment, the second row captures the spillover effect of having at least one friend assigned to the treatment, and the third row captures the interaction effect of two factors. **Remark 2.1**.: While the theory can accommodate misspecified exposure mappings under certain assumptions, it still comes with a cost. Assuming that the exposure mapping is correctly specified, we can reparameterize potential outcomes as \(Y_{i}(d)=\tilde{Y}_{i}(t)\) for \(t\in\mathcal{T}\), and the average expected response at the exposure mapping value \(t\) can be defined as \(\mu(t)=n^{-1}\sum_{i=1}^{n}\tilde{Y}_{i}(t)\). Consequently, the estimand becomes independent of the treatment assignment. In contrast, when the exposure mapping is misspecified, the estimand depends on the treatment assignment, as defined in (1). 
This could potentially pose issues for external validity if the treatment assignment changes in future network experiments. Furthermore, with misspecification of the exposure mapping, such as choosing \(T_{i}=T_{2i}\) when the true exposure mapping is \(T_{i}=(T_{1i},T_{2i})\), we are susceptible to classic omitted-variable bias unless \(T_{1i}\) and \(T_{2i}\) are orthogonal. Under independent treatment assignment, the two factors of the exposure mapping in Example 2.3 are orthogonal. Therefore, the exposure mappings in Examples 2.1 and 2.2 respectively capture the direct and spillover effects in Example 2.3. See Savje (2023) for a more general discussion of inference with misspecified exposure mappings.

### Horvitz-Thompson and Hajek estimators

Inverse probability weighting is a general estimation strategy in survey sampling and causal inference. In the context of observational studies with interference, Tchetgen Tchetgen and VanderWeele (2012) and Liu et al. (2016) studied inverse probability-weighted estimators of causal effects under different assumptions on the interference pattern. In this subsection, we review the Horvitz-Thompson and Hajek estimators for estimating population parameters based on the observed data in network experiments.

The Horvitz-Thompson estimator is a weighted estimator that assigns each unit a weight equal to the inverse of its selection probability. Recall \(T_{i}=T(i,D,A)\), and define the generalized propensity score (Imbens, 2000) as \(\pi_{i}(t)=\mathbb{P}(T_{i}=t)\). The value of the propensity score is known by design and can be determined through exact calculation or approximation using Monte Carlo (Aronow and Samii, 2017). The Horvitz-Thompson estimator for \(\mu(t)\) equals \[\hat{Y}_{\text{ht}}(t)=\frac{1}{n}\sum_{i=1}^{n}\frac{1(T_{i}=t)}{\pi_{i}(t)}Y_{i}.\] The Horvitz-Thompson estimator is unbiased if the propensity scores \(\pi_{i}(t)\) are non-zero, and is consistent under additional regularity conditions. Leung (2022) focused on \(\tau(t,t^{\prime})\) and examined the asymptotic properties of the Horvitz-Thompson estimator \(\hat{\tau}_{\text{ht}}(t,t^{\prime})=\hat{Y}_{\text{ht}}(t)-\hat{Y}_{\text{ht}}(t^{\prime})\).

The Hajek estimator refines the Horvitz-Thompson estimator by normalizing it by the sum of the individual weights involved in its definition: \[\hat{Y}_{\text{haj}}(t)=\frac{\frac{1}{n}\sum_{i=1}^{n}\frac{1(T_{i}=t)Y_{i}}{\pi_{i}(t)}}{\frac{1}{n}\sum_{i=1}^{n}\frac{1(T_{i}=t)}{\pi_{i}(t)}}=\frac{\hat{Y}_{\text{ht}}(t)}{\hat{1}_{\text{ht}}(t)},\] where \(\hat{1}_{\text{ht}}(t)=n^{-1}\sum_{i=1}^{n}1(T_{i}=t)\pi_{i}(t)^{-1}\) is the Horvitz-Thompson estimator for the constant potential outcome 1. The Hajek estimator is biased in general since \(\hat{1}_{\text{ht}}(t)\) is random, but it is consistent since \(\hat{1}_{\text{ht}}(t)\) is consistent for 1 under regularity conditions.

The existing literature provides two motivations for using the Hajek estimator. First, it ensures invariance under location shifts of the outcome (Fuller, 2011). Second, empirical evidence suggests that the Hajek estimator is more stable and efficient, with little cost in bias, in most reasonable scenarios (Sarndal et al., 2003, pages 181-184). Leung (2022) mentioned the Hajek estimator in a footnote of his paper without detailed theory. Moreover, the Hajek estimator is more natural from the regression perspective.
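To make these definitions concrete, below is a small simulated sketch, assuming the one-dimensional exposure mapping of Example 2.2, Bernoulli treatment assignment, and Monte Carlo approximation of the generalized propensity scores; the network, outcomes, and all constants are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 0.3                                   # units and P(D_i = 1)
A = np.triu((rng.random((n, n)) < 0.01).astype(int), 1)
A = A + A.T                                       # undirected, no self-links

def exposure(D):
    """Example 2.2: T_i = 1(at least one neighbor of i is treated)."""
    return (A @ D > 0).astype(int)

# Monte Carlo approximation of pi_i(1) = P(T_i = 1); Assumption 2 (overlap)
# is taken for granted, so we clip away from 0 and 1 for numerical safety.
pi1 = np.clip(np.mean([exposure(rng.binomial(1, p, n)) for _ in range(2000)],
                      axis=0), 1e-3, 1 - 1e-3)

D = rng.binomial(1, p, n)
T = exposure(D)
Y = 1.0 + 0.5 * T + rng.normal(size=n)            # placeholder outcomes

def horvitz_thompson(t):
    pi = pi1 if t == 1 else 1.0 - pi1
    return np.mean((T == t) / pi * Y)

def hajek(t):
    pi = pi1 if t == 1 else 1.0 - pi1
    w = (T == t) / pi
    return np.sum(w * Y) / np.sum(w)

tau_ht = horvitz_thompson(1) - horvitz_thompson(0)   # exposure effect tau(1, 0)
tau_haj = hajek(1) - hajek(0)
```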
Numerically, the Hajek estimator is identical to the coefficient from the weighted-least-squares fit based on the inverse probability of the exposure mapping (Aronow and Samii, 2017). The regression-based approach offers three notable advantages. First, WLS is easy to implement without too much additional programming. Second, WLS provides network-robust standard errors automatically. Third, it can be easily extended to handle covariates to improve efficiency based on WLS. The main focus of our paper is to explore the design-based properties of the Hajek estimators obtained through the regression-based method and the associated HAC covariance estimator.

### Main assumptions

We consider the framework of approximate neighborhood interference (ANI), as described in Leung (2022). ANI refers to a situation where treatments assigned to individuals who are farther away from the focal unit have a diminishing effect on the focal unit's response, although the effect is not necessarily zero. Leung (2022) verified that ANI is applicable to well-known models of social interactions, such as the network version of the linear-in-means model (Manski, 1993) and the "complex contagion" model (Centola, 2010; Montanari and Saberi, 2010). In this subsection, we provide an overview of the key assumptions outlined in Leung (2022), which serve as the foundation for our analysis. These conditions ensure the theoretical properties of the regression-based point and covariance estimators. Readers more interested in practical applications may skip this subsection on a first reading and focus on the procedures and properties presented in Sections 3 and 4.

To facilitate the presentation, we begin by introducing some essential definitions and notations from Leung (2022). Let \(\ell_{A}(i,j)\) denote the path distance between units \(i\) and \(j\) within network \(A\), representing the length of the shortest path connecting them. The path distance refers to the smallest number of edges that must be crossed to travel from unit \(i\) to unit \(j\) within the network. Furthermore, \(\ell_{A}(i,j)\) is defined as \(\infty\) if \(i\neq j\) and no path exists between units \(i\) and \(j\), and defined as \(0\) if \(i=j\).

**Remark 2.2**.: We can extend the definition of path distance to a weighted and directed network following Auerbach and Tabord-Meehan (2023) by defining it as the smallest sum of values in matrix \(A\) along any sequence from unit \(i\) to unit \(j\). With such a network, modifications to the assumptions of the theory are necessary, given that the distance is now measured on a different scale.

For a specific unit \(i\), its \(K\)-neighborhood, denoted by \(\mathcal{N}(i,K;A)=\{j\in\mathcal{N}_{n}:\ell_{A}(i,j)\leq K\}\), includes the set of units within network \(A\) that are at most at a path distance of \(K\) from unit \(i\). Define \(d_{\mathcal{N}(i,K;A)}=(d_{j}:j\in\mathcal{N}(i,K;A))\) and \(A_{\mathcal{N}(i,K;A)}=(A_{kl}:k,l\in\mathcal{N}(i,K;A))\) as the subvector of \(d\) and subnetwork of \(A\) on \(\mathcal{N}(i,K;A)\), respectively.
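For intuition, a short sketch of these graph quantities follows: breadth-first search on the adjacency matrix gives the path distances, with unreachable pairs at infinity.

```python
import numpy as np
from collections import deque

def path_distances(A, i):
    """BFS distances ell_A(i, j) from unit i; np.inf for unreachable units."""
    n = A.shape[0]
    dist = np.full(n, np.inf)
    dist[i] = 0
    queue = deque([i])
    while queue:
        u = queue.popleft()
        for v in np.flatnonzero(A[u]):
            if dist[v] == np.inf:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def neighborhood(A, i, K):
    """N(i, K; A): units within path distance K of unit i (includes i)."""
    return np.flatnonzero(path_distances(A, i) <= K)
```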
**Assumption 1** (Exposure Mapping).: There exists a \(K\in\mathbb{N}\) not dependent on the sample size \(n\) such that for any \(n\in\mathbb{N}\) and \(i\in\mathcal{N}_{n}\), if \(\mathcal{N}(i,K;A)=\mathcal{N}(i,K;A^{\prime})\), \(A_{\mathcal{N}(i,K;A)}=A^{\prime}_{\mathcal{N}(i,K;A^{\prime})}\), and \(d_{\mathcal{N}(i,K;A)}=d^{\prime}_{\mathcal{N}(i,K;A^{\prime})}\), then \(T(i,d,A)=T(i,d^{\prime},A^{\prime})\) for all \(d,d^{\prime}\in\{0,1\}^{n}\) and \(A,A^{\prime}\in\mathcal{A}_{n}\).

**Assumption 2** (Overlap).: \(\pi_{i}(t)\in[\underline{\pi},\bar{\pi}]\subset(0,1)\) for all \(n\in\mathbb{N}\), \(i\in\mathcal{N}_{n}\), \(t\in\mathcal{T}\), where \(\underline{\pi}\) and \(\bar{\pi}\) are absolute constants.

**Assumption 3** (Bounded Potential Outcomes).: \(|Y_{i}(d)|<c_{Y}<\infty\) for all \(n\in\mathbb{N}\), \(i\in\mathcal{N}_{n}\), \(d\in\{0,1\}^{n}\), where \(c_{Y}\) is an absolute constant.

Assumption 1 implies that the exposure mapping indicators are weakly dependent. Specifically, \(1(T_{i}=t)\perp\!\!\!\perp 1(T_{j}=t)\) if \(\ell_{A}(i,j)>2K\), where \(K\) is the constant in Assumption 1. For instance, \(K=0\) for the exposure mapping in Example 2.1 and \(K=1\) for the exposure mappings in Examples 2.2 and 2.3. Assumption 2 requires the generalized propensity scores to be uniformly bounded away from \(0\) and \(1\), which can be ensured by experimental design. Assumption 3 imposes uniform boundedness on the potential outcomes.

Let \(D^{\prime}\) be an IID copy of \(D\). Define \(D^{(i,s)}=(D_{\mathcal{N}(i,s;A)},D^{\prime}_{\mathcal{N}_{n}\setminus\mathcal{N}(i,s;A)})\) as the concatenation of the subvector of \(D\) on \(\mathcal{N}(i,s;A)\) and the subvector of \(D^{\prime}\) on \(\mathcal{N}_{n}\backslash\mathcal{N}(i,s;A)\). Define

\[\theta_{n,s}=\max_{i\in\mathcal{N}_{n}}\mathbb{E}\left[\big{|}Y_{i}(D)-Y_{i}(D^{(i,s)})\big{|}\right], \tag{2}\]

where the expectation is over the randomness of \(D\) and \(D^{\prime}\) with all potential outcomes fixed. The quantity \(\theta_{n,s}\) measures the interference caused by individuals at distance greater than \(s\) from the focal unit: it is the largest expected change in any unit's potential outcome when the treatment assignments of those distant individuals are redrawn. Mathematically, ANI assumes that as the distance \(s\) approaches infinity, the supremum of \(\theta_{n,s}\) over all sample sizes \(n\) converges to zero, as formalized in Assumption 4 below.

**Assumption 4** (ANI).: The \(\theta_{n,s}\) defined in (2) satisfies \(\sup_{n}\theta_{n,s}\to 0\) as \(s\to\infty\).

In simpler terms, Assumption 4 stipulates that interference from distant individuals should vanish as the distance becomes large. We skip Assumption 5 in Leung (2022), which is needed only for consistency of the Horvitz-Thompson estimator, and proceed to Assumption 5 below for asymptotic normality. Define

\[M_{n}(m,k)=n^{-1}\sum_{i=1}^{n}|\mathcal{N}(i,m;A)|^{k} \tag{3}\]

as the \(k\)-th moment of the \(m\)-neighborhood size within network \(A\). Define

\[\mathcal{H}_{n}(s,m)=\left\{(i,j,k,l)\in\mathcal{N}_{n}^{4}:k\in\mathcal{N}(i,m;A),l\in\mathcal{N}(j,m;A),\ell_{A}(\{i,k\},\{j,l\})=s\right\} \tag{4}\]

as the set of pairs \((i,k)\) and \((j,l)\) such that the units within each pair are at most path distance \(m\) apart from each other, and the two pairs are exactly path distance \(s\) apart.
Similarly, define

\[\mathcal{J}_{n}(s,m)=\left\{(i,j,k,l)\in\mathcal{N}_{n}^{4}:k\in\mathcal{N}(i,m;A),l\in\mathcal{N}(j,m;A),\ell_{A}(i,j)=s\right\} \tag{5}\]

as the set of pairs \((i,k)\) and \((j,l)\) such that the units within each pair are at most path distance \(m\) apart from each other, and \(i\) and \(j\) are exactly path distance \(s\) apart. In Assumption 5, we replace \(\sigma_{n}^{2}\) from Leung (2022, Assumption 6) with the matrix \(\Sigma_{\text{haj}}\) below:

\[\Sigma_{\text{haj}}=\text{Var}\left(n^{-1/2}\sum_{i=1}^{n}\frac{1(T_{i}=t)}{\pi_{i}(t)}(Y_{i}-\mu(t)):t\in\mathcal{T}\right). \tag{6}\]

Theorem 3.1 below will show that \(\Sigma_{\text{haj}}\) defined in (6) is the asymptotic covariance of the Hajek estimator of \(\mu\). Based on the definition of \(\theta_{n,s}\) in (2) and Leung (2022, Theorem 1), we define

\[\tilde{\theta}_{n,s}=\theta_{n,\lfloor s/2\rfloor}1(s>2\max\{K,1\})+1(s\leq 2\max\{K,1\}) \tag{7}\]

where \(K\) is the constant from Assumption 1 and \(\lfloor\cdot\rfloor\) denotes rounding down to the nearest integer.

**Assumption 5** (Weak Dependence for CLT).: Recall \(M_{n}(m,k)\), \(\mathcal{H}_{n}(s,m)\) and \(\Sigma_{\text{haj}}\) defined in (3), (4) and (6), respectively. There exist \(\epsilon>0\) and a positive sequence \(\{m_{n}\}_{n\in\mathbb{N}}\) such that as \(n\to\infty\) we have \(m_{n}\to\infty\) and

\[\Sigma_{\text{haj}}^{-2}n^{-2}\sum_{s=0}^{n}|\mathcal{H}_{n}(s,m_{n})|\,\tilde{\theta}_{n,s}^{1-\epsilon}\to 0,\ \Sigma_{\text{haj}}^{-3/2}n^{-1/2}M_{n}(m_{n},2)\to 0,\ \Sigma_{\text{haj}}^{-1/2}n^{3/2}\tilde{\theta}_{n,m_{n}}^{1-\epsilon}\to 0,\]

where the convergence of the matrices is element-wise. Assumption 5 corresponds to Assumption 3.4 of Kojevnikov et al. (2019), which limits the extent of dependence across units of the \(1(T_{i}=t)\pi_{i}(t)^{-1}(Y_{i}-\mu(t))\)'s through restrictions on the network. We impose it to ensure the asymptotic normality of the Hajek estimator of \(\mu\). We defer the assumption for consistency of covariance estimation to Section 3.2.

## 3 Hajek estimator in network experiments

### WLS-based point estimation

Let \(z_{i}=(1(T_{i}=t):t\in\mathcal{T})\) be the vector of exposure mapping indicators. Motivated by the use of inverse probability weighting in constructing the Hajek estimator, we consider the WLS fit

\[Y_{i}\sim z_{i}\text{ with weights }w_{i}=1/\pi_{i}(T_{i}). \tag{8}\]

Let \(\hat{\beta}_{\text{haj}}\) denote the resulting coefficient of \(z_{i}\). Define the concatenated Hajek estimator vector as \(\hat{Y}_{\text{haj}}=(\hat{Y}_{\text{haj}}(t):t\in\mathcal{T})\).

**Proposition 3.1**.: \(\hat{\beta}_{\text{haj}}=\hat{Y}_{\text{haj}}\).

Proposition 3.1 is a well-known numerical result and shows the utility of WLS in reproducing the Hajek estimators (Aronow and Samii, 2017). Theorem 3.1 below states the asymptotic normality of \(\hat{\beta}_{\text{haj}}\).

**Theorem 3.1**.: Under Assumptions 1-5, we have

\[\Sigma_{\text{haj}}^{-1/2}\sqrt{n}\left(\hat{\beta}_{\text{haj}}-\mu\right)\xrightarrow{\mathrm{d}}\mathcal{N}(0,I).\]

Theorem 3.1 ensures the consistency of \(\hat{\beta}_{\text{haj}}\) for estimating \(\mu\), and establishes \(\Sigma_{\text{haj}}\) as the asymptotic sampling covariance of \(\sqrt{n}(\hat{\beta}_{\text{haj}}-\mu)\).
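Proposition 3.1 can be verified numerically; a minimal sketch under illustrative data follows.

```python
# A numerical check of Proposition 3.1: the WLS coefficients on the exposure
# dummies equal the Hajek estimators. Data and names are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n, levels = 400, [0, 1]
pi_1 = rng.uniform(0.3, 0.7, n)                 # pi_i(1); pi_i(0) = 1 - pi_1
T = (rng.uniform(size=n) < pi_1).astype(int)
Y = 1.0 + 0.5 * T + rng.normal(size=n)
w = 1.0 / np.where(T == 1, pi_1, 1.0 - pi_1)    # weights w_i = 1 / pi_i(T_i)

Z = np.column_stack([(T == t).astype(float) for t in levels])
beta = np.linalg.solve(Z.T @ (w[:, None] * Z), Z.T @ (w * Y))   # WLS fit (8)

hajek = np.array([
    np.sum((T == t) * Y / np.where(t == 1, pi_1, 1.0 - pi_1))
    / np.sum((T == t) / np.where(t == 1, pi_1, 1.0 - pi_1))
    for t in levels])
assert np.allclose(beta, hajek)                 # Proposition 3.1 holds numerically
```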
### WLS-based covariance estimation

The regression-based approach provides an estimator for the standard error via the same WLS fit. Denote the design matrix of the WLS fit in (8) by the \(n\times|\mathcal{T}|\) matrix \(Z=(z_{1},\ldots,z_{n})^{\top}\), whose rows are the vectors \(z_{i}\) for the units \(i\in\mathcal{N}_{n}\). Construct the weight matrix \(W=\text{diag}\{w_{i}:i=1,\ldots,n\}\) by placing the weights \(w_{i}\) along the diagonal. Let \(Y=(Y_{1},\ldots,Y_{n})\) denote the vector of observed outcomes. Diagonalize the residuals \(e_{i}\) from the same WLS fit to form the matrix \(e_{\text{haj}}=\text{diag}\{e_{i}:i=1,\ldots,n\}\). Define

\[\hat{V}_{\text{haj}}=(Z^{\top}WZ)^{-1}(Z^{\top}We_{\text{haj}}K_{n}e_{\text{haj}}WZ)(Z^{\top}WZ)^{-1}\]

as the network-robust covariance estimator of \(\hat{\beta}_{\text{haj}}\), where \(K_{n}\) is a truncated kernel matrix with \((i,j)\)th entry \(K_{n,ij}=1(\ell_{A}(i,j)\leq b_{n})\). Here, choosing \(b_{n}>0\) places nonzero weight on pairs at most path distance \(b_{n}\) apart from each other in the network \(A\), which accounts for the network correlation.

While the network-robust covariance estimator adopts the form of an HAC estimator commonly used in the time-series and spatial-econometrics literatures, our paper is the first to discuss its design-based properties under the regression-based analysis of network experiments. Kojevnikov et al. (2021) explored a broader set of kernel functions, including the truncated kernel \(K_{n}\). As demonstrated in Leung (2022, Remark 1), the truncated kernel provides better size control, especially in smaller samples, compared to alternative kernels that diminish with distance. In Section 5.1, we report the HAC covariance estimators using the kernels in Leung (2019) and Kojevnikov (2021) and illustrate their poor finite-sample performance. For these reasons, we opt for the truncated kernel.

We follow the discussion in Leung (2022) regarding the choice of the bandwidth \(b_{n}\). Define the average path length, \(\mathcal{L}(A)\), as the average value of \(\ell_{A}(i,j)\) over all pairs in the largest component of \(A\). Here, a component of a network is a maximal connected subnetwork; all units within it are disconnected from those outside of it. Let \(\delta(A)=n^{-1}\sum_{i=1}^{n}\sum_{j=1}^{n}A_{ij}\) be the average degree. Leung (2022) suggests choosing the bandwidth \(b_{n}\) as follows:

\[b_{n}=\big\lfloor\max\big\{\tilde{b}_{n},2K\big\}\big\rceil\quad\text{ where }\tilde{b}_{n}=\begin{cases}\frac{1}{2}\mathcal{L}(A)&\text{ if }\mathcal{L}(A)<2\frac{\log n}{\log\delta(A)},\\ \mathcal{L}(A)^{1/3}&\text{ otherwise,}\end{cases} \tag{9}\]

where \(\lfloor\cdot\rceil\) means rounding to the nearest integer. Echoing Leung (2022), we also suggest that researchers report results for several bandwidths in a neighborhood of (9). The choice of bandwidth \(b_{n}\) is based on the following two reasons. First, \(b_{n}\) is set to be at least equal to \(2K\) to account for the correlation in \(\{1(T_{i}=t)\}_{i=1}^{n}\) as per Assumption 1. With that said, if the exposure mapping is correctly specified, we can simply choose \(b_{n}=2K\). Second, (9) chooses a bandwidth of logarithmic or polynomial order depending on the growth rate of the average \(K\)-neighborhood size: the logarithmic order applies when the neighborhood size grows approximately exponentially in \(K\), and the polynomial order applies when it grows approximately polynomially in \(K\).
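A compact sketch of the truncated-kernel HAC estimator and the bandwidth rule (9) is given below; `dist` denotes an \(n\times n\) matrix of path distances (for example from repeated BFS as above), the helper names are ours, and for simplicity the average path length is taken over all connected pairs rather than the largest component only.

```python
# A sketch of the truncated-kernel HAC estimator V_hat_haj and bandwidth
# rule (9); simplified and illustrative, not the authors' implementation.
import numpy as np

def bandwidth(dist, A, K):
    """Rule (9): b_n from the average path length and average degree."""
    finite = dist[np.isfinite(dist) & (dist > 0)]
    L_A = finite.mean()                          # average path length (simplified)
    n, delta = A.shape[0], A.sum() / A.shape[0]  # average degree delta(A)
    b_tilde = 0.5 * L_A if L_A < 2 * np.log(n) / np.log(delta) else L_A ** (1 / 3)
    return int(round(max(b_tilde, 2 * K)))

def hac_cov(Z, w, e, dist, b_n):
    """(Z'WZ)^{-1} (Z'W e K_n e W Z) (Z'WZ)^{-1} with K_n,ij = 1(dist_ij <= b_n)."""
    K_n = (dist <= b_n).astype(float)
    WZ = w[:, None] * Z
    bread = np.linalg.inv(Z.T @ WZ)
    E = e[:, None] * WZ                          # rows e_i * w_i * z_i'
    return bread @ (E.T @ K_n @ E) @ bread

# Usage: e = Y - Z @ beta; V_haj = hac_cov(Z, w, e, dist, bandwidth(dist, A, K=1))
```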
Furthermore, Leung (2022) justifies that the bandwidth in (9) satisfies Assumption 6(b)-(d) under polynomial and exponential neighborhood growth rates.

**Remark 3.1**.: Clustered standard errors are also frequently used to account for network dependence (Eckles et al., 2016; Aral and Zhao, 2019; Zacchia, 2020; Abadie et al., 2023). However, determining the optimal way to partition a network into clusters for inference can be challenging. Leung (2023) establishes the conditions that validate cluster-robust methods under network dependence, with a focus on scenarios involving a small number of clusters.

**Remark 3.2**.: Kojevnikov et al. (2021) provide a law of large numbers and a central limit theorem for network-dependent variables. Additionally, they introduce a technique for computing standard errors that remains robust when confronted with various types of network dependence. Their approach utilizes a network-based covariance estimator, and they demonstrate its consistency for the true sampling covariance. Leung (2022) proposes a covariance estimator for the Horvitz-Thompson estimator of exposure effects. Kojevnikov (2021) develops bootstrap-based alternatives to network HAC estimation. While these covariance estimators share a resemblance to the HAC estimator in terms of structure, neither of them directly originates from a regression approach.

We impose Assumption 6, as introduced in Leung (2022, Assumption 7), to ensure the consistency of the covariance estimator, where \(b_{n}\) is the bandwidth defined in (9). Denote by \(\mathcal{N}^{\partial}(i,s;A)=\{j\in\mathcal{N}_{n}:\ell_{A}(i,j)=s\}\) the \(s\)-neighborhood boundary of unit \(i\), which is the set of units exactly at a distance of \(s\) from \(i\), and by \(M_{n}^{\partial}(s)=n^{-1}\sum_{i=1}^{n}|\mathcal{N}^{\partial}(i,s;A)|\) its average size across units.

**Assumption 6** (For Consistency of Covariance Estimator).: (a) \(\sum_{s=0}^{n}M_{n}^{\partial}(s)\tilde{\theta}_{n,s}^{1-\epsilon}=O(1)\) for some \(\epsilon>0\), (b) \(M_{n}(b_{n},1)=o(n^{1/2})\), (c) \(M_{n}(b_{n},2)=o(n)\), (d) \(\sum_{s=0}^{n}|\mathcal{J}_{n}(s,b_{n})|\tilde{\theta}_{n,s}=o(n^{2})\).

Define \(\Delta_{\text{haj}}\) as an \(n\times|\mathcal{T}|\) matrix with \((i,t)\)th element \(\Delta_{\text{haj},it}=1(T_{i}=t)\pi_{i}(t)^{-1}(Y_{i}-\mu(t))-(\mu_{i}(t)-\mu(t))\). Define \(M\) as an \(n\times|\mathcal{T}|\) matrix with \((i,t)\)th element \(M_{it}=\mu_{i}(t)-\mu(t)\). Of interest is how this regression-based covariance estimator approximates the true sampling covariance from the design-based perspective.

**Theorem 3.2**.: Define \(\hat{\Sigma}_{*,\text{haj}}=n^{-1}\Delta_{\text{haj}}^{\top}K_{n}\Delta_{\text{haj}}\) and \(R_{\text{haj}}=n^{-1}M^{\top}K_{n}M\). Under Assumptions 1-4 and 6, we have

\[\hat{\Sigma}_{*,\text{haj}} =\Sigma_{\text{haj}}+o_{\mathbb{P}}(1),\]
\[n\hat{V}_{\text{haj}} =\hat{\Sigma}_{*,\text{haj}}+R_{\text{haj}}+o_{\mathbb{P}}(1).\]

We use the subscript \({}_{*}\) to indicate that \(\hat{\Sigma}_{*,\text{haj}}\) is the "oracle" version of the covariance estimator, which takes the form of an HAC estimator. \(\hat{\Sigma}_{*,\text{haj}}\) centers around \(\mu_{i}(t)-\mu(t)\), the individual-level deviation of the expected response from the population average under exposure mapping value \(t\). Theorem 3.2 first demonstrates that \(\hat{\Sigma}_{*,\text{haj}}\) closely approximates the asymptotic covariance \(\Sigma_{\text{haj}}\) and then presents the asymptotic bias of the network-robust covariance estimator in estimating \(\hat{\Sigma}_{*,\text{haj}}\).
The bias term \(R_{\text{haj}}\) adopts the form of an HAC covariance estimator of the individual-level expected responses. The covariance estimation is asymptotically exact when the individual-level expected responses are constant under every exposure mapping value \(t\in\mathcal{T}\), paralleling the canonical results of Neyman (1923) without interference. In some cases, the truncated kernel used in the network-robust covariance estimator \(\hat{V}_{\text{haj}}\) may not be positive semi-definite. This issue can result in an anti-conservative covariance estimator, which can in turn affect the accuracy of hypothesis testing and confidence intervals. We will address this issue in the next subsection. We end this subsection with remarks on the literature on HAC covariance estimators for network and spatial data.

**Remark 3.3**.: Aronow and Samii (2017) worked under the assumption of correctly specified exposure mappings and focused on the Horvitz-Thompson estimator of causal effects. They also discussed the Hajek estimator and its WLS formulation. However, they did not establish the result that justifies the corresponding network HAC estimator from WLS fits, which is easy to implement for applied researchers. Leung (2022, Appendix B) gave a theoretical comparison of his variance estimator and that of Aronow and Samii (2017), indicating that the bias terms cannot generally be ordered.

**Remark 3.4**.: Another related strand of literature pertains to the application of HAC estimators in spatial econometrics (Andrews, 1991; Conley, 1997, 1999; Matyas, 1999; Kelejian and Prucha, 2007; Kim and Sun, 2011). We omit the discussion of the time-series literature (Newey and West, 1987). In the context of spatial experiments, Wang et al. (2023b) discussed the usage of regression estimators for causal effects from the design-based perspective and showed that the spatial HAC estimator provides asymptotically conservative inference under certain assumptions. Neither Aronow and Samii (2017) nor Wang et al. (2023b) discussed how to increase efficiency by incorporating covariate information, which will be our focus in Section 4. Xu and Wooldridge (2022) recommended using spatial HAC standard errors to account for spatial correlation in two cases when the sampling probability is non-negligible: (i) assignment variables exhibit spatial correlation, or (ii) spillover effects are estimated in the model. As we assume independent treatment assignments across units, we use network HAC standard errors to take care of dependence when estimating exposure effects, which is the estimand of interest.

### Improvement on covariance estimation

Because the truncated kernel is not always positive semi-definite, both Leung (2022)'s variance estimator and the regression-based HAC covariance estimator \(\hat{V}_{\text{haj}}\) face the problem of being anti-conservative, both in asymptotic theory and in finite-sample simulations. In this subsection, we tackle this issue of anti-conservativeness by proposing a modification to our covariance estimator. Our proposed modification preserves the network-robustness of the covariance estimator while ensuring that it remains positive semi-definite and conservative.

Let \(Q_{n}\Lambda_{n}Q_{n}^{\top}\) be the eigendecomposition of \(K_{n}\). Since \(K_{n}\) is symmetric, all of its eigenvalues are real. We define the adjusted kernel matrix that truncates the negative eigenvalues at \(0\) as

\[K_{n}^{+}:=Q_{n}\max\{\Lambda_{n},0\}Q_{n}^{\top},\]

where the maximum is taken element-wise.
Letting \(K_{n}^{-}:=Q_{n}|\min\{\Lambda_{n},0\}|Q_{n}^{\top}\) with the minimum taken element-wise, we can also write \(K_{n}^{+}=K_{n}+K_{n}^{-}\). By construction, the matrix \(K_{n}^{\diamond}\) (\(\diamond=+,-\)) is positive semi-definite, and we denote the \((i,j)\)th entry of \(K_{n}^{\diamond}\) by \(K_{n,ij}^{\diamond}\). We propose the adjusted HAC covariance estimator as

\[\hat{V}_{\text{haj}}^{+}=(Z^{\top}WZ)^{-1}(Z^{\top}We_{\text{haj}}K_{n}^{+}e_{\text{haj}}WZ)(Z^{\top}WZ)^{-1}. \tag{10}\]

If \(K_{n}\) is positive semi-definite, no adjustment is introduced to the covariance estimator. The idea of replacing the negative eigenvalues with non-negative values appeared in Appendix B of Kojevnikov (2021), which can be traced back to the literature on approximating a symmetric matrix by a positive semi-definite matrix (Higham, 1988). The key distinction is that Kojevnikov (2021) applied this technique to the final HAC covariance estimator, while we apply it to the kernel matrix. There are two limitations of Kojevnikov (2021)'s approach. First, it is not suitable for estimating a single causal effect, as when the HAC estimator is scalar, it merely involves replacing a negative variance estimate with zero. In contrast, our approach is applicable to causal effects of any dimension. Second, Kojevnikov (2021)'s approach does not address the issue of anti-conservativeness, as the crucial factor for positive bias is the positive semi-definiteness of \(K_{n}\).

To guarantee the asymptotic conservativeness of \(\hat{V}_{\text{haj}}^{+}\), we impose Assumption 7 below, which pertains to the properties of \(K_{n}^{-}\). Recalling that \(K_{n,ij}=1(\ell_{A}(i,j)\leq b_{n})\), the quantities \(M_{n}(m,k)\) and \(\mathcal{J}_{n}(s,m)\) in (3) and (5) with \(m=b_{n}\) can be rewritten as

\[M_{n}(b_{n},k)=\frac{1}{n}\sum_{i=1}^{n}\left(\sum_{j=1}^{n}K_{n,ij}\right)^{k}\]

and

\[\mathcal{J}_{n}(s,b_{n})=\sum_{i=1}^{n}\sum_{j=1}^{n}1(\ell_{A}(i,j)=s)\cdot\sum_{k=1}^{n}K_{n,ik}\cdot\sum_{l=1}^{n}K_{n,jl}.\]

Define \(M_{n}^{-}(b_{n},k)\) and \(\mathcal{J}_{n}^{-}(s,b_{n})\) as the counterparts of \(M_{n}(b_{n},k)\) and \(\mathcal{J}_{n}(s,b_{n})\), respectively, on \(|K_{n}^{-}|\):

\[M_{n}^{-}(b_{n},k)=\frac{1}{n}\sum_{i=1}^{n}\left(\sum_{j=1}^{n}\left|K_{n,ij}^{-}\right|\right)^{k}\]

and

\[\mathcal{J}_{n}^{-}(s,b_{n})=\sum_{i=1}^{n}\sum_{j=1}^{n}1(\ell_{A}(i,j)=s)\cdot\sum_{k=1}^{n}\left|K_{n,ik}^{-}\right|\cdot\sum_{l=1}^{n}\left|K_{n,jl}^{-}\right|.\]

Assumption 7 is the analog of Assumption 6 for \(|K_{n}^{-}|\). We provide some numerical justification of Assumption 7 in Appendix S2.2.

**Assumption 7**.: (a) \(\sum_{s=0}^{n}M_{n}^{\partial}(s)\tilde{\theta}_{n,s}^{1-\epsilon}=O(1)\) for some \(\epsilon>0\), (b) \(M_{n}^{-}(b_{n},1)=o(n^{1/2})\), (c) \(M_{n}^{-}(b_{n},2)=o(n)\), (d) \(\sum_{s=0}^{n}|\mathcal{J}_{n}^{-}(s,b_{n})|\tilde{\theta}_{n,s}=o(n^{2})\).

**Theorem 3.3**.: Define \(R_{\mathrm{haj}}^{+}=n^{-1}M^{\top}K_{n}^{+}M+n^{-1}\Delta_{\mathrm{haj}}^{\top}K_{n}^{-}\Delta_{\mathrm{haj}}\geq 0\). Under Assumptions 1-4 and 7, we have

\[n\hat{V}_{\mathrm{haj}}^{+}=\hat{\Sigma}_{*,\mathrm{haj}}+R_{\mathrm{haj}}^{+}+o_{\mathbb{P}}(1),\]

where \(\hat{\Sigma}_{*,\mathrm{haj}}\) is defined in Theorem 3.2.

Theorem 3.3 delineates two key advantages stemming from the construction of the adjusted covariance estimator. First, it ensures that the covariance estimator \(\hat{V}_{\mathrm{haj}}^{+}\) is positive semi-definite. Second, it produces a positively adjusted bias term \(R_{\mathrm{haj}}^{+}\), leading to the conservativeness of \(\hat{V}_{\mathrm{haj}}^{+}\) for estimating the true sampling covariance.
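The eigenvalue truncation and the adjusted estimator (10) are straightforward to compute; a sketch under the same illustrative helper names as before follows.

```python
# A sketch of the eigenvalue-truncated kernel K_n^+ and the adjusted HAC
# estimator in (10); inputs (Z, w, e, dist, b_n) are as in the sketch above.
import numpy as np

def psd_kernel(K_n):
    """K_n^+ = Q_n max(Lambda_n, 0) Q_n': truncate negative eigenvalues at 0."""
    lam, Q = np.linalg.eigh(K_n)                 # K_n is symmetric
    return (Q * np.maximum(lam, 0.0)) @ Q.T      # Q diag(max(lam, 0)) Q'

def adjusted_hac_cov(Z, w, e, dist, b_n):
    K_plus = psd_kernel((dist <= b_n).astype(float))
    WZ = w[:, None] * Z
    bread = np.linalg.inv(Z.T @ WZ)
    E = e[:, None] * WZ
    return bread @ (E.T @ K_plus @ E) @ bread
```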
Theorems 3.1 and 3.3 together justify the regression-based inference of \(\tau=G\mu\) from the WLS fit (8) with the point estimator \(\hat{\tau}=G\hat{\beta}_{\mathrm{haj}}\) and the adjusted regression-based HAC covariance estimator \(G\hat{V}_{\mathrm{haj}}^{+}G^{\top}\).

## 4 Regression-based covariate adjustment

### Background: covariate adjustment without interference

The regression-based approach allows covariates to be integrated into the analysis, which can lead to efficiency gains. We briefly review the theory of covariate adjustment under complete randomization without interference to provide the background for this section. Consider an experimental setup involving a binary intervention, denoted by \(\mathcal{T}=\{0,1\}\), and a population of \(n\) units with potential outcomes denoted by \(Y_{i}(0)\) and \(Y_{i}(1)\) for each unit \(i=1,\ldots,n\). The average treatment effect within the finite population is \(\tau(1,0)=\bar{Y}(1)-\bar{Y}(0)\), where \(\bar{Y}(z)=n^{-1}\sum_{i=1}^{n}Y_{i}(z)\) for \(z=0,1\). Denote by \(z_{i}\) the treatment indicator of unit \(i\) under complete randomization. The difference-in-means estimator is unbiased for \(\tau(1,0)\), and equals the coefficient of \(z_{i}\) from the Ordinary Least Squares (OLS) fit of \(Y_{i}\sim 1+z_{i}\).

Given the covariate vector \(x_{i}=(x_{i1},\ldots,x_{iJ})\) for \(i=1,\ldots,n\), Fisher (1935) proposed to use the coefficient of \(z_{i}\) from the OLS fit of \(Y_{i}\sim 1+z_{i}+x_{i}\) to estimate \(\tau(1,0)\). Freedman (2008) criticized this approach, highlighting its potential for efficiency loss compared to the difference-in-means estimator. Lin (2013) introduced an improved estimator as the coefficient of \(z_{i}\) derived from the OLS fit of \(Y_{i}\sim 1+z_{i}+(x_{i}-\bar{x})+z_{i}(x_{i}-\bar{x})\) with covariates and treatment-covariate interactions. He proved that this estimator is asymptotically at least as efficient as both the difference-in-means estimator and Fisher (1935)'s estimator. To avoid any ambiguity, we refer to the regression proposed by Fisher (1935) as the additive specification and to Lin (2013)'s regression as the fully-interacted specification.

We extend their findings to network experiments with interference through WLS fits. We will focus on the additive and fully-interacted specifications, study the design-based properties of the estimators, and compare their efficiency gains over the unadjusted counterparts. To simplify the presentation, we center the covariates at \(\bar{x}=n^{-1}\sum_{i=1}^{n}x_{i}=0\), which does not complicate the theory in the design-based framework with fixed covariates.

### Additive regression in network experiments

Recall \(z_{i}=(1(T_{i}=t):t\in\mathcal{T})\) as the dummies for the exposure mapping in the network experiment. Consider the WLS fit

\[Y_{i}\sim z_{i}+x_{i}\text{ with weights }w_{i}=1/\pi_{i}(T_{i}). \tag{11}\]

Let \(\hat{\beta}_{\text{haj,F}}\) denote the coefficient vector of \(z_{i}\) from the above WLS fit and \(\hat{\beta}_{\text{haj,F}}(t)\) denote the element in \(\hat{\beta}_{\text{haj,F}}\) corresponding to \(1(T_{i}=t)\). We use the subscript "F" to signify Fisher (1935). Let \(\hat{\gamma}_{\text{F}}\) denote the coefficient vector of \(x_{i}\) from the same WLS fit.
Let

\[\hat{x}_{\text{haj}}(t)=\frac{1}{n}\sum_{i=1}^{n}\frac{1(T_{i}=t)x_{i}}{\pi_{i}(t)}\Big{/}\hat{1}_{\text{ht}}(t)\]

be the \(J\times 1\) Hajek estimator for \(\bar{x}\) under exposure mapping value \(t\), and combine \(\hat{x}_{\text{haj}}(t)\) across all \(t\in\mathcal{T}\) to obtain the \(|\mathcal{T}|\times J\) matrix \(\hat{x}_{\text{haj}}=(\hat{x}_{\text{haj}}(t):t\in\mathcal{T})\). Recall that \(\hat{\beta}_{\text{haj}}=\hat{Y}_{\text{haj}}\) from Proposition 3.1. Proposition 4.1 below states the numerical correspondence between \(\hat{\beta}_{\text{haj,F}}\) and \(\hat{Y}_{\text{haj}}\).

**Proposition 4.1**.: \(\hat{\beta}_{\text{haj,F}}=\hat{Y}_{\text{haj}}-\hat{x}_{\text{haj}}\hat{\gamma}_{\text{F}}\).

Proposition 4.1 links the covariate-adjusted \(\hat{\beta}_{\text{haj,F}}\) back to the unadjusted \(\hat{\beta}_{\text{haj}}\), and establishes \(\hat{\beta}_{\text{haj,F}}\) as the Hajek estimator based on the covariate-adjusted outcome \(Y_{i}-x_{i}^{\top}\hat{\gamma}_{\text{F}}\). The correspondence between the WLS fit and the Hajek estimation is thus preserved in the additive WLS fit in (11) as well.

Assumption 8 below imposes the uniform boundedness of \(x_{i}\) and adapts Assumption 5 to its version with covariate adjustment.

**Assumption 8**.: (i) \(||x_{i}||<c_{x}<\infty\), where \(c_{x}\) is an absolute constant. (ii) For the covariance matrix

\[\Sigma_{n}(\gamma)=\operatorname{Var}\left(n^{-1/2}\sum_{i=1}^{n}\frac{1(T_{i}=t)}{\pi_{i}(t)}(Y_{i}-x_{i}^{\top}\gamma(t)-\mu(t)):t\in\mathcal{T}\right)\]

with finite and fixed vector \((\gamma(t):t\in\mathcal{T})\), there exist \(\epsilon>0\) and a positive sequence \(\{m_{n}\}_{n\in\mathbb{N}}\) such that as \(n\to\infty\) we have \(m_{n}\to\infty\) and

\[\Sigma_{n}^{-2}(\gamma)n^{-2}\sum_{s=0}^{n}\left|\mathcal{H}_{n}(s,m_{n})\right|\tilde{\theta}_{n,s}^{1-\epsilon}\to 0,\ \Sigma_{n}^{-3/2}(\gamma)n^{-1/2}M_{n}(m_{n},2)\to 0,\ \Sigma_{n}^{-1/2}(\gamma)n^{3/2}\tilde{\theta}_{n,m_{n}}^{1-\epsilon}\to 0.\]

Let \(\gamma_{\text{F}}\) denote the finite probability limit of \(\hat{\gamma}_{\text{F}}\). Let \(\Sigma_{\text{haj,F}}\) denote the analog of \(\Sigma_{\text{haj}}\) in (6) defined on the covariate-adjusted outcome \(Y_{i}-x_{i}^{\top}\gamma_{\text{F}}\). Theorem 4.1 below states the asymptotic normality of \(\hat{\beta}_{\text{haj,F}}\).

**Theorem 4.1**.: Under Assumptions 1-4 and 8, we have

\[\Sigma_{\text{haj,F}}^{-1/2}\sqrt{n}\left(\hat{\beta}_{\text{haj,F}}-\mu\right)\overset{\mathrm{d}}{\to}\mathcal{N}(0,I).\]

The design matrix of the WLS fit in (11) equals \(C_{\text{F}}=(Z,X)\), where \(Z\) is an \(n\times|\mathcal{T}|\) matrix and \(X=(x_{i}:i=1,\dots,n)\) is an \(n\times J\) matrix. Diagonalize the residuals \(e_{\text{F},i}\) from the WLS fit in (11) to form the matrix \(e_{\text{haj,F}}=\mathrm{diag}\{e_{\text{F},i}:i=1,\dots,n\}\). Let \([\cdot]_{(1:|\mathcal{T}|,1:|\mathcal{T}|)}\) denote the upper-left \(|\mathcal{T}|\times|\mathcal{T}|\) submatrix.
Let \(\hat{V}_{\text{haj,F}}\) denote the HAC covariance estimator for \(\hat{\beta}_{\text{haj,F}}\), which is a submatrix of the covariance estimator obtained from the WLS fit in (11):

\[\hat{V}_{\text{haj,F}}=\left[(C_{\text{F}}^{\top}WC_{\text{F}})^{-1}(C_{\text{F}}^{\top}We_{\text{haj,F}}K_{n}e_{\text{haj,F}}WC_{\text{F}})(C_{\text{F}}^{\top}WC_{\text{F}})^{-1}\right]_{(1:|\mathcal{T}|,1:|\mathcal{T}|)}.\]

Let \(\Delta_{\text{haj,F}}\) denote the analog of \(\Delta_{\text{haj}}\) defined on the covariate-adjusted outcome \(Y_{i}-x_{i}^{\top}\gamma_{\text{F}}\). Define \(M_{\text{F}}\) as an \(n\times|\mathcal{T}|\) matrix with \((i,t)\)th element \(M_{\text{F},it}=\mu_{i}(t)-\mu(t)-x_{i}^{\top}\gamma_{\text{F}}\). Theorem 4.2 below establishes the asymptotic bias of \(\hat{V}_{\text{haj,F}}\) as an estimator for the asymptotic covariance of \(\hat{\beta}_{\text{haj,F}}\).

**Theorem 4.2**.: Define \(\hat{\Sigma}_{*,\text{haj,F}}=n^{-1}\Delta_{\text{haj,F}}^{\top}K_{n}\Delta_{\text{haj,F}}\) and \(R_{\text{haj,F}}=n^{-1}M_{\text{F}}^{\top}K_{n}M_{\text{F}}\). Under Assumptions 1-4, 6 and 8, we have

\[\hat{\Sigma}_{*,\text{haj,F}} =\Sigma_{\text{haj,F}}+o_{\mathbb{P}}(1),\]
\[n\hat{V}_{\text{haj,F}} =\hat{\Sigma}_{*,\text{haj,F}}+R_{\text{haj,F}}+o_{\mathbb{P}}(1).\]

The bias term \(R_{\text{haj,F}}\) is an analog of \(R_{\text{haj}}\) defined on the adjusted outcome \(Y_{i}-x_{i}^{\top}\gamma_{\text{F}}\). Given that \(K_{n}\) may not be positive semi-definite, we cannot ensure the asymptotic conservativeness of \(\hat{V}_{\text{haj,F}}\) for estimating \(\hat{\Sigma}_{*,\text{haj,F}}\). Similar to (10), we propose the adjusted covariance estimator as

\[\hat{V}_{\text{haj,F}}^{+}=\left[(C_{\text{F}}^{\top}WC_{\text{F}})^{-1}(C_{\text{F}}^{\top}We_{\text{haj,F}}K_{n}^{+}e_{\text{haj,F}}WC_{\text{F}})(C_{\text{F}}^{\top}WC_{\text{F}})^{-1}\right]_{(1:|\mathcal{T}|,1:|\mathcal{T}|)}.\]

**Theorem 4.3**.: Define \(R^{+}_{\text{haj,F}}=n^{-1}M_{\text{F}}^{\top}K_{n}^{+}M_{\text{F}}+n^{-1}\Delta_{\text{haj,F}}^{\top}K_{n}^{-}\Delta_{\text{haj,F}}\geq 0\). Under Assumptions 1-4 and 7-8, we have

\[n\hat{V}^{+}_{\text{haj,F}}=\hat{\Sigma}_{*,\text{haj,F}}+R^{+}_{\text{haj,F}}+o_{\mathbb{P}}(1),\]

where \(\hat{\Sigma}_{*,\text{haj,F}}\) is defined in Theorem 4.2.

Theorem 4.3 ensures the asymptotic conservativeness of \(\hat{V}^{+}_{\text{haj,F}}\) for estimating the true sampling covariance. This, together with Theorem 4.1, justifies the regression-based inference of \(\tau=G\mu\) from the additive WLS fit in (11) with the point estimator \(\hat{\tau}=G\hat{\beta}_{\text{haj,F}}\) and the adjusted regression-based HAC covariance estimator \(G\hat{V}^{+}_{\text{haj,F}}G^{\top}\).

### Fully-interacted regression in network experiments

With full interactions between the exposure mapping indicators and covariates, we consider the WLS fit

\[Y_{i}\sim z_{i}+z_{i}\otimes x_{i}\text{ with weights }w_{i}=1/\pi_{i}(T_{i}), \tag{12}\]

where \(\otimes\) denotes the Kronecker product. The specification in (12) simply means the WLS fit of \(Y_{i}\) on the dummies \(1(T_{i}=t)\) and the interactions \(1(T_{i}=t)x_{i}\).
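The two design matrices are simple to construct; the following sketch, with our own helper names, builds the additive design in (11) and the fully-interacted design in (12) from the dummy matrix \(Z\) and an \(n\times J\) (2-D) array of centered covariates.

```python
# A sketch of the additive (11) and fully-interacted (12) design matrices;
# Z holds the exposure dummies, x the centered covariates (n x J, 2-D),
# and w the inverse-probability weights. Illustrative, not the authors' code.
import numpy as np

def wls(C, w, Y):
    """Weighted least squares: coefficients of the columns of C."""
    return np.linalg.solve(C.T @ (w[:, None] * C), C.T @ (w * Y))

def additive_design(Z, x):
    return np.column_stack([Z, x])                        # (11): Y ~ z + x

def interacted_design(Z, x):
    # (12): dummies plus dummy-by-covariate interactions z_i (kron) x_i
    inter = np.concatenate([Z[:, [t]] * x for t in range(Z.shape[1])], axis=1)
    return np.column_stack([Z, inter])

# beta_F = wls(additive_design(Z, x), w, Y)[:Z.shape[1]]    # Section 4.2
# beta_L = wls(interacted_design(Z, x), w, Y)[:Z.shape[1]]  # Section 4.3
```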
Let \(\hat{\beta}_{\text{haj,L}}\) denote the coefficient vector of \(z_{i}\) from the above WLS fit and \(\hat{\beta}_{\text{haj,L}}(t)\) denote the element in \(\hat{\beta}_{\text{haj,L}}\) corresponding to \(1(T_{i}=t)\). We use the subscript "L" to signify Lin (2013). Let \(\hat{\gamma}_{\text{L}}(t)\) denote the coefficient vector of \(1(T_{i}=t)x_{i}\).

**Proposition 4.2**.: \(\hat{\beta}_{\text{haj,L}}(t)=\hat{Y}_{\text{haj}}(t)-\hat{x}_{\text{haj}}(t)^{\top}\hat{\gamma}_{\text{L}}(t)\) for all \(t\in\mathcal{T}\).

Proposition 4.2 parallels Proposition 4.1, and establishes that \(\hat{\beta}_{\text{haj,L}}(t)\) is the Hajek estimator based on the covariate-adjusted outcome \(Y_{i}-x_{i}^{\top}\hat{\gamma}_{\text{L}}(t)\). A key distinction is that the adjustment is now based on coefficients specific to the exposure mapping values.

Let \(\gamma_{\text{L}}(t)\) be the finite probability limit of \(\hat{\gamma}_{\text{L}}(t)\). Let \(\Sigma_{\text{haj,L}}\) be the analog of \(\Sigma_{\text{haj}}\) in (6) defined on the adjusted outcome \(Y_{i}-x_{i}^{\top}(\sum_{t\in\mathcal{T}}1(T_{i}=t)\gamma_{\text{L}}(t))\). Theorem 4.4 below states the asymptotic normality of \(\hat{\beta}_{\text{haj,L}}\).

**Theorem 4.4**.: Under Assumptions 1-4 and 8, we have

\[\Sigma_{\text{haj,L}}^{-1/2}\sqrt{n}\left(\hat{\beta}_{\text{haj,L}}-\mu\right)\overset{\mathrm{d}}{\rightarrow}\mathcal{N}(0,I).\]

Let \(C_{\text{L}}\) be the design matrix of the WLS fit in (12), with row vectors \((z_{i}^{\top},(z_{i}\otimes x_{i})^{\top})\). Diagonalize the residuals \(e_{\text{L},i}\) from the same WLS fit to form the matrix \(e_{\text{haj,L}}=\mathrm{diag}\{e_{\text{L},i}:i=1,\ldots,n\}\). Let \(\hat{V}_{\text{haj,L}}\) denote the HAC covariance estimator for \(\hat{\beta}_{\text{haj,L}}\), which is a submatrix of the covariance estimator obtained from the WLS fit in (12):

\[\hat{V}_{\text{haj,L}}=\left[(C_{\text{L}}^{\top}WC_{\text{L}})^{-1}(C_{\text{L}}^{\top}We_{\text{haj,L}}K_{n}e_{\text{haj,L}}WC_{\text{L}})(C_{\text{L}}^{\top}WC_{\text{L}})^{-1}\right]_{(1:|\mathcal{T}|,1:|\mathcal{T}|)}.\]

Let \(\Delta_{\text{haj,L}}\) be the analog of \(\Delta_{\text{haj}}\) defined on the adjusted outcome \(Y_{i}-x_{i}^{\top}(\sum_{t\in\mathcal{T}}1(T_{i}=t)\gamma_{\text{L}}(t))\). Define \(M_{\text{L}}\) as an \(n\times|\mathcal{T}|\) matrix with \((i,t)\)th element \(M_{\text{L},it}=\mu_{i}(t)-\mu(t)-x_{i}^{\top}\gamma_{\text{L}}(t)\).

**Theorem 4.5**.: Define \(\hat{\Sigma}_{*,\text{haj,L}}=n^{-1}\Delta_{\text{haj,L}}^{\top}K_{n}\Delta_{\text{haj,L}}\) and \(R_{\text{haj,L}}=n^{-1}M_{\text{L}}^{\top}K_{n}M_{\text{L}}\). Under Assumptions 1-4, 6 and 8, we have

\[\hat{\Sigma}_{*,\text{haj,L}} =\Sigma_{\text{haj,L}}+o_{\mathbb{P}}(1),\]
\[n\hat{V}_{\text{haj,L}} =\hat{\Sigma}_{*,\text{haj,L}}+R_{\text{haj,L}}+o_{\mathbb{P}}(1).\]

Theorem 4.5 establishes the asymptotic bias of \(\hat{V}_{\text{haj,L}}\) as an estimator for the asymptotic covariance of \(\hat{\beta}_{\text{haj,L}}\). Given that \(K_{n}\) may not be positive semi-definite, we cannot ensure the asymptotic conservativeness of \(\hat{V}_{\text{haj,L}}\) for estimating \(\hat{\Sigma}_{*,\text{haj,L}}\).
Similar to (10), we propose the adjusted HAC covariance estimator as

\[\hat{V}_{\text{haj,L}}^{+}=\left[(C_{\text{L}}^{\top}WC_{\text{L}})^{-1}(C_{\text{L}}^{\top}We_{\text{haj,L}}K_{n}^{+}e_{\text{haj,L}}WC_{\text{L}})(C_{\text{L}}^{\top}WC_{\text{L}})^{-1}\right]_{(1:|\mathcal{T}|,1:|\mathcal{T}|)}.\]

**Theorem 4.6**.: Define \(R_{\text{haj,L}}^{+}=n^{-1}M_{\text{L}}^{\top}K_{n}^{+}M_{\text{L}}+n^{-1}\Delta_{\text{haj,L}}^{\top}K_{n}^{-}\Delta_{\text{haj,L}}\geq 0\). Under Assumptions 1-4 and 7-8, we have

\[n\hat{V}_{\text{haj,L}}^{+}=\hat{\Sigma}_{*,\text{haj,L}}+R_{\text{haj,L}}^{+}+o_{\mathbb{P}}(1),\]

where \(\hat{\Sigma}_{*,\text{haj,L}}\) is defined in Theorem 4.5.

Echoing the comment after Theorem 4.3, Theorems 4.4 and 4.6 together justify the regression-based inference of \(\tau=G\mu\) from the fully-interacted WLS fit in (12) with the point estimator \(\hat{\tau}=G\hat{\beta}_{\text{haj,L}}\) and the adjusted regression-based HAC covariance estimator \(G\hat{V}_{\text{haj,L}}^{+}G^{\top}\).

### Efficiency gain from covariate adjustment

The regression approach provides a convenient method to include covariates, potentially leading to efficiency gains in estimation. Su and Ding (2021) and Zhao and Ding (2022) expand upon the findings of Lin (2013) for cluster randomization and split-plot randomization, respectively. They find that individual-level regression with fully-interacted covariates does not necessarily lead to efficiency gains, but that efficiency gains can be achieved by aggregate regressions with fully-interacted covariates. Adopting the strategy of aggregate regressions poses challenges within our framework, primarily because we accommodate a single large network and impose no constraints on how it could be partitioned. If we can appropriately partition the network into clusters of equal sizes, one conjecture is that we can perform individual-level regressions and apply cluster-robust standard errors as suggested by Su and Ding (2021). For instance, consider the experiment in Section 5.2, where schools can be utilized as clusters. While this falls outside the scope of our paper, it remains an interesting direction for future research.

As indicated by Theorem S1 in the Appendix, the inclusion of interactions does not guarantee asymptotic efficiency gains. This lack of improvement can be attributed to two reasons. First, the efficiency-gain result in Lin (2013) is valid only under the assumption of a constant propensity score. Second, the presence of the kernel matrix \(K_{n}\) introduces dependence among units, thereby disrupting the potential efficiency gain. Lin (2013) illustrated the efficiency gain from including fully-interacted covariates with a constant propensity score and no interference. For settings with varying propensity scores, interference, or both, the efficiency gain from fully-interacted covariates is not guaranteed in any of the three resulting cases. In Appendix S3.3, we provide counterexamples with simulation results for each case. Despite the lack of theoretical guarantees for efficiency gains, we do observe that covariate adjustment improves efficiency in both the simulations and the empirical examples in Section 5.

**Remark 4.1**.: Aronow and Samii (2017) also discussed incorporating auxiliary covariates to improve efficiency, based on the idea of the difference estimator by regression adjustment (Sarndal et al., 2003).
However, they did not employ the regression coefficients as point estimates or use regression-associated standard errors for inference. Moreover, they did not discuss the design-based properties of the network HAC estimator with covariate adjustment.

## 5 Numerical examples

In this section, we first examine the finite-sample performance of our methods in simulations and then apply them to two empirical applications. Our analysis focuses on the exposure effect \(\tau(t,t^{\prime})=\mu(t)-\mu(t^{\prime})\), the contrast of the expected responses between two exposure mapping values.

### Simulation

For comparability with Leung (2022), we replicate his scenario with the addition of a covariate in the model. We present the point and covariance estimators of the exposure effect from three WLS specifications: unadjusted (Unadj), with additive covariates (Add), and with fully-interacted covariates (Sat). Additionally, we report the adjusted covariance estimator for each regression-based HAC estimator to showcase its improvement in empirical coverage rates. We also report Leung (2022)'s Horvitz-Thompson estimator and variance estimator of the exposure effect.

The simulation study covers two outcome models: the linear-in-means model and the complex contagion model. Define

\[V_{i}(D,A,x,\varepsilon)=\alpha+\beta\frac{\sum_{j=1}^{n}A_{ij}Y_{j}}{\sum_{j=1}^{n}A_{ij}}+\delta\frac{\sum_{j=1}^{n}A_{ij}D_{j}}{\sum_{j=1}^{n}A_{ij}}+\xi D_{i}+\gamma x_{i}+\varepsilon_{i}. \tag{13}\]

For the linear-in-means model, we set \(Y_{i}=V_{i}(D,A,x,\varepsilon)\) with \((\alpha,\beta,\delta,\gamma,\xi)=(-1,0.8,1,1,1)\). The model defines potential outcomes \(Y_{i}(D)\) through its reduced form:

\[Y=\alpha(I-\beta\tilde{A})^{-1}\iota+(I-\beta\tilde{A})^{-1}(\delta\tilde{A}+\xi I)D+(I-\beta\tilde{A})^{-1}\gamma x+(I-\beta\tilde{A})^{-1}\varepsilon,\]

where \(\tilde{A}\) is the row-normalized version of \(A\) (divide each row by its row sum). For the complex contagion model, we set \(Y_{i}=1(V_{i}(D,A,x,\varepsilon)>0)\) with \((\alpha,\beta,\delta,\xi,\gamma)=(-1,1.5,1,1,1)\). The complex contagion model can be generated from the dynamic process

\[Y_{i}^{t}=1\left(\alpha+\beta\frac{\sum_{j=1}^{n}A_{ij}Y_{j}^{t-1}}{\sum_{j=1}^{n}A_{ij}}+\delta\frac{\sum_{j=1}^{n}A_{ij}D_{j}}{\sum_{j=1}^{n}A_{ij}}+\xi D_{i}+\gamma x_{i}+\varepsilon_{i}>0\right)\]

with initialization at period \(0\) as

\[Y_{i}^{0}=1\left(\alpha+\delta\frac{\sum_{j=1}^{n}A_{ij}D_{j}}{\sum_{j=1}^{n}A_{ij}}+\xi D_{i}+\gamma x_{i}+\varepsilon_{i}>0\right).\]

We run the dynamic process to obtain new outcomes \(Y^{t}=(Y_{i}^{t})_{i=1}^{n}\) from last period's outcomes \(Y^{t-1}\) until the first period \(T\) such that \(Y^{T}=Y^{T-1}\). We then take \(Y^{T}\) as the vector of observed outcomes \(Y\), which yields the outcomes \((Y_{i}(D))_{i=1}^{n}\). As a result, this process implicitly defines the potential outcomes (Leung, 2022, Section 3.1). Propositions 1 and 2 in Leung (2022) verified ANI for these two outcome models without covariates. We can readily extend his proof to the models with additive covariates as in (13), or covariates interacted with the network \(A\), given that the covariates are fixed.

Following Leung (2022), we generate the adjacency matrix \(A\) from a random geometric graph model. Specifically, for each node \(i\), we randomly generate its position \(\rho_{i}\) in a two-dimensional space from \(\mathcal{U}([0,1]^{2})\).
An edge between nodes \(i\) and \(j\) is created if the Euclidean distance between their positions is less than or equal to a threshold value \(r_{n}\): \(A_{ij}=1\{\|\rho_{i}-\rho_{j}\|\leq r_{n}\}\), where the threshold value is chosen as \(r_{n}=(\kappa/(\pi n))^{1/2}\). We set \(\kappa\) as the average degree \(\delta(A)\), calculated based on the experimental data in Section 5.2, in order to better mimic real-world scenarios. We also generate a sequence \(\{\nu_{i}\}_{i=1}^{n}\stackrel{{\text{IID}}}{{\sim}}\mathcal{N}(0,1)\) independent of \(A\). The error term in (13) is generated as \(\varepsilon_{i}=\nu_{i}+(\rho_{i1}-0.5)\), where \(\rho_{i1}\) is the first component of \(i\)'s "location" \(\rho_{i}\) generated above. This inclusion accounts for unobserved homophily, as units with similar \(\rho_{i1}\) values are more likely to form links. Finally, we generate the covariate \(\{x_{i}\}_{i=1}^{n}\stackrel{{\text{IID}}}{{\sim}}\mathcal{N}(0,1)\).

To illustrate variations in population sizes, we use the samples of the largest, two largest, and four largest treated schools from the network experiment in Section 5.2 when calibrating the network models. The network sizes \(n\) are 805, 1456, and 2725, respectively. In all cases, we treat the schools as a single network by pooling the degree sequences across them. We randomly assign treatments to units classified as eligible in the experimental data with probability 0.5. Since we work within a finite-population framework, we generate the adjacency matrix \(A\), the \(\varepsilon\)'s, and the \(x\)'s once and only redraw \(D\) for each simulation draw. This differs from the superpopulation design simulation in Leung (2022), where he regenerated \(D\), \(A\) and the \(\varepsilon\)'s for each simulation draw.

For the spillover effect \(\tau(1,0)\), which represents the effect of having at least one treated friend versus no treated friends, we define the exposure mapping as \(T_{i}=1(\sum_{j=1}^{n}A_{ij}D_{j}>0)\) and analyze only the population of units with at least one friend who is eligible for treatment to satisfy Assumption 2. Under the IID randomization of \(D\), we can compute the propensity scores \(\pi_{i}(1)\) and \(\pi_{i}(0)\) for each student using Binomial probabilities.

Tables 1-3 present results for the largest, two largest, and four largest treated schools, respectively. Each table provides results for the two outcome models: the linear-in-means and complex contagion models. The top panels of Tables 1-3 display our regression-based results. We report the estimand under "\(\tau(1,0)\)," approximated by the unbiased Horvitz-Thompson estimator \(\hat{\tau}_{\text{ht}}(1,0)=\hat{Y}_{\text{ht}}(1)-\hat{Y}_{\text{ht}}(0)\) computed over 10,000 simulation draws. We report the "Oracle SE," denoted by \(\text{Var}(\hat{\tau}(1,0))^{1/2}\), which is calculated as the standard deviation of the point estimators from the corresponding WLS fits over 10,000 simulation draws. For the estimation results, we conduct another independent \(10,000\) simulation draws. We present the point estimate from each WLS fit under "\(\hat{\tau}(1,0)\)." We present the HAC standard errors obtained from each WLS fit under "WLS SE," and the corresponding adjusted HAC standard errors under "WLS\({}^{+}\) SE." We report the Eicker-Huber-White standard errors assuming no interference under "EHW SE" to illustrate the degree of dependence in the data.
We also report the empirical coverage rates of 95% confidence intervals (CIs) in the "Coverage" rows for the corresponding standard errors. The effective sample size of exposure mapping value \(t\) is defined as \(\hat{n}(t)=\sum_{i=1}^{n}1(T_{i}=t)\).

Tables 1-3 demonstrate that the Hajek estimator can be biased when the sample size is small, but the bias diminishes as the sample size increases. Additionally, the standard errors obtained from the WLS fits can be anti-conservative, underestimating the true standard error. However, by utilizing the adjusted HAC standard errors, we improve the empirical coverage and ensure a conservative estimate of the standard error. The coverage rate of the adjusted standard errors improves as the (effective) sample size increases. In this setting, the estimator from the fully-interacted WLS fit is at least as efficient as the estimators from the unadjusted or additive WLS fits.

In the middle panels of Tables 1-3, we report the standard errors and coverage rates of 95% CIs using the kernel \(K_{n}^{\text{L2019}}\) in Leung (2019), where the \((i,j)\)th element is

\[K_{n,ij}^{\text{L2019}}=\frac{\left|\mathcal{N}(i,b_{n};A)\cap\mathcal{N}(j,b_{n};A)\right|}{\left|\mathcal{N}(i,b_{n};A)\right|^{1/2}\left|\mathcal{N}(j,b_{n};A)\right|^{1/2}},\]

and the kernel \(K_{n}^{\text{K2021}}\) in Kojevnikov (2021), where the \((i,j)\)th element is

\[K_{n,ij}^{\text{K2021}}=\frac{\left|\mathcal{N}(i,b_{n};A)\cap\mathcal{N}(j,b_{n};A)\right|}{n^{-1}\sum_{k=1}^{n}\left|\mathcal{N}(k,b_{n};A)\right|}.\]

Both \(K_{n}^{\text{L2019}}\) and \(K_{n}^{\text{K2021}}\) are positive semi-definite, ensuring the positive semi-definiteness of the covariance estimators. However, the associated tests substantially overreject even in moderately sized samples.

For the sake of comparison, the bottom panel of Tables 1-3 presents the results of the Horvitz-Thompson estimator and variance estimator from Leung (2022). By comparing the "Oracle SE" from the top and bottom panels, we can see that the WLS estimators from all three specifications exhibit higher efficiency than the Horvitz-Thompson estimator. In Table 3, our regression-based standard errors are approximately half of those reported using Leung (2022)'s method, indicating a significant spillover effect at the 5% significance level. In contrast, Leung (2022)'s method yields an insignificant effect. Moreover, Leung (2022)'s standard errors are smaller than the oracle standard errors, resulting in undercoverage.
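For readers who wish to reproduce the flavor of this design, the following self-contained sketch generates the random geometric graph, the linear-in-means outcomes via the reduced form, the spillover exposure, and the Binomial propensity scores. The value of \(\kappa\), all names, and the simplification that every unit is eligible for treatment are ours.

```python
# A sketch of the simulation design as we read it; illustrative only.
import numpy as np

rng = np.random.default_rng(2023)
n, kappa, p = 805, 6.0, 0.5                      # kappa: target average degree (assumed)
rho = rng.uniform(size=(n, 2))                   # positions in [0, 1]^2
r_n = np.sqrt(kappa / (np.pi * n))               # threshold radius
gaps = rho[:, None, :] - rho[None, :, :]
A = (np.sqrt((gaps ** 2).sum(-1)) <= r_n).astype(float)
np.fill_diagonal(A, 0.0)

D = (rng.uniform(size=n) < p).astype(float)      # IID treatment assignment
x = rng.normal(size=n)
eps = rng.normal(size=n) + (rho[:, 0] - 0.5)     # unobserved homophily term

deg = np.maximum(A.sum(axis=1), 1.0)             # guard isolated units
A_tilde = A / deg[:, None]                       # row-normalized adjacency
alpha, beta, delta, gamma, xi = -1.0, 0.8, 1.0, 1.0, 1.0
M = np.linalg.inv(np.eye(n) - beta * A_tilde)    # reduced-form multiplier
Y = M @ (alpha + (delta * A_tilde + xi * np.eye(n)) @ D + gamma * x + eps)

T = (A @ D > 0).astype(int)                      # at least one treated friend
m_i = A.sum(axis=1)                              # number of (eligible) friends
pi_1 = 1.0 - (1.0 - p) ** m_i                    # Binomial: P(T_i = 1)
keep = m_i > 0                                   # units satisfying overlap
```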
\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
Outcome model & \multicolumn{3}{c}{Linear-in-Means} & \multicolumn{3}{c}{Complex Contagion} \\
\hline
WLS specification & Unadj & Add & Sat & Unadj & Add & Sat \\
\hline
\(\tau(1,0)\) & 0.564 & 0.564 & 0.564 & 0.047 & 0.047 & 0.047 \\
\(\hat{\tau}(1,0)\) & 0.474 & 0.460 & 0.460 & 0.039 & 0.037 & 0.037 \\
Oracle SE & 0.635 & 0.557 & 0.556 & 0.069 & 0.059 & 0.058 \\
WLS SE & 0.608 & 0.531 & 0.529 & 0.067 & 0.057 & 0.056 \\
WLS\({}^{+}\) SE & 0.627 & 0.547 & 0.545 & 0.076 & 0.064 & 0.064 \\
EHW SE & 0.284 & 0.241 & 0.240 & 0.052 & 0.044 & 0.044 \\
Oracle Coverage & 0.947 & 0.947 & 0.947 & 0.948 & 0.950 & 0.950 \\
WLS Coverage & 0.928 & 0.923 & 0.923 & 0.929 & 0.924 & 0.923 \\
WLS\({}^{+}\) Coverage & 0.937 & 0.933 & 0.931 & 0.960 & 0.958 & 0.958 \\
EHW Coverage & 0.606 & 0.589 & 0.589 & 0.857 & 0.850 & 0.851 \\
\hline
Leung (2019) SE & 0.552 & 0.481 & 0.479 & 0.065 & 0.055 & 0.055 \\
Leung (2019) Coverage & 0.899 & 0.891 & 0.890 & 0.923 & 0.920 & 0.919 \\
Kojevnikov (2021) SE & 0.547 & 0.483 & 0.481 & 0.068 & 0.059 & 0.059 \\
Kojevnikov (2021) Coverage & 0.896 & 0.891 & 0.891 & 0.932 & 0.934 & 0.933 \\
\hline
\multicolumn{7}{c}{Results using Leung (2022)'s method} \\
\hline
\(\hat{\tau}_{\text{ht}}(1,0)\) & 0.540 & & & 0.048 & & \\
Oracle SE & 1.631 & & & 0.145 & & \\
Leung SE & 1.587 & & & 0.141 & & \\
EHW SE & 0.608 & & & 0.068 & & \\
Oracle Coverage & 0.946 & & & 0.951 & & \\
Leung Coverage & 0.928 & & & 0.935 & & \\
EHW Coverage & 0.380 & & & 0.543 & & \\
\hline \hline
\end{tabular}
Note: The effective sample size for each exposure mapping value is \(\hat{n}(1)=226.41\) and \(\hat{n}(0)=169.59\), with a total of \(\hat{n}(1)+\hat{n}(0)=396\). The suggested bandwidth in (9) is \(b_{n}=2\). The average path length is \(\mathcal{L}(A)=14.2916\).
\end{table}
Table 1: Simulation results: network size \(n=805\)

\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
Outcome model & \multicolumn{3}{c}{Linear-in-Means} & \multicolumn{3}{c}{Complex Contagion} \\
\hline
WLS specification & Unadj & Add & Sat & Unadj & Add & Sat \\
\hline
\(\tau(1,0)\) & 0.536 & 0.536 & 0.536 & 0.062 & 0.062 & 0.062 \\
\(\hat{\tau}(1,0)\) & 0.519 & 0.524 & 0.524 & 0.057 & 0.058 & 0.058 \\
Oracle SE & 0.422 & 0.408 & 0.408 & 0.050 & 0.047 & 0.047 \\
WLS SE & 0.408 & 0.394 & 0.393 & 0.049 & 0.046 & 0.045 \\
WLS\({}^{+}\) SE & 0.439 & 0.419 & 0.418 & 0.058 & 0.053 & 0.053 \\
EHW SE & 0.196 & 0.175 & 0.175 & 0.039 & 0.034 & 0.034 \\
Oracle Coverage & 0.948 & 0.947 & 0.947 & 0.947 & 0.946 & 0.947 \\
WLS Coverage & 0.935 & 0.931 & 0.930 & 0.938 & 0.930 & 0.930 \\
WLS\({}^{+}\) Coverage & 0.953 & 0.948 & 0.947 & 0.973 & 0.968 & 0.968 \\
EHW Coverage & 0.629 & 0.594 & 0.594 & 0.869 & 0.833 & 0.834 \\
\hline
Leung (2019) SE & 0.376 & 0.361 & 0.360 & 0.048 & 0.044 & 0.044 \\
Leung (2019) Coverage & 0.911 & 0.898 & 0.906 & 0.929 & 0.901 & 0.922 \\
Kojevnikov (2021) SE & 0.369 & 0.356 & 0.356 & 0.048 & 0.045 & 0.045 \\
Kojevnikov (2021) Coverage & 0.917 & 0.903 & 0.903 & 0.950 & 0.927 & 0.927 \\
\hline
\multicolumn{7}{c}{Results using Leung (2022)'s method} \\
\hline
\(\hat{\tau}_{\text{ht}}(1,0)\) & 0.542 & & & 0.061 & & \\
Leung SE & 1.079 & & & 0.099 & & \\
Oracle SE & 1.115 & & & 0.101 & & \\
EHW SE & 0.413 & & & 0.051 & & \\
Leung Coverage & 0.932 & & & 0.936 & & \\
Oracle Coverage & 0.949 & & & 0.948 & & \\
EHW Coverage & 0.382 & & & 0.582 & & \\
\hline \hline
\end{tabular}
Note: The effective sample size for each exposure mapping value is \(\hat{n}(1)=426.42\) and \(\hat{n}(0)=295.58\), with a total of \(\hat{n}(1)+\hat{n}(0)=722\). The suggested bandwidth in (9) is \(b_{n}=3\). The average path length is \(\mathcal{L}(A)=18.2498\).
\end{table}
Table 2: Simulation results: network size \(n=1456\)
\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
Outcome model & \multicolumn{3}{c}{Linear-in-Means} & \multicolumn{3}{c}{Complex Contagion} \\
\hline
WLS specification & Unadj & Add & Sat & Unadj & Add & Sat \\
\hline
\(\tau(1,0)\) & 0.557 & 0.557 & 0.557 & 0.066 & 0.066 & 0.066 \\
\(\hat{\tau}(1,0)\) & 0.554 & 0.569 & 0.569 & 0.068 & 0.071 & 0.071 \\
Oracle SE & 0.347 & 0.319 & 0.319 & 0.038 & 0.034 & 0.034 \\
WLS SE & 0.337 & 0.310 & 0.309 & 0.038 & 0.035 & 0.035 \\
WLS\({}^{+}\) SE & 0.361 & 0.330 & 0.330 & 0.045 & 0.040 & 0.040 \\
EHW SE & 0.294 & 0.284 & 0.283 & 0.038 & 0.036 & 0.036 \\
Oracle Coverage & 0.954 & 0.953 & 0.953 & 0.954 & 0.949 & 0.949 \\
WLS Coverage & 0.944 & 0.944 & 0.943 & 0.949 & 0.947 & 0.947 \\
WLS\({}^{+}\) Coverage & 0.961 & 0.959 & 0.958 & 0.979 & 0.978 & 0.978 \\
EHW Coverage & 0.906 & 0.921 & 0.921 & 0.952 & 0.959 & 0.959 \\
\hline
Leung (2019) SE & 0.311 & 0.284 & 0.284 & 0.037 & 0.033 & 0.033 \\
Leung (2019) Coverage & 0.929 & 0.902 & 0.927 & 0.947 & 0.915 & 0.940 \\
Kojevnikov (2021) SE & 0.318 & 0.290 & 0.289 & 0.038 & 0.034 & 0.034 \\
Kojevnikov (2021) Coverage & 0.954 & 0.932 & 0.931 & 0.969 & 0.948 & 0.948 \\
\hline
\multicolumn{7}{c}{Results using Leung (2022)'s method} \\
\hline
\(\hat{\tau}_{\text{ht}}(1,0)\) & 0.550 & & & 0.068 & & \\
Leung SE & 0.805 & & & 0.076 & & \\
Oracle SE & 0.824 & & & 0.079 & & \\
EHW SE & 0.284 & & & 0.036 & & \\
Leung Coverage & 0.936 & & & 0.943 & & \\
Oracle Coverage & 0.947 & & & 0.954 & & \\
EHW Coverage & 0.525 & & & 0.623 & & \\
\hline \hline
\end{tabular}
Note: The effective sample size for each exposure mapping value is \(\hat{n}(1)=848.71\) and \(\hat{n}(0)=595.29\), with a total of \(\hat{n}(1)+\hat{n}(0)=1444\). The suggested bandwidth in (9) is \(b_{n}=3\). The average path length is \(\mathcal{L}(A)=24.809\).
\end{table}
Table 3: Simulation results: network size \(n=2725\)

### Empirical Application I: Paluck et al. (2016)

In this subsection, we revisit Paluck et al. (2016) and employ our regression-based analysis to study their network experiment. Their experiment investigates how an anti-conflict intervention affects the social norms of teenagers with regard to hostile behaviors like bullying, social exclusion, harassment, and spreading rumors. In the experimental design, half of the 56 schools were randomly assigned to the treatment group. Within these treated schools, a subset of students was selected as eligible for treatment based on certain characteristics. Half of the eligible students were then block-randomized into treatment by gender and grade. Those treated students were invited to participate in bi-weekly meetings that incorporated an anti-conflict curriculum. Following Leung (2022), we choose self-reported data on wristband wearing as the outcome of interest; wristbands serve as rewards for students who exhibit anti-conflict behavior. We incorporate both gender and grade for covariate adjustment. The network is measured by asking students to name up to ten students at the school they spent time with in the last few weeks. More details about this network experiment can be found in Paluck et al. (2016).

To align with the outcomes reported in Leung (2022), we restrict the data to the five largest treated schools. Our primary interest lies in assessing the direct effect of the anti-conflict intervention and the spillover effect of having at least one friend assigned to the treatment versus no such friends.
We first calculate the direct and spillover effects by defining two one-dimensional exposure mappings and report the results in Table 4. To examine both effects simultaneously, we define a two-dimensional exposure mapping and report the results in Table 5. The network, obtained from surveys, is directed. When calculating the number of treated friends for the exposure mappings, we take into account the direction of links. However, when computing network neighborhoods for our covariance estimators, we disregard the directionality of links to conservatively define larger neighborhoods. For each exposure mapping, our analysis involves three WLS specifications: unadjusted (Unadj), with additive covariates (Add), and with fully-interacted covariates (Sat).

**One-dimensional exposure mappings.** To compute the direct effect, we define \(T_{i}=D_{i}\) as in Example 2.1 and limit the analysis to the "treatment population," comprising students eligible for treatment, totaling 320 students. The propensity score is \(\pi_{i}(t)=0.5\) for each student. For the spillover effect, we employ \(T_{i}=1(\sum_{j=1}^{n}A_{ij}D_{j}>0)\) as the exposure mapping as in Example 2.2, indicating whether at least one friend has been assigned to the treatment. We restrict the effective sample to units with at least one eligible friend. Under block randomization, we can compute the propensity scores \(\pi_{i}(0)\) and \(\pi_{i}(1)\) for each student using Hypergeometric probabilities.

The results are presented in Table 4. The suggested bandwidths in (9) are \(b_{n}=2\) for both exposure mappings. We present results for the range of bandwidths \(\{0,\ldots,3\}\), where 0 yields the standard errors in the absence of interference. The first row, labeled "Estimate," presents the point estimate obtained from the corresponding WLS fits. The rows labeled "\(b_{n}=k\)" present the HAC standard errors with the stated bandwidth values. Additionally, the rows labeled "\(K_{n}\) PSD" indicate whether the associated matrix \(K_{n}\) is positive semi-definite. Furthermore, the rows labeled "WLS\({}^{+}\) SE" present the adjusted HAC standard errors with the corresponding bandwidth values. The direct effect is statistically significant at the 5% level across all specifications and bandwidths, both before and after adjustment to the covariance estimation. The spillover effect is significant at the 5% level except when \(b_{n}=3\), both before and after adjustment to the covariance estimation.

For the sake of comparison, Table 4 also includes the results from Leung (2022, Table 1). While our results align with the conclusions of Leung (2022), our regression-based estimation approach provides higher precision. Also, the "\(K_{n}\) PSD" lines indicating "NO" imply that Leung (2022)'s variance estimators may be anti-conservative.

\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
 & \multicolumn{3}{c}{Direct Effect} & \multicolumn{3}{c}{Spillover Effect} \\
\hline
WLS specification & Unadj & Add & Sat & Unadj & Add & Sat \\
\hline
Estimate & 0.1500 & 0.1465 & 0.1466 & 0.0479 & 0.0454 & 0.0451 \\
\(b_{n}=0\) & 0.0404 & 0.0398 & 0.0397 & 0.0158 & 0.0157 & 0.0156 \\
\(b_{n}=1\) & 0.0406 & 0.0403 & 0.0403 & 0.0162 & 0.0161 & 0.0159 \\
\(K_{n}\) PSD & \multicolumn{3}{c}{NO} & \multicolumn{3}{c}{NO} \\
WLS\({}^{+}\) SE & 0.0421 & 0.0417 & 0.0417 & 0.0202 & 0.0201 & 0.0198 \\
\(b_{n}=2\) & 0.0350 & 0.0346 & 0.0333 & 0.0167 & 0.0171 & 0.0164 \\
\(K_{n}\) PSD & \multicolumn{3}{c}{NO} & \multicolumn{3}{c}{NO} \\
WLS\({}^{+}\) SE & 0.0495 & 0.0486 & 0.0478 & 0.0274 & 0.0275 & 0.0269 \\
\(b_{n}=3\) & 0.0403 & 0.0390 & 0.0382 & 0.0167 & 0.0163 & 0.0158 \\
\(K_{n}\) PSD & \multicolumn{3}{c}{NO} & \multicolumn{3}{c}{NO} \\
WLS\({}^{+}\) SE & 0.0577 & 0.0568 & 0.0563 & 0.0302 & 0.0300 & 0.0296 \\
\hline
\multicolumn{7}{c}{Results copied from Leung (2022)} \\
\hline
Estimate & 0.1500 & & & 0.0407 & & \\
\(b_{n}=0\) & 0.0443 & & & 0.0167 & & \\
\(b_{n}=1\) & 0.0460 & & & 0.0184 & & \\
\(b_{n}=2\) & 0.0394 & & & 0.0205 & & \\
\(b_{n}=3\) & 0.0470 & & & 0.0170 & & \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Estimates and SEs (Paluck et al., 2016)

**Two-dimensional exposure mapping.** We define the two-dimensional exposure mapping and the contrast matrix \(G\) as in Example 2.3: \(T_{i}=(D_{i},1(\sum_{j=1}^{n}A_{ij}D_{j}>0))\), where \(T_{i}\) takes values in \(\{(0,0),(0,1),(1,0),(1,1)\}\). We focus on the first two components of \(\tau=G\mu\), where the first component captures the direct effect and the second component captures the spillover effect. We restrict the effective sample to students who are eligible for treatment and have at least one eligible friend, resulting in a total of 150 students. The results are presented in the top panel of Table 5. With \(K=1\) for this exposure mapping, the suggested bandwidth in (9) is \(b_{n}=2\), and we present results for the range of bandwidths \(\{0,\ldots,3\}\). We observe that the magnitude and standard errors of the direct effect remain relatively stable. Regarding the spillover effect, its magnitude notably increases, and it remains statistically significant at the 5% significance level across all specifications and bandwidths, even after adjustment to the covariance estimation.

To investigate whether these changes in results arise from shifts in the target population or potential misspecification of the exposure mappings, we provide results using the two one-dimensional exposure mappings while focusing on treatment-eligible students with at least one eligible friend. These results are displayed in the bottom panel of Table 5. Comparing the top and bottom panels, we observe minor differences in the point estimates and standard errors, but the overall message does not change. Specifically, the spillover effect is more pronounced and significant for the subset of students who are both eligible for treatment and have at least one eligible friend, in comparison to the subset with at least one eligible friend. Table 5 also demonstrates that our methods are robust to various specifications of exposure mappings.
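The Hypergeometric computation of the spillover propensity scores under block randomization can be sketched as follows; the block sizes, numbers treated, and friend counts below are illustrative inputs, not the actual experimental data, and the sketch assumes independent complete randomization across blocks.

```python
# A sketch of pi_i(0), pi_i(1) for T_i = 1(at least one treated friend)
# under block randomization; illustrative inputs and helper names.
import numpy as np
from scipy.stats import hypergeom

def spillover_propensity(friends_per_block, eligible_per_block, treated_per_block):
    """friends_per_block[b]: i's eligible friends in block b;
    eligible_per_block[b]: eligible students; treated_per_block[b]: treated."""
    p_none = 1.0
    for m, n_b, k_b in zip(friends_per_block, eligible_per_block, treated_per_block):
        # P(none of i's m eligible friends in block b are among the k_b treated)
        p_none *= hypergeom(M=n_b, n=m, N=k_b).pmf(0)
    return p_none, 1.0 - p_none              # (pi_i(0), pi_i(1))

pi_0, pi_1 = spillover_propensity([2, 1], [20, 18], [10, 9])
```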
With \(K=1\) for this exposure mapping, the suggested bandwidth in (9) is \(b_{n}=2\), and we present results for the range of bandwidths \(\{0,\ldots,3\}\). We observe that the magnitude and standard errors of the direct effect remain relatively stable. Regarding the spillover effect, its magnitude notably increases, and it remains statistically significant at the 5% significance level across all specifications and bandwidths, even after adjustment to the covariance estimation. To investigate whether these changes in results arise from shifts in the target population or potential misspecification of the exposure mappings, we provide results using two one-dimensional exposure mappings and focusing on treatment-eligible students with at least one eligible friend. These results are displayed in the bottom panel of Table 5. Upon comparing the top and bottom panels, we can observe that there are minor differences in the point estimates and standard errors, but the overall message does not change. Specifically, the spillover effect is more pronounced and significant for the subset of students who are both eligible for treatment and have at least one eligible friend, in comparison to the subset with at least one eligible friend. Table 5 also demonstrates that our methods are robust to various specifications of exposure mappings. ### Empirical Application II: Cai et al. (2015) Cai et al. (2015) conducted an experiment in rural China to investigate how farmers' understanding of a weather insurance policy affects their purchasing decisions. The main outcome of interest was whether a household decided to purchase the insurance policy or not. In each village, the experiment included two rounds of information sessions to introduce the insurance product. Each round consisted of two simultaneous sessions: a simple session with less information and an intensive session. The second round of information sessions was scheduled three days after the first round, allowing farmers to communicate with friends. However, this time gap was designed to prevent all the information from the first round from spreading widely throughout the entire population via the network. While the original experiment included a village-level randomization with price variation and a second round of sessions, we focus only on the household-level randomization. For household-level randomization, Cai et al. (2015) initially computed the median values of household size and area of rice production per capita within each village. They then created dummy variables for each household, indicating whether their respective variables were above or below the median. Using this information, households were divided into four strata groups. We use the variables \(\text{Delay}_{i}\) and \(\text{Int}_{i}\) to indicate whether households attended the first round (\(\text{Delay}_{i}=0\)) or the second round (\(\text{Delay}_{i}=1\)) of sessions and whether they attended the simple (\(\text{Int}_{i}=0\)) or intensive (\(\text{Int}_{i}=1\)) sessions, respectively. In Section 2.1, we consider a binary treatment for simplicity, although this assumption is not crucial to our theory. We maintain the flexibility to extend it to discrete treatments with finite and fixed dimensions, like \(D_{i}=(\text{Delay}_{i},\text{Int}_{i})\in\{0,1\}^{2}\) in this experiment. 
The network information is measured by asking household heads to list five close friends, either within or outside the village, with whom they most frequently discussed rice production or financial issues. Consequently, \(A\) is directed. Moreover, respondents were also asked to rank these friends based on which one would be consulted first, second, etc. But in our paper, we do not consider this ranking and instead assign equal weight to each link. Again, we incorporate link directionality when calculating the number of treated friends for exposure mappings but omit it when defining network neighborhoods in a conservative manner for covariance estimators. Our primary interest lies in exploring the direct effect of participating in intensive sessions and the spillover effect of having at least one friend attend the first-round intensive sessions. In Table 6, we present the results of both effects by defining one two-dimensional exposure mapping and \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{Direct Effect} & \multicolumn{3}{c}{Spillover Effect} \\ \hline WLS specification & Unadj & Add & Sat & Unadj & Add & Sat \\ \hline Estimate & 0.1500 & 0.1465 & 0.1466 & 0.0479 & 0.0454 & 0.0451 \\ \(b_{n}=0\) & 0.0404 & 0.0398 & 0.0397 & 0.0158 & 0.0157 & 0.0156 \\ \(b_{n}=1\) & 0.0406 & 0.0403 & 0.0403 & 0.0162 & 0.0161 & 0.0159 \\ \(K_{n}\) PSD & \multicolumn{3}{c}{NO} & \multicolumn{3}{c}{NO} \\ WLS\({}^{+}\) SE & 0.0421 & 0.0417 & 0.0417 & 0.0202 & 0.0201 & 0.0198 \\ \(b_{n}=2\) & 0.0350 & 0.0346 & 0.0333 & 0.0167 & 0.0171 & 0.0164 \\ \(K_{n}\) PSD & \multicolumn{3}{c}{NO} & \multicolumn{3}{c}{NO} \\ WLS\({}^{+}\) SE & 0.0495 & 0.0486 & 0.0478 & 0.0274 & 0.0275 & 0.0269 \\ \(b_{n}=3\) & 0.0403 & 0.0390 & 0.0382 & 0.0167 & 0.0163 & 0.0158 \\ \(K_{n}\) PSD & \multicolumn{3}{c}{NO} & \multicolumn{3}{c}{NO} \\ WLS\({}^{+}\) SE & 0.0577 & 0.0568 & 0.0563 & 0.0302 & 0.0300 & 0.0296 \\ \hline \hline \multicolumn{7}{c}{Results copied from Leung (2022)} \\ \hline \hline Estimate & 0.1500 & & 0.0407 & \\ \(b_{n}=0\) & 0.0443 & & 0.0167 & \\ \(b_{n}=1\) & 0.0460 & & 0.0184 & \\ \(b_{n}=2\) & 0.0394 & & 0.0205 & \\ \(b_{n}=3\) & 0.0470 & & 0.0170 & \\ \hline \hline \end{tabular} \end{table} Table 4: Estimates and SEs (Paluck et al., 2016). two one-dimensional exposure mappings, respectively. For all exposure mappings, we restrict our effective sample to households that attended the second-round session and had at least one friend attending the first round to satisfy Assumption 2, resulting in a total of 1056 households. **One-dimensional exposure mappings.** To calculate the direct effect, we define \(T_{i}=\text{Int}_{i}\) as in Example 2.1. To calculate the spillover effect, we define \(T_{i}=1(\sum_{j=1}^{n}A_{ij}(1-\text{Delay}_{j})\text{Int}_{j}>0)\) as in Example 2.2. The results are presented in the bottom panel of Table 6. **Two-dimensional exposure mappings.** We define the exposure mapping and the contrast matrix \(G\) as in Example 2.3: \(T_{i}=(\text{Int}_{i},1(\sum_{j=1}^{n}A_{ij}(1-\text{Delay}_{j})\text{Int}_{j}>0))\), where \(T_{i}\) takes values in \(\{(0,0),(0,1),(1,0),(1,1)\}\). Again, we focus on the first two components of \(\tau=G\mu\) to capture the direct and spillover effects. The results are presented in the top panel of Table 6. For all three exposure mappings, the suggested bandwidth in (9) is \(b_{n}=3\), and we present results for the bandwidths in \(\{0,2,3,4\}\). 
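For reference, the network HAC standard errors reported in these tables can be sketched as follows. We use a uniform kernel over geodesic distance up to \(b_{n}\) and, as one simple way to enforce positive semi-definiteness, clip the negative eigenvalues of the kernel matrix; the paper's \(K_{n}\) and WLS\({}^{+}\) constructions may differ in detail, and nodes are assumed to be labeled \(0,\dots,n-1\):

```python
import numpy as np
import networkx as nx

def network_hac_se(X, resid, w, G, b_n, psd_fix=True):
    """Sandwich SEs for a WLS fit with a network-HAC 'meat'.

    X: n x k design, resid: WLS residuals, w: weights (e.g. 1/pi),
    G: undirected networkx graph on nodes 0..n-1, b_n: bandwidth.
    """
    n, k = X.shape
    g = X * (w * resid)[:, None]          # per-unit score contributions
    K = np.zeros((n, n))                  # kernel: 1 if d(i, j) <= b_n
    for i, dists in nx.all_pairs_shortest_path_length(G, cutoff=b_n):
        for j in dists:
            K[i, j] = 1.0
    if psd_fix:                           # project the kernel onto the PSD cone
        vals, vecs = np.linalg.eigh(K)
        K = vecs @ np.diag(np.clip(vals, 0.0, None)) @ vecs.T
    meat = g.T @ K @ g
    bread = np.linalg.inv(X.T @ (w[:, None] * X))
    return np.sqrt(np.diag(bread @ meat @ bread))
```

With \(b_{n}=0\) the kernel reduces to the identity and the usual heteroskedasticity-robust standard errors are recovered, consistent with the \(b_{n}=0\) rows above.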
The findings presented in Table 6 do not contradict each other, demonstrating the robustness of our methods to variations in the specifications of regressions \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{Direct Effect} & \multicolumn{3}{c}{Spillover Effect} \\ \hline WLS specification & Unadj & Add & Sat & Unadj & Add & Sat \\ \hline \hline \multicolumn{7}{c}{Two-dimensional exposure mapping} \\ \hline \hline Estimate & 0.1552 & 0.1441 & 0.1424 & 0.1491 & 0.1471 & 0.1653 \\ \(b_{n}=0\) & 0.0510 & 0.0496 & 0.0513 & 0.0510 & 0.0500 & 0.0513 \\ \(b_{n}=1\) & 0.0516 & 0.0497 & 0.0528 & 0.0539 & 0.0538 & 0.0554 \\ \(K_{n}\) PSD & \multicolumn{3}{c}{NO} & \multicolumn{3}{c}{NO} \\ \(\text{WLS}^{+}\) SE & 0.0529 & 0.0509 & 0.0540 & 0.0552 & 0.0550 & 0.0568 \\ \(b_{n}=2\) & 0.0459 & 0.0441 & 0.0488 & 0.0547 & 0.0558 & 0.0616 \\ \(K_{n}\) PSD & \multicolumn{3}{c}{NO} & \multicolumn{3}{c}{NO} \\ \(\text{WLS}^{+}\) SE & 0.0542 & 0.0523 & 0.0576 & 0.0605 & 0.0610 & 0.0665 \\ \(b_{n}=3\) & 0.0429 & 0.0440 & 0.0444 & 0.0499 & 0.0533 & 0.0628 \\ \(K_{n}\) PSD & \multicolumn{3}{c}{NO} & \multicolumn{3}{c}{NO} \\ \(\text{WLS}^{+}\) SE & 0.0586 & 0.0581 & 0.0609 & 0.0673 & 0.0681 & 0.0764 \\ \hline \hline \multicolumn{7}{c}{One-dimensional exposure mapping} \\ \hline \hline Estimate & 0.1701 & 0.1550 & 0.1550 & 0.1681 & 0.1638 & 0.1648 \\ \(b_{n}=0\) & 0.0569 & 0.0568 & 0.0570 & 0.0534 & 0.0525 & 0.0525 \\ \(b_{n}=1\) & 0.0576 & 0.0578 & 0.0580 & 0.0576 & 0.0578 & 0.0579 \\ \(K_{n}\) PSD & \multicolumn{3}{c}{NO} & \multicolumn{3}{c}{NO} \\ \(\text{WLS}^{+}\) SE & 0.0593 & 0.0594 & 0.0597 & 0.0587 & 0.0587 & 0.0589 \\ \(b_{n}=2\) & 0.0494 & 0.0507 & 0.0506 & 0.0609 & 0.0611 & 0.0613 \\ \(K_{n}\) PSD & \multicolumn{3}{c}{NO} & \multicolumn{3}{c}{NO} \\ \(\text{WLS}^{+}\) SE & 0.0604 & 0.0609 & 0.0609 & 0.0665 & 0.0662 & 0.0663 \\ \(b_{n}=3\) & 0.0407 & 0.0400 & 0.0395 & 0.0580 & 0.0601 & 0.0607 \\ \(K_{n}\) PSD & \multicolumn{3}{c}{NO} & \multicolumn{3}{c}{NO} \\ \(\text{WLS}^{+}\) SE & 0.0613 & 0.0607 & 0.0605 & 0.0736 & 0.0733 & 0.0737 \\ \hline \hline \end{tabular} \end{table} Table 5: Estimates and SEs (\(n=150\)) (Paluck et al., 2016). and exposure mappings. Specifically, the direct effect does not exhibit significance across regression specifications and bandwidths. Meanwhile, the spillover effect is statistically significant at the 5% level, although not consistently across all bandwidths. The overarching finding is closely aligned with the estimates presented in Table 2 of Cai et al. (2015). More specifically, providing intensive sessions on insurance and highlighting the anticipated benefits of the product to a specific group of farmers results in a significant and positive spillover effect on other farmers. The variation in the magnitude of the point estimates arises from Cai et al. (2015) using the count of friends attending the first-round intensive session, as opposed to solely considering whether at least one friend attended or not. ## 6 Discussion Network experiments have found extensive applications in economics, the social sciences, public health, and technology companies. We propose a regression-based analysis for the estimation and inference of the exposure effects under the design-based framework. 
The point estimator obtained from the WLS fit is consistent and asymptotically normal for estimating the finite-population expected responses, and \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{Direct Effect} & \multicolumn{3}{c}{Spillover Effect} \\ \hline WLS specification & Unadj & Add & Sat & Unadj & Add & Sat \\ \hline \hline \multicolumn{7}{c}{Two-dimensional exposure mapping} \\ \hline \hline Estimate & 0.0130 & 0.0138 & 0.0109 & 0.0561 & 0.0584 & 0.0682 \\ \(b_{n}=0\) & 0.0264 & 0.0264 & 0.0265 & 0.0264 & 0.0264 & 0.0265 \\ \(b_{n}=2\) & 0.0281 & 0.0280 & 0.0277 & 0.0289 & 0.0288 & 0.0284 \\ \(K_{n}\) PSD & \multicolumn{3}{c}{NO} & \multicolumn{3}{c}{NO} \\ WLS\({}^{+}\) SE & 0.0325 & 0.0324 & 0.0321 & 0.0330 & 0.0329 & 0.0326 \\ \(b_{n}=3\) & 0.0282 & 0.0281 & 0.0276 & 0.0260 & 0.0260 & 0.0257 \\ \(K_{n}\) PSD & \multicolumn{3}{c}{NO} & \multicolumn{3}{c}{NO} \\ WLS\({}^{+}\) SE & 0.0322 & 0.0321 & 0.0314 & 0.0306 & 0.0306 & 0.0301 \\ \(b_{n}=4\) & 0.0292 & 0.0291 & 0.0283 & 0.0264 & 0.0263 & 0.0252 \\ \(K_{n}\) PSD & \multicolumn{3}{c}{NO} & \multicolumn{3}{c}{NO} \\ WLS\({}^{+}\) SE & 0.0335 & 0.0334 & 0.0324 & 0.0312 & 0.0311 & 0.0301 \\ \hline \hline \multicolumn{7}{c}{One-dimensional exposure mapping} \\ \hline \hline Estimate & 0.0131 & 0.0136 & 0.0098 & 0.0568 & 0.0591 & 0.0676 \\ \(b_{n}=0\) & 0.0247 & 0.0247 & 0.0249 & 0.0264 & 0.0263 & 0.0263 \\ \(b_{n}=2\) & 0.0258 & 0.0257 & 0.0261 & 0.0287 & 0.0287 & 0.0280 \\ \(K_{n}\) PSD & \multicolumn{3}{c}{NO} & \multicolumn{3}{c}{NO} \\ WLS\({}^{+}\) SE & 0.0299 & 0.0298 & 0.0301 & 0.0329 & 0.0328 & 0.0322 \\ \(b_{n}=3\) & 0.0256 & 0.0255 & 0.0261 & 0.0257 & 0.0258 & 0.0250 \\ \(K_{n}\) PSD & \multicolumn{3}{c}{NO} & \multicolumn{3}{c}{NO} \\ WLS\({}^{+}\) SE & 0.0289 & 0.0288 & 0.0293 & 0.0304 & 0.0303 & 0.0295 \\ \(b_{n}=4\) & 0.0257 & 0.0256 & 0.0263 & 0.0261 & 0.0260 & 0.0250 \\ \(K_{n}\) PSD & \multicolumn{3}{c}{NO} & \multicolumn{3}{c}{NO} \\ WLS\({}^{+}\) SE & 0.0295 & 0.0294 & 0.0300 & 0.0310 & 0.0308 & 0.0298 \\ \hline \hline \end{tabular} \end{table} Table 6: Estimates and SEs (Cai et al., 2015). the regression-based network HAC covariance estimator serves as an estimator of the true sampling covariance. We propose an easily implementable modified HAC covariance estimator, which ensures a positive semi-definite and asymptotically conservative covariance estimate and improves empirical coverage in finite-sample simulations. As a result, our regression-based inference remains valid from a design-based perspective, regardless of the specifications for the outcome model and exposure mapping. To ensure a comprehensive exploration of the topic, we discuss the regression-based analysis to recover Leung (2022)'s Horvitz-Thompson estimator in Appendix S4. This approach uses an adjusted outcome in combination with a WLS fit, which is less natural than the regression-based analysis that recovers the Hajek estimator. We also analyze the performance of the regression-based HAC covariance estimator for the Horvitz-Thompson estimator via the same WLS fit. Taking into consideration ease of implementation and efficiency, we recommend using WLS fits to obtain the Hajek estimator in practice. Our theoretical results are asymptotic. Some researchers have extended the classical Fisher randomization test to settings with interference, providing exact results in finite samples (Bowers et al., 2016; Athey et al., 2018; Basse et al., 2019; Puelz et al., 2022; Basse et al., 2023). 
This extension often relies on the assumption of correctly specified exposure mappings. The extension of the Fisher randomization test within our framework is more complicated. We conjecture that, under Fisher's sharp null hypothesis that the treatment does not affect any units, we can apply the standard Fisher randomization test and obtain a finite-sample exact \(p\)-value. In most cases, the weak null hypothesis of zero average exposure effects, \(G\mu=0\), is of more interest. However, testing it is challenging because not all missing potential outcomes can be determined. Conducting the Fisher randomization test with a studentized \(t\)-statistic may still be asymptotically valid for testing the weak null hypothesis. Our paper justifies the regression-based HAC standard errors, making the application of the \(t\)-statistic more accessible and feasible. See the idea of studentization in Chung and Romano (2013), Wu and Ding (2021) and Zhao and Ding (2021). However, it remains a challenging problem to construct an exact Fisher randomization test for general non-sharp null hypotheses. Furthermore, extending our findings to observational data with interference remains an open question. Most papers assumed partial interference (Liu et al., 2016; Barkley et al., 2020) or neighborhood interference (Ogburn et al., 2022; Forastiere et al., 2021). Xu (2023) incorporated interference structure into the difference-in-differences estimator without assuming partial interference but with a focus on spatial data. Leung and Loupos (2023) studied nonparametric estimation of direct and spillover effects using observational data from a single network but were constrained to a specific class of exposure mappings. We plan to explore these directions in future research.
```
Investigating interference or spillover effects among units is regarded as a central task in many social science problems. Network experiments, which randomly assign treatments to units, are a powerful tool for avoiding endogeneity. However, analyzing network experiments properly without imposing strong modeling assumptions is not an easy task. Many researchers have previously proposed point estimators and standard errors for causal effects in network experiments. Moreover, this work shows that regression-based point estimators and standard errors can enjoy strong theoretical guarantees if the regression functions and the robust standard errors are carefully specified to accommodate the interference patterns of the network experiment. First, the Hajek estimator coincides with the coefficient obtained from a weighted least squares fit that uses the inverse probabilities of exposure as weights and …
```
2308.00030
Electroweak mass difference of mesons
We consider electroweak (EW) gauge boson corrections to the masses of pseudoscalar mesons to next to leading order (NLO) in $\alpha_s$ and $1/N_C$. The pion mass shift induced by the $Z$-boson is shown to be $m_{\pi^\pm}-m_{\pi^0} = -0.00201(12)$ MeV. While being small compared to the electromagnetic mass shift, the prediction lies about a factor of $\sim 4$ above the precision of the current experimental measurement, and a factor $O(10)$ below the precision of current lattice calculations. This motivates future implementations of these EW gauge boson effects on the lattice. Finally, we consider BSM contributions to the pion mass difference.
Antonio Pich, Arthur Platschorre, Mario Reig
2023-07-31T18:00:01
http://arxiv.org/abs/2308.00030v1
# Electroweak mass difference of mesons ###### Abstract We consider electroweak (EW) gauge boson corrections to the masses of pseudoscalar mesons to next to leading order (NLO) in \(\alpha_{s}\) and \(1/N_{C}\). The pion mass shift induced by the \(Z\)-boson is shown to be \(m_{\pi^{\pm}}-m_{\pi^{0}}=-0.00201(12)\) MeV. While being small compared to the electromagnetic mass shift, the prediction lies about a factor of \(\sim 4\) above the precision of the current experimental measurement, and a factor \(O(10)\) below the precision of current lattice calculations. This motivates future implementations of these EW gauge boson effects on the lattice. Finally, we consider BSM contributions to the pion mass difference. ## I Introduction At very low energies, the strong interaction of mesons is successfully described by the chiral Lagrangian, a perturbative expansion in derivatives of the Goldstone fields and light quark masses. The effective action is entirely determined by the symmetries, and once the parameters of the theory are fixed by observation of several meson quantities, a highly predictive theory emerges, chiral perturbation theory [1; 2; 3]. In QCD with 3 light flavours, the global symmetry is \(SU(3)_{L}\times SU(3)_{R}\), giving 8 Goldstone bosons after spontaneous symmetry breaking by the formation of quark condensates. Turning on quark masses, \(M_{q}=\mathrm{diag}(m_{u},m_{d},m_{s})\), explicitly breaks the flavour symmetry and the meson fields get a mass. The effective action does not allow one to obtain the meson masses purely as a function of quark masses, but it is possible to find relations that connect ratios of the meson masses to (renormalization-scheme independent) ratios of quark masses, one example being the renowned Gell-Mann-Oakes-Renner relation \(\frac{m_{K^{\pm}}^{2}-m_{\pi^{\pm}}^{2}}{m_{\pi^{\pm}}^{2}}=\frac{m_{s}-m_{d}}{m_{u}+m_{d}}\). The process of gauging part of the global symmetries also breaks the chiral flavour symmetry, generating masses for the pseudoscalar mesons. This is well-known for the case of electromagnetism (EM), which breaks the shift symmetries of the charged mesons, thereby generating the pion and kaon mass shifts: \(\delta m_{\pi}=m_{\pi^{\pm}}-m_{\pi^{0}}\). This quantity has been computed using current algebra [4] and in chiral perturbation theory with explicit resonance fields [5], giving \(\delta m_{\pi}\) compatible with the experimental result [6], \[\delta m_{\pi}|_{\mathrm{exp}}=m_{\pi^{\pm}}-m_{\pi^{0}}=4.5936\pm 0.0005\ \mathrm{MeV}\,. \tag{1}\] The pion mass shift is a quantity that can also be computed on the lattice. This direction was initiated in [7] and has currently reached a level of considerable accuracy [8; 9]. The most precise lattice result [8]: \[\delta m_{\pi}=m_{\pi^{\pm}}-m_{\pi^{0}}=4.534\pm 0.042\pm 0.043\ \mathrm{MeV}\,, \tag{2}\] is compatible with the experimental measurement in Eq. 1. While the error on the lattice still has to be substantially reduced to reach the experimental precision, given the rate of improvement of lattice precision in recent years it is not unreasonable to think that in the near future the size of both errors might be comparable. In this letter we show that heavy EW gauge bosons induce small, but possibly _observable_, mass shifts between the neutral and charged mesons, for both the pion and the kaon. Due to the chiral structure of the weak interaction, to leading order (LO) in \(G_{F}\), only the \(Z\) boson contributes to the mass shifts. Similar results to LO in \(\alpha_{s}\) were noted in [10]. 
By doing a calculation at NLO in both \(\alpha_{s}\) and \(1/N_{c}\), our results will show that the expected mass shift induced by the \(Z\) lies well above the uncertainty of the current experimental measurement and slightly below the lattice uncertainties. This implies that future lattice simulations should be sensitive to the effects of the EW gauge bosons, reflecting the need for an implementation on the lattice. This direction is particularly interesting for learning about flavour-symmetry breaking by the weak interaction in the chiral limit. Finally, we discuss future directions, including effects of new physics on the mass differences of mesons. ## II Electroweak interaction and the pion mass difference QCD with 3 light flavours has a \(SU(3)_{L}\times SU(3)_{R}\) global flavour symmetry. Starting at order \(O(p^{2})\), and momentarily neglecting quark masses, the effective Lagrangian below the chiral symmetry breaking scale is of the form: \[\mathcal{L}_{2}=\frac{F^{2}}{4}\mathrm{Tr}\left(D^{\mu}U\left(D_{\mu}U\right)^{\dagger}\right)\,, \tag{3}\] where \(F\) is the chiral coupling constant and the \(SU(3)\) matrix \(U=\exp\left[i\frac{\sqrt{2}}{F}\Phi\right]\) incorporates the pseudoscalar Goldstone octet \[\Phi=\begin{pmatrix}\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta^{0}}{\sqrt{6}}&\pi^{+}&K^{+}\\ \pi^{-}&-\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta^{0}}{\sqrt{6}}&K^{0}\\ K^{-}&\overline{K}^{0}&-\frac{2}{\sqrt{6}}\eta^{0}\end{pmatrix}\,. \tag{4}\] In the SM, the \(SU(2)\times U(1)\) subgroup of this flavour symmetry is gauged. In general, gauging a subgroup of \(SU(3)_{L}\times SU(3)_{R}\) by gauge bosons \(L\) and \(R\) is done by introducing a covariant derivative of the form: \[D_{\mu}U=\partial_{\mu}U-iQ_{L}\ell_{\mu}U+iUr_{\mu}Q_{R}\,. \tag{5}\] For the SM gauge bosons this amounts to introducing: \[D_{\mu}U= \partial_{\mu}U-i\frac{g}{\sqrt{2}}\left(W_{\mu}^{+}T_{W}^{-}+W_{\mu}^{-}T_{W}^{+}\right)U-ie\left(A_{\mu}-\tan\theta_{W}Z_{\mu}\right)[Q_{\rm em},U]-i\frac{g}{\cos\theta_{W}}Z_{\mu}T_{3L}U\,, \tag{6}\] where we have explicitly included the photon and the EW gauge bosons with the generators: \[T_{W}^{-}=\left(T_{W}^{+}\right)^{\dagger}=\begin{pmatrix}0&V_{ud}&V_{us}\\ 0&0&0\\ 0&0&0\end{pmatrix}\,, \tag{7}\] and the diagonal matrices \(T_{3L}={\rm diag}(1/2,-1/2,-1/2)\) and \(Q_{\rm em}={\rm diag}(2/3,-1/3,-1/3)\). The heavy EW gauge bosons are introduced as spurions in order to track the pattern of explicit symmetry breaking. However, since these particles lie well above the cut-off of the effective theory, usually taken to be \(\Lambda_{\chi{\rm SB}}\sim 4\pi F\), special care has to be taken in deriving explicit results from this Lagrangian. We shall return to this issue momentarily. Expanding Eq. 3 to quadratic order in \(\Phi\), we can see that non-zero Goldstone masses are generated by terms of the form: \[-\frac{F^{2}}{2}{\rm Tr}\left(Q_{L}UQ_{R}U^{\dagger}\right)\dot{=}\frac{1}{2}{\rm Tr}\left([Q_{L},\Phi][\Phi,Q_{R}]\right) \tag{8}\] where \(Q_{L}\) and \(Q_{R}\) are spurion matrices representing the action of gauge fields. Note that not all of these terms break the shift symmetries in the chiral limit: meson self-energies are generated by loop diagrams with no external gauge bosons, so terms involving two different gauge bosons do not contribute to the meson masses at LO. Since the \(W^{\pm}\) couplings are purely left-handed, they cannot contribute to \(Q_{R}\) and, therefore, do not generate any meson mass shift. 
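For completeness, the quadratic term quoted in Eq. 8 follows from expanding \(U=\exp\left[i\frac{\sqrt{2}}{F}\Phi\right]\) to second order in \(\Phi\): \[UQ_{R}U^{\dagger}=Q_{R}+i\frac{\sqrt{2}}{F}\left[\Phi,Q_{R}\right]-\frac{1}{F^{2}}\left[\Phi,\left[\Phi,Q_{R}\right]\right]+O\big{(}\Phi^{3}\big{)}\,,\] so that \[-\frac{F^{2}}{2}{\rm Tr}\left(Q_{L}UQ_{R}U^{\dagger}\right)\Big{|}_{\Phi^{2}}=\frac{1}{2}{\rm Tr}\left(Q_{L}[\Phi,[\Phi,Q_{R}]]\right)=\frac{1}{2}{\rm Tr}\left([Q_{L},\Phi][\Phi,Q_{R}]\right)\,,\] where the last equality uses the cyclicity of the trace.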
The only contribution to \(Q_{R}\) comes from the spurion \(Q_{\rm em}\), which as seen from Eq. 6 occurs for both the photon and the \(Z\), and acts as: \[[Q_{\rm em},\Phi]=\begin{pmatrix}0&\pi^{+}&K^{+}\\ -\pi^{-}&0&0\\ -K^{-}&0&0\end{pmatrix}\,. \tag{9}\] This implies that only the charged mesons acquire a mass, through their interaction with the neutral gauge bosons, which contribute as: \[\frac{eg}{2\cos\theta_{W}}{\rm Tr}\left([T_{3L},\Phi]\,[\Phi,Q_{\rm em}]\right)\left(A_{\mu}-\tan\theta_{W}Z_{\mu}\right)Z^{\mu}\,, \tag{10}\] and: \[\frac{e^{2}}{2}{\rm Tr}\left([Q_{\rm em},\Phi][\Phi,Q_{\rm em}]\right)\left(A_{\mu}-\tan\theta_{W}Z_{\mu}\right)(A^{\mu}-\tan\theta_{W}Z^{\mu})\,. \tag{11}\] Again, the term involving \(A_{\mu}Z^{\mu}\) cannot contribute to meson masses. Combining Eq. 10 and Eq. 11, and retaining only the relevant terms involving \(A_{\mu}A^{\mu}\) and \(Z_{\mu}Z^{\mu}\), the interaction reads: \[e^{2}\left(\pi^{+}\pi^{-}+K^{+}K^{-}\right)\left(A_{\mu}A^{\mu}-Z_{\mu}Z^{\mu}\right). \tag{12}\] An order of magnitude estimate can be given at this point for the \(Z\)-boson induced mass shift using naive dimensional analysis: \[\Delta m_{\pi}^{2}=\frac{e^{2}}{4\pi^{2}M_{Z}^{2}}\Lambda_{\chi{\rm SB}}^{4} \rightarrow\delta m_{\pi}\sim 0.002\ {\rm MeV}\,. \tag{13}\] The fact that this estimate lies above the current experimental uncertainty and is comparable to the lattice precision motivates us to perform a more careful analysis. As in the electromagnetic (EM) contribution [5], we capture the effects of both \(A_{\mu}\) and \(Z_{\mu}\) by adding the following local operators involving the spurion matrices \(Q_{\rm em}\) and \(Q_{L,R}^{Z}\equiv\frac{g}{\cos\theta_{W}}\,\mathcal{Q}_{L,R}\): \[\mathcal{L}_{2}^{C}=e^{2}C_{\rm em}\langle Q_{\rm em}UQ_{\rm em}U^{\dagger}\rangle+4\sqrt{2}G_{F}C_{Z}\langle\mathcal{Q}_{L}U\mathcal{Q}_{R}U^{\dagger}\rangle\,, \tag{14}\] with \(4\sqrt{2}G_{F}\) the low-energy coupling of the \(Z\) boson, \[\mathcal{Q}_{L}=\begin{pmatrix}\frac{1}{2}-\frac{2}{3}x&0&0\\ 0&-\frac{1}{2}+\frac{1}{3}x&0\\ 0&0&-\frac{1}{2}+\frac{1}{3}x\end{pmatrix}\,, \tag{15}\] \[\mathcal{Q}_{R}=\begin{pmatrix}-\frac{2}{3}x&0&0\\ 0&\frac{1}{3}x&0\\ 0&0&\frac{1}{3}x\end{pmatrix} \tag{16}\] and \(x=\sin^{2}\theta_{W}\). The determination of \(C_{Z}\) to NLO in \(\alpha_{s}\) and \(1/N_{c}\) is the goal of this letter. The coefficients \(C_{\rm em}\) and \(C_{Z}\) are low-energy constants fixed by matching to the high-energy theory; they determine the electromagnetic and electroweak meson mass differences \(\Delta m_{P}^{2}\equiv m_{P^{\pm}}^{2}-m_{P^{0}}^{2}\) of pions and kaons in the chiral limit: \[\Delta m_{\pi}^{2}=\Delta m_{K}^{2}=\frac{2e^{2}}{F^{2}}\left(C_{\rm em}-\frac{C_{Z}}{M_{Z}^{2}}\right). \tag{17}\] In [5] it was shown that the EM mass shift from resonance exchange saturates the constant \(C_{\rm em}\) and is given in terms of the resonance parameters \(F_{V},M_{V}\) by: \[\Delta m_{\pi}^{2}|_{\rm em}=\frac{3\alpha_{\rm em}}{4\pi F^{2}}F_{V}^{2}M_{V}^{2}\ln\frac{F_{V}^{2}}{F_{V}^{2}-F^{2}}\,. \tag{18}\] A corresponding resonance loop calculation including the \(Z\) boson in order to determine \(C_{Z}\) is subtle. The reason is that the parameter \(M_{Z}\) lies well above the cut-off, \(\Lambda_{\chi\rm SB}\), and the \(Z\) therefore must be integrated out. The resulting EFT is QCD with four-fermion operators that encode all the information of the chiral symmetry breaking by the EW bosons. 
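As a quick arithmetic check of Eq. 13, the following lines (a minimal sketch with standard inputs and the identification \(\Lambda_{\chi\rm SB}=4\pi F\), \(F=92.1\) MeV) reproduce the quoted order of magnitude:

```python
import math

alpha = 1 / 137.036                    # fine-structure constant
e2 = 4 * math.pi * alpha
F = 0.0921                             # GeV, chiral coupling constant
Lam = 4 * math.pi * F                  # GeV, chiral symmetry breaking scale
MZ, mpi = 91.1876, 0.1349768           # GeV

dm2 = e2 / (4 * math.pi**2 * MZ**2) * Lam**4   # Eq. (13), in GeV^2
print(f"{dm2 / (2 * mpi) * 1e3:.4f} MeV")      # ~0.0019 MeV
```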
Using the renormalization group (RG) to run the Wilson coefficients of these operators down to a scale \(\mu\sim 1\) GeV allows matching to the operators in Eq. 14 of the chiral Lagrangian and thereby a determination of \(C_{Z}\). ### Z-induced left-right four-quark operators Integrating out the \(Z\) boson introduces 4-fermion operators that break the chiral \(SU(3)_{L}\times SU(3)_{R}\) symmetry. The relevant left-right (LR) operators are: \[[Q_{1}^{LR}]_{ijk\ell}=(\overline{q}_{Li}\gamma^{\mu}q_{Lj})\left(\overline{q}_{Rk}\gamma_{\mu}q_{R\ell}\right) \tag{19}\] \[[Q_{2}^{LR}]_{ijk\ell}=(\overline{q}_{Li}q_{Rk})\left(\overline{q}_{R\ell}q_{Lj}\right)\,, \tag{20}\] with \(i,j,k,\ell\) being light-quark flavour indices. While \(Q_{1}^{LR}\) is generated by a \(Z\)-exchange at tree level, \(Q_{2}^{LR}\) is obtained after applying a Fierz identity on the gluon corrections to \(Q_{1}^{LR}\). The effective Lagrangian below \(M_{Z}\) reads: \[\mathcal{L}_{\rm eff}=-4\sqrt{2}G_{F}\sum_{ijk\ell}\left(\mathcal{Q}_{L}\right)_{ij}\left(\mathcal{Q}_{R}\right)_{k\ell}\left[C_{1}Q_{1}^{LR}+C_{2}Q_{2}^{LR}\right]_{ijk\ell}\,, \tag{21}\] with \(C_{1,2}\) being the Wilson coefficients. When QCD effects are taken into account, the renormalised Wilson coefficients at the \(M_{Z}\) scale become [11]: \[C_{1} =1+\frac{\alpha_{s}}{4\pi}\frac{3}{N_{c}}\left[-\ln\frac{M_{Z}^{2}}{\mu^{2}}-\frac{1}{6}\right]\,, \tag{22}\] \[C_{2} =\frac{\alpha_{s}}{4\pi}\left[-6\ln\frac{M_{Z}^{2}}{\mu^{2}}-1\right]\,, \tag{23}\] where the non-logarithmic corrections are scheme dependent. The operators above will mix under RG flow and their evolution down to the scale of interest (\(\sim 1\) GeV) can be calculated by standard procedures [12], using their anomalous dimension matrices: \[\frac{d\vec{C}}{d\ln\mu}=\gamma^{T}\vec{C}\,. \tag{24}\] Up to order \(O(\alpha_{s}^{2})\), this matrix can be expanded as: \[\gamma=\frac{\alpha_{s}}{4\pi}\gamma^{0}+\left(\frac{\alpha_{s}}{4\pi}\right)^{2}\gamma^{1}+O(\alpha_{s}^{3})\,, \tag{25}\] with \(\gamma^{0},\gamma^{1}\) given by [13]: \[\gamma^{0}=\left(\begin{array}{cc}\frac{6}{N_{c}}&12\\ 0&-6N_{c}+\frac{6}{N_{c}}\end{array}\right)\,,\quad\gamma^{1}=\left(\begin{array}{cc}\frac{137}{6}+\frac{15}{2N_{c}}-\frac{22}{3N_{c}}f&\frac{200}{3}N_{c}-\frac{6}{N_{c}}-\frac{44}{3}f\\ \frac{71}{4}N_{c}+\frac{6}{N_{c}}-2f&-\frac{203}{6}N_{c}^{2}+\frac{479}{6}+\frac{15}{2N_{c}^{2}}+\frac{10}{3}N_{c}f-\frac{22}{3N_{c}}f\end{array}\right)\,. \tag{26}\] Solving Eq. 24 yields the evolution: \[\vec{C}(\mu)=T\,\exp\left[\int_{\alpha_{s}(M_{Z})}^{\alpha_{s}(\mu)}d\alpha_{s}\frac{\gamma^{T}}{\beta(\alpha_{s})}\right]\vec{C}(M_{Z})\,, \tag{27}\] where we have introduced the QCD \(\beta\) function as: \[\beta=-2\alpha_{s}\left[\beta_{0}\frac{\alpha_{s}}{4\pi}+\beta_{1}\left(\frac{\alpha_{s}}{4\pi}\right)^{2}+O(\alpha_{s}^{3})\right]\,. \tag{28}\] The coefficients used are given by \(\beta_{0}=\frac{11N_{c}-2f}{3}\) and \(\beta_{1}=\frac{34}{3}N_{c}^{2}-\frac{10}{3}N_{c}f-\frac{N_{c}^{2}-1}{N_{c}}f\) [14], where \(f\) is the number of active flavours. To NLO and after integrating out the \(b\) and \(c\) quarks, the Wilson coefficients at the scale \(\mu\sim 1\) GeV are: \[C_{1}=0.92\,,\;\;\;C_{2}=-2.45\,. \tag{29}\] Similar enhancements of \(C_{2}\) are noticed in [15]. ### Matching to the chiral Lagrangian at large \(N_{c}\) We proceed to match the resulting EFT to the chiral Lagrangian. 
We do so by calculating the expectation value of the matrix elements of the 4-fermion operators in the large-\(N_{c}\) limit, in which products of colour-singlet currents factorise. In this limit, the operator \(Q_{1}^{LR}\) reduces to the product of a left and a right current: \[[Q_{1}^{LR}]_{ijk\ell}=\mathcal{J}_{L,ji}^{\mu}\,\mathcal{J}_{\mu,\ell k}^{R}\,. \tag{30}\] Since the low-energy representation of these currents starts at \(O(p)\) in the chiral-perturbation-theory expansion, the large-\(N_{C}\) expression of \(Q_{1}^{LR}\) is of \(O(p^{2})\) and, therefore, does not contribute to the \(O(p^{0})\) operator in Eq. 14. Owing to its different scalar-pseudoscalar structure, the operator \(Q_{2}^{LR}\) does contribute at \(O(p^{0})\), receiving a chiral enhancement of the form: \[[Q_{2}^{LR}]_{ijk\ell} =\langle\overline{q}_{L}^{i}q_{R}^{k}\rangle\langle\overline{q}_{R}^{\ell}q_{L}^{j}\rangle\,\left\{1+O\left(\frac{1}{N_{c}}\right)\right\} \tag{31}\] \[=\frac{1}{4}B_{0}^{2}F^{4}U_{ki}U_{j\ell}^{\dagger}\,\left\{1+O\bigg{(}\frac{1}{N_{c}}\bigg{)}\right\}+O\big{(}p^{2}\big{)}\,, \tag{32}\] with \(B_{0}=-\langle\bar{q}q\rangle/F^{2}=m_{\pi^{\pm}}^{2}/(m_{u}+m_{d})\). Matching the contribution of \(Q_{2}^{LR}\) to the effective theory, a LO estimate in \(N_{c}\) can be given for \(C_{Z}\): \[C_{Z}=-\frac{1}{4}\,B_{0}^{2}(\mu)\,F^{4}\,C_{2}(\mu)\,. \tag{33}\] One can easily check that, in the large-\(N_{c}\) limit, the \(\mu\) dependence of \(C_{2}(\mu)\) is exactly cancelled by the quark-mass factors in \(B_{0}^{2}(\mu)\), as it should. ### 1/\(N_{c}\) corrections to \(Q_{1}^{LR}\) As shown in [10], the low-energy constants in Eq. 14 can be related to the two-point correlation function of a left and a right QCD current, \(\Pi_{LR}(Q^{2})\), which converges nicely in the UV. This fact allows one to evaluate the leading non-zero \(O(p^{0})\) contributions of \(Q_{1}^{LR}\), originating from loops of Goldstone bosons and vector and axial-vector resonance fields, which are NLO corrections in \(1/N_{c}\). The full details of the calculation are given in the Appendix. Integrating only the low-energy region \(0\leq Q^{2}\leq\mu^{2}\) (contributions from \(Q^{2}>\mu^{2}\) are already included in the Wilson coefficients), one finds \[\Delta C_{Z}|_{Q_{1}^{LR}}=\frac{3}{32\pi^{2}}\left\{\sum_{A}F_{A_{i}}^{2}M_{A_{i}}^{4}\log\left(1+\frac{\mu^{2}}{M_{A_{i}}^{2}}\right)-\sum_{V}F_{V_{i}}^{2}M_{V_{i}}^{4}\log\left(1+\frac{\mu^{2}}{M_{V_{i}}^{2}}\right)\right\}C_{1}(\mu)\,. \tag{34}\] Since we are interested in the matrix element of the operator \(Q_{1}^{LR}\) at around the \(\mu\sim 1\) GeV scale, we work in the lightest-resonance approximation with their couplings fixed through the Weinberg conditions [16; 17]: \[F_{V}^{2}=\frac{M_{A}^{2}}{M_{A}^{2}-M_{V}^{2}}\,F^{2}\,,\qquad F_{A}^{2}=\frac{M_{V}^{2}}{M_{A}^{2}-M_{V}^{2}}\,F^{2}\,. \tag{35}\] Within the single-resonance approximation that we have adopted, \(M_{A}=\sqrt{2}M_{V}\) [17]. For the numerical evaluation we will take \(M_{V}=M_{\rho}=775.26\pm 0.23\) MeV and \(F=F_{\pi}=92.1\pm 0.8\) MeV [14]. As expected from its loop suppression, \(\left.\Delta C_{Z}\right|_{Q_{1}^{LR}}\) is of \(O(F^{2})\sim O(N_{c})\) and, therefore, is an NLO correction in \(1/N_{c}\) of about \(O(10\%)\) with respect to the leading \(O(F^{4})\sim O(N_{c}^{2})\) contribution from \(Q_{2}^{LR}\) in Eq. 33. ### EW contribution to the pion mass difference Using Eq. 17 and the results above in Eqs. 
33, 34 and 35, the pion mass shift induced by the \(Z\) reads: \[\Delta m_{\pi}^{2}|_{Z}=\frac{e^{2}}{M_{Z}^{2}}\left\{\frac{F^{2}}{2}B_{0}^{2}(\mu)C_{2}(\mu)+\frac{3}{16\pi^{2}}C_{1}(\mu)\frac{M_{A}^{2}M_{V}^{2}}{M_{A}^{2}-M_{V}^{2}}\left[M_{V}^{2}\log\left(1+\frac{\mu^{2}}{M_{V}^{2}}\right)-M_{A}^{2}\log\left(1+\frac{\mu^{2}}{M_{A}^{2}}\right)\right]\right\}. \tag{36}\] This translates into a \(Z\)-induced pion mass difference: \[\delta m_{\pi}|_{Z}\approx\frac{\Delta m_{\pi}^{2}|_{Z}}{2m_{\pi}}=-0.00201(7)(2)(10)\,\,\mbox{MeV}\,, \tag{37}\] where we have used \(m_{\pi}=134.9768\pm 0.0005\) MeV [14] and \((m_{u}+m_{d})/2=3.381\pm 0.040\) MeV [18]. The first error displays the parametric uncertainty induced by the different inputs. The second uncertainty accounts for the renormalization-scale dependence in the interval \(\mu\in[0.8,1.2]\) GeV which, as shown in Fig. 1, is tiny. We have added half the difference between the LO and NLO results as an estimate of unknown higher-order effects (third error). We notice that the \(Z\)-boson contribution is about a factor of \(\sim 4\) larger than the experimental error in Eq. 1 and \(\sim O(10)\) smaller than the current lattice precision in Eq. 2, reinforcing the motivation to incorporate these effects on the lattice. The renormalization-scale dependence of this result for energies in the range \([0.8,1.2]\) GeV is plotted in Fig. 1. ## Discussion Before closing we comment on several points that deserve mention. * The estimate in Eq. 37 is based on an NLO evaluation of the Wilson coefficients \(C_{1,2}(\mu)\), which depends on the precise values of the strong coupling at \(M_{Z}\), \(\alpha_{s}(M_{Z})=0.1184\pm 0.0008\) [18], and at the different matching scales (known to percent level or better). * Our result \(\delta m_{\pi}|_{Z}\) appears to be of the same order as the two-loop EM effect, which naively one expects to be: \[\delta m_{\pi}|_{\rm em}^{(2)}\approx\left(\frac{\alpha_{\rm em}}{2\pi}\right)\delta m_{\pi}|_{\rm em}^{(1)}\,.\] (38) * BSM models that generate 4-quark LR operators at energies below the new physics scale, \(\Lambda_{\rm NP}\gg\Lambda_{\chi\rm SB}\), will induce similar pion mass shifts. This is the case, for example, of the \(Z^{\prime}\) models studied in [11], and similar SM extensions. Since the QCD corrections dominate near the GeV scale, a reasonable estimate is just the rescaling: \[\delta m_{\pi}|_{\rm NP}=\frac{g_{\rm NP}^{2}}{\Lambda_{\rm NP}^{2}}\frac{\delta m_{\pi}|_{Z}}{4\sqrt{2}G_{F}}\,.\] (39) If new physics is instead light, as proposed in [19; 20], one should rescale the resonance calculation for EM effects [5]. ## Acknowledgments We would like to thank Prateek Agrawal, Hector Gisbert, Victor Miralles and Fernando Romero for helpful discussions and enlightening comments on the early drafts of this letter. Antonio Pich is supported by Generalitat Valenciana, Grant No. Prometeo/2021/071, and MCIN/AEI/10.13039/501100011033, Grant No. PID2020-114473GB-I00. Arthur Platschorre is supported by an STFC Studentship No. 2397217 and Prins Bernhard Cultuurfondsbeurs No. 40038041 made possible by the Pieter Beijer fonds and the Data-Piet fonds. ## Appendix In the large-\(N_{C}\) limit, the strong interaction reduces to tree-level hadronic diagrams. 
Keeping only those terms that are relevant for our calculation, the effective Lagrangian describing the mesonic world contains the LO Goldstone term \(\mathcal{L}_{2}\) and the vector and axial-vector couplings (kinetic terms are omitted) [17]: \[\mathcal{L}_{V,A}=\sum_{V_{i}}\frac{F_{V_{i}}}{2\sqrt{2}}\,\langle V_{i}^{\mu \nu}f_{+\mu\nu}\rangle+\sum_{A_{i}}\frac{F_{A_{i}}}{2\sqrt{2}}\,\langle A_{i} ^{\mu\nu}f_{-\mu\nu}\rangle\,, \tag{40}\] where \(f_{\pm}^{\mu\nu}=u^{\dagger}F_{L}^{\mu\nu}u\pm uF_{R}^{\mu\nu}u^{\dagger}\) with \(U=u^{2}\) the Goldstone \(SU(3)\) matrix and \(F_{L,R}^{\mu\nu}\) the left (\(\ell^{\mu}\)) and right (\(r^{\mu}\)) field strengths. The spin-1 resonances are described through the antisymmetric tensors \(V_{i}^{\mu\nu}\) and \(A_{i}^{\mu\nu}\)[5; 21]. The left and right QCD currents are easily computed, taking derivatives with respect to the external \(\ell^{\mu}\) and \(r^{\mu}\) fields: \[\mathcal{J}_{L}^{\mu} = i\frac{F^{2}}{2}\,D^{\mu}UU^{\dagger}+\sum_{V_{i}}\frac{F_{V_{i} }}{\sqrt{2}}\,\partial_{\nu}(uV_{i}^{\mu\nu}u^{\dagger}) \tag{41}\] \[+\sum_{A_{i}}\frac{F_{A_{i}}}{\sqrt{2}}\,\partial_{\nu}(uA_{i}^{ \mu\nu}u^{\dagger})+\cdots\] while \(\mathcal{J}_{R}^{\mu}\) is obtained from this expression exchanging \(u\leftrightarrow u^{\dagger}\) and putting a negative sign in the axial contributions. The bosonization of \([Q_{1}^{LR}]_{ijk\ell}\) is formally given by [22] \[\langle[Q_{1}^{LR}(x)]_{ijkl}\rangle_{G}=\frac{\partial\Gamma}{\partial\ell _{\mu}^{ij}(x)}\,\frac{\partial\Gamma}{\partial r^{\mu,kl}(x)}-i\,\frac{ \partial^{2}\Gamma}{\partial\ell_{\mu}^{ij}(x)\,\partial r^{\mu,kl}(x)} \tag{42}\] with \(\Gamma[\ell,r]\) the effective theory generating functional. The first term is just the product of the two currents and receives \(O(p^{0})\) contributions from loop diagrams with vector and axial-vector internal propagators. The second term (the derivative of \(\mathcal{J}_{L}^{\mu}\) with respect to \(r^{\mu}\)) generates an additional \(O(p^{0})\) contribution through Goldstone loops. The combined result can be written in the form: \[\sum_{ijkl}\mathcal{Q}_{L}^{ij}\mathcal{Q}_{R}^{kl}\,\,[Q_{1}^{LR }]_{ijkl}=\frac{3}{32\pi^{2}}\,\langle\mathcal{Q}_{L}U\mathcal{Q}_{R}U^{\dagger}\rangle\] \[\quad\times\int_{0}^{\infty}dQ^{2}\,\left\{\sum_{V}\frac{F_{V_{i} }^{2}M_{V_{i}}^{4}}{M_{V_{i}}^{2}+Q^{2}}-\sum_{A}\frac{F_{A_{i}}^{2}M_{A_{i}}^ {4}}{M_{A_{i}}^{2}+Q^{2}}\right\}, \tag{43}\] where the Weinberg conditions [16] \[\sum_{i}\left(F_{V_{i}}^{2}-F_{A_{i}}^{2}\right)\,\,=\,\,F^{2}\,,\] \[\sum_{i}\left(M_{V_{i}}^{2}F_{V_{i}}^{2}-M_{A_{i}}^{2}F_{A_{i}}^{2}\right)\ =\ 0\,, \tag{44}\] have been used in order to simplify the final expression. Eq. 43 agrees with the result obtained in [10], using the alternative Proca description of spin-1 fields. Performing the integration in the low-energy region \(0\leq Q^{2}\leq\mu^{2}\) one obtains the result for \(\left.\Delta C_{Z}\right|_{Q_{1}^{LR}}\) in Eq. 34.
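As a closing numerical cross-check, Eq. 36 is straightforward to evaluate with the inputs quoted in the main text. The sketch below reproduces the central value of Eq. 37 at the few-percent level; the factor \(\simeq 1.35\) used to run \(m_{u}+m_{d}\) from 2 GeV down to \(\mu=1\) GeV is our own additional input, needed because \(B_{0}(\mu)\) and \(C_{1,2}(\mu)\) must be evaluated at a common scale:

```python
import math

F, MV = 92.1, 775.26                  # MeV
MA = math.sqrt(2) * MV                # single-resonance relation
MZ, mpi0, mpic = 91187.6, 134.9768, 139.57039
mud = 2 * 3.381 * 1.35                # m_u + m_d run from 2 GeV to 1 GeV (assumed factor)
B0 = mpic**2 / mud                    # B0 at mu ~ 1 GeV
C1, C2, mu = 0.92, -2.45, 1000.0
e2 = 4 * math.pi / 137.036

bracket = (MV**2 * math.log(1 + mu**2 / MV**2)
           - MA**2 * math.log(1 + mu**2 / MA**2))
dm2 = e2 / MZ**2 * (F**2 / 2 * B0**2 * C2
                    + 3 / (16 * math.pi**2) * C1
                    * MA**2 * MV**2 / (MA**2 - MV**2) * bracket)
print(f"delta m_pi|Z = {dm2 / (2 * mpi0):.5f} MeV")   # approx -0.0021
```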
```
We compute the electroweak (EW) gauge-boson corrections to the masses of pseudoscalar mesons to next-to-leading order in α_s and 1/N_C. The pion mass shift induced by the Z boson is m_π^± − m_π^0 = −0.00201(12) MeV. While small compared to the electromagnetic mass shift, this lies about a factor of 4 above the precision of the experimental measurement and about a factor of 10 below the precision of lattice calculations. This motivates implementing these EW gauge-boson effects in lattice calculations. Finally, we consider the BSM contributions to the pion mass difference.
```
2309.06768
Hierarchical Time-Optimal Planning for Multi-Vehicle Racing
This paper presents a hierarchical planning algorithm for racing with multiple opponents. The two-stage approach consists of a high-level behavioral planning step and a low-level optimization step. By combining discrete and continuous planning methods, our algorithm encourages global time optimality without being limited by coarse discretization. In the behavioral planning step, the fastest behavior is determined with a low-resolution spatio-temporal visibility graph. Based on the selected behavior, we calculate maneuver envelopes that are subsequently applied as constraints in a time-optimal control problem. The performance of our method is comparable to a parallel approach that selects the fastest trajectory from multiple optimizations with different behavior classes. However, our algorithm can be executed on a single core. This significantly reduces computational requirements, especially when multiple opponents are involved. Therefore, the proposed method is an efficient and practical solution for real-time multi-vehicle racing scenarios.
Georg Jank, Matthias Rowold, Boris Lohmann
2023-09-13T07:40:05
http://arxiv.org/abs/2309.06768v1
# Hierarchical Time-Optimal Planning for Multi-Vehicle Racing* ###### Abstract This paper presents a hierarchical planning algorithm for racing with multiple opponents. The two-stage approach consists of a high-level behavioral planning step and a low-level optimization step. By combining discrete and continuous planning methods, our algorithm encourages global time optimality without being limited by coarse discretization. In the behavioral planning step, the fastest behavior is determined with a low-resolution spatio-temporal visibility graph. Based on the selected behavior, we calculate maneuver envelopes that are subsequently applied as constraints in a time-optimal control problem. The performance of our method is comparable to a parallel approach that selects the fastest trajectory from multiple optimizations with different behavior classes. However, our algorithm can be executed on a single core. This significantly reduces computational requirements, especially when multiple opponents are involved. Therefore, the proposed method is an efficient and practical solution for real-time multi-vehicle racing scenarios. ## I Introduction Planning trajectories in environments with dynamic obstacles is a major task in autonomous driving. Although approaches for traffic scenarios and racing can be similar, high speeds, small distances, and different rules pose a unique challenge in competitive driving on race tracks (like the Indy Autonomous Challenge). Trajectory planning in this environment requires rapid solving of non-convex optimization problems to generate time-optimal behavior (e.g. left or right overtake) with a corresponding feasible trajectory. The majority of recent planning approaches for racing solve the behavior and trajectory generation problem in one step by selecting the cost-minimum option from a finite number of generated trajectories [1, 2, 3]. These methods are not prone to local optima, as they cover a large region of the search space. However, they only find discrete-optimal solutions, as they do not explore all possible trajectories. We call them discrete methods in the following. Numerical optimization-based methods, on the other hand, solve an optimal control problem (OCP) with only the time or progress along a curve being discretized. Thus, they are often referred to as continuous methods. As the control problem is non-convex, they converge to different local optima, depending on the initialization. One way to consider the non-convexity is to solve multiple OCPs in parallel, one for each behavior class. However, this does not scale well for multiple opponents and relies on parallel processing capabilities to achieve low computation times. For rapid planning in an environment with multiple opponents, we propose a hierarchical planning approach that uses a spatio-temporal visibility graph to determine a high-level behavior and set the constraints for a low-level numerical optimization. In essence, we adopt discrete methods for exploration and continuous methods for exploitation, thereby combining the strengths of both approaches. ## II Related Work Discrete planning methods generate and compare a finite number of candidate trajectories. There are two main subcategories of discrete planning approaches: sampling-based and graph search methods [4]. Sampling-based methods, using a rapidly-exploring random tree (RRT) [5], generate trajectory candidates randomly with forward dynamics. These are checked for feasibility and ranked to find the discrete-optimal trajectory. 
Other sampling-based approaches, applied in racing, sample jerk-minimal splines [1, 3]. A major disadvantage of sampling-based methods is the large number of candidates required to plan complex driving maneuvers [6]. Graph search methods aim to reduce the number of trajectory candidates by creating a graph of feasible trajectory segments called edges. A graph search then determines the cost-minimal sequence of edges. In path-velocity decomposition [7], trivial overtaking maneuvers are planned by calculating a collision-free velocity profile on an optimal path derived from a spatial graph. Even though such approaches have been applied in racing [8], they are not time-optimal, as they do not fully capture the spatio-temporal character of the problem. Directly considering dynamic obstacles in the graph leads to spatio-temporal graphs [2]. However, this requires at least one additional dimension (time, velocity, or both). Therefore, the discretization must be kept coarse to mitigate the curse of dimensionality. Continuous methods only discretize the time or progress along a curve and solve an OCP numerically. As they converge to continuous local optima, they have become a common choice for trajectory planning in autonomous motor-sport [9, 10, 11, 12, 13, 14]. Due to the non-convexity of most planning problems in racing, different initial guesses can lead to different local optima [14]. To find the global optimum, a common approach is to solve multiple OCPs in parallel, one for each homotopy class, i.e. overtaking behavior [10, 12]. The results are then compared in search of the progress-maximizing solution. This approach increases the chances of finding the global optimum at the cost of computational complexity. A different approach that reduces online computational effort is to determine overtaking with a policy learned from offline simulations [11]. While this method is fast, it is not versatile because the calculated policy is only valid for a specific track. Lim et al. [15] propose a hierarchical planning approach for traffic scenarios. The behavior is determined with a spatio-temporal graph search, and the solution is used to initialize an OCP. This method combines the ability of discrete methods to find solutions close to the global optimum with the precision of trajectories calculated with continuous methods. However, the algorithm is only viable with a low resolution of the graph, resulting in overly conservative behaviors for racing. There are several ways to enforce overtaking behavior in numerical optimization algorithms. Some authors suggest initializing the OCP with a trajectory estimate, following the behavior [9, 10, 12]. Other methods convert the non-convex OCP into a convex subproblem by limiting motion to a maneuver envelope so that the behavior of the planned trajectory is more predictable [11, 16, 17]. This is especially important in racing, where following the optimal behavior is critical. ### _Contributions_ We introduce a hierarchical planning method that extends the local racing line algorithm in [13] for multi-vehicle scenarios. Inspired by [15], we combine discrete high-level behavioral planning with low-level numerical optimization. This reduces computational complexity compared to [12] and [10] and improves flexibility compared to [11]. The main contributions to the hierarchical approach are as follows: * We propose a behavioral planning step based on spatio-temporal graphs. In contrast to [15], temporal planning precedes spatial planning. 
Progress variants, derived from the previous planning iteration, determine the geometry of spatial planning problems that are solved with low-resolution visibility graphs. * We adapt the constraints and cost function of the time-optimal control problem in [13] to generate a feasible trajectory for the generated high-level behavior. * We perform a Monte Carlo simulation to compare our approach with parallel optimization-based methods and naive overtaking strategies. We analyze the results regarding computation time and driving performance. ## III Methodology Our approach operates in two modes shown in Figure 1: (1) Without any opponents in the planning horizon, the trajectory is generated according to [13]. A time-optimal control problem with a point mass model, constrained by gg-diagrams, is solved for the upcoming track section. In Sections III-A and III-B, we will briefly summarize the track and vehicle models used. (2) When opponents are present, the first step is to make a behavioral decision on whether to pass opponents on the left or right and to define a corresponding maneuver envelope. These processes are explained in Sections III-C and III-D. The second step, described in Section III-E, is to solve the time-optimal OCP with constraints adapted to comply with the determined maneuver envelope. ### _Track Model_ We use the curve-ribbon approach for modeling three-dimensional (3D) tracks, presented in [18]. The road frame \(\mathcal{R}\) moves along a 3D reference curve, called the spine. It defines the road surface, as shown in Figure 2. The arc length along the spine is denoted as \(s\), while \(n\) is the lateral displacement in the direction of the y-axis of \(\mathcal{R}\). The rotation rate of \(\mathcal{R}\) with respect to arc length \(s\) is expressed as angular velocity \({}_{\mathcal{R}}\mathbf{\Omega}_{\mathcal{R}}=\begin{bmatrix}\Omega_{x}&\Omega_{y}&\Omega_{z}\end{bmatrix}^{\top}\) in the \(\mathcal{R}\)-frame. For a detailed description of the 3D track representation, we refer to [18]. ### _Vehicle Model_ Following [13], we use a low-dimensional point mass model to describe the dynamics of the vehicle. The state \(\mathbf{x}\) is defined as \[\mathbf{x}=\begin{bmatrix}V&n&\hat{\chi}&\hat{a}_{\mathrm{x}}&\hat{a}_{\mathrm{y}}\end{bmatrix}^{\top}, \tag{1}\] where \(V\) is the velocity and \(\hat{\chi}\) is the orientation of the velocity-aligned frame \(\mathcal{V}\) relative to the road frame \(\mathcal{R}\). The longitudinal and lateral accelerations of \(\mathcal{V}\) are given by \(\hat{a}_{\mathrm{x}}\) and \(\hat{a}_{\mathrm{y}}\), respectively. The accelerations are constrained by gg-diagrams according to [13]. The longitudinal and lateral jerks \(\hat{j}_{\mathrm{x}}\) and \(\hat{j}_{\mathrm{y}}\) form the input vector \[\mathbf{u}=\begin{bmatrix}\hat{j}_{\mathrm{x}}&\hat{j}_{\mathrm{y}}\end{bmatrix}^{\top}. \tag{2}\] Fig. 1: Overview of the hierarchical planning approach. Fig. 2: 3D track with road frame \(\mathcal{R}\) and velocity frame \(\mathcal{V}\). With the vertical velocity \(w\) and the angular velocity of \(\mathcal{V}\) with respect to time \({}_{\mathcal{V}}\boldsymbol{\omega}_{\mathcal{V}}=\begin{bmatrix}\hat{\omega}_{\mathrm{x}}&\hat{\omega}_{\mathrm{y}}&\hat{\omega}_{\mathrm{z}}\end{bmatrix}^{\top}\), the dynamics 
are described by \[\dot{\mathbf{x}}=\frac{d\mathbf{x}}{dt}=\mathbf{f}(\mathbf{x},\mathbf{u})=\left[\begin{array}{c}\hat{a}_{\mathbf{x}}-w\hat{\omega}_{\mathbf{y}}\\ V\sin(\hat{\chi})\\ \frac{\hat{a}_{\mathbf{y}}+w\hat{\omega}_{\mathbf{z}}}{V}-\Omega_{\mathbf{z}}\dot{s}\\ \hat{j}_{\mathbf{x}}\\ \hat{j}_{\mathbf{y}}\end{array}\right]. \tag{3}\] ### _Behavioral Planning_ Given the predicted motion of the opponent vehicles, the behavioral planning step approximates an optimal overtaking trajectory and selects the fastest sequence of left or right passing decisions. Since the times and positions of overtakes depend on the progress of the ego vehicle, this is a spatio-temporal problem. We solve this problem by first sampling progress variants, second finding the optimal path for a given variant, and third checking the feasibility of the resulting trajectory. Following this order allows for the creation of low-resolution visibility graphs that take advantage of the problem geometry. #### III-C1 Progress Variants Progress variants are generated by following different set speed profiles \(\dot{s}_{\mathrm{set}}\), based on the optimal trajectory of the previous planning iteration \(\mathbf{x}_{\mathrm{prev}}\). To do this, we modulate acceleration with a feedback controller \(\ddot{s}=K(\dot{s}_{\mathrm{set}}-\dot{s})\). Following a certain speed profile \(\dot{s}_{\mathrm{set}}\) determines the \(s\)-coordinate where the next opponent vehicle is passed. At these passing points, the speed profiles branch out by switching to different set speed profiles. The passing points for a single opponent and set speed profiles \(\dot{s}_{\mathrm{set}}\in\{0.9\cdot\dot{s}_{\mathrm{prev}},\ 1\cdot\dot{s}_{\mathrm{prev}},\ 1.1\cdot\dot{s}_{\mathrm{prev}}\}\) are shown in the top diagram of Figure 4. In the same diagram, we highlight two exemplary progress variants \((1,1.1)\) and \((0.9,1)\). The first variant follows \(1.1\dot{s}_{\mathrm{prev}}\) at the start of the maneuver and switches to \(1\dot{s}_{\mathrm{prev}}\) after the overtake, while the second one goes from \(0.9\dot{s}_{\mathrm{prev}}\) to \(1\dot{s}_{\mathrm{prev}}\). For multiple opponents, the aforementioned procedure can quickly result in a large number of progress variants. With three speed profiles and \(N\) opponents, \(3^{N+1}\) variants are possible. Performing spatial planning, as described in Section III-C2, for all variants would be too computationally complex. Therefore, we generate the progress variants as needed, beginning with the fastest variant. If the spatial planning step can generate a feasible trajectory for the current variant, the trajectory and corresponding behavior are applied to the numerical optimization. Otherwise, we continue with the next fastest variant. This iterative procedure is visualized in Figure 3 and promotes finding the global time-optimal solution. More details on the feasibility checks are given in Section III-C3. #### III-C2 Spatial Planning With the passing points of the considered progress variant, a spatial graph can be generated. We utilize visibility graphs. These are undirected graphs, connecting all vertices of obstacles with straight edges that do not cross an obstacle [19]. Originally, they were developed to find the shortest collision-free path. Compared to the spatio-temporal lattice with fixed nodes [15], the discretization, based on the corner points of moving obstacle polygons, allows for the generation of short and direct path candidates with a small number of nodes. 
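A minimal sketch of this spatial planning step is given below, working in a straightened \((s,n)\) frame in which the spine is the x-axis. Visibility edges are vertex-to-vertex segments that cross no obstacle edge, and an A* search with the distance-plus-heading edge cost and heuristic detailed in the following paragraphs selects the path; the strict intersection test (which, for instance, does not exclude diagonals that lie inside an obstacle) and all weights are illustrative simplifications:

```python
import heapq
import math

def _cross(o, u, v):
    return (u[0]-o[0])*(v[1]-o[1]) - (u[1]-o[1])*(v[0]-o[0])

def _intersects(p, q, a, b):
    # strict segment-segment crossing; shared endpoints do not count
    d1, d2 = _cross(a, b, p), _cross(a, b, q)
    d3, d4 = _cross(p, q, a), _cross(p, q, b)
    return d1*d2 < 0 and d3*d4 < 0

def _edge_cost(p, q, w_d, w_chi):
    # edge length plus heading deviation relative to the spine (x-axis)
    d = math.hypot(q[0]-p[0], q[1]-p[1])
    chi = math.atan2(q[1]-p[1], q[0]-p[0])
    return w_d*d + w_chi*abs(chi)

def visibility_astar(start, goal, polygons, w_d=1.0, w_chi=0.5):
    nodes = [start, goal] + [v for poly in polygons for v in poly]

    def visible(p, q):
        return all(not _intersects(p, q, poly[i], poly[(i+1) % len(poly)])
                   for poly in polygons for i in range(len(poly)))

    def h(p):  # heuristic: one virtual edge straight to the destination
        return _edge_cost(p, goal, w_d, w_chi)

    frontier = [(h(start), start)]
    best, parent = {start: 0.0}, {start: None}
    while frontier:
        _, p = heapq.heappop(frontier)
        if p == goal:                     # reconstruct the cost-minimal path
            path = []
            while p is not None:
                path.append(p)
                p = parent[p]
            return path[::-1]
        for q in nodes:
            if q == p or not visible(p, q):
                continue
            g = best[p] + _edge_cost(p, q, w_d, w_chi)
            if g < best.get(q, math.inf):
                best[q], parent[q] = g, p
                heapq.heappush(frontier, (g + h(q), q))
    return None
```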
Considering vehicle dimensions and safety distances, we virtually expand the track boundaries and opponent polygons to avoid collisions when the center point of the ego vehicle is within bounds. The visibility graphs for the progress variants \((1,1.1)\) and \((0.9,1)\) are shown in the two bottom diagrams of Figure 4. An A* search determines the optimal path to minimize the total travel distance \(\sum_{i}d_{i}\) and angle deviation \(\sum_{i}|\hat{\chi}_{i}|\) relative to the spine: \(\min\sum_{i}(w_{d}d_{i}+w_{\chi}|\hat{\chi}_{i}|)\). The search is guided by a heuristic function \(h(P)\), based on the length \(d_{\overline{PD}}\) and angle deviation \(|\hat{\chi}_{\overline{PD}}|\) of a virtual edge \(\overline{PD}\) connecting the current point \(P\) to the destination \(D\): \(h(P)=w_{d}d_{\overline{PD}}+w_{\chi}|\hat{\chi}_{\overline{PD}}|\). By discouraging long and weaving paths, this cost is designed to favor the path that is most likely to be feasible for the given progress variant. Fig. 4: Behavioral planning for an example maneuver with a single opponent. The top figure shows the generation of progress variants, while the middle and bottom figures depict the visibility graphs for the variants (1,1.1) and (0.9,1). Fig. 3: Overview of the behavioral planning algorithm. We increase the speed of the search algorithm by applying the following simplifications to reduce the number of nodes in the graph: (1) With the help of the Ramer-Douglas-Peucker algorithm [20], we reduce the number of boundary points to a subset of points that approximates the shape. (2) We remove the boundary nodes at the start and end of the planning horizon, as the vehicle would have to drive perpendicular to the spine or in reverse track direction to reach them. #### III-C3 Feasibility Check Spatial planning with visibility graphs results in discontinuous curvature profiles, so the unprocessed paths are not feasible. To confirm the suitability of a path and its corresponding overtaking behavior, a cubic spline \(f_{\mathrm{spline}}(s)\) is placed through the path, as seen in Figure 4. The smoother path candidate \(n_{\mathrm{cand}}=f_{\mathrm{spline}}(s_{\mathrm{cand}}(t))\) is then combined with the considered progress variant \(\dot{s}_{\mathrm{cand}}(t)\) to form the trajectory candidate \[\mathbf{x}_{\mathrm{cand}}=\begin{bmatrix}V_{\mathrm{cand}}\\ n_{\mathrm{cand}}\\ \hat{\chi}_{\mathrm{cand}}\\ \hat{a}_{\mathrm{x,cand}}\\ \hat{a}_{\mathrm{y,cand}}\end{bmatrix}=\begin{bmatrix}\frac{\dot{s}_{\mathrm{cand}}(1-n_{\mathrm{cand}}\Omega_{\mathrm{z}})}{\cos\hat{\chi}}\\ n_{\mathrm{cand}}\\ \arctan f^{\prime}_{\mathrm{spline}}(s_{\mathrm{cand}})\\ \dot{V}\\ V(\hat{\omega}_{\mathrm{z}}+\dot{\hat{\chi}})\end{bmatrix}. \tag{4}\] If the accelerations from the trajectory \(\mathbf{x}_{\mathrm{cand}}\) lie within the gg-diagrams, behavioral planning finishes with the trajectory estimate \(\mathbf{x}_{\mathrm{guess}}=\mathbf{x}_{\mathrm{cand}}\) and its corresponding behavior. Otherwise, the next slower progress variant is examined. ### _Maneuver Envelope Definition_ The maneuver envelope should force the solution of the OCP with initialization \(\mathbf{x}_{\mathrm{guess}}=\mathbf{x}_{\mathrm{cand}}\) to remain in the previously determined optimal behavior class. The maneuver envelopes are formed by extending obstacle polygons of the opponents to cover the side of the track where overtaking is suboptimal according to the behavioral planning step. 
As the vehicle travels along the planning horizon, the resulting spatio-temporal obstacle constraints form a narrowed driving corridor, as depicted in Figure 5. We reduce complexity by combining all obstacle constraints into collision constraints that describe this narrowed driving space. This is achieved by sampling the lateral restriction \(n\in[n_{\mathrm{r,coll}}(s),n_{\mathrm{l,coll}}(s)]\) for the initial guess \(\mathbf{x}_{\mathrm{guess}}\). While the complexity of the OCP is significantly reduced, information gets lost when spatio-temporal constraints are reduced to spatial constraints. As the constraints now only depend on the distance, they can influence the vehicle speed solely through the feasible curvatures in the narrowed driving space. To make the vehicle slow down when the gap for overtaking is too small, we re-introduce the spatio-temporal component by adding a constraint on vehicle progress \(s<s_{\mathrm{coll}}+V_{\mathrm{coll}}t\). This longitudinal constraint acts like a wall moving at the average speed of the obstacle. ### _Optimal Control Problem_ Following [13], the second step of our hierarchical approach solves an OCP parametrized by \(s\) for a constant spatial planning horizon \(s\in[s_{0},s_{e}]\). By using numerical optimization, we can calculate fast trajectories that are not limited by discretization. The cost function (5a) consists of three terms: a time optimality term, a term that smooths the acceleration profile by minimizing jerk, and a slack term that ensures that the soft constraints on velocity and vehicle position are fulfilled. The OCP is defined as \[\min_{\mathbf{x},\mathbf{u}} \int_{s_{0}}^{s_{e}}\frac{1}{\dot{s}}+\mathbf{u}^{\top}\mathbf{Ru}+\boldsymbol{\epsilon}^{\top}\mathbf{S}\boldsymbol{\epsilon}\;ds\] (5a) s.t. \[\mathbf{x}^{\prime}=\mathbf{f}(\mathbf{x},\mathbf{u})\frac{1}{\dot{s}} \tag{5b}\] \[V-\epsilon_{V}\leq V_{\max}\text{ with }\epsilon_{V}\geq 0\] (5c) (9a), (9b), (9c) in [13] (5d) \[n_{\mathrm{r,tr}}+d_{\mathrm{s}}\leq n\leq n_{\mathrm{l,tr}}-d_{\mathrm{s}}\] (5e) \[-\frac{\pi}{2}\leq\hat{\chi}\leq\frac{\pi}{2}\] (5f) \[n-\epsilon_{n_{\mathrm{l,coll}}}+d_{\mathrm{s}}\leq n_{\mathrm{l,coll}}\text{ with }\epsilon_{n_{\mathrm{l,coll}}}\geq 0\] (5g) \[n+\epsilon_{n_{\mathrm{r,coll}}}-d_{\mathrm{s}}\geq n_{\mathrm{r,coll}}\text{ with }\epsilon_{n_{\mathrm{r,coll}}}\geq 0\] (5h) \[s-\epsilon_{s,\mathrm{coll}}\leq s_{\mathrm{coll}}+V_{\mathrm{coll}}t\text{ with }\epsilon_{s,\mathrm{coll}}\geq 0 \tag{5i}\] with \[\mathbf{R} =\begin{bmatrix}w_{j,\mathrm{x}}&0\\ 0&w_{j,\mathrm{y}}\end{bmatrix},\] \[\boldsymbol{\epsilon} =\begin{bmatrix}1&\epsilon_{V}&\epsilon_{n_{\mathrm{l,coll}}}&\epsilon_{n_{\mathrm{r,coll}}}&\epsilon_{s,\mathrm{coll}}\end{bmatrix}^{\top},\] \[\mathbf{S} =\begin{bmatrix}0&\frac{w_{\epsilon,V,1}}{2}&\frac{w_{\epsilon,n_{\mathrm{l,coll}},1}}{2}&\frac{w_{\epsilon,n_{\mathrm{r,coll}},1}}{2}&\frac{w_{\epsilon,s,\mathrm{coll},1}}{2}\\ \frac{w_{\epsilon,V,1}}{2}&w_{\epsilon,V,2}&0&0&0\\ \frac{w_{\epsilon,n_{\mathrm{l,coll}},1}}{2}&0&w_{\epsilon,n_{\mathrm{l,coll}},2}&0&0\\ \frac{w_{\epsilon,n_{\mathrm{r,coll}},1}}{2}&0&0&w_{\epsilon,n_{\mathrm{r,coll}},2}&0\\ \frac{w_{\epsilon,s,\mathrm{coll},1}}{2}&0&0&0&w_{\epsilon,s,\mathrm{coll},2}\end{bmatrix},\] so that \(\boldsymbol{\epsilon}^{\top}\mathbf{S}\boldsymbol{\epsilon}\) yields the linear and quadratic slack penalties \(\sum_{x}(w_{\epsilon,x,1}\epsilon_{x}+w_{\epsilon,x,2}\epsilon_{x}^{2})\). Constraint (5b) enforces the equations of motion in (3). Using the diamond interpolation method presented in [13], we limit the combined accelerations in (5d). The vehicle is kept a safety margin \(d_{\mathrm{s}}\) away from the track boundaries in (5e). Inequality (5f) prevents driving in reversed track direction.
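Since \(\mathbf{S}\) only encodes the linear and quadratic slack penalties, its structure is easy to sanity-check numerically. The numpy snippet below, with illustrative weights rather than the paper's values, verifies that \(\boldsymbol{\epsilon}^{\top}\mathbf{S}\boldsymbol{\epsilon}\) reduces to the expected per-slack penalty.

```python
import numpy as np

w1 = np.array([10.0, 50.0, 50.0, 20.0])  # linear weights for eps_V, eps_nl, eps_nr, eps_s
w2 = np.array([1.0, 5.0, 5.0, 2.0])      # quadratic weights

S = np.zeros((5, 5))
S[0, 1:] = S[1:, 0] = w1 / 2.0           # off-diagonal halves give the linear terms
S[np.arange(1, 5), np.arange(1, 5)] = w2 # diagonal gives the quadratic terms

slack = np.array([0.3, 0.0, 0.1, 0.05])
eps = np.concatenate(([1.0], slack))     # first entry fixed to 1
assert np.isclose(eps @ S @ eps, np.sum(w1 * slack + w2 * slack**2))
```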
The aforementioned restrictions are hard constraints that have to be satisfied for a solution to exist. However, there are cases where such a strict definition of constraints might be disadvantageous regarding the robustness of the solver. For example, the speed limit is not safety-critical and can be violated for short periods of time. The soft velocity constraint is realized by the slack variable \(\epsilon_{V}\) in (5c). Similarly, the maneuver envelopes from Section III-D are realized as soft constraints with (5g)-(5h). With hard constraints, the result, if feasible, would be too conservative because the uncertainty of the opponent's prediction increases with distance.

Fig. 5: Generation of maneuver envelopes.

Following [21], our slack variables have linear and quadratic terms in the cost function. These are realized by the matrix \(\mathbf{S}\). If the initialization of the OCP violates one of the soft constraints \(x_{0}\nleq x_{\max}\), the corresponding slack variable \(\epsilon_{x}\) is initialized with the value of the excess \(\epsilon_{x,0}=x_{0}-x_{\max}\). Within and between the planning steps, the violation is gradually decreased and eventually eliminated. The linear and quadratic weights \(w_{x,1}\), \(w_{x,2}\) determine how hard the violations are penalized and are therefore used for adjusting the softness of the constraints. This is especially useful for the collision constraints (5g)-(5i). Here, the slack weights \(w_{\epsilon_{i}\text{coll},j}(s)\) for \(i\in\{n_{\mathrm{l}},n_{\mathrm{r}},s\}\) and \(j\in\{1,2\}\) are defined as a function of progress with parameters \(w_{\epsilon_{i}\text{coll},j,s0}\) and \(w_{\epsilon_{i}\text{coll},j,se}\) (\(w_{\epsilon_{i}\text{coll},j,s0}>w_{\epsilon_{i}\text{coll},j,se}\)) \[w_{\epsilon_{i}\text{coll},j}(s)=w_{\epsilon_{i}\text{coll},j,se}\left(\frac{w_{\epsilon_{i}\text{coll},j,s0}}{w_{\epsilon_{i}\text{coll},j,se}}\right)^{\frac{s_{e}-s}{s_{e}-s_{0}}}. \tag{6}\] Large slack weights close to the ego vehicle (\(s\approx s_{0}\)) reduce the likelihood of collisions. Meanwhile, low weights at the end of the planning horizon make the solver more stable in the presence of large and sudden changes in the predicted vehicle position. For the velocity constraint (5c), we use constant slack weights \(w_{\epsilon,V,1}\) and \(w_{\epsilon,V,2}\). As long as the behavior, determined by the high-level planning step in Section III-C, remains the same, we initialize the OCP with the solution of the previous optimization \(\mathbf{x}_{\text{guess}}=\mathbf{x}_{\text{prev}}\). If the behavior changes, the smoothed trajectory, passing the feasibility check in Section III-C3, is used as a new initial guess for the OCP \(\mathbf{x}_{\text{guess}}=\mathbf{x}_{\text{cand}}\). ## IV Results To validate and evaluate the hierarchical planning approach, we perform randomized simulations with three vehicles on the Modena race track. All simulations are calculated on an Intel Core i7-5600U CPU. The planning horizon is set to \(H=s_{\text{e}}-s_{0}=300\,\mathrm{m}\) and progress variants are generated with the set speed factors \(\{0.5,1,2\}\). The slack weights \(w_{\epsilon,n_{\text{coll},1},s0/se}=50/25\), \(w_{\epsilon,n_{\text{coll},2},s0/se}=5/2.5\), \(w_{\epsilon,s_{\text{coll},1},s0/se}=20/10\) and \(w_{\epsilon,s_{\text{coll},2},s0/se}=2/1\) are selected for collision-free overtaking. The definition of all other parameters was guided by [13].
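For reference, the schedule in (6) is a one-liner; the spot check below uses the linear lateral-collision weights quoted above (50 at \(s_{0}\) decaying to 25 at \(s_{e}\) over the 300 m horizon).

```python
# Minimal implementation of the geometrically interpolated slack weights in (6).

def slack_weight(s, s0, se, w_s0, w_se):
    return w_se * (w_s0 / w_se) ** ((se - s) / (se - s0))

assert abs(slack_weight(0.0, 0.0, 300.0, 50.0, 25.0) - 50.0) < 1e-9   # w(s0) = w_s0
assert abs(slack_weight(300.0, 0.0, 300.0, 50.0, 25.0) - 25.0) < 1e-9 # w(se) = w_se
```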
At the beginning of each simulation, two opponent vehicles are positioned at random within a range of \(120\,\mathrm{m}\) in front of the ego vehicle. Once the ego vehicle has overtaken both opponents and opened a gap of \(50\,\mathrm{m}\), the simulation is stopped. Each initial configuration is tested with the following four approaches:
1. The hierarchical approach presented in this paper
2. A pseudo-parallel optimization approach
3. Overtaking only on the left of the opponents
4. Overtaking only on the right of the opponents

The pseudo-parallel approach is based on the parallel optimization from Section II. For each behavior class, we specify a maneuver envelope and solve an OCP. To initialize the different OCPs, we generate path candidates with cubic splines. These paths start at the current vehicle position, pass through points of the obstacle polygons, and finish on the track spine at the end of the planning horizon. We initialize the OCP with \(\mathbf{x}_{\text{guess}}=\mathbf{x}_{\text{prev}}\) for the behavior class equal to the previous solution of the planning algorithm. For a comparison with our single-core hierarchical approach, we solve all OCPs sequentially. We name this single-core variant of parallel optimization pseudo-parallel optimization. Exemplary overtakes for approaches 3) and 4) are shown in Figure 6. When overtaking on the left is specified, the vehicle passes on the outside of the first corner. For the overtake on the right, the vehicle accelerates less at the beginning to overtake when a gap opens up on the outside of the second turn. In this scenario, the left behavior class results in an earlier overtake compared to the right one. Figure 7 shows the duration of the overtaking simulations for approaches 1)-4). Compared to the first two methods, the fixed behaviors produce significantly longer and more inconsistent overtaking times. This shows that there are multiple local optima and that it is beneficial to select the correct behavior before solving the OCP. The overtaking times from our hierarchical approach are similar to the pseudo-parallel approach that assesses all behavior classes in detail. Thus, we deduce that our proposed method selects the optimal behavior in the majority of cases. The ego vehicle considers the vehicles within its planning horizon. To evaluate the scalability of the planning approaches 1)-4), we assess the calculation times for different numbers of opponents within the planning horizon. The results are shown in Figure 8. For the constant overtaking behaviors 3) and 4), the calculation times are similar and largely unaffected by the addition of opponents. For our hierarchical approach, there is a \(0.6\,\mathrm{s}\) jump when introducing the first opponent. Adding a second opponent does not result in any further increase. Thus, computational complexity appears to scale well with the number of opponents. Conversely, calculation time increases exponentially for the pseudo-parallel approach. Instead of planning in two steps, it optimizes a trajectory for every possible behavior combination. As there are \(2^{N}\) combinations for \(N\) opponents (e.g., \(N=2\rightarrow\) ll, lr, rl, rr), the calculation time doubles for every added opponent. Going from zero to one opponent, the computation time increases even more, as only one of the OCPs is initialized with the previous solution when there are opponents. The other initial guesses are farther from their corresponding optima, and the OCPs take longer to converge.
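A toy sketch of the behavior-class enumeration that makes the pseudo-parallel baseline scale exponentially; `solve_ocp` is a hypothetical stand-in returning the resulting maneuver time for one left/right passing combination.

```python
from itertools import product

def pseudo_parallel(n_opponents, solve_ocp):
    combos = list(product("lr", repeat=n_opponents))       # N=2 -> ll, lr, rl, rr
    times = {combo: solve_ocp(combo) for combo in combos}  # 2**N OCP solves
    return min(times, key=times.get)                       # fastest behavior combination
```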
In this paper, we present a hierarchical planning algorithm for racing against multiple opponents. The two-stage approach consists of a high-level behavioral planning step and a low-level optimization step. By combining discrete and continuous planning methods, our algorithm promotes overall time optimality without being restricted by a coarse discretization. In the behavioral planning step, the fastest behavior is determined using low-resolution spatio-temporal visibility graphs. Based on the selected behavior, a maneuver envelope is computed and subsequently applied as a constraint in a time-optimal control problem. The performance of this method is comparable to a parallel approach that selects among multiple optimizations over the different behavior classes. However, our algorithm can run on a single core. This significantly reduces the computational requirements, especially with multiple opponents.
2309.05261
Gall Bladder Cancer Detection from US Images with Only Image Level Labels
Automated detection of Gallbladder Cancer (GBC) from Ultrasound (US) images is an important problem, which has drawn increased interest from researchers. However, most of these works use difficult-to-acquire information such as bounding box annotations or additional US videos. In this paper, we focus on GBC detection using only image-level labels. Such annotation is usually available based on the diagnostic report of a patient, and does not require additional annotation effort from the physicians. However, our analysis reveals that it is difficult to train a standard image classification model for GBC detection. This is due to the low inter-class variance (a malignant region usually occupies only a small portion of a US image), high intra-class variance (due to the US sensor capturing a 2D slice of a 3D object leading to large viewpoint variations), and low training data availability. We posit that even when we have only the image-level label, still formulating the problem as object detection (with bounding box output) helps a deep neural network (DNN) model focus on the relevant region of interest. Since no bounding box annotations are available for training, we pose the problem as weakly supervised object detection (WSOD). Motivated by the recent success of transformer models in object detection, we train one such model, DETR, using multi-instance-learning (MIL) with self-supervised instance selection to suit the WSOD task. Our proposed method demonstrates an improvement of AP and detection sensitivity over the SOTA transformer-based and CNN-based WSOD methods. Project page is at https://gbc-iitd.github.io/wsod-gbc
Soumen Basu, Ashish Papanai, Mayank Gupta, Pankaj Gupta, Chetan Arora
2023-09-11T06:37:12
http://arxiv.org/abs/2309.05261v1
# Gall Bladder Cancer Detection from US Images with Only Image Level Labels ###### Abstract Automated detection of Gallbladder Cancer (GBC) from Ultrasound (US) images is an important problem, which has drawn increased interest from researchers. However, most of these works use difficult-to-acquire information such as bounding box annotations or additional US videos. In this paper, we focus on GBC detection using only image-level labels. Such annotation is usually available based on the diagnostic report of a patient, and does not require additional annotation effort from the physicians. However, our analysis reveals that it is difficult to train a standard image classification model for GBC detection. This is due to the low inter-class variance (a malignant region usually occupies only a small portion of a US image), high intra-class variance (due to the US sensor capturing a 2D slice of a 3D object leading to large viewpoint variations), and low training data availability. We posit that even when we have only the image-level label, still formulating the problem as object detection (with bounding box output) helps a deep neural network (DNN) model focus on the relevant region of interest. Since no bounding box annotations are available for training, we pose the problem as weakly supervised object detection (WSOD). Motivated by the recent success of transformer models in object detection, we train one such model, DETR, using multi-instance-learning (MIL) with self-supervised instance selection to suit the WSOD task. Our proposed method demonstrates an improvement of AP and detection sensitivity over the SOTA transformer-based and CNN-based WSOD methods. Project page is at [https://gbc-iitd.github.io/wsod-gbc](https://gbc-iitd.github.io/wsod-gbc). Keywords: Weakly Supervised Object Detection, Ultrasound, Gallbladder Cancer ## 1 Introduction GBC is a deadly disease that is difficult to detect at an early stage [15, 12]. Early diagnosis can significantly improve the survival rate [14]. Non-ionizing radiation, low cost, and accessibility make US a popular non-invasive diagnostic modality for patients with suspected gall bladder (GB) afflictions. However, identifying signs of GBC from routine US imaging is challenging for radiologists [11]. In recent years, automated GBC detection from US images has drawn increased interest [3, 5] due to its potential for improving diagnosis and treatment outcomes. Many of these works formulate the problem as object detection, since training an image classification model for GBC detection seems challenging due to the reasons outlined in the abstract (also see Fig. 1). Recently, GBCNet [3], a CNN-based model, achieved SOTA performance on classifying malignant GB from US images. GBCNet uses a two-stage pipeline consisting of object detection followed by classification, and requires bounding box annotations for the GB as well as malignant regions for training. Such bounding box annotations surrounding the pathological regions are time-consuming and require an expert radiologist for annotation. This makes it expensive and non-viable for curating large datasets for training large DNN models. In another recent work, [5] has exploited additional unlabeled video data for learning good representations for downstream GBC classification and obtained performance similar to [3] using a ResNet50 [13] classifier. The reliance of both SOTA techniques on additional annotations or data limits their applicability.
On the other hand, the image-level malignancy label is usually available at a low cost, as it can be obtained readily from the diagnostic report of a patient without additional effort from clinicians. Instead of training a classification pipeline, we propose to solve an object detection problem, which involves predicting a bounding box for the malignancy. The motivation is that running a classifier on a focused attention/proposal region in an object detection pipeline would help tackle the low inter-class and high intra-class variations. However, since we only have image-level labels available, we formulate the problem as a Weakly Supervised Object Detection (WSOD) problem. As transformers are increasingly outshining CNNs due to their ability to aggregate focused cues from a large area [9, 6], we choose to use transformers in our model. However, in our initial experiments SOTA WSOD methods for transformers failed miserably. These methods primarily rely on training a classification pipeline and later generating activation heatmaps using attention and drawing a bounding box circumscribing the heatmaps [10, 2] to show localization.

Figure 1: (a) Low inter-class variability. The first two GBs show benign wall thickening, and the third one shows malignant thickening. However, the appearance of the GB in all three images is very similar. (b) High intra-class variability. All three images have been scanned from the same patient, but due to the sensor’s scanning plane, the appearances change drastically.

However, for GBC detection, this line of work is not helpful as we discussed earlier. Inspired by the success of the Multiple Instance Learning (MIL) paradigm for weakly supervised training on medical imaging tasks [22, 20], we train a detection transformer, DETR, using the MIL paradigm for weakly supervised malignant region detection. In this, one generates region proposals for images, and then considers the images as bags and region proposals as instances to solve the instance classification (object detection) under the MIL constraints [8]. At inference, we use the predicted instance labels to predict the bag labels. Our experiments validate the utility of this approach in circumventing the challenges in US images and detecting GBC accurately from US images using only image-level labels. **Contributions:** The key contributions of this work are: * We design a novel DETR variant based on MIL with self-supervised instance learning towards the weakly supervised disease detection and localization task in medical images. Although MIL and self-supervised instance learning have been used for CNNs [24], such a pipeline has not been used for transformer-based detection models. * We formulate the GBC classification problem as a weakly supervised object detection problem to mitigate the effect of low inter-class and large intra-class variances, and solve the difficult GBC detection problem on US images without using the costly and difficult-to-obtain additional annotation (bounding box) or video data. * Our method provides a strong baseline for weakly supervised GBC detection and localization in US images, which has not been tackled earlier. Further, to assess the generality of our method, we apply our method to Polyp detection from Colonoscopy images. ## 2 Datasets **Gallbladder Cancer Detection in Ultrasound Images:** We use the public GBC US dataset [3] consisting of 1255 image samples from 218 patients.
Figure 2: Samples from the GBCU [3] and Kvasir-SEG [17] datasets. Four images from each of the disease and non-disease classes are shown on the left and right, respectively. Disease locations are shown by drawing bounding boxes.

The dataset contains 990 non-malignant (171 patients) and 265 malignant (47 patients) GB images (see Fig. 2 for some sample images). The dataset contains image labels as well as bounding box annotations showing the malignant regions. Note that we use only the image labels for training. We report results on 5-fold cross-validation. We did the cross-validation splits at the patient level, and all images of any patient appeared either in the train or validation split. **Polyp Detection in Colonoscopy Images:** We use the publicly available Kvasir-SEG [17] dataset consisting of 1000 white-light colonoscopy images showing polyps (c.f. Fig. 2). Since Kvasir-SEG does not contain any control images, we add 600 non-polyp images randomly sampled from the PolypGen [1] dataset. Since the patient information is not available with the data, we use random stratified splitting for 5-fold cross-validation. ## 3 Our Method **Revisiting DETR:** The DETR [6] architecture utilizes a ResNet [13] backbone to extract 2D convolutional features, which are flattened, combined with a positional encoding, and fed to the self-attention-based transformer encoder. The decoder uses cross-attention between learned object queries containing positional embeddings and the encoder output to produce output embeddings containing the class and localization information. The number of object queries, and hence of decoder output embeddings, is set to 100 in DETR. Subsequently, a feed-forward network generates predictions for object bounding boxes with their corresponding labels and confidence scores. **Proposed Architecture:** Fig. 3 gives an overview of our method. We use a COCO pre-trained class-agnostic DETR as the proposal generator. The learned object queries contain the embedded positional information of the proposals. Class-agnostic indicates that all object categories are considered as a single object class, as we are only interested in the object proposals. We then finetune a regular, class-aware DETR for the WSOD task. This class-aware DETR is initialized with the checkpoint of the class-agnostic DETR. The learned object queries from the class-agnostic DETR are frozen and shared with the WSOD DETR during finetuning to ensure that the class-aware DETR attends to similar locations of the object proposals. The class-agnostic DETR branch is frozen during the finetuning phase. Finally, we use MIL-based instance classification with self-supervised instance learning over the finetuning branch. For GBC classification, if the model generates bounding boxes for the input image, then we predict the image to be malignant, since the only object present in the data is the cancer.

Figure 3: Overview of the proposed Weakly Supervised DETR architecture. The location information in the object queries learned by the class-agnostic DETR ensures generation of high-quality proposals. The MIL framework uses the proposal embeddings generated at the class-aware branch.

**MIL Setup:** The decoder of the fine-tuning DETR generates \(R\) \(d\)-dimensional output embeddings. Each embedding corresponds to a proposal generated by the class-agnostic DETR.
We pass these embeddings as input to two branches with FC layers to obtain the matrices \(X^{c}\in\mathbb{R}^{R\times N_{c}}\) and \(X^{r}\in\mathbb{R}^{R\times N_{c}}\), where \(R\) is the number of object queries (same as proposals) and \(N_{c}\) is the number of object (disease) categories. Let \(\sigma(\cdot)\) denote the softmax operation. We then generate the class-wise and detection-wise softmax matrices \(C\in\mathbb{R}^{R\times N_{c}}\) and \(D\in\mathbb{R}^{R\times N_{c}}\), where \(C_{ij}=\sigma(X^{c}_{i})_{j}\) and \(D_{ij}=\sigma((X^{r})^{\top}_{j})_{i}\), and \(X_{i}\) denotes the \(i\)-th row of \(X\). \(C\) provides the classification probabilities of each proposal, and \(D\) provides the relative score of the proposals corresponding to each class. The two matrices are element-wise multiplied and summed over the proposal dimension to generate the image-level classification predictions, \(\phi\in\mathbb{R}^{N_{c}}\): \[\phi_{j}=\sum_{i=1}^{R}C_{ij}\cdot D_{ij} \tag{1}\] Notice that \(\phi_{j}\in(0,1)\) since \(C_{ij}\) and \(D_{ij}\) are normalized. Finally, the negative log-likelihood loss between the predicted labels and image labels \(y\in\mathbb{R}^{N_{c}}\) is computed as the MIL loss: \[\mathcal{L}_{\text{mil}}=-\sum_{i=1}^{N_{c}}[y_{i}\log\phi_{i}+(1-y_{i})\log{(1-\phi_{i})}] \tag{2}\] The MIL classifier further suffers from overfitting to the distinctive classification features due to the mismatch of classification and detection probabilities [24]. To tackle this, we use a self-supervised module to refine the instances. **Self-supervised Instance Learning:** Inspired by [24], we design an instance learning module with \(N_{r}\) blocks in a self-supervised framework to refine the instance scores with instance-level supervision. Each block consists of an FC layer. A class-wise softmax is used to generate instance scores \(x^{n}\in\mathbb{R}^{R\times(N_{c}+1)}\) at the \(n\)-th block. \(N_{c}+1\) includes the background/no-finding class. Instance supervision for each layer (\(n\)) is obtained from the scores of the previous layer (\(x^{(n-1)}\)). The instance supervision for the first layer is obtained from the MIL head. Suppose \(\hat{y}^{n}\in\mathbb{R}^{R\times(N_{c}+1)}\) is the pseudo-labels of the instances. An instance \(p_{i}\) is labeled 1 for class \(j\) if its overlap with the highest-scoring instance exceeds a chosen threshold \(\tau\); otherwise, the instance is labeled 0, as defined in Eq. (3): \[m_{j}^{n}=\operatorname*{argmax}_{i}x_{ij}^{(n-1)}\ ;\qquad\hat{y}_{ij}^{n}=\begin{cases}1,&IoU(p_{i},p_{m_{j}^{n}})\geq\tau\\ 0,&\text{otherwise}\end{cases} \tag{3}\] The loss over the instances is given by Eq. (4): \[\mathcal{L}_{ins}=-\frac{1}{N_{r}}\sum_{n=1}^{N_{r}}\frac{1}{R}\sum_{i=1}^{R}\sum_{j=1}^{N_{c}+1}w_{i}^{n}\hat{y}_{ij}^{n}\log x_{ij}^{n} \tag{4}\] Here \(x_{ij}^{n}\) denotes the score of the \(i\)-th instance for the \(j\)-th class at layer \(n\). Following [24], the loss weight \(w_{i}^{n}=x_{i\,m_{j}^{n}}^{(n-1)}\) is applied to stabilize the loss. Assuming \(\lambda\) to be a scaling value, the overall loss function is given in Eq. (5): \[\mathcal{L}=\mathcal{L}_{mil}+\lambda\mathcal{L}_{ins} \tag{5}\]
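To make the two-branch construction concrete, here is a minimal numpy sketch of Eqs. (1)-(2); random logits stand in for the DETR embeddings, and two categories are used purely for illustration. This is a schematic, not the training code.

```python
import numpy as np

def softmax(x, axis):
    z = np.exp(x - x.max(axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
R, N_c = 100, 2
X_c = rng.normal(size=(R, N_c))   # classification-branch logits
X_r = rng.normal(size=(R, N_c))   # detection-branch logits

C = softmax(X_c, axis=1)          # class probabilities per proposal (row-wise)
D = softmax(X_r, axis=0)          # proposal scores per class (column-wise)
phi = (C * D).sum(axis=0)         # Eq. (1); each phi_j lies in (0, 1)

y = np.array([1.0, 0.0])          # image-level labels
L_mil = -np.sum(y * np.log(phi) + (1 - y) * np.log(1 - phi))  # Eq. (2)
```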
**Comparison with SOTA:** Tab. 1 shows the bounding box localization results of the WSOD task. Our method surpasses all latest SOTA WSOD techniques by 9 points, and establishes itself as a strong WSOD baseline for GBC localization in US images. Our method also achieves a 7-point higher AP score for polyp detection. We present visualizations of the predicted bounding boxes in Fig. 4, which shows that the localization by our method is more precise and clinically relevant as compared to the baselines.

Figure 4: Qualitative analysis of the predicted bounding boxes. Ground truths are in blue, and predictions are in green. We compare with SOTA WSOD techniques and our proposed method. Our method predicts much tighter bounding boxes that cover the clinically significant disease regions.

**Generality of the Method:** We assess the generality of our method by applying it to polyp detection on colonoscopy images. The applicability of our method on two different tasks - (1) GBC detection from US and (2) Polyp detection from Colonoscopy - indicates the generality of the method across modalities. **Ablation Study:** We show the detection sensitivity to the self-supervised instance learning module in Tab. 2 for two variants, (1) vanilla MIL head on DETR, and (2) MIL with self-supervised instance learning on DETR. Tab. 2 shows the Average Precision and detection sensitivity for both diseases. The results establish the benefit of using the self-supervised instance learning. Other ablations related to the hyper-parameter sensitivity are given in Supplementary Fig. S1. **Classification Performance:** We compare our model with the standard CNN-based and Transformer-based classifiers, SOTA WSOD-based classifiers, and SOTA classifiers using additional data or annotations (Tab. 3). Our method beats the SOTA weakly supervised techniques and achieves 1.2% higher sensitivity for GBC detection. The current SOTA GBC detection models require additional bounding box annotation [3] or US videos [5, 7]. However, even without these additional annotations/data, our method reaches 86.1% detection sensitivity. The results for polyp classification are reported in Tab. 4. Although our method has a slightly lower specificity, the sensitivity surpasses the baselines reported in the literature [16], and the SOTA WSOD-based baselines. ## 5 Conclusion GBC is a difficult-to-detect disease that benefits greatly from early diagnosis. While automated GBC detection from US images has gained increasing interest from researchers, training a standard image classification model for this task is challenging due to the low inter-class variance and high intra-class variability of malignant regions. Current SOTA models for GBC detection require costly bounding box annotation of the pathological regions, or additional US video data, which limit their applicability. We proposed to formulate GBC detection as a weakly supervised object detection/localization problem using a DETR with self-supervised instance learning in a MIL framework. Our experiments show that the approach achieves competitive performance without requiring additional annotation or data.
We hope that our technique will simplify model training at hospitals with easily available local data, enhancing the applicability and impact of automated GBC detection.

\begin{table} \begin{tabular}{l c c c} \hline \hline **Method** & **Acc.** & **Spec.** & **Sens.** \\ \hline TS-CAM [10] & 0.704 \(\pm\) 0.017 & 0.394 \(\pm\) 0.042 & 0.891 \(\pm\) 0.054 \\ SCM [2] & 0.751 \(\pm\) 0.026 & 0.523 \(\pm\) 0.014 & 0.523 \(\pm\) 0.016 \\ OD-WSCL [21] & 0.805 \(\pm\) 0.056 & 0.609 \(\pm\) 0.076 & 0.923 \(\pm\) 0.034 \\ WS-DETR [19] & 0.857 \(\pm\) 0.071 & 0.812 \(\pm\) 0.088 & 0.882 \(\pm\) 0.034 \\ Point-Beyond-Class [18] & 0.953 \(\pm\) 0.007 & 0.993 \(\pm\) 0.004 & 0.924 \(\pm\) 0.011 \\ \hline Ours & 0.878 \(\pm\) 0.067 & 0.785 \(\pm\) 0.102 & 0.932 \(\pm\) 0.022 \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison with SOTA WSOD baselines in classifying Polyps from Colonoscopy images.

\begin{table} \begin{tabular}{l l c c c} \hline \hline **Type** & **Method** & **Acc.** & **Spec.** & **Sens.** \\ \hline \multirow{2}{*}{CNN Classifier} & ResNet50 [13] & 0.867 \(\pm\) 0.031 & 0.926 \(\pm\) 0.069 & 0.672 \(\pm\) 0.147 \\ & InceptionV3 [23] & 0.869 \(\pm\) 0.039 & 0.913 \(\pm\) 0.032 & 0.708 \(\pm\) 0.078 \\ \hline \multirow{4}{*}{Transformer Classifier} & ViT [9] & 0.803 \(\pm\) 0.078 & 0.901 \(\pm\) 0.050 & 0.860 \(\pm\) 0.068 \\ & DEIT [25] & 0.829 \(\pm\) 0.030 & 0.900 \(\pm\) 0.040 & 0.875 \(\pm\) 0.063 \\ & PVTv2 [26] & 0.824 \(\pm\) 0.033 & 0.887 \(\pm\) 0.057 & 0.894 \(\pm\) 0.076 \\ & RadFormer [4] & 0.921 \(\pm\) 0.062 & 0.961 \(\pm\) 0.049 & 0.923 \(\pm\) 0.062 \\ \hline \multirow{4}{*}{Additional Data/Annotation} & USCL [7] & 0.889 \(\pm\) 0.047 & 0.895 \(\pm\) 0.054 & 0.869 \(\pm\) 0.097 \\ & US-UCL [5] & 0.920 \(\pm\) 0.034 & 0.926 \(\pm\) 0.043 & 0.900 \(\pm\) 0.046 \\ & GBCNet [3] & 0.921 \(\pm\) 0.029 & 0.967 \(\pm\) 0.023 & 0.919 \(\pm\) 0.063 \\ & Point-Beyond-Class [18] & 0.929 \(\pm\) 0.013 & 0.983 \(\pm\) 0.042 & 0.731 \(\pm\) 0.077 \\ \hline \multirow{4}{*}{SOTA WSOD} & TS-CAM [10] & 0.862 \(\pm\) 0.049 & 0.879 \(\pm\) 0.049 & 0.751 \(\pm\) 0.045 \\ & SCM [2] & 0.795 \(\pm\) 0.101 & 0.783 \(\pm\) 0.130 & 0.849 \(\pm\) 0.072 \\ & OD-WSCL [21] & 0.815 \(\pm\) 0.144 & 0.805 \(\pm\) 0.129 & 0.847 \(\pm\) 0.214 \\ & WS-DETR [19] & 0.839 \(\pm\) 0.042 & 0.843 \(\pm\) 0.028 & 0.833 \(\pm\) 0.034 \\ \hline WSOD & Ours & 0.834 \(\pm\) 0.057 & 0.817 \(\pm\) 0.061 & 0.861 \(\pm\) 0.089 \\ \hline \hline \end{tabular} \end{table} Table 3: Performance comparison of our method and other SOTA methods in GBC classification. We report accuracy, specificity, and sensitivity.
2301.01625
Beginnings of Exciton Condensation in Coronene Analog of Graphene Double Layer
Exciton condensation, a Bose-Einstein condensation of excitons into a single quantum state, has recently been achieved in low-dimensional materials including twin layers of graphene and van der Waals heterostructures. Here we examine computationally the beginnings of exciton condensation in a double layer comprised of coronene, a seven-benzene-ring patch of graphene. As a function of interlayer separation, we compute the exciton population in a single coherent quantum state, showing that the population peaks around 1.8 at distances near 2 \AA. Visualization reveals interlayer excitons at the separation distance of the condensate. We determine the exciton population as a function of the twist angle between the two coronene layers to reveal the magic angles at which the condensation peaks. As with previous recent calculations showing some exciton condensation in hexacene double layers and benzene stacks, the present two-electron reduced-density-matrix calculations with coronene provide computational evidence for the ability to realize exciton condensation in molecular-scale analogs of extended systems like the graphene double layer.
LeeAnn M. Sager, Anna O. Schouten, David A. Mazziotti
2022-12-29T18:06:54
http://arxiv.org/abs/2301.01625v1
# Beginnings of Exciton Condensation in Coronene Analog of Graphene Double Layer ###### Abstract Exciton condensation, a Bose-Einstein condensation of excitons into a single quantum state, has recently been achieved in low-dimensional materials including twin layers of graphene and van der Waals heterostructures. Here we examine computationally the beginnings of exciton condensation in a double layer comprised of coronene, a seven-benzene-ring patch of graphene. As a function of interlayer separation, we compute the exciton population in a single coherent quantum state, showing that the population peaks around 1.8 at distances near 2 Å. Visualization reveals interlayer excitons at the separation distance of the condensate. We determine the exciton population as a function of the twist angle between the two coronene layers to reveal the magic angles at which the condensation peaks. As with previous recent calculations showing some exciton condensation in hexacene double layers and benzene stacks, the present two-electron reduced-density-matrix calculations with coronene provide computational evidence for the ability to realize exciton condensation in molecular-scale analogs of extended systems like the graphene double layer. pacs: 31.10.+z ## I Introduction Exciton condensation--a Bose-Einstein condensation of particle-hole pairs into a single quantum state--has generated considerable experimental and theoretical interest [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19] due to the resultant superfluidity [20; 21; 22; 23] of the constituent excitons (particle-hole pairs) allowing for the dissipationless transport of energy [24; 25], which presents the possibility for uniquely energy-efficient materials. Further, the greater binding energy and lesser mass of excitonic quasiparticles relative to particle-particle Cooper pairs indicates that exciton condensation should occur at higher temperatures [26] relative to the temperatures at which traditional superconductivity--i.e., the condensation of particle-particle pairs into a single quantum state [27; 28; 29; 30]--occurs. Exciton condensates, nonetheless, have proven difficult to experimentally observe as excitons often have too short a lifetime to allow for the simple formation of an exciton condensate; however, recent literature has established bilayer systems as being capable of demonstrating exciton condensation [16; 17; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41] likely due to the spatial separation of electrons and holes increasing excitonic lifetimes and causing them to act like oriented electric dipoles whose repulsive interactions prevent the formation of biexcitons and other competing exciton complexes such as electron-hole plasmas [39; 42]. Specifically, van der Waals heterostructures [43; 39; 44; 33] as well as graphene bilayers [17; 31; 44] demonstrate promise in the search for higher-temperature exciton condensate phases, with the tuneability of electronic states afforded by twisting graphene layers relative to each other being particularly of interest in recent literature [35; 41]. Small, molecularly-scaled systems have also been revealed to support exciton condensation via theoretical explorations utilizing a signature of such condensation found in the modified particle-hole reduced density matrix (RDM) [8; 9; 10; 11].
These molecular systems are able to be treated using theoretical approaches at lower computational costs and can be used as an analog for similar larger-scaled systems; moreover, molecular-scaled exciton condensation in and of itself may have potential applications in the design of more energy-efficient molecular structures and devices. As such, a coronene bilayer system [45; 46]--where each coronene layer is a seven-benzene-ring patch of graphene--is an ideal candidate for theoretical study of molecularly-scaled condensation phenomena. Exciton condensation in extended graphene bilayers indicates the likelihood that, similarly, coronene bilayers demonstrate correlation consistent with exciton condensation. Conclusions drawn from such a study may prove useful in understanding the mechanism by which exciton condensation occurs in benzene-ring and graphene bilayers in general. In this paper, we computationally examine the beginnings of exciton condensation in a double layer composed of coronene. Utilizing variational 2-RDM theory [47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58], we explore the largest eigenvalue (\(\lambda_{G}\)) of the modified particle-hole reduced density matrix (\(\tilde{G}\))--which corresponds to the largest population of excitons in a single particle-hole quantum state--for various coronene-bilayer geometries, such that an eigenvalue above the Pauli-like limit of one indicates exciton condensation as more than one exciton is occupying a single state and a larger eigenvalue indicates a higher degree of exciton condensate character. We compare the maximal exciton populations (\(\lambda_{G}\)) as a function of distance between the layers of coronene and note that, near 2 Å, the population peaks at around 1.8 with interlayer excitons being noted via our visualization technique at this distance. Additionally, exciton populations as a function of twist angle between the two layers are computed in an effort to reveal any "magic angles". Overall, this molecularly-scaled exploration of coronene bilayers provides computational evidence of the beginnings of exciton condensation in molecularly-scaled systems that is related to the condensation found in extended systems like graphene bilayers. ## II Theory Condensation phenomena occur when bosons--or quasibosons--aggregate into a single, low-energy quantum ground state when adequately cooled [59; 60], which results in the emergence of superfluid properties [20; 21]. For traditional bosons, a computational signature of so-called Bose-Einstein condensation occurs when the largest eigenvalue of the one-boson reduced density matrix (RDM)--expressed as \[{}^{1}D^{i}_{j}=\langle\Psi|\hat{b}^{\dagger}_{i}\hat{b}_{j}|\Psi\rangle \tag{1}\] where \(|\Psi\rangle\) is an \(N\)-boson wavefunction and \(\hat{b}^{\dagger}_{i}\) and \(\hat{b}_{i}\) are bosonic creation and annihilation operators for orbital \(i\), respectively--exceeds one [61]. As the eigenvalues of the one-boson RDM correspond to the populations of one-boson orbitals, the largest eigenvalue corresponds to the maximum number of bosons occupying a single quantum state, i.e., the degree of condensation. However, condensation in fermionic systems occurs via different mechanisms as multiple fermions cannot occupy a single orbital [62]. In traditional superconductivity, superfluidity arises due to correlations within quasibosonic particle-particle (electron-electron, Cooper) pairs, causing the constituent Cooper pairs to flow without friction [27; 29].
The signature of particle-particle condensation is the largest eigenvalue of the particle-particle RDM (2-RDM) [63; 64] given by \[{}^{2}D^{i,j}_{k,l}=\langle\Psi|\hat{a}^{\dagger}_{i}\hat{a}^{\dagger}_{j}\hat{a}_{l}\hat{a}_{k}|\Psi\rangle \tag{2}\] where \(|\Psi\rangle\) is an \(N\)-fermion wavefunction, each index denotes both the spatial and spin components of a fermion, the indices \(i,j,k,l\) correspond to one-fermion orbitals in a finite basis set of rank \(r\), and \(\hat{a}^{\dagger}\) and \(\hat{a}\) denote fermionic creation and annihilation operators, respectively. The largest eigenvalue of the 2-RDM corresponds to the largest population of a single particle-particle quantum state (called a geminal [63; 64; 65; 66]), i.e., the degree of particle-particle condensation. Similarly, exciton condensation results from particle-hole pairs (excitons) condensing into a single quantum state [63; 69]. The signature of exciton condensation--denoted as \(\lambda_{G}\)--is a large eigenvalue (\(\lambda_{G}>1\)) of a modified version of the particle-hole reduced density matrix [70; 71], with elements given by \[{}^{2}\tilde{G}^{i,j}_{k,l}={}^{2}G^{i,j}_{k,l}-{}^{1}D^{i}_{j}\,{}^{1}D^{l}_{k}=\langle\Psi|\hat{a}^{\dagger}_{i}\hat{a}_{j}\hat{a}^{\dagger}_{l}\hat{a}_{k}|\Psi\rangle-\langle\Psi|\hat{a}^{\dagger}_{i}\hat{a}_{j}|\Psi\rangle\langle\Psi|\hat{a}^{\dagger}_{l}\hat{a}_{k}|\Psi\rangle \tag{3}\] where \({}^{1}D\) is the one-fermion reduced density matrix (1-RDM). After modification--which removes an extraneous ground-state-to-ground-state transition--the largest eigenvalue of the particle-hole RDM corresponds to the number of particle-hole pairs (excitons) that occupy a single particle-hole quantum state and hence signifies the presence and extent of exciton condensation. ## III Results Extended graphene bilayer systems have been identified as a major candidate for the creation of macroscopically-scaled exciton condensates [17; 31; 44]. In this study, we extrapolate this framework to a molecularly-sized system and use bilayers of coronene--where each layer is composed of seven joined benzene rings--in order to probe a molecularly-scaled system whose similarity to graphene bilayers makes it both a promising contender for a molecularly-scaled exciton condensate as well as an ideal analogue for exploring the correlation in layers of graphene using a system that can be directly explored by current theoretical techniques for strong electron correlation. As such, we explore relative amounts of correlation in coronene bilayer systems as a function of both interlayer distance and twist angle. ### Exciton Population with Distance To gauge the relative extents of exciton condensation as a result of varying interlayer distances between each layer of coronene and thus probing a significant range of van der Waals interactions between each layer in an attempt to identify an ideal distance for maximal correlation, interlayer spacings are varied from 1.0 Å to 2.5 Å, and the signature of condensation--\(\lambda_{G}\), i.e., the number of excitons condensed into a single particle-hole quantum state--is probed in the STO-6G basis using variational 2-RDM theory with a [24,24] active space. As can be seen in Fig.
1--where the blue data indicates variational 2-RDM complete-active-space self-consistent-field (V2RDM-CASSCF) [24,24] calculations--where [X,Y] denotes an active space of X electrons in Y orbitals--and the pink data indicates configuration-interaction-based complete-active-space self-consistent-field (CI-CASSCF) [10,10] calculations--coronene bilayer systems demonstrate character of exciton condensation (\(\lambda_{G}>1\)) for a wide variety of interlayer distances with the maximal excitonic populations in the bilayer peaking at 1.824 at 2 Å, although a relatively-wide plateau is noted in the range of 1.8-2.2 Å, indicating that exciton condensation is relatively robust in that region of distances. Distances in the neighborhood of 2.0 Å are hence ideal for the study of exciton condensate phases in bilayer systems composed of coronene, at least for twist angles around 0 degrees. Figure 2 allows for the visualization and comparison of coronene bilayer systems with differing degrees of excitonic condensation. For the particle-hole wavefunction associated with the large eigenvalue, we visualize the probability distribution of the hole (gray-violet) for a particle location in a \(2p_{z}\) orbital of one of the symmetrically-equivalent carbon atoms in the interior benzene ring (gold) using the methodology described in Appendix B for each geometry. The density cut-off for the probabilistic location of the hole differs between all three visualizations, so the magnitudes of the densities cannot be directly compared between computations; however, the general trends in hole density locations can be established. For the 2.0 Å calculation, which shows the maximal excitonic character of the set, the excitonic hole is highly delocalized between both layers, demonstrating a highly correlated interlayer exciton; for the 2.5 Å calculation, an interlayer exciton is still observed, however, the degree of delocalization is highly decreased--with the majority of the hole population being focused on a single layer--consistent with a lower degree of correlation and a lower signature of condensation; finally, for the 1.0 Å calculation, which does not demonstrate any exciton condensation, the hole's probabilistic location is highly localized with the majority of the population in the same layer as the particle. As such, the delocalization and the interlayer location of the hole seem to be strong indicators of high degrees of exciton condensation and indicate that both factors may be necessary for a condensate to form. Note that, while the signature of exciton condensation is depressed in the [10,10] CI-CASSCF calculations relative to the [24,24] V2RDM-CASSCF calculations--which is expected as the higher active spaces allow for higher degrees of correlation, which can lead to higher signatures of condensation--the overall trends between the two sets of data are consistent, especially in the region of maximal condensation. As the results are consistent and as the [10,10] calculations are less computationally-expensive, [10,10] CI-CASSCF calculations are used throughout the exploration of the effect of twist angles on the presence and extent of exciton condensation. ### Exciton Population with Twist Angle To obtain a more complete understanding of the conditions under which exciton condensation occurs, we explore the effect of rotations between the two layers of coronene on the excitonic population in a single quantum state (i.e., \(\lambda_{G}\)). As can be seen from Fig.
3a--which scans the exciton population as a function of angles from 0 to 60 degrees, the full range of rotation before an identical configuration is obtained, for coronene bilayer systems with an interlayer distance of 2.0 Å--maximal condensation character is noted near zero offset. However, as shown in Fig. 3b, the large degree of condensation is relatively stable in the region of small angles, particularly of interest in magic-angle graphene studies at around 1.1 degrees [72], with the largest degree of condensation occurring at 0 degrees; for all angles scanned--from 0 degrees to 2 degrees in steps of 0.5 degrees--the maximal exciton population remains above 1.45. Additionally, in order to determine whether the optimal interlayer distance is consistent between different twist angles, Fig. 3c shows a scan of the degree of condensation versus the distance between the two coronene layers for twist angles of 0 (blue), 15 (pink), and 30 (green) degrees. Interestingly, the optimal interlayer distance for both the 15 and 30 degree twist angles is decreased from 2.0 Å to 1.5 Å, with the 15 degree maximum at 1.5 Å being significantly decreased from that for the unrotated maximum at 2.0 Å and the 30 degree maximum at 1.5 Å being significantly higher--showing a maximal exciton population above two, indicating that more than two excitons are occupying a single quantum state--even using the [10,10] active space with fewer degrees of correlation relative to the [24,24] active space previously used to scan the degree of condensation versus interlayer distance for the unrotated bilayer system in the preceding section.

Figure 1: A scan over the exciton population in a single coherent quantum state (i.e., the largest eigenvalue of the modified particle-hole RDM) versus the distance between the two coronene layers for V2RDM-CASSCF calculations using a [24,24] active space (blue) and CI-CASSCF calculations using a [10,10] active space (pink). A STO-6G basis is utilized for both calculations.

## IV Discussion and conclusions In this study, we theoretically probe the presence and extent of exciton condensation--via the use of a quantum signature measuring the exciton population of a single particle-hole quantum state--for a variety of coronene bilayers. In these coronene bilayer systems--which are molecularly-scaled analogues of extended graphene bilayer systems--we optimize the excitonic character versus the distance between the bilayers and find excitonic populations of around 1.8 for interlayer distances around 2.0 Å when the coronene layers have a twist angle of zero degrees; this signature of condensation is seen to be relatively robust in the region of 1.8-2.3 Å, which, while shorter than experimental bilayer distances of around 3.0 Å [46], may be attainable using either an appropriate linker or high pressure.

Figure 3: A scan over the exciton population in a single coherent quantum state (i.e., the largest eigenvalue of the modified particle-hole RDM) versus (a) small or (b) large angle variations is shown in the left-most and middle figures, and a scan over exciton population versus interlayer distance for twist angles of 0 (blue), 15 (pink), and 30 (green) degrees is shown in the right-most figure. Active-space SCF calculations with a [10,10] active space and a STO-6G basis are utilized for each plot.
Figure 2: Visualizations of the non-rotated coronene bilayer systems for (a) 1.5 Å, (b) 2.0 Å, and (c) 2.5 Å, where the gray-violet represents the probabilistic location of the hole in the particle-hole wavefunction associated with the large eigenvalue for a particle position in a fixed atomic orbital (gold). Variational 2-RDM calculations with a [24,24] active space and STO-6G basis set are utilized for each visualization.

Further, by exploring the effect of the angle of rotation between the two coronene layers (i.e., the twist angle), we discover that for distances around 2.0 Å, the optimal twist angles between the layers are those corresponding to completely-aligned layers (i.e., 0, 60, 120, etc. degrees), although this maximal condensate character is rather robust for small angles around those explored in magic-angle graphene studies [72]. Moreover, by investigating the relationship between interlayer distance and excitonic populations for different twist angles, we note a large dependence of the signature of condensation on the degree of rotation. Specifically, we find the overall highest degree of exciton condensation--with an excitonic population above two in a single quantum state--for a coronene bilayer geometry corresponding to a 30 degree twist angle and a 1.5 Å interlayer distance, which is slightly shorter than a carbon-carbon single bond and hence may not be an experimentally-feasible distance. As such, an experimental exploration of molecular-scaled exciton condensation in coronene bilayer systems should likely focus on untwisted bilayer geometries in the range of 2.0 Å. Interestingly, our visualization technique, in which an exciton--corresponding to the largest degree of condensation--is visualized by plotting the hole's probabilistic location for a specified particle location, indicates that interlayer excitons may be required in order for the coronene bilayer system to demonstrate exciton condensation. For visualizations with the same specified particle orbital (the \(2p_{z}\) orbital on one of the symmetrically-equivalent carbon atoms in the interior benzene ring), geometries demonstrating character of exciton condensation have clear, delocalized, interlayer excitons. Further, we note that the delocalization of the hole location increases with the increase in excitonic population. An interesting future direction may be the exploration of trends in exciton condensation with system size. Such a study would likely be beneficial in extrapolating smaller-system exciton condensation results to predict the behavior of extended graphene. In prior work, we have indeed explored the relationship between system size--i.e., number of benzene units--both horizontally in bilayer systems including pentacene and hexacene [8] as well as vertically in multilayer, molecular-scale van der Waals stacks composed of benzene subunits, with the latter demonstrating an almost-linear increase of condensate character with an increase in the number of layers [11]. In the case of coronene, however, such an extrapolation would prove difficult with current theoretical methodologies robust enough to capture the correlation phenomena inherent to condensation behavior, as the natural progression of molecules--shown in Fig. 4--rapidly becomes prohibitively large, especially considering double-layer systems. Another interesting direction would be determination of the temperature dependency of excitonic phenomena.
Exciton condensation in coronene is a ground state phenomenon in which multiple constituent particle-hole quasibosons condense into a single quantum state; as such, we would expect it to persist at finite, small temperatures up until some critical temperature at which thermal energy is sufficient to disrupt the condensation. Determination of the critical temperature can be explored theoretically, although the calculation of excited states would be required, which may be an interesting future avenue of exploration. Overall, this study identifies a candidate for molecular-scale exciton condensation--namely, a bilayer of coronene subunits with twist angles near zero degrees and interlayer separations near 2 Å--which could have applications in molecularly-scaled electronic structures and devices. Further, the clear signature of exciton condensation noted for the molecular-scale analogue of a graphene bilayer supports the idea that the interesting electronic phenomena in graphene bilayer systems could be occurring via an excitonic mechanism. The understanding gained throughout this geometric analysis of coronene bilayers illuminates the relationships between twist angle, interlayer distance, and degree of exciton condensation, which increases our understanding of geometric considerations in the design of graphene bilayer-like exciton condensate materials. ###### Acknowledgements. D.A.M. gratefully acknowledges support from the U. S. National Science Foundation Grant No. CHE-1565638 and DGE-1746045 and the ACS Petroleum Research Fund Grant No. PRF No. 61644-ND6. ## Conflict of interest The authors do not have a conflict of interest to report. ## Data availability statement The data is available from the corresponding author upon reasonable request. ## Appendix A Computational Methods The particle-particle reduced density matrix (2-RDM) for the coronene bilayers is obtained directly from the molecular structure using a variational method [47; 48; 50; 53; 73; 74]. Additional constraints allowing the 2-RDM to represent \(N\)-particle wavefunctions--i.e., \(N\)-representability conditions--require the particle-particle, hole-hole, and particle-hole RDMs all to be positive semi-definite. The STO-6G basis set is used for all coronene bilayer calculations, and the active space utilized is specified throughout the document, with--unless otherwise noted--[24,24] variational 2-RDM CASSCF calculations being utilized for the scan over interlayer distances and [10,10] CI-CASSCF calculations being utilized for the scan over twist angles. The 2-RDM obtained directly from the molecular structure is then utilized to construct the particle-hole RDM by the linear mapping given by: \[{}^{2}G_{k,l}^{i,j}=\delta_{l}^{j}\,{}^{1}D_{k}^{i}+{}^{2}D_{j,l}^{i,k}. \tag{1}\] The modified particle-hole RDM can then be obtained from the particle-hole RDM according to: \[{}^{2}\tilde{G}_{k,l}^{i,j}={}^{2}G_{k,l}^{i,j}-{}^{1}D_{j}^{i}\,{}^{1}D_{k}^{l}. \tag{2}\] The eigenvalues (\(\lambda_{G,i}\)) and eigenvectors (\(\overrightarrow{v}_{G,i}\)) of the modified particle-hole matrix are calculated using an eigenvalue optimization: \[\tilde{G}\overrightarrow{v}_{G,i}=\lambda_{G,i}\overrightarrow{v}_{G,i} \tag{3}\] where the largest eigenvalue of the modified particle-hole RDM is the signature of condensation that represents the largest exciton population in a single coherent quantum state.
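A schematic numpy rendering of the three equations of this appendix, with random placeholder arrays standing in for a converged 1-RDM and 2-RDM (the eigenvalues below therefore carry no physical meaning; for a physical 2-RDM, \(\lambda_{G}>1\) would signal condensation):

```python
import numpy as np

r = 6                                    # number of orbitals
rng = np.random.default_rng(1)
D1 = rng.normal(size=(r, r)); D1 = (D1 + D1.T) / 2
D2 = rng.normal(size=(r, r, r, r))       # D2[i, j, k, l] stands for 2D^{i,j}_{k,l}

# Eq. (1): 2G^{i,j}_{k,l} = delta^j_l 1D^i_k + 2D^{i,k}_{j,l}
G = np.einsum("jl,ik->ijkl", np.eye(r), D1) + np.einsum("ikjl->ijkl", D2)

# Eq. (2): remove the extraneous ground-state-to-ground-state transition
G_tilde = G - np.einsum("ij,lk->ijkl", D1, D1)

# Eq. (3): diagonalize the modified particle-hole RDM as an r^2 x r^2 matrix;
# the largest eigenvalue is the exciton-population signature lambda_G
lam = np.linalg.eigvals(G_tilde.reshape(r * r, r * r))
lam_G = lam.real.max()
```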
## Appendix B Visualization Technique The "exciton density" visualization shows the probabilistic hole location as a function of a specific particle location. This information is obtained via a matrix of atomic orbitals in terms of molecular orbitals, \(M_{\text{AO,MO}}\), calculated directly from a matrix of molecular orbitals in terms of atomic orbitals, \(M_{\text{MO,AO}}\), which is obtained as an output of the direct computation of the 2-RDM: \[M_{\text{AO,MO}}=(M_{\text{MO,AO}}^{T})^{-1}. \tag{4}\] A submatrix corresponding to the active orbitals is isolated from the overall matrix, and the eigenvector of the largest eigenvalue of the modified particle-hole RDM is reshaped as a matrix in the basis of the active orbital submatrix. The eigenvector matrix, denoted by \(V_{\text{max}}\), is then utilized to create \[(M_{\text{AO,MO}}^{\text{active}})(V_{\text{max}})(M_{\text{AO,MO}}^{\text{active}})^{T}, \tag{5}\] which is a matrix representing the probabilistic hole location in terms of the atomic orbitals, with the resultant coefficients giving the weight of the hole on each atomic orbital for a particle fixed in a given atomic orbital.
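A minimal sketch of this back-transformation, under assumptions analogous to the previous snippet: `M_mo_ao` is the MO-coefficient matrix in the AO basis, `active` lists the active-orbital columns, and `v_max` is the leading eigenvector from Eq. (3); all names are illustrative.

```python
import numpy as np

def exciton_density_matrix(M_mo_ao, active, v_max):
    """Back-transform the leading particle-hole eigenvector to the AO basis."""
    # Eq. (4): invert the transpose to express AOs in terms of MOs.
    M_ao_mo = np.linalg.inv(M_mo_ao.T)
    # Restrict to the columns corresponding to the active orbitals.
    M_act = M_ao_mo[:, active]
    n_act = len(active)
    # Reshape the eigenvector into a matrix on the active space.
    V_max = np.asarray(v_max).reshape(n_act, n_act)
    # Eq. (5): entry (mu, nu) weights the hole on AO nu for a particle in AO mu.
    return M_act @ V_max @ M_act.T
```

Plotting the row of the returned matrix corresponding to the chosen particle orbital (e.g., the \(2p_{z}\) orbital used in Fig. 2) then yields hole densities of the kind shown in the visualizations.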
2309.15072
Spectral weight filtrations
We provide a description of Voevodsky's $\infty$-category of motivic spectra in terms of the subcategory of motives of smooth proper varieties. As applications, we construct weight filtrations on the Betti and \'{e}tale cohomologies of algebraic varieties with coefficients in any complex oriented ring spectrum. We show that these filtrations satisfy $\ell\mathrm{dh}$-descent, giving an effective way of calculating them in positive characteristic. In the complex motivic case, we further refine the weight filtration to one defined at the level of stable homotopy types.
Peter J. Haine, Piotr Pstrągowski
2023-09-26T17:08:53
http://arxiv.org/abs/2309.15072v1
# Spectral weight filtrations ###### Abstract. We provide a description of Voevodsky's \(\infty\)-category of motivic spectra in terms of the subcategory of motives of smooth proper varieties. As applications, we construct weight filtrations on the Betti and etale cohomologies of algebraic varieties with coefficients in any complex oriented ring spectrum. We show that these filtrations satisfy \(\ell\)dh-descent, giving an effective way of calculating them in positive characteristic. In the complex motivic case, we further refine the weight filtration to one defined at the level of stable homotopy types. 2020 Mathematics Subject Classification: Primary 13A02; Secondary 55Q10 The first-named author gratefully acknowledges support from the NSF Mathematical Sciences Postdoctoral Research Fellowship under Grant #DMS-2102957 and a grant from the Simons Foundation, 816048 LC. The second-named author gratefully acknowledges support from NSF Grant #DMS-1926686 and Deutsche Forschungsgemeinschaft #EXC-2047/1 - 390685813. ## 1. Introduction ### Motivation and overview Let \(X\) be a complex variety. In his fundamental series of papers [16; 17; 18], Deligne explains how to use the algebraic structure of \(X\) to endow the rational singular cohomology \(\mathrm{H}^{*}(X(\mathbb{C});\mathbb{Q})\) with a canonical _weight filtration_ \[\mathrm{W}_{0}\mathrm{H}^{*}(X(\mathbb{C});\mathbb{Q})\subseteq\mathrm{W}_{1}\mathrm{H}^{*}(X(\mathbb{C});\mathbb{Q})\subseteq\cdots\.\] Moreover, the complexification \[\mathbb{C}\otimes_{\mathbb{Q}}\mathrm{W}_{\bullet}\mathrm{H}^{*}(X(\mathbb{C});\mathbb{Q})\] has a canonical _Hodge structure_ on its associated graded pieces. In fact, the filtration exists _before_ passing to cohomology: Deligne shows that the singular cochain complex \(\mathrm{C}^{*}(X(\mathbb{C});\mathbb{Q})\) can be canonically refined to an object of the filtered derived \(\infty\)-category. The weight filtration contains crucial algebraic information: it is not an invariant of the topological space \(X(\mathbb{C})\). Informally, the weight filtration is obtained by resolving \(X\) by smooth proper varieties. The weight filtration on rational cohomology has been extended to a variety of contexts. In [26], Gillet and Soule show that the weight filtration can be refined to a canonical filtration on the complex of compactly supported _integral_ cochains \(\mathrm{C}^{*}_{c}(X(\mathbb{C});\mathbb{Z})\). In this paper, one of our main results is that the weight filtration is defined at a _spectral level_, even before passing to algebra. That is, we show that the weight filtration can be refined to a canonical filtration on the stable homotopy type of \(X(\mathbb{C})\) which equips the latter with a structure of a _synthetic spectrum_. Our result is based on a new description of Voevodsky's stable \(\infty\)-category of motivic spectra \(\mathrm{SH}(\mathbb{C})\) in terms of the subcategory generated by motives of smooth proper varieties. 
More generally, given any base field \(k\) of exponential characteristic \(e\), we give a new description of the \(\infty\)-category \(\mathrm{SH}(k)[\nicefrac{1}{e}]\) obtained from \(\mathrm{SH}(k)\) by inverting the exponential characteristic. This gives a clean construction of filtered refinements of both Betti and etale realization with coefficients in a complex orientable cohomology theory. Applying these filtered realizations to various motivic spectra one can attach to a variety, we obtain weight filtrations on the (co)homology of varieties. In particular, we are able to construct weight filtrations on etale cohomology with coefficients in a complex orientable etale sheaf of spectra, extending Deligne's weight filtration on \(\ell\)-adic etale cohomology [19]. We also show that the induced filtration on Borel-Moore homology satisfies hyperdescent with respect to Kelly's \(\ell\)dh-topology [40]. Combined with the theory of alterations [37, Theorem 4.4; 38, Theorem 1.1; 39, Expose IX, Theoreme 1.1; 62, Theorem 1.2.5], this gives an effective way of calculating this filtration in positive characteristic. We end the paper with a conjectural picture of the existence of a synthetic realization in the etale context. In the rest of this introduction, we explain our results in more detail. ### The complex orientable case We first describe our result in its most basic case, over the complex numbers and in the case of a _complex orientable_ cohomology theory (such as complex bordism, complex K-theory, or ordinary cohomology). We make use of Voevodsky's \(\infty\)-category of motivic spectra \(\operatorname{SH}(\mathbb{C})\), and we assume that the reader is familiar with the basics of motivic homotopy theory; see § 2 for a brief review. Let \(A\in\operatorname{CAlg}(\operatorname{Sp})\) be a commutative algebra in spectra. We have the _\(A\)-linear Betti realization_ functor \[\operatorname{Be}(-;A)\colon\operatorname{SH}(\mathbb{C})\to\operatorname{Mod}_{A}\] which is the unique symmetric monoidal left adjoint such that for any smooth \(\mathbb{C}\)-scheme \(X\), we have \[\operatorname{Be}(\Sigma_{+}^{\infty}X;A)\simeq A\otimes\Sigma_{+}^{\infty}X(\mathbb{C})\.\] That is, \(\Sigma_{+}^{\infty}X\in\operatorname{SH}(\mathbb{C})\) is sent to the \(A\)-linear stable homotopy type of \(X(\mathbb{C})\). The functor \(\operatorname{Be}(-;A)\) encodes the theory of Betti (co)homology of varieties. In more detail, it is a left adjoint, so any \(A\)-module \(M\) determines through the right adjoint to \(\operatorname{Be}(-;A)\) a motivic spectrum over \(\mathbb{C}\). Through the six-functor formalism of the stable motivic category, in turn any motivic spectrum determines (co)homology theories on varieties, in both ordinary and compactly supported variants, which in this case recovers Betti (co)homology with coefficients in \(M\). Our first result is that if \(A\) is complex orientable, then the \(A\)-linear Betti realization can be equipped with a canonical filtration. Recall that a _filtered spectrum_ is a functor \(X_{*}\colon\mathbb{Z}^{\operatorname{op}}\to\operatorname{Sp}\), where we regard \(\mathbb{Z}\) as a poset with the usual ordering. We write \[\operatorname{FilSp}:=\operatorname{Fun}(\mathbb{Z}^{\operatorname{op}},\operatorname{Sp})\] for the \(\infty\)-category of filtered spectra. 
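For orientation, we record a standard example (not specific to this paper): any filtered spectrum \(X_{*}\) has associated graded pieces \(\operatorname{gr}^{n}(X_{*}):=\operatorname{cofib}(X_{n+1}\to X_{n})\). For the Postnikov filtration recalled next, these graded pieces are shifted Eilenberg-MacLane spectra,
\[\operatorname{gr}^{n}(\tau_{\geq*}X)\simeq\Sigma^{n}\mathrm{H}\pi_{n}(X)\,\]
so this filtration records the homotopy groups of \(X\) together with the \(k\)-invariants assembling them.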
Every spectrum \(X\) has a canonical _Postnikov filtration_ \[\cdots\to\tau_{\geq 1}X\to\tau_{\geq 0}X\to\tau_{\geq-1}X\to\cdots\] which can be naturally refined to a lax symmetric monoidal functor \(\tau_{\geq*}\colon\operatorname{Sp}\to\operatorname{FilSp}\). **1.2.1 Theorem** (4.3.13).: _Let \(A\in\operatorname{CAlg}(\operatorname{Sp})\) be complex orientable. Then, there exists a unique colimit-preserving lax symmetric monoidal functor_ \[\operatorname{W}_{*}\operatorname{Be}(-;A)\colon\operatorname{SH}(\mathbb{C})\to\operatorname{Mod}_{\tau_{\geq*}(A)}(\operatorname{FilSp})\] _such that on the subcategory of motivic spectra of the form \(S\simeq(\mathbb{P}^{1})^{\otimes n}\otimes\Sigma_{+}^{\infty}Y\) with \(n\in\mathbb{Z}\) and \(Y\) a smooth proper complex variety, we have a natural equivalence_ \[\operatorname{W}_{*}\operatorname{Be}(S;A)\simeq\tau_{\geq*}(\operatorname{Be}(S;A))\.\] _We refer to \(\operatorname{W}_{*}\operatorname{Be}(-;A)\) as the filtered \(A\)-linear Betti realization functor._ Note that if \(A\) is an ordinary commutative ring, we have an identification \[\operatorname{Mod}_{\tau_{\geq*}(A)}(\operatorname{FilSp})\simeq\mathcal{D}^{\operatorname{fil}}(A)\] with the classical filtered derived \(\infty\)-category of \(A\), obtained by localizing filtered chain complexes at filtered quasi-isomorphisms. See Proposition 4.1.7. Informally, Theorem 1.2.1 says that once we decide to equip the \(A\)-homology of each smooth proper variety \(X\) with the "trivial filtration" given by the Postnikov tower, there is a unique way to extend this to a colimit-preserving functor defined on all of \(\operatorname{SH}(\mathbb{C})\). By construction, for any motivic spectrum \(S\), the canonical map from the colimit \[\operatorname{colim}\operatorname{W}_{*}\operatorname{Be}(S;A)\to\operatorname{Be}(S;A)\] is an equivalence. This induces a filtration on homology groups of \(\operatorname{Be}(S;A)\); hence for any complex variety \(X\), we obtain a filtration on the complex oriented (co)homology of \(X(\mathbb{C})\). As a sample application, we explain how to use Theorem 1.2.1 to define virtual Euler characteristics with coefficients in Morava K-theories. This description does not rely on Bittner's presentation of the Grothendieck ring of varieties [8], and is adaptable to more general base fields. See § 4.7. ### A new description of motivic spectra Our proof of Theorem 1.2.1 is based on the following description of the stable motivic category away from the characteristic. Our description is inspired by the work of Bachmann-Kong-Wang-Xu on the _Chow-Novikov \(\mathrm{t}\)-structure_ on motivic spectra [7]. Let \(k\) be a field of exponential characteristic \(e\). We say that a motivic spectrum \(S\in\mathrm{SH}(k)[\sfrac{1}{e}]\) over \(k\) is _perfect pure_ if \(S\) belongs to the smallest subcategory \[\mathrm{Pure}(k)\subseteq\mathrm{SH}(k)[\sfrac{1}{e}]\] generated under extensions and retracts by motivic Thom spectra \(\mathrm{Th}(\eta)\), where \(\eta\in\mathrm{K}_{0}(X)\) and \(X\) is a smooth proper \(k\)-variety. An _additive sheaf_ \(\mathcal{F}\colon\mathrm{Pure}(k)^{\mathrm{op}}\to\mathrm{Sp}\) is a functor that sends cofiber sequences of perfect pure motivic spectra to fiber sequences of spectra; we denote the \(\infty\)-category of additive sheaves of spectra on \(\mathrm{Pure}(k)\) by \(\mathrm{Sh}_{\Sigma}(\mathrm{Pure}(k);\mathrm{Sp})\)1. 
Footnote 1: In § 3, we show that a spectral presheaf \(\mathcal{F}\colon\mathrm{Pure}(k)^{\mathrm{op}}\to\mathrm{Sp}\) preserves cofiber sequences if and only if it is additive and a sheaf with respect to a certain natural Grothendieck topology on \(\mathrm{Pure}(k)\). This justifies our terminology. #### 1.3.1. Theorem (3.3.5) _Let \(k\) be a field of exponential characteristic \(e\). The spectral Yoneda embedding \(S\mapsto\mathrm{map}_{\mathrm{SH}(k)[\sfrac{1}{e}]}(-,S)\) defines an equivalence of \(\infty\)-categories_ \[\mathrm{SH}(k)[\sfrac{1}{e}]\xrightarrow{\sim}\mathrm{Sh}_{\Sigma}(\mathrm{Pure}(k);\mathrm{Sp})\.\] #### 1.3.2. Remark (inverting \(e\)) As usual, the reason Theorem 1.3.1 requires inverting the exponential characteristic \(e\) ultimately comes down to the fact that strong resolution of singularities is not known over general base fields; instead, we use Gabber's \(\ell^{\prime}\)-alteration theorem. Our proofs are written in such a way that if one assumes strong resolution of singularities over \(k\), then the refinement of Theorem 1.3.1 without \(e\) inverted holds. By construction, the equivalence of Theorem 1.3.1 is compatible with the Chow-Novikov t-structure recently introduced by Bachmann-Kong-Wang-Xu. More precisely, the Chow-Novikov t-structure on \(\mathrm{SH}(k)[\sfrac{1}{e}]\) is identified with the canonical t-structure on additive sheaves induced by the standard t-structure on spectra. Let \(\mathrm{MGL}\in\mathrm{SH}(k)\) denote the motivic spectrum representing algebraic cobordism. If we replace \(\mathrm{SH}(k)\) with the \(\infty\)-category of \(\mathrm{MGL}[\sfrac{1}{e}]\)-modules, Theorem 1.3.1 implies that there is an equivalence of \(\infty\)-categories \[\mathrm{Mod}_{\mathrm{MGL}[\sfrac{1}{e}]}(\mathrm{SH}(k))\simeq\mathrm{PSh}_{\Sigma}(\mathrm{Pure}_{\mathrm{MGL}}(k);\mathrm{Sp})[\sfrac{1}{e}] \tag{1.3.3}\] with additive spectral _presheaves_. Here, \[\mathrm{Pure}_{\mathrm{MGL}}(k)\subseteq\mathrm{Mod}_{\mathrm{MGL}}(\mathrm{SH}(k))\] is the subcategory of modules of the form \(\mathrm{MGL}\otimes\Sigma_{+}^{\infty}X\), where \(X\) is smooth and proper. As explained in the work of Elmanto-Sosnilo [23, § 2.2.11], this equivalence is also a consequence of the existence of Bondarko's weight structure on \(\mathrm{MGL}\)-modules [9]. Note that if we replace \(\mathrm{MGL}\) with the motivic cohomology spectrum \(\mathrm{MZ}\), the equivalence (1.3.3) can be thought of as a homotopy-coherent refinement of the weight homology construction of Kelly-Saito [42, Theorem 2.3]. As an immediate consequence of Theorem 1.3.1, we deduce the following new universal property of \(\mathrm{SH}(k)[\sfrac{1}{e}]\). #### 1.3.4. Corollary _Let \(k\) be a field of exponential characteristic \(e\) and let \(\mathcal{C}\) be a cocomplete stable \(\infty\)-category. Then restriction along the inclusion defines an equivalence of \(\infty\)-categories_ \[\mathrm{Fun}^{\mathrm{colim}}(\mathrm{SH}(k)[\sfrac{1}{e}],\mathcal{C})\to\mathrm{Fun}^{\mathrm{cofib}}(\mathrm{Pure}(k),\mathcal{C})\] _between colimit-preserving functors \(\mathrm{SH}(k)[\sfrac{1}{e}]\to\mathcal{C}\) and functors \(\mathrm{Pure}(k)\to\mathcal{C}\) that preserve cofiber sequences._ The utility of Corollary 1.3.4 comes down to the fact that cofiber sequences \[A\to B\to C\xrightarrow{\partial}\Sigma A\] in \(\operatorname{Pure}(k)\) are easier to control than cofiber sequences of arbitrary motivic spectra. 
Indeed, since the \(\operatorname{MGL}[\nicefrac{1}{e}]\)-homology of a smooth proper \(k\)-scheme vanishes in negative Chow degree [7, Proposition 3.6(2)], the boundary map \[\partial\colon(\operatorname{MGL}\otimes C)[\nicefrac{1}{e}]\to(\operatorname{MGL}\otimes\Sigma A)[\nicefrac{1}{e}]\] is necessarily zero; see Proposition 3.2.6. This fact is essentially equivalent to the existence of Bondarko's weight structure on \(\operatorname{MGL}[\nicefrac{1}{e}]\)-modules. It follows that any additive functor which preserves \(\operatorname{MGL}[\nicefrac{1}{e}]\)-split cofiber sequences also preserves cofiber sequences of perfect pure motives. This implies Theorem 1.2.1: since any complex orientable \(A\in\operatorname{CAlg}(\operatorname{Sp})\) is a module over \(\operatorname{Be}(\operatorname{MGL})\simeq\operatorname{MU}\) in the homotopy category of spectra and Betti realization is symmetric monoidal, the functor \[S\mapsto\tau_{\geq*}\operatorname{Be}(S;A)\] preserves \(\operatorname{MGL}\)-split cofiber sequences. ### Filtered etale realization Since our construction of the filtered Betti realization is based on properties of the \(\infty\)-category of motivic spectra itself, rather than the target of a given realization, it also allows us to prove the existence of weight filtrations in other contexts. For example, let \(k\) be a field and let \(\ell\neq\operatorname{char}(k)\) be a prime. Write \[\operatorname{Re}_{\ell}\colon\operatorname{SH}(k)\to\operatorname{Sh}^{\operatorname{hyp}}_{\operatorname{\acute{e}t}}(\operatorname{\acute{E}t}_{k};\operatorname{Sp})^{\wedge}_{\ell}\,\] for the _\(\ell\)-adic etale realization_ functor valued in hypercomplete sheaves of \(\ell\)-complete spectra on the small etale site of \(k\); see § 2.4. The target can be thought of as the \(\infty\)-category of \(\ell\)-complete spectra equipped with a continuous action of the absolute Galois group \(\operatorname{Gal}(\bar{k}/k)\). We are able to equip the etale realization of any motivic spectrum over \(k\) with a weight filtration: **1.4.1 Theorem** (4.6.5).: _Let \(k\) be a field of exponential characteristic \(e\) and let \(\ell\neq e\) be a prime. Let \(A\in\operatorname{CAlg}(\operatorname{Sh}^{\operatorname{hyp}}_{\operatorname{\acute{e}t}}(\operatorname{\acute{E}t}_{k};\operatorname{Sp})^{\wedge}_{\ell})\) be complex orientable in the sense that there exists a morphism \(\operatorname{Re}_{\ell}(\operatorname{MGL})\to A\) of algebras in the homotopy category. There exists a unique colimit-preserving lax symmetric monoidal functor_ \[\operatorname{W}_{*}\operatorname{Re}_{\ell}(-;A)\colon\operatorname{SH}(k)[\nicefrac{1}{e}]\to\operatorname{Fil}(\operatorname{Sh}^{\operatorname{hyp}}_{\operatorname{\acute{e}t}}(\operatorname{\acute{E}t}_{k};\operatorname{Sp})^{\wedge}_{\ell})\] _valued in filtered hypersheaves such that for any \(S\in\operatorname{Pure}(k)\), we have_ \[\operatorname{W}_{*}\operatorname{Re}_{\ell}(S;A)\simeq\tau_{\geq*}(\operatorname{Re}_{\ell}(S;A))\.\] ### Descent and the Gillet-Soule filtration Let \(p\colon X\to\operatorname{Spec}(\mathbb{C})\) be a complex variety. Then \(X\) determines a motivic spectrum \[\operatorname{M}_{\operatorname{c}}(X):=p_{!}(\mathbf{1}_{X})\in\operatorname{SH}(\mathbb{C})\] that encodes the compactly supported cohomology of \(X\); see § 2.2. In Corollary 2.2.12, we show that this motivic spectrum is dualizable. 
Thus, if \(A\) is complex orientable, then by applying filtered Betti realization to \(\operatorname{M}_{\operatorname{c}}(X)\) and its dual \(\operatorname{M}_{\operatorname{c}}(X)^{\vee}\), we obtain filtered spectra \[\operatorname{W}_{*}\operatorname{Be}(\operatorname{M}_{\operatorname{c}}(X);A)\qquad\text{and}\qquad\operatorname{W}_{*}\operatorname{Be}(\operatorname{M}_{\operatorname{c}}(X)^{\vee};A)\.\] These filtered spectra provide filtrations on the compactly supported \(A\)-cohomology and Borel-Moore \(A\)-homology of \(X\), respectively. Analogously, applying the filtered etale realization of Theorem 1.4.1 we obtain filtrations on \(\ell\)-adic etale (co)homology over an arbitrary field. Since the weight filtrations considered in this paper are defined using a somewhat abstract characterization of the stable motivic category, it is natural to ask for an explicit way to calculate these filtrations only using varieties. In both the works of Deligne [16; 17; 18] and Gillet-Soule [26], the weight filtration is obtained by repeatedly invoking resolution of singularities to resolve the starting variety by smooth projective varieties. We show that the same method can be used in our context. Since we are also interested in the case of etale cohomology over fields of positive characteristic (where resolution of singularities is not known) we work with Kelly's _\(\ell\)dh-topology_ [40]. Recall that the \(\ell\)dh-topology is generated by the cdh-topology and finite flat and surjective maps of degree prime to \(\ell\); see § 5.1 for a brief review. By Gabber's \(\ell^{\prime}\)-alteration theorem [39, Expose IX, Theoreme 1.1], for any field \(k\) and prime \(\ell\neq\operatorname{char}(k)\), every \(k\)-variety admits an \(\ell\)dh-hypercover by regular \(k\)-varieties. Also note that any cdh-cover is an \(\ell\)dh-cover, so the latter notion is strictly more general than classical resolution of singularities. **1.5.1 Theorem** (5.2.3).: _Let \(k\) be a field and \(\ell\neq\operatorname{char}(k)\) a prime. If \(X_{\bullet}\to X\) is an \(\ell\)dh-hypercover of \(k\)-schemes, then the canonical map_ \[\operatorname{colim}_{\Delta^{\operatorname{op}}}\operatorname{M_{c}}(X_{\bullet})^{\vee}_{(\ell)}\to\operatorname{M_{c}}(X)^{\vee}_{(\ell)}\] _is an \(\operatorname{MGL}\)-local equivalence; that is, it becomes an equivalence after tensoring with \(\operatorname{MGL}\). In particular, it is \(\infty\)-connective with respect to the Chow-Novikov \(\operatorname{t}\)-structure._ As our filtered realization functors have coefficients in a complex oriented homology theory, they invert \(\operatorname{MGL}\)-local equivalences. Let us now explain how Theorem 1.5.1 gives an effective way of calculating the filtration on Borel-Moore homology. To treat both the Betti and etale cases uniformly, for a variety \(X\) and \(A\in\operatorname{CAlg}(\operatorname{Sp})\) complex orientable, we write \[\operatorname{C}_{*}^{\operatorname{BM}}(X;A):=\begin{cases}\operatorname{Be}(\operatorname{M_{c}}(X)^{\vee};A)&(\text{Betti})\\ \operatorname{Re}_{\ell}(\operatorname{M_{c}}(X)^{\vee}_{(\ell)};A)&(\text{etale})\.\end{cases}\] Informally, these are the \(A\)-linear Borel-Moore "cochains", although note that in the etale case it is a hypersheaf of spectra on the etale site of \(k\) rather than a spectrum itself. Using Theorems 1.2.1 and 1.4.1, these objects inherit canonical filtrations. **1.5.2 Theorem** (5.3.4).: _Let \(k\) be a field and let \(\ell\neq\operatorname{char}(k)\) be a prime. 
Let \(X\) be a proper \(k\)-scheme and let \(X_{\bullet}\to X\) be an \(\ell\)dh-hypercover such that for each \(i\geq 0\), the scheme \(X_{i}\) is smooth and projective. Then for any \(\ell\)-local \(A\) we have_ \[\operatorname{W_{*}}\operatorname{C}_{*}^{\operatorname{BM}}(X;A)\simeq\operatorname{colim}_{[i]\in\Delta^{\operatorname{op}}}\tau_{\geq*}\operatorname{C}_{*}^{\operatorname{BM}}(X_{i};A) \tag{1.5.3}\] _where the colimit is calculated in filtered \(\tau_{\geq*}A\)-modules. If \(X_{\bullet}\to X\) is a cdh-cover, then (1.5.3) holds for any \(A\) in which the exponential characteristic of \(k\) is invertible._ Note that the case of cohomology is more involved: although \(\operatorname{MGL}\)-locally the motivic spectrum \(\operatorname{M_{c}}(X)\) can be written as a totalization of its hypercover, the filtered realization functors need not preserve infinite limits. We analyze this situation in more detail in the case of classical integral cohomology of complex varieties, where we prove that the necessary limit can be replaced by a finite one. As a consequence, we deduce the comparison result with the Gillet-Soule filtration introduced in [26]. Given a complex variety, we write \(\operatorname{W_{*}^{GS}}\operatorname{C_{c}^{*}}(X(\mathbb{C});\mathbb{Z})\) for the Gillet-Soule weight filtration on the compactly supported integral cochains on \(X(\mathbb{C})\). **1.5.4 Theorem** (5.4.8).: _Let \(X\) be a complex variety. Then there exists a natural equivalence_ \[\operatorname{W_{*}}\operatorname{C_{c}^{*}}(X(\mathbb{C});\mathbb{Z})\simeq\operatorname{W_{*}^{GS}}\operatorname{C_{c}^{*}}(X(\mathbb{C});\mathbb{Z}) \tag{1.5.5}\] _of objects of the filtered derived \(\infty\)-category of \(\mathbb{Z}\). In other words, the filtration on compactly supported integral cochains inherited from the filtered Betti realization coincides with the Gillet-Soule filtration._ **1.5.6 Remark**.: In the case of a field of characteristic zero, an alternative way to construct filtrations on complex oriented, compactly supported cohomology appears in the recent work of Kuijper [44]. The filtrations constructed in this way also agree with the ones introduced in this paper; see Remark 5.4.16. ### Synthetic Betti realization In the case of the complex Betti realization we now describe how the weight filtration can be lifted to a filtration on the stable homotopy type itself. We believe that an analogous construction should yield a similar filtration in the real Betti and etale cases, and we sketch the conjectural picture in § 6.5. The monoidal unit \(\mathrm{S}^{0}\in\mathrm{Sp}\) of spectra is not complex orientable. However, the unit map \(\mathrm{S}^{0}\to\mathrm{MU}\) is faithfully flat and induces a cosimplicial resolution \[\mathrm{S}^{0}\xrightarrow{}\mathrm{MU}\xrightarrow{}\mathrm{MU}\otimes\mathrm{MU}\xrightarrow{}\dots\] through complex orientable ring spectra. Moreover, by the work of Hahn-Raksit-Wilson on the _even filtration_2 [30], this resolution is essentially universal with respect to this property. The limit of the associated diagram Footnote 2: To be more precise, the MU-resolution of the sphere is universal as a resolution of the sphere through commutative ring spectra with even homotopy groups, i.e., _even_ ring spectra. However, any even ring spectrum is complex orientable, and any complex orientable spectrum can be made into an MU-algebra in the homotopy category, so we blur the distinction here. 
of \(\infty\)-categories of filtered modules can be thought of as a natural target of a weight filtration functor. Even better, up to completion it can be identified with the \(\infty\)-category \(\mathrm{Syn}_{\mathrm{MU}}\) of _\(\mathrm{MU}\)-based synthetic spectra_ introduced by the second-named author in [55]. The \(\infty\)-category \(\mathrm{Syn}_{\mathrm{MU}}\) is best understood as an \(\infty\)-categorical deformation encoding chromatic homotopy theory. It is a symmetric monoidal stable \(\infty\)-category and its monoidal unit has a canonical (degree-shifting) endomorphism \(\tau\). This endomorphism \(\tau\) should be thought of as a formal parameter, and we have equivalences \[\mathrm{Syn}_{\mathrm{MU}}^{\tau=1}\simeq\mathrm{Sp}\] between the generic fiber and spectra, and \[\mathrm{Syn}_{\mathrm{MU}}^{\tau=0}\simeq\mathrm{IndCoh}(\mathcal{M}_{\mathrm{fg}}) \tag{1.6.1}\] between the special fiber and \(\mathrm{Ind}\)-coherent sheaves on the moduli stack of formal groups3. There is a canonical fully faithful embedding \(\nu\colon\mathrm{Sp}\hookrightarrow\mathrm{Syn}_{\mathrm{MU}}\) which reduces to the identity of spectra on the generic fiber and to the association Footnote 3: In this paper, we mostly work with all (that is, not necessarily even) synthetic spectra, so that the right-hand side of (1.6.1) is sheaves on the moduli of formal groups in the Dirac geometry of Lars Hesselholt and the second-named author; see [31, § 5.2]. It is a natural enlargement of \(\mathrm{Ind}\)-coherent sheaves on the classical moduli stack where the Lie algebra line bundle \(\omega\) has a canonical square root \(\omega^{\otimes 1/2}\). \[X\mapsto\mathrm{MU}_{*}(X)\in\mathrm{IndCoh}(\mathcal{M}_{\mathrm{fg}})^{\heartsuit}\] on the special fiber. For any spectrum \(X\), the \(\tau\)-adic filtration on \(\nu(X)\) encodes the Adams-Novikov spectral sequence calculating the stable homotopy groups \(\pi_{*}(X)\). By the work of Gheorghe-Krause-Isaksen-Ricka [25], synthetic spectra are equivalent to filtered modules over the sphere spectrum equipped with the filtration \[\mathrm{fil}^{*}(\mathrm{S}^{0}):=\lim_{[n]\in\Delta}\tau_{\geq*}(\mathrm{MU}^{\otimes n+1})\] given by descent along the faithfully flat map \(\mathrm{S}^{0}\to\mathrm{MU}\). This is essentially the filtration on the sphere spectrum known as the _Adams-Novikov filtration_4. Thus, the following realizes the promised weight filtration at the level of stable homotopy types: Footnote 4: To be more precise, [25] describes the subcategory of _even_ synthetic spectra as modules in \(\mathrm{FilSp}\) over the double-speed filtration \(\mathrm{fil}^{*}_{\mathrm{ev}}(\mathrm{S}^{0}):=\lim_{[n]\in\Delta}\tau_{\geq 2*}(\mathrm{MU}^{\otimes n+1})\). The filtration \(\mathrm{fil}^{*}_{\mathrm{ev}}(\mathrm{S}^{0})\) is what is typically referred to as the Adams-Novikov filtration. However, one can also describe the whole \(\infty\)-category \(\mathrm{Syn}_{\mathrm{MU}}\) as modules in \(\mathrm{FilSp}\) over \(\mathrm{fil}^{*}(\mathrm{S}^{0})\). This is analogous to the difference between the even filtration and its half-integer version; see [54, Remark 2.26]. 
**1.6.2 Theorem** (6.3.3).: _There exists a unique lax symmetric monoidal left adjoint_ \[\operatorname{Be}_{\operatorname{syn}}\colon\operatorname{SH}(\mathbb{C})\to\operatorname{Syn}_{\operatorname{MU}}\] _such that for each \(S\in\operatorname{Pure}(\mathbb{C})\), we have_ \[\operatorname{Be}_{\operatorname{syn}}(S)\simeq\nu(\operatorname{Be}(S))\.\] The functor \(\operatorname{Be}_{\operatorname{syn}}\) is not strongly symmetric monoidal. To see this, note that the reduction to the special fiber \(\operatorname{Syn}_{\operatorname{MU}}\to\operatorname{Syn}_{\operatorname{MU}}^{\tau=0}\) is strongly symmetric monoidal. By construction, when restricted to synthetic spectra of the form \(\operatorname{Be}_{\operatorname{syn}}(X)\) for \(X\in\operatorname{Pure}(\mathbb{C})\), this reduction takes the form \[X\mapsto\operatorname{MU}_{*}(X(\mathbb{C}))\.\] This functor is only lax symmetric monoidal: since \(\operatorname{MU}_{*}\) is not a field, the Kunneth map \[\operatorname{MU}_{*}(U)\underset{\operatorname{MU}_{*}}{\otimes}\operatorname{MU}_{*}(V)\to\operatorname{MU}_{*}(U\otimes V)\] is not generally an isomorphism. For the same reason, unless \(A_{*}\) is a field, the \(A\)-linear weight filtrations of Theorem 1.2.1 are only lax symmetric monoidal. The functor of Theorem 1.6.2 is weakly universal in the sense that if \(A\) is a complex orientable ring spectrum, there is a _realization functor_ \[\nu(A)\otimes_{\nu(\operatorname{S}^{0})}(-)\colon\operatorname{Syn}_{\operatorname{MU}}\to\operatorname{Mod}_{\tau_{\geq*}(A)}(\operatorname{FilSp})\] and a canonical natural transformation \[\nu(A)\otimes_{\nu(\operatorname{S}^{0})}\operatorname{Be}_{\operatorname{syn}}(-)\to\operatorname{W}_{*}\operatorname{Be}(-;A) \tag{1.6.3}\] of functors \[\operatorname{SH}(\mathbb{C})\to\operatorname{Mod}_{\tau_{\geq*}(A)}(\operatorname{FilSp})\.\] We say only "weakly universal", because, due to the failure of the Kunneth formula, the natural transformation (1.6.3) is _not_ generally an equivalence. In Theorem 6.4.6, we show that if the map \(\operatorname{Spec}(A_{*})\to\operatorname{\mathcal{M}}_{\operatorname{fg}}\) classifying the Quillen formal group is flat, then (1.6.3) _is_ an equivalence. In particular, this is the case when \(A=\mathbb{Q}\); hence our synthetic weight filtration refines Deligne's rational weight filtration. Similarly, for any ring map \(A\to B\) between complex orientable algebras in spectra, there is a comparison natural transformation \[\tau_{\geq*}(B)\underset{\tau_{\geq*}(A)}{\otimes}\operatorname{W}_{*}\operatorname{Be}(-;A)\to\operatorname{W}_{*}\operatorname{Be}(-;B)\.\] If \(A_{*}\to B_{*}\) is flat, then this map is an equivalence; see Corollary 4.5.4. Since the synthetic refinement of the weight filtration provided by Theorem 1.6.2 in particular encodes the stable homotopy type of the Betti realization, it is a much stronger invariant than the \(\mathbb{Z}\)-linear weight filtration. As one piece of evidence towards its strength, observe that since the underlying homotopy type of any complex motivic sphere has only even cells, the synthetic weight filtration restricts to a functor \[\operatorname{Be}_{\operatorname{syn}}\colon\operatorname{SH}(\mathbb{C})^{\operatorname{cell}}\to\operatorname{Syn}_{\operatorname{MU}}^{\operatorname{ev}}\] from the full subcategory spanned by _cellular_ motivic spectra into the full subcategory spanned by the even synthetic spectra. This restriction was previously constructed by the second-named author in [55, § 7.5]. 
There, it is shown that for any prime \(p\), this restriction becomes an equivalence \[(\operatorname{SH}(\mathbb{C})^{\operatorname{cell}})^{\wedge}_{p}\xrightarrow{\sim}(\operatorname{Syn}_{\operatorname{MU}}^{\operatorname{ev}})^{\wedge}_{p}\] after \(p\)-completion [55, Theorem 7.34]. In other words, in the context of \(p\)-complete cellular motivic spectra, the synthetic weight filtration is a complete invariant. **Linear overview.** For the convenience of the reader, in § 2, we recall the basics of motivic homotopy theory, Betti realization, and etale realization. We also prove a useful result that allows one to reduce statements about motives of arbitrary varieties to statements about motives of smooth proper varieties; see Lemma 2.2.11. In § 3, we prove Theorem 1.3.1. In § 4, we apply our new description of \(\operatorname{SH}(k)[\sfrac{1}{e}]\) to construct filtered refinements of Betti and etale realization; this proves Theorems 1.2.1 and 1.4.1. In § 5, given a complex variety \(X\), we show that our filtration on the compactly supported integral cochains \(\operatorname{C}_{\mathrm{c}}^{*}(X(\mathbb{C});\mathbb{Z})\) agrees with the filtration defined by Gillet and Soule. See Theorem 5.4.8. In § 6, we construct the synthetic Betti realization functor \[\operatorname{Be}_{\mathrm{syn}}\colon\operatorname{SH}(\mathbb{C})\to\operatorname{Syn}_{\mathrm{MU}}\] of Theorem 1.6.2 and compare synthetic Betti realization to filtered Betti realization. See Theorems 6.3.3 and 6.4.6. We conclude the paper by giving a conjectural description of a synthetic lift of a general motivic realization functor; see § 6.5. **Acknowledgements.** We would like to thank Tom Bachmann, Bhargav Bhatt, Elden Elmanto, Shane Kelly, Adeel A. Khan, Hana Jia Kong, Jacob Lurie, Vova Sosnilo, and Mura Yakerson for insightful conversations related to this work. ## 2. Recollections on motivic homotopy theory In this section, we review some of the basic tools we need from stable motivic homotopy theory. Our account is quite brief; for more details we refer the reader to [13; 36; 47, § 2]. In § 2.1, we recall the basic setup of stable motivic homotopy theory and the six operations. In § 2.2, we collect some basic facts about compactly supported motives attached to schemes. In §§ 2.3 and 2.4, we recall the basics of Betti realization and etale realization, respectively. ### Motivic spectra and the six operations Given a scheme \(S\), we write \(\operatorname{Sm}_{S}\) for the category of smooth \(S\)-schemes. Informally, the \(\infty\)-category of motivic spectra over \(S\) has the same relationship to \(\operatorname{Sm}_{S}\) as the topologists' \(\infty\)-category of spectra has to the category of finite CW-complexes. #### 2.1.1. Recollection To each scheme \(S\) we associate the symmetric monoidal \(\infty\)-category \(\operatorname{SH}(S)\) of _motivic spectra over \(S\)_. This \(\infty\)-category comes equipped with a symmetric monoidal functor \[\Sigma_{+}^{\infty}\colon\operatorname{Sm}_{S}\to\operatorname{SH}(S)\,\] where \(\operatorname{Sm}_{S}\) has the symmetric monoidal structure given by the cartesian product. This construction has the following properties: 1. The \(\infty\)-category \(\operatorname{SH}(S)\) is stable, presentable, and its tensor product preserves colimits separately in each variable. 2. The functor \(\Sigma_{+}^{\infty}\colon\operatorname{Sm}_{S}^{\mathrm{op}}\to\operatorname{SH}(S)^{\mathrm{op}}\) is a sheaf with respect to the Nisnevich topology. 3. 
For each \(X\in\operatorname{Sm}_{S}\), the projection \(X\times\mathbb{A}^{1}\to X\) induces an equivalence \(\Sigma_{+}^{\infty}(X\times\mathbb{A}^{1})\xrightarrow{\sim}\Sigma_{+}^{\infty}X\). 4. The _Tate motive_ given by the cofiber \[\operatorname{S}^{2,1}:=\operatorname{cofib}(\infty\colon\Sigma_{+}^{\infty}S\to\Sigma_{+}^{\infty}(\mathbb{P}_{S}^{1}))\] of the point at infinity is \(\otimes\)-invertible in \(\operatorname{SH}(S)\). Moreover, \(\operatorname{SH}(S)\) is initial with respect to these properties; that is, given any symmetric monoidal functor \(F\colon\operatorname{Sm}_{S}\to\mathcal{C}\) satisfying properties (1)-(4), there exists a unique colimit-preserving symmetric monoidal functor \(\widetilde{F}\) fitting into a commutative triangle; see [59, Corollary 2.39]. #### 2.1.2. **Recollection** (bigraded homotopy groups).: Given integers \(a,b\in\mathbb{Z}\), we have _bigraded spheres_ \[\mathrm{S}^{a,b}:=\Sigma^{a-2b}(\mathrm{S}^{2,1})^{\otimes b}\in\mathrm{SH}(S)\,\] where \(\mathrm{S}^{2,1}\) is the Tate motive. Since the Tate motive is \(\otimes\)-invertible, all bigraded spheres \(\mathrm{S}^{a,b}\) are also \(\otimes\)-invertible. Moreover, \(\mathrm{S}^{0,0}\) is the monoidal unit of \(\mathrm{SH}(S)\). For any motivic spectrum \(E\in\mathrm{SH}(S)\), the _bigraded homotopy groups_ of \(E\) are defined as \[\pi_{p,q}E:=\pi_{0}\operatorname{Map}_{\mathrm{SH}(S)}(\mathrm{S}^{p,q},E)\,\] i.e., the homotopy classes of maps from bigraded spheres. #### 2.1.3. **Notation** (Thom spectra).: Let \(S\) be a scheme. Given a K-theory class \(\eta\in\mathrm{K}_{0}(S)\), we write \(\operatorname{Th}_{S}(\eta)\in\mathrm{SH}(S)\) for the _motivic Thom spectrum_ associated to \(\eta\). If the base scheme is clear, we simply write \(\operatorname{Th}(\eta)\) instead of \(\operatorname{Th}_{S}(\eta)\). Importantly, the Thom spectrum \(\operatorname{Th}(\eta)\) is \(\otimes\)-invertible in \(\mathrm{SH}(S)\) with inverse \(\operatorname{Th}(-\eta)\). We write \[\Sigma^{\eta}\colon\mathrm{SH}(S)\xrightarrow{\sim}\mathrm{SH}(S)\] for the functor \(\operatorname{Th}(\eta)\otimes(-)\). #### 2.1.4. **Notation** (Eilenberg-MacLane spectra).: Let \(S\) be a scheme and let \(R\) be an ordinary commutative ring. We write \(\operatorname{M}\!R_{S}\in\mathrm{SH}(S)\) for the _motivic Eilenberg-MacLane spectrum_ representing motivic cohomology with coefficients in \(R\). Note that \(\operatorname{M}\!R_{S}\) is naturally a commutative algebra in \(\mathrm{SH}(S)\). When it does not lead to confusion, we simply write \(\operatorname{M}\!R\) instead of \(\operatorname{M}\!R_{S}\). #### 2.1.5. **Recollection** (relation to Voevodsky motives).: Given a scheme \(S\), write \(\operatorname{DM}(S)\) for Voevodsky's \(\infty\)-category of motives over \(S\). If \(S\) is regular over a field with resolution of singularities, then there is an equivalence of symmetric monoidal \(\infty\)-categories \[\operatorname{DM}(S)\simeq\operatorname{Mod}_{\operatorname{MZ}}(\mathrm{SH}(S))\] between \(\operatorname{DM}(S)\) and modules in \(\mathrm{SH}(S)\) over the motivic Eilenberg-MacLane spectrum \(\operatorname{MZ}\). See [22, 12, 60]. We now review the basics of functoriality of the construction \(S\mapsto\mathrm{SH}(S)\). Our account is brief; see [14, § 1; 15, § 2.1] for a more thorough review. #### 2.1.6. 
**Recollection** For every morphism of schemes \(f\colon X\to Y\), we have an adjunction \[f^{*}\colon\mathrm{SH}(Y)\rightleftarrows\mathrm{SH}(X):f_{*}\.\] The functor \(f^{*}\) is the unique symmetric monoidal left adjoint that extends the functor \(\operatorname{Sm}_{Y}\to\mathrm{SH}(X)\) given by \[S\mapsto\Sigma_{+}^{\infty}(X\times_{Y}S)\.\] If \(f\colon X\to Y\) is smooth, then the forgetful functor \(\operatorname{Sm}_{X}\to\operatorname{Sm}_{Y}\) induces a functor \[f_{\sharp}\colon\mathrm{SH}(X)\to\mathrm{SH}(Y)\] that is left adjoint to \(f^{*}\). Importantly, \(f_{\sharp}(\mathbf{1}_{X})\simeq\Sigma_{+}^{\infty}X\). #### 2.1.7. **Recollection** (exceptional adjoints).: If \(f\colon X\to Y\) is a morphism locally of finite type, we have an 'exceptional' adjunction \[f_{!}\colon\mathrm{SH}(X)\rightleftarrows\mathrm{SH}(Y):f^{!}\] along with a natural transformation \(f_{!}\to f_{*}\). These functors are more difficult to construct, but the following are their main features from the perspective of the present work: #### 2.1.8. **Recollection** (compatibilities between the six functors).: Let \(f\colon X\to Y\) be a morphism locally of finite type. The following hold: 1. If \(f\) is proper, then \(f_{!}\simeq f_{*}\). 2. If \(f\) is etale, then \[f_{!}\simeq f_{\sharp}\qquad\text{and}\qquad f^{!}\simeq f^{*}\.\] Combined with (1) we see that for any factorization \(f=p\circ j\), where \(j\) is an open immersion and \(p\) is proper, we have \[f_{!}\simeq p_{*}\circ j_{\sharp}\.\] 3. _Atiyah duality:_ If \(f\) is smooth with relative tangent bundle \(\mathrm{T}_{f}\), then there are equivalences \[\Sigma^{-\mathrm{T}_{f}}\circ f^{!}\simeq f^{*}\qquad\text{and}\qquad f_{!}\circ\Sigma^{\mathrm{T}_{f}}\simeq f_{\sharp}\.\] 4. _Projection formula:_ There is a natural equivalence \[f_{!}(-\otimes f^{*}(-))\simeq f_{!}(-)\otimes(-)\] of functors \(\mathrm{SH}(X)\times\mathrm{SH}(Y)\to\mathrm{SH}(Y)\). 5. _Smooth projection formula:_ If \(f\) is smooth, there is a natural equivalence \[f_{\sharp}(-\otimes f^{*}(-))\simeq f_{\sharp}(-)\otimes(-)\] of functors \(\mathrm{SH}(X)\times\mathrm{SH}(Y)\to\mathrm{SH}(Y)\). 6. _Basechange:_ Given a cartesian square in which \(\bar{f}\) and \(\bar{p}\) denote the basechanges of \(f\) and \(p\), where \(f\) is locally of finite type, we have natural equivalences \[p^{*}f_{!}\simeq\bar{f}_{!}\bar{p}^{*}\qquad\text{and}\qquad\bar{p}_{*}\bar{f}^{!}\simeq f^{!}p_{*}\.\] 7. _Gluing:_ Given a closed immersion \(i\colon Z\hookrightarrow X\) with open complement \(j\colon U\hookrightarrow X\), there are natural cofiber sequences \[j_{!}j^{!}\xrightarrow{}\operatorname{id}_{\mathrm{SH}(X)}\xrightarrow{}i_{*}i^{*}\] and \[i_{!}i^{!}\xrightarrow{}\operatorname{id}_{\mathrm{SH}(X)}\xrightarrow{}j_{*}j^{*}\] of exact functors \(\mathrm{SH}(X)\to\mathrm{SH}(X)\). #### 2.1.9. Remark Let \(f\colon X\to Y\) be a smooth morphism of schemes. Then \(f_{!}\colon\mathrm{SH}(X)\to\mathrm{SH}(Y)\) preserves compact objects. To see this, observe that by Atiyah duality, the right adjoint to \(f_{!}\) is given by \(f^{!}\simeq\Sigma^{\mathrm{T}_{f}}\circ f^{*}\), hence preserves all colimits. #### 2.1.10. Recollection ([7, Lemma 2.5]) Let \(f\colon X\to S\) be a smooth proper morphism of schemes. Given a class \(\eta\in\mathrm{K}_{0}(X)\), we write \[\mathrm{Th}_{S}(\eta):=f_{\sharp}\,\mathrm{Th}_{X}(\eta)\.\] Write \(\mathrm{T}_{X}\) for the tangent bundle of \(X\). Then the motivic spectrum \(\mathrm{Th}_{S}(\eta)\) is dualizable in \(\mathrm{SH}(S)\) with dual \(\mathrm{Th}_{S}(-\eta-\mathrm{T}_{X})\). 
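For instance (a standard special case, recorded here for orientation): taking \(\eta=0\) in Recollection 2.1.10 gives \(\mathrm{Th}_{S}(0)=f_{\sharp}(\mathbf{1}_{X})\simeq\Sigma_{+}^{\infty}X\), so for \(f\colon X\to S\) smooth and proper the suspension spectrum is dualizable with
\[(\Sigma_{+}^{\infty}X)^{\vee}\simeq\mathrm{Th}_{S}(-\mathrm{T}_{X})\;\]
if \(\mathrm{T}_{X}\) is moreover trivial of rank \(d\), this dual is \(\Sigma^{-2d,-d}\Sigma_{+}^{\infty}X\). This special case reappears as Observation 2.2.3 below.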
Using the six functors, we can define (co)homology theories associated to morphisms of schemes: #### 2.1.11. Recollection (cohomology) Fix a base scheme \(S\) and a motivic spectrum \(E\in\mathrm{SH}(S)\). Let \(p\colon X\to S\) be a morphism of schemes and \(a,b\in\mathbb{Z}\). Then: 1. We have the motivic spectrum \(p_{*}p^{*}(E)\in\mathrm{SH}(S)\) encoding the _\(E\)-cohomology_ of \(X\). We write \[E^{a,b}(X/S):=\pi_{-a,-b}(p_{*}p^{*}(E))\.\] 2. If \(p\colon X\to S\) is locally of finite type, we have the motivic spectrum \(p_{*}p^{!}(E)\in\operatorname{SH}(S)\) encoding the _Borel-Moore \(E\)-homology_ or _bivariant \(E\)-homology_ of \(X\). We write \[E^{\operatorname{BM}}_{a,b}(X/S):=\pi_{a,b}(p_{*}p^{!}(E))\.\] 3. If \(p\colon X\to S\) is locally of finite type, we have the motivic spectrum \(p_{!}p^{*}(E)\in\operatorname{SH}(S)\) encoding the _compactly supported \(E\)-cohomology_ of \(X\). We write \[E^{a,b}_{\operatorname{c}}(X/S):=\pi_{-a,-b}(p_{!}p^{*}(E))\.\] 4. If \(p\colon X\to S\) is locally of finite type, we have the motivic spectrum \(p_{!}p^{!}(E)\in\operatorname{SH}(S)\) encoding the _\(E\)-homology_ of \(X\). We write \[E_{a,b}(X/S):=\pi_{a,b}(p_{!}p^{!}(E))\.\] ### Compactly supported motives of schemes As surveyed in Recollection 2.1.11, the six functor formalism provides a very general form of cohomology theory. However, it is often convenient to work with an alternative description, obtained by attaching to any \(S\)-scheme a suitable motivic spectrum. The relationships between schemes (such as an open-closed decomposition) can then be encoded via relationships between these motivic spectra. **2.2.1 Definition**.: Let \(p\colon X\to S\) be a locally of finite type morphism of schemes. The _(compactly supported) motive associated to \(X\)_ is the motivic spectrum over \(S\) given by \[\operatorname{M_{c}}(X/S):=p_{!}(\mathbf{1}_{X})\.\] If the base scheme \(S\) is clear from the context, then we simply write \(\operatorname{M_{c}}(X)\) for \(\operatorname{M_{c}}(X/S)\). **2.2.2 Example**.: Let \(k\) be a field and let \(p\colon X\to\operatorname{Spec}(k)\) be a smooth morphism with relative tangent bundle \(\operatorname{T}_{X}\). Then by Atiyah duality we have \[p_{!}\simeq p_{\sharp}\circ\Sigma^{-\operatorname{T}_{X}}\.\] It follows that \(\operatorname{M_{c}}(X)\) can be identified with the Thom spectrum \(\operatorname{Th}_{X}(-\operatorname{T}_{X})\) of the negative tangent bundle. This is, informally, a twisted form of the suspension spectrum of \(X\); in the particular case when \(X\) is a variety of dimension \(d\) with trivial tangent bundle, we have \[\operatorname{M_{c}}(X)\simeq\Sigma^{-2d,-d}\Sigma_{+}^{\infty}X\.\] **2.2.3 Observation**.: Let \(k\) be a field and let \(X\) be a smooth projective \(k\)-scheme. As a consequence of Atiyah duality, \(\operatorname{M_{c}}(X)\) is the monoidal dual of \(\Sigma_{+}^{\infty}X\); see [58, Theorem 2.2]. Moreover, Remark 2.1.9 shows that \(\operatorname{M_{c}}(X)\) is also compact. The compatibilities of the six functors show that the compactly supported motive \(\operatorname{M_{c}}(X/S)\) encodes both the Borel-Moore homology and compactly supported cohomology of \(X\) with coefficients in an arbitrary motivic spectrum over \(S\): **2.2.4 Observation**.: Let \(p\colon X\to S\) be a locally of finite type morphism of schemes and \(E\in\operatorname{SH}(S)\). 
Using the projection formula, the motivic spectrum encoding compactly supported \(E\)-cohomology can also be described as \[p_{!}p^{*}(E)\simeq p_{!}(\mathbf{1}_{X})\otimes E=\operatorname{M_{c}}(X/S)\otimes E\.\] **2.2.5 Observation**.: Let \(p\colon X\to S\) be a locally of finite type morphism of schemes and \(E\in\operatorname{SH}(S)\). Using the fact that \(p_{*}p^{!}\) is right adjoint to \(p_{!}p^{*}\), we have equivalences \[p_{*}p^{!}(E)\simeq\operatorname{Hom}_{\operatorname{SH}(S)}(\mathbf{1}_{S},p_{*}p^{!}(E))\simeq\operatorname{Hom}_{\operatorname{SH}(S)}(p_{!}p^{*}(\mathbf{1}_{S}),E)\simeq\operatorname{Hom}_{\operatorname{SH}(S)}(\operatorname{M_{c}}(X/S),E)\.\] #### 2.2.6. Observation As a consequence of Observations 2.2.4 and 2.2.5, the compactly supported cohomology and Borel-Moore homology of \(X\) over \(S\) are computed by \[E_{\mathrm{c}}^{a,b}(X/S)\simeq\pi_{-a,-b}(\mathrm{M}_{\mathrm{c}}(X/S)\otimes E)\] and \[E_{a,b}^{\mathrm{BM}}(X/S)\simeq\pi_{a,b}\operatorname{Hom}_{\mathrm{SH}(S)}(\mathrm{M}_{\mathrm{c}}(X/S),E)\.\] #### 2.2.7. Warning The isomorphisms of Observation 2.2.6 are opposite to the ones appearing in topology for usual homology and cohomology. That is, it is homology which is defined by mapping into a spectrum and cohomology which is defined using the tensor product. This is because \(\mathrm{M}_{\mathrm{c}}(X)\) encodes the _compactly supported_ theories: \(\mathrm{M}_{\mathrm{c}}(X)\) should be thought of as a "cohomological" motive, as witnessed by its contravariant functoriality in Construction 2.2.9. The formation of the compactly supported motive of a scheme commutes with basechange: **2.2.8 Lemma**.: _Given a cartesian square of schemes in which \(p^{\prime}\colon X^{\prime}\to S^{\prime}\) and \(f^{\prime}\colon X^{\prime}\to X\) denote the basechanges of \(p\colon X\to S\) and \(f\colon S^{\prime}\to S\), where \(p\) is locally of finite type, there is an equivalence \(\mathrm{M}_{\mathrm{c}}(X^{\prime}/S^{\prime})\simeq f^{*}\mathrm{M}_{\mathrm{c}}(X/S)\)._ Proof.: Using the fact that pullback and exceptional pushforward satisfy basechange, we compute \[\mathrm{M}_{\mathrm{c}}(X^{\prime}/S^{\prime})=p^{\prime}_{!}(\mathbf{1}_{X^{\prime}})\simeq p^{\prime}_{!}(f^{\prime})^{*}(\mathbf{1}_{X})\simeq f^{*}p_{!}(\mathbf{1}_{X})=f^{*}\mathrm{M}_{\mathrm{c}}(X/S)\.\qed\] #### 2.2.9. Construction (functoriality of \(\mathrm{M}_{\mathrm{c}}\)) Consider a commutative triangle of locally of finite type morphisms of schemes, consisting of a morphism \(f\colon X\to Y\) together with structure morphisms \(p\colon X\to S\) and \(q\colon Y\to S\) satisfying \(q\circ f\simeq p\). The construction \(X\mapsto\mathrm{M}_{\mathrm{c}}(X/S)\) has the following functorialities: 1. _Contravariant functoriality in proper maps:_ Assume that \(f\) is proper. Using the equivalence \(f_{!}\simeq f_{*}\), the unit of the adjunction \(f^{*}\dashv f_{*}\) provides a map \[\mathrm{M}_{\mathrm{c}}(Y/S)=q_{!}(\mathbf{1}_{Y})\longrightarrow q_{!}f_{*}f^{*}(\mathbf{1}_{Y})\simeq q_{!}f_{!}f^{*}(\mathbf{1}_{Y})\simeq q_{!}f_{!}(\mathbf{1}_{X})\simeq p_{!}(\mathbf{1}_{X})=\mathrm{M}_{\mathrm{c}}(X/S)\.\] 2. _Covariant functoriality in etale maps:_ Assume that \(f\) is etale. Using the equivalence \(f^{*}\simeq f^{!}\) the counit map of \(f_{!}\dashv f^{!}\) yields a map \[\mathrm{M}_{\mathrm{c}}(X/S)=q_{!}f_{!}(\mathbf{1}_{X})\simeq q_{!}f_{!}f^{*}(\mathbf{1}_{Y})\simeq q_{!}f_{!}f^{!}(\mathbf{1}_{Y})\longrightarrow q_{!}(\mathbf{1}_{Y})=\mathrm{M}_{\mathrm{c}}(Y/S)\.\] **2.2.10 Lemma**.: _Let \(p\colon X\to S\) be a locally of finite type morphism of schemes. Let \(i\colon Z\hookrightarrow X\) be a closed immersion with open complement \(j\colon U\hookrightarrow X\). 
Then the induced maps \(\mathrm{M}_{\mathrm{c}}(U/S)\to\mathrm{M}_{\mathrm{c}}(X/S)\) and \(\mathrm{M}_{\mathrm{c}}(X/S)\to\mathrm{M}_{\mathrm{c}}(Z/S)\) assemble into a natural cofiber sequence_ \[\mathrm{M}_{\mathrm{c}}(U/S)\to\mathrm{M}_{\mathrm{c}}(X/S)\to\mathrm{M}_{\mathrm{c}}(Z/S)\] _in \(\mathrm{SH}(S)\)._ Proof.: There is a gluing cofiber sequence \[j_{!}j^{*}(\mathbf{1}_{X})\xrightarrow{}\mathbf{1}_{X}\xrightarrow{}i_{!}i^{*}(\mathbf{1}_{X})\] in \(\operatorname{SH}(X)\). Applying \(p_{!}\colon\operatorname{SH}(X)\to\operatorname{SH}(S)\) to this cofiber sequence and using the fact that \(i^{*}\) and \(j^{*}\) are symmetric monoidal, we obtain a cofiber sequence \[p_{!}j_{!}(\mathbf{1}_{U})\xrightarrow{}p_{!}(\mathbf{1}_{X})\xrightarrow{}p_{!}i_{!}(\mathbf{1}_{Z})\] in \(\operatorname{SH}(S)\). The claim now follows from the definition of the compactly supported motive of an \(S\)-scheme. The following is often useful, as it allows one to reduce statements about arbitrary varieties to statements about smooth proper varieties. **2.2.11 Lemma**.: _Let \(k\) be a field of exponential characteristic \(e\). Let \(\mathcal{C}\subseteq\operatorname{SH}(k)\mathopen{[\sfrac{1}{e}]}\) be a full subcategory with the following two properties:_ 1. _The subcategory_ \(\mathcal{C}\) _is closed under extensions, fibers, and retracts._ 2. _For each smooth projective_ \(k\)_-variety_ \(X\)_, we have_ \(\operatorname{M_{c}}(X)\mathopen{[\sfrac{1}{e}]}\in\mathcal{C}\)_._ _Then for any \(k\)-variety \(U\), we have \(\operatorname{M_{c}}(U)\mathopen{[\sfrac{1}{e}]}\in\mathcal{C}\)._ Proof.: We argue by induction on the dimension of \(U\). The base case is when \(\dim(U)=0\), so that \(U\) is projective. In this case, if \(k\) is perfect, then \(U\) is also smooth, and we are done. If \(k\) is not perfect, we consider the perfection \(r\colon k\hookrightarrow k^{\prime}\) given by the colimit over the Frobenius morphism. By a result of Elmanto-Khan [21, Corollary 2.1.7], the pullback functor \[r^{*}\colon\operatorname{SH}(k)\mathopen{[\sfrac{1}{e}]}\to\operatorname{SH}(k^{\prime})\mathopen{[\sfrac{1}{e}]}\] is an equivalence. Writing \(U^{\prime}\) for the basechange of \(U\) to \(k^{\prime}\), Lemma 2.2.8 shows that \[r^{*}\operatorname{M_{c}}(U/k)\simeq\operatorname{M_{c}}(U^{\prime}/k^{\prime})\.\] Write \(\operatorname{\acute{E}t}_{k}\) and \(\operatorname{\acute{E}t}_{k^{\prime}}\) for the small etale sites of \(k\) and \(k^{\prime}\), respectively. Since \(r\) is a universal homeomorphism, the topological invariance of the etale site [28, Expose IX, Theoreme 4.10; 3, Expose VIII, Theoreme 1.1] implies that the basechange functor \[\operatorname{\acute{E}t}_{k}\to\operatorname{\acute{E}t}_{k^{\prime}}\] is an equivalence of categories. It follows that there exists a zero-dimensional etale \(k\)-scheme \(V\) such that \(V^{\prime}\simeq U^{\prime}\) as \(k^{\prime}\)-schemes. Again applying Lemma 2.2.8, we see that \[r^{*}\operatorname{M_{c}}(V/k)\mathopen{[\sfrac{1}{e}]}\simeq r^{*}\operatorname{M_{c}}(U/k)\mathopen{[\sfrac{1}{e}]}\.\] Since \(r^{*}\) is fully faithful, we deduce that \(\operatorname{M_{c}}(V)\mathopen{[\sfrac{1}{e}]}\simeq\operatorname{M_{c}}(U)\mathopen{[\sfrac{1}{e}]}\). By assumption, \(\operatorname{M_{c}}(V)\mathopen{[\sfrac{1}{e}]}\in\mathcal{C}\), hence \(\operatorname{M_{c}}(U)\mathopen{[\sfrac{1}{e}]}\in\mathcal{C}\) as well. For the induction step, assume that \(\dim(U)>0\) and that for each \(k\)-variety \(Z\) such that \(\dim(Z)<\dim(U)\), we have \(\operatorname{M_{c}}(Z)\mathopen{[\sfrac{1}{e}]}\in\mathcal{C}\). 
By Lemma 2.2.10, for any closed \(Z\subseteq U\) we have a cofiber sequence \[\operatorname{M_{c}}(U\smallsetminus Z)\mathopen{[\sfrac{1}{e}]}\to\operatorname{M_{c}}(U)\mathopen{[\sfrac{1}{e}]}\to\operatorname{M_{c}}(Z)\mathopen{[\sfrac{1}{e}]}\.\] Hence it is enough to show that, after possibly replacing \(U\) by an open dense subset, we have \(\operatorname{M_{c}}(U)\mathopen{[\sfrac{1}{e}]}\in\mathcal{C}\). Applying Lemma 2.2.10 to a decomposition into connected components, we can assume that \(U\) is connected. By further shrinking \(U\), we can also assume that \(U\) is smooth with trivial tangent bundle. By the theory of alterations we can find a finite etale cover \(V\to U\) of degree coprime to \(e\) such that \(V\) is an open dense subset of a smooth and projective \(k\)-variety \(X\). By the inductive hypothesis and an application of Lemma 2.2.10, we deduce that \(\operatorname{M_{c}}(V)\mathopen{[\sfrac{1}{e}]}\in\mathcal{C}\). We want to deduce the same for \(\operatorname{M_{c}}(U)\mathopen{[\sfrac{1}{e}]}\). Since both \(U\) and \(V\) have trivial tangent bundles, Example 2.2.2 shows that \[\operatorname{M_{c}}(U)\simeq\Sigma^{-2d,-d}\Sigma^{\infty}_{+}U\qquad\text{and}\qquad\operatorname{M_{c}}(V)\simeq\Sigma^{-2d,-d}\Sigma^{\infty}_{+}V\,\] where \(d\) denotes the common dimension of \(U\) and \(V\). We deduce from [48, Lemma B.3] that after possibly shrinking \(U\), the motivic spectrum \(\operatorname{M_{c}}(U)[\sfrac{1}{e}]\) is a retract of \(\operatorname{M_{c}}(V)[\sfrac{1}{e}]\), ending the argument. **2.2.12 Corollary**.: _Let \(k\) be a field of exponential characteristic \(e\). Then for any \(k\)-variety \(X\), the motivic spectrum \(\operatorname{M_{c}}(X)[\sfrac{1}{e}]\) is a compact and dualizable object of \(\operatorname{SH}(k)[\sfrac{1}{e}]\)._ Proof.: Since compact and dualizable objects form a stable subcategory, this follows from Lemma 2.2.11 and the smooth projective case of Observation 2.2.3. ### Betti realization We now recall the basics of the two Betti realization functors in characteristic zero. The first is defined over the complex numbers. #### 2.3.1. Construction (complex Betti realization).: The functor \(\operatorname{Sm_{\mathbb{C}}}\to\operatorname{Spc}\) sending a smooth \(\mathbb{C}\)-scheme \(X\) to the underlying homotopy type of the topological space \(X(\mathbb{C})\) with the analytic topology is \(\mathbb{A}^{1}\)-invariant, sends elementary Nisnevich squares to pullback squares, and preserves finite products. Moreover, the functor \(\operatorname{Sm_{\mathbb{C}}}\to\operatorname{Sp}\) given by \(X\mapsto\Sigma_{+}^{\infty}X(\mathbb{C})\) also inverts the Tate motive. As a consequence of the universal property of motivic spectra, this functor uniquely extends to a symmetric monoidal left adjoint \[\operatorname{Be}\colon\operatorname{SH}(\mathbb{C})\to\operatorname{Sp}\] referred to as _Betti realization_. #### 2.3.2. Example ([45, Proposition 5.10]).: There is a natural equivalence \[\operatorname{Be}(\operatorname{M}R)\simeq\operatorname{H}R\] between the Betti realization of the motivic Eilenberg-MacLane spectrum \(\operatorname{M}R\) and the usual Eilenberg-MacLane spectrum of \(R\). #### 2.3.3. Example There is an equivalence \[\operatorname{Be}(\operatorname{MGL})\simeq\operatorname{MU}\] of commutative algebras in \(\operatorname{Sp}\). #### 2.3.4. Construction (\(\operatorname{C_{2}}\)-Betti realization).: Similarly, if \(X\) is a smooth \(\mathbb{R}\)-scheme, then the complex points \(X(\mathbb{C})\) acquire an action of the Galois group \(\operatorname{C_{2}}:=\operatorname{Gal}(\mathbb{C}/\mathbb{R})\). 
The underlying homotopy type of \(X(\mathbb{C})\) refines to a genuine \(\operatorname{C_{2}}\)-space. Again by the universal property of motivic spectra, the functor \[\operatorname{Sm_{\mathbb{R}}} \to\operatorname{Sp_{\operatorname{C_{2}}}}\] \[X \mapsto\Sigma_{\operatorname{C_{2}},+}^{\infty}X(\mathbb{C})\] uniquely extends to a symmetric monoidal left adjoint \[\operatorname{Be_{\operatorname{C_{2}}}}\colon\operatorname{SH}(\mathbb{R})\to\operatorname{Sp_{\operatorname{C_{2}}}}\] valued in genuine \(\operatorname{C_{2}}\)-spectra. This functor is referred to as _\(\operatorname{C_{2}}\)-Betti realization_.

### Etale realization

Let \(k\) be a separably closed field and \(\ell\) a prime different from \(\operatorname{char}(k)\). We now explain a construction of an etale realization functor from \(\operatorname{SH}(k)\) to \(\ell\)-complete spectra. In fact, we give a more general construction that works over any base scheme.

#### 2.4.1. Notation Let \(S\) be a scheme. Write \(\operatorname{\acute{E}t}_{S}\subseteq\operatorname{Sm}_{S}\) for the full subcategory spanned by the etale \(S\)-schemes. Giving both of these categories the _etale_ topology, the inclusion \(\operatorname{\acute{E}t}_{S}\subseteq\operatorname{Sm}_{S}\) is a morphism of sites that satisfies the covering lifting property. In particular, this inclusion induces a fully faithful symmetric monoidal pullback functor \[i^{*}\colon\operatorname{Sh}_{\operatorname{\acute{e}t}}^{\operatorname{hyp}}(\operatorname{\acute{E}t}_{S};\operatorname{Sp})\hookrightarrow\operatorname{Sh}_{\operatorname{\acute{e}t}}^{\operatorname{hyp}}(\operatorname{Sm}_{S};\operatorname{Sp})\] on etale hypersheaves of spectra.

#### 2.4.2. Notation Let \(S\) be a scheme. Write \(\operatorname{SH}_{\operatorname{\acute{e}t}}(S)\) for the localization of \(\operatorname{SH}(S)\) at the desuspensions of etale hypercoverings. Write \(\operatorname{L}_{\operatorname{\acute{e}t}}\colon\operatorname{SH}(S)\to\operatorname{SH}_{\operatorname{\acute{e}t}}(S)\) for the symmetric monoidal localization functor.

#### 2.4.3. Equivalently, the \(\infty\)-category \(\operatorname{SH}_{\operatorname{\acute{e}t}}(S)\) can be obtained by first taking \(\mathbb{A}^{1}\)-local objects in the \(\infty\)-topos \(\operatorname{Sh}_{\operatorname{\acute{e}t}}^{\operatorname{hyp}}(\operatorname{Sm}_{S})\) of etale hypersheaves of spaces on smooth \(S\)-schemes, then \(\mathbb{P}^{1}\)-stabilizing. As a result, there is a natural symmetric monoidal left adjoint \[\operatorname{Sh}_{\operatorname{\acute{e}t}}^{\operatorname{hyp}}(\operatorname{Sm}_{S};\operatorname{Sp})\to\operatorname{SH}_{\operatorname{\acute{e}t}}(S)\.\]

#### 2.4.4. Notation Let \(\mathcal{C}\) be a presentable stable \(\infty\)-category and \(\ell\) a prime number. A morphism \(f\colon X\to Y\) in \(\mathcal{C}\) is an _\(\ell\)-equivalence_ if \(\operatorname{cofib}(f)/\ell=0\). We write \(\mathcal{C}_{\ell}^{\wedge}\subseteq\mathcal{C}\) for the localization of \(\mathcal{C}\) at the \(\ell\)-equivalences. We refer to \(\mathcal{C}_{\ell}^{\wedge}\) as the subcategory of _\(\ell\)-complete_ objects. The inclusion \(\mathcal{C}_{\ell}^{\wedge}\subseteq\mathcal{C}\) admits a left adjoint that we denote by \((-)_{\ell}^{\wedge}\colon\mathcal{C}\to\mathcal{C}_{\ell}^{\wedge}\).

The following rigidity result of Bachmann generalizes work of Ayoub [4, § 5] as well as earlier work by Bachmann [6, Theorem 6.6].

#### 2.4.5.
Theorem (rigidity [5, Theorem 3.1]) _Let \(S\) be a scheme and \(\ell\) a prime number invertible on \(S\). Then the natural symmetric monoidal left adjoint_ \[\operatorname{Sh}_{\operatorname{\acute{e}t}}^{\operatorname{hyp}}(\operatorname{\acute{E}t}_{S};\operatorname{Sp})_{\ell}^{\wedge}\xrightarrow{(i^{*})_{\ell}^{\wedge}}\operatorname{Sh}_{\operatorname{\acute{e}t}}^{\operatorname{hyp}}(\operatorname{Sm}_{S};\operatorname{Sp})_{\ell}^{\wedge}\xrightarrow{\sim}\operatorname{SH}_{\operatorname{\acute{e}t}}(S)_{\ell}^{\wedge}\] _is an equivalence._

#### 2.4.6. Definition (etale realization) Let \(S\) be a scheme and \(\ell\) a prime number invertible on \(S\). The _\(\ell\)-adic etale realization functor_ is the composite \[\operatorname{Re}_{\ell}\colon\operatorname{SH}(S)\xrightarrow{\operatorname{L}_{\operatorname{\acute{e}t}}}\operatorname{SH}_{\operatorname{\acute{e}t}}(S)\xrightarrow{(-)_{\ell}^{\wedge}}\operatorname{SH}_{\operatorname{\acute{e}t}}(S)_{\ell}^{\wedge}\xrightarrow{\sim}\operatorname{Sh}_{\operatorname{\acute{e}t}}^{\operatorname{hyp}}(\operatorname{\acute{E}t}_{S};\operatorname{Sp})_{\ell}^{\wedge}\.\] Here the last equivalence is the inverse of the rigidity equivalence of Theorem 2.4.5. Note that \(\operatorname{Re}_{\ell}\) is a composite of symmetric monoidal left adjoints, hence is a symmetric monoidal left adjoint.

#### 2.4.7. Example Let \(k\) be a separably closed field and \(\ell\neq\operatorname{char}(k)\). Then \(\ell\)-adic etale realization provides a symmetric monoidal left adjoint \[\operatorname{Re}_{\ell}\colon\operatorname{SH}(k)\to\operatorname{Sp}_{\ell}^{\wedge}\] to \(\ell\)-complete spectra.

## 3. Motivic spectra as sheaves on pure motives

Let \(k\) be a field of exponential characteristic \(e\). Our goal in this section is to describe the \(\infty\)-category \(\operatorname{SH}(k)[\sfrac{1}{e}]\) of motivic spectra away from the characteristic in terms of motives of smooth proper \(k\)-schemes (see Theorem 3.3.5). In § 3.1 we introduce a subcategory \(\operatorname{Pure}(k)\subseteq\operatorname{SH}(k)[\sfrac{1}{e}]\) of pure motives and explore its basic properties. In § 3.2 we characterize the cofiber sequences in \(\operatorname{Pure}(k)\); see Proposition 3.2.6. In § 3.3 we prove our alternative description of \(\operatorname{SH}(k)[\sfrac{1}{e}]\).

#### 3.0.1. Notation Let \(k\) be a field of exponential characteristic \(e\). For the remainder of this section, we simply write \[\operatorname{SH}(k):=\operatorname{SH}(k)[\sfrac{1}{e}]\] for the localization of the stable motivic category away from the exponential characteristic. All of the motivic spectra appearing below are implicitly localized as well.

### Perfect pure motivic spectra

We start by introducing the subcategory of 'pure motives' relevant for our work. Our definition is inspired by Bachmann, Kong, Wang, and Xu's recent introduction of the _Chow-Novikov \(\operatorname{t}\)-structure_ on motivic spectra [7].

#### 3.1.1. Definition We write \[\operatorname{Pure}(k)\subseteq\operatorname{SH}(k)\] for the smallest subcategory closed under extensions and retracts which contains the Thom spectrum \(\operatorname{Th}(\eta)\) for any smooth proper \(k\)-scheme \(X\) and any class \(\eta\in\operatorname{K}_{0}(X)\). We say a motivic spectrum \(A\) is _perfect pure_ if \(A\in\operatorname{Pure}(k)\).

#### 3.1.2.
Remark The connective part \(\operatorname{SH}(k)_{c\geq 0}\) of the Chow-Novikov t-structure is the closure of \(\operatorname{Pure}(k)\subseteq\operatorname{SH}(k)\) under colimits and extensions.

We begin by enumerating the basic features of \(\operatorname{Pure}(k)\).

**3.1.3 Lemma**.: _The following statements hold:_

1. _Every object of_ \(\operatorname{Pure}(k)\) _is dualizable in_ \(\operatorname{SH}(k)\)_._
2. _The subcategory_ \(\operatorname{Pure}(k)\subseteq\operatorname{SH}(k)\) _is closed under monoidal duals._
3. _Every object of_ \(\operatorname{Pure}(k)\) _is compact in_ \(\operatorname{SH}(k)\)_._
4. _The subcategory_ \(\operatorname{Pure}(k)\subseteq\operatorname{SH}(k)\) _is closed under tensor products._

Proof.: Items (1) and (2) are immediate from the definition of \(\operatorname{Pure}(k)\), Recollection 2.1.10, and the fact that dualizable objects are closed under extensions. Item (3) follows from item (1) and the fact that, since the unit of \(\operatorname{SH}(k)\) is compact, every dualizable object of \(\operatorname{SH}(k)\) is compact. For item (4), note that if \(X\) and \(X^{\prime}\) are smooth proper \(k\)-schemes and \(\eta\in\operatorname{K}_{0}(X)\) and \(\eta^{\prime}\in\operatorname{K}_{0}(X^{\prime})\), then \[\operatorname{Th}(\eta)\otimes\operatorname{Th}(\eta^{\prime})\simeq\operatorname{Th}(\eta\times\eta^{\prime})\.\] Hence the claim follows from the definition of \(\operatorname{Pure}(k)\) and the fact that smooth proper \(k\)-schemes are closed under fiber products in \(\operatorname{Sm}_{k}\).

#### 3.1.4. Warning Definition 3.1.1 is related to, but distinct from, the notion of a _pure motivic spectrum_ introduced in [7, Definition 2.10]. The subcategory of pure motivic spectra in the sense of Bachmann-Kong-Wang-Xu is the closure of \(\operatorname{Pure}(k)\) under filtered colimits and extensions. Using the fact that perfect pure motivic spectra are compact, it is not difficult to show that a pure motivic spectrum \(A\) is perfect pure if and only if \(A\) is compact.

**3.1.5 Remark**.: Since we work away from the characteristic, [7, Remark 2.19; 21, Theorem 3.2.1; 48, Proposition B.1] show that \(\operatorname{Pure}(k)\) generates \(\operatorname{SH}(k)\) under colimits and desuspensions.

An important class of examples of motivic spectra are Thom spectra associated to vector bundles on Grassmannians:

**3.1.6 Notation** (Grassmannians).: Let \(n\geq d\geq 0\) be integers. Write \[\operatorname{Gr}_{d}(n):=\operatorname{Gr}_{d}(\mathbb{A}_{k}^{n})\] for the Grassmannian of \(d\)-dimensional linear subspaces of \(\mathbb{A}_{k}^{n}\). Recall that \(\operatorname{Gr}_{d}(n)\) is a smooth projective variety of dimension \(d(n-d)\).

**3.1.7 Example**.: Write \(\gamma_{d,n}\) for the tautological bundle of rank \(d\) over \(\operatorname{Gr}_{d}(n)\) and \[\epsilon_{d,n}:=[\gamma_{d,n}]-[\mathcal{O}_{\operatorname{Gr}_{d}(n)}^{\oplus d}]\in\operatorname{K}_{0}(\operatorname{Gr}_{d}(n))\] for the associated virtual vector bundle of rank zero. Write \(\operatorname{Th}_{d}(n):=\operatorname{Th}(\epsilon_{d,n})\) for the associated Thom spectrum. Since \(\operatorname{Gr}_{d}(n)\) is smooth and proper, \(\operatorname{Th}_{d}(n)\) is perfect pure. Since \[\operatorname{MGL}\simeq\operatorname*{colim}_{d,n\to\infty}\operatorname{Th}_{d}(n+d)\,\] we deduce that \(\operatorname{MGL}\) is a filtered colimit of perfect pure motivic spectra.

We are particularly interested in cofiber sequences in \(\operatorname{Pure}(k)\); hence we make the following definitions.
#### 3.1.8. Definition We say that a morphism \(f\colon B\to A\) in \(\operatorname{Pure}(k)\) is:

1. A _pure epimorphism_ if its fiber \(\operatorname{fib}(f)\) in \(\operatorname{SH}(k)\) is again a perfect pure motivic spectrum.
2. A _pure monomorphism_ if its monoidal dual \(f^{\vee}\colon B^{\vee}\to A^{\vee}\) is a pure epimorphism; equivalently, if the cofiber \(\operatorname{cofib}(f)\) in \(\operatorname{SH}(k)\) is perfect pure.

The transition maps appearing in Example 3.1.7 are all pure monomorphisms:

**3.1.9 Lemma**.: _Let \(d,m\geq 0\) be integers. Then the following maps are pure monomorphisms:_

1. _The map_ \(\operatorname{Th}_{d}(m)\to\operatorname{Th}_{d+1}(m+1)\) _induced by the morphism_ \(\operatorname{Gr}_{d}(m)\to\operatorname{Gr}_{d+1}(m+1)\) _classifying_ \(\gamma_{d,m}\oplus\mathcal{O}_{\operatorname{Gr}_{d}(m)}\)_._
2. _The map_ \(\operatorname{Th}_{d+1}(m)\to\operatorname{Th}_{d}(m+1)\) _induced by the map_ \(\operatorname{Gr}_{d+1}(m)\to\operatorname{Gr}_{d}(m+1)\) _classifying_ \[\gamma_{d+1,m}\subseteq\mathcal{O}_{\operatorname{Gr}_{d+1}(m)}^{\oplus m}\subseteq\mathcal{O}_{\operatorname{Gr}_{d+1}(m)}^{\oplus m+1}\.\]

Proof.: For (1), write \(U\) for the open complement of the closed immersion \[\operatorname{Gr}_{d+1}(m)\hookrightarrow\operatorname{Gr}_{d+1}(m+1)\] induced by the inclusion \(\mathbb{A}_{k}^{m}\subseteq\mathbb{A}_{k}^{m+1}\). Note that the map \(\operatorname{Gr}_{d}(m)\to\operatorname{Gr}_{d+1}(m+1)\) factors as \[\operatorname{Gr}_{d}(m)\hookrightarrow U\hookrightarrow\operatorname{Gr}_{d+1}(m+1). \tag{3.1.10}\] Moreover, the left map in (3.1.10) is an affine vector bundle and hence a motivic homotopy equivalence, and we obtain an open-closed decomposition \[U\hookrightarrow\operatorname{Gr}_{d+1}(m+1)\hookleftarrow\operatorname{Gr}_{d+1}(m). \tag{3.1.11}\] Applying purity (see [7, Lemma A.2]) to the open-closed decomposition (3.1.11) and the virtual vector bundle \(\epsilon_{d+1,m+1}\) gives a cofiber sequence in \(\operatorname{SH}(k)\) of the form \[\operatorname{Th}_{d}(m)\simeq\operatorname{Th}(\epsilon_{d+1,m+1}|_{U})\longrightarrow\operatorname{Th}_{d+1}(m+1)\longrightarrow\operatorname{Th}\big(\epsilon_{d+1,m+1}|_{\operatorname{Gr}_{d+1}(m)}\oplus\mathcal{N}\big)\.\] Here, \(\mathcal{N}\) is the normal bundle of \(\operatorname{Gr}_{d+1}(m)\hookrightarrow\operatorname{Gr}_{d+1}(m+1)\). As the cofiber is perfect pure, we deduce that the first map is a pure monomorphism.

For (2), note that these are the maps corresponding to the closed component in (3.1.11). Write \(\operatorname{T}_{\operatorname{Gr}_{d+1}(m+1)}\) for the tangent bundle of \(\operatorname{Gr}_{d+1}(m+1)\), and define a virtual vector bundle \(V\) on \(\operatorname{Gr}_{d+1}(m+1)\) by \[V:=\operatorname{T}_{\operatorname{Gr}_{d+1}(m+1)}\oplus\epsilon_{d+1,m+1}\.\] Applying purity to \(V\), we obtain a cofiber sequence of Thom spectra over the decomposition (3.1.11). Passing to monoidal duals and applying Recollection 2.1.10, we obtain a cofiber sequence whose first map is \(\operatorname{Th}_{d+1}(m)\to\operatorname{Th}_{d}(m+1)\) and whose cofiber is perfect pure. This shows that the right-hand map is a pure monomorphism, as needed.

**3.1.12 Example**.: In light of Example 3.1.7 and Lemma 3.1.9, we can write \[\operatorname{MGL}\simeq\operatorname*{colim}_{d,n\to\infty}\operatorname{Th}_{d}(n+d)\] as the colimit of a filtered diagram of perfect pure motivic spectra where all of the transition maps are pure monomorphisms.

### Characterization of cofiber sequences of perfect pure motivic spectra

We now give a useful characterization of pure epimorphisms. In § 3.3, we use this characterization to give a description of \(\operatorname{SH}(k)\) as an \(\infty\)-category of sheaves of spectra on \(\operatorname{Pure}(k)\). Before we start, let us recall a number of equivalent characterizations of split cofiber sequences.

#### 3.2.1.
**Recollection** (split cofiber sequences) If \(\mathcal{C}\) is an additive \(\infty\)-category, a cofiber sequence \[A\xrightarrow{i}B\xrightarrow{p}C \tag{3.2.2}\] is said to be _split_ if there exists a section \(s\colon C\to B\) of \(p\), which implies that \(B\simeq A\oplus C\). In this case, we say that \(i\colon A\to B\) is a _split monomorphism_, and \(p\colon B\to C\) is a _split epimorphism_.

#### 3.2.3. **Recollection** Any additive functor \(\mathcal{C}\to\mathcal{D}\) of additive \(\infty\)-categories preserves split cofiber sequences.

#### 3.2.4. **Recollection** Let \(\mathcal{C}\) be a symmetric monoidal stable \(\infty\)-category, and assume that the tensor product is exact separately in each variable. Let \(A\) be an \(\mathbf{E}_{1}\)-algebra in \(\mathcal{C}\). We say that a cofiber sequence \(X\to Y\to Z\) in \(\mathcal{C}\) is _\(A\)-split_ if the induced cofiber sequence \[A\otimes X\to A\otimes Y\to A\otimes Z\] is a split cofiber sequence in \(\operatorname{Mod}_{A}(\mathcal{C})\).

In order to characterize pure epimorphisms, we make use of the fact that \(\operatorname{MGL}\)-homology of perfect pure motivic spectra vanishes in negative _Chow degree_:

**3.2.5**.: **Lemma** ([7, Proposition 3.6(2)]).: _Let \(A\in\operatorname{SH}(k)_{c\geq 0}\) be a connective object of the Chow-Novikov \(\operatorname{t}\)-structure, and let \(d,w\in\mathbb{Z}\). If \(d-2w<0\), then \(\operatorname{MGL}_{d,w}(A)=0\)._

**3.2.6**.: **Proposition**.: _Let \(f\colon B\to A\) be a morphism in \(\operatorname{Pure}(k)\). The following are equivalent:_

1. _The morphism_ \(f\colon B\to A\) _is a pure epimorphism._
2. _The morphism_ \(\operatorname{MGL}\otimes f\colon\operatorname{MGL}\otimes B\to\operatorname{MGL}\otimes A\) _is a split epimorphism of_ \(\operatorname{MGL}\)_-modules._

Proof.: (1)\(\Rightarrow\)(2) Write \(C:=\operatorname{fib}(f)\). Since \[\operatorname{MGL}\otimes C\to\operatorname{MGL}\otimes B\to\operatorname{MGL}\otimes A\] is a cofiber sequence of \(\operatorname{MGL}\)-modules, it is enough to show that the boundary map \[\partial\colon\operatorname{MGL}\otimes A\to\Sigma(\operatorname{MGL}\otimes C)\] is zero. Since \(A\) is dualizable, we can identify the homotopy class of \(\partial\) with an element of \[\operatorname{MGL}_{-1,0}(A^{\vee}\otimes C)\.\] By Lemma 3.1.3, \(A^{\vee}\otimes C\) is again perfect pure. Hence Lemma 3.2.5 shows that \(\operatorname{MGL}_{-1,0}(A^{\vee}\otimes C)=0\).

(2)\(\Rightarrow\)(1) By assumption, the boundary map \(A\to\Sigma C\) is zero after tensoring with \(\operatorname{MGL}\). Writing \(\operatorname{MGL}\) as a filtered colimit of Thom spectra of Grassmannians along pure monomorphisms as in Example 3.1.12 and using that \(A\) is compact, we deduce that there exist integers \(d,n\geq 0\) such that the composite \[A\to\Sigma C\simeq\operatorname{Th}_{0}(0)\otimes\Sigma C\to\operatorname{Th}_{d}(n+d)\otimes\Sigma C\] is zero. Passing to the dual of the Thom spectrum, we deduce that the composite \[\operatorname{Th}_{d}(n+d)^{\vee}\otimes A\to A\to\Sigma C\] is zero. Write \[B^{\prime}:=B\times_{A}(\operatorname{Th}_{d}(n+d)^{\vee}\otimes A)\.\] Then we have a commutative diagram whose rows are the cofiber sequences \(C\to B^{\prime}\to\operatorname{Th}_{d}(n+d)^{\vee}\otimes A\) and \(C\to B\to A\).
Since the boundary map \(\operatorname{Th}_{d}(n+d)^{\vee}\otimes A\to\Sigma C\) is zero, we have \[B^{\prime}\simeq C\oplus(\operatorname{Th}_{d}(n+d)^{\vee}\otimes A)\.\] Since \(B^{\prime}\) is an extension of \(\operatorname{cofib}(\operatorname{S}^{0,0}\to\operatorname{Th}_{d}(n+d))^{\vee}\otimes A\) and \(B\), we see that \(B^{\prime}\) is perfect pure. Hence its direct summand \(C\) is also perfect pure, completing the proof.

### Pure sheaves

We now give a description of \(\operatorname{SH}(k)\) as an \(\infty\)-category of sheaves of spectra on \(\operatorname{Pure}(k)\). The following is the key definition of this subsection:

#### 3.3.1. Definition We say a spectral presheaf \[X\colon\operatorname{Pure}(k)^{\operatorname{op}}\to\operatorname{Sp}\] is a _pure sheaf_ if \(X\) sends cofiber sequences of perfect pure motivic spectra to fiber sequences of spectra. We write \[\operatorname{Sh}_{\Sigma}(\operatorname{Pure}(k);\operatorname{Sp})\subseteq\operatorname{PSh}(\operatorname{Pure}(k);\operatorname{Sp})\] for the full subcategory spanned by the pure sheaves.

#### 3.3.2. Remark A pure sheaf \(X\colon\operatorname{Pure}(k)^{\operatorname{op}}\to\operatorname{Sp}\) is in particular additive. Our terminology comes from the fact that, as a consequence of [55, Theorem 2.8], among all additive functors pure sheaves are characterized by the sheaf property with respect to the Grothendieck pretopology on \(\operatorname{Pure}(k)\) whose covering families consist of a single pure epimorphism. By [55, Proposition 2.5], the left adjoint \[L\colon\operatorname{PSh}_{\Sigma}(\operatorname{Pure}(k);\operatorname{Sp})\to\operatorname{Sh}_{\Sigma}(\operatorname{Pure}(k);\operatorname{Sp})\] to the inclusion can be identified with the sheafification functor with respect to this topology. In particular, it is t-exact with respect to the t-structures inherited from that of spectra.

#### 3.3.3. The inclusion \[\operatorname{Pure}(k)\hookrightarrow\operatorname{SH}(k)\] preserves cofiber sequences. Since the target is stable and cocomplete, it follows formally that its left Kan extension defines a symmetric monoidal left adjoint \[F\colon\operatorname{Sh}_{\Sigma}(\operatorname{Pure}(k);\operatorname{Sp})\to\operatorname{SH}(k)\.\] Its right adjoint \[G\colon\operatorname{SH}(k)\to\operatorname{Sh}_{\Sigma}(\operatorname{Pure}(k);\operatorname{Sp})\] is given by the spectral Yoneda embedding; i.e., \[G(X)(A)\simeq\operatorname{map}_{\operatorname{SH}(k)}(A,X)\.\]

#### 3.3.4. Lemma _Let \(A,B\in\operatorname{Pure}(k)\) be perfect pure and let \(m<0\) be an integer. Given a map \(\Sigma^{m}B\to A\), there exists a pure epimorphism \(B^{\prime}\to B\) such that the composite_ \[\Sigma^{m}B^{\prime}\to\Sigma^{m}B\to A\] _is zero._

Proof.: Since \(m<0\), by Lemma 3.2.5 we have that \(\operatorname{MGL}_{m,0}(B^{\vee}\otimes A)=0\). Thus the composite map \[\Sigma^{m}B\to A\to\operatorname{MGL}\otimes A\] is zero. Since \(B\) is compact, we deduce that there exist integers \(n,d\geq 0\) such that \[\Sigma^{m}B\to A\to\operatorname{Th}_{d}(n)\otimes A\] is zero. By dualizing, the same follows for the composite \[\Sigma^{m}(\operatorname{Th}_{d}(n)^{\vee}\otimes B)\to\Sigma^{m}B\to A\.\] The map \(\operatorname{Th}_{d}(n)^{\vee}\otimes B\to B\) is the required pure epimorphism.
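The pure epimorphisms produced in the proofs of Proposition 3.2.6 and Lemma 3.3.4 all arise from one construction, which we record for the reader's convenience; it is a routine verification from Lemmas 3.1.3 and 3.1.9 and is not needed in the sequel. The map \(\operatorname{S}^{0,0}=\operatorname{Th}_{0}(0)\to\operatorname{Th}_{d}(n)\) is a composite of the pure monomorphisms of Lemma 3.1.9, hence itself a pure monomorphism: the cofiber of a composite is an extension of the cofibers, and \(\operatorname{Pure}(k)\) is closed under extensions. Writing \(Q:=\operatorname{cofib}(\operatorname{S}^{0,0}\to\operatorname{Th}_{d}(n))\), dualizing this cofiber sequence and tensoring with a perfect pure \(B\) yields a cofiber sequence

\[Q^{\vee}\otimes B\longrightarrow\operatorname{Th}_{d}(n)^{\vee}\otimes B\longrightarrow B\,\]

whose first term \(Q^{\vee}\otimes B\) is perfect pure by the closure properties of Lemma 3.1.3. Hence \(\operatorname{Th}_{d}(n)^{\vee}\otimes B\to B\) is a pure epimorphism; these are exactly the covers appearing in the two proofs above.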
Now for the promised description of \(\operatorname{SH}(k)\):

**3.3.5 Theorem**.: _The symmetric monoidal functor_ \[F\colon\operatorname{Sh}_{\Sigma}(\operatorname{Pure}(k);\operatorname{Sp})\to\operatorname{SH}(k)\] _is an equivalence._

Proof.: The \(\infty\)-category \(\operatorname{Sh}_{\Sigma}(\operatorname{Pure}(k);\operatorname{Sp})\) is generated under colimits and desuspensions by representable presheaves \(y(A)\) for \(A\in\operatorname{Pure}(k)\). These are defined as the sheafification \[y(A)(-):=L(\tau_{\geq 0}\operatorname{map}_{\operatorname{SH}(k)}(-,A))\] of the presheaf given by the connective part of the mapping spectrum. By construction as a left Kan extension, the functor \(F\) is uniquely determined by the property of being cocontinuous and the requirement that \[F(y(A))\simeq A\in\operatorname{SH}(k)\.\]

We will analyze the unit map \[X\to GF(X)\] for \(X\in\operatorname{Sh}_{\Sigma}(\operatorname{Pure}(k);\operatorname{Sp})\). If \(X\simeq y(A)\) is a representable presheaf, by the above discussion this map takes the form \[L(\tau_{\geq 0}\operatorname{map}_{\operatorname{SH}(k)}(-,A))\to G(A)(-)\simeq\operatorname{map}_{\operatorname{SH}(k)}(-,A)\.\] Thus, to verify the result in this case we have to show that the map \[\tau_{\geq 0}\operatorname{map}_{\operatorname{SH}(k)}(-,A)\to\operatorname{map}_{\operatorname{SH}(k)}(-,A)\] of presheaves of spectra is a sheafification with respect to the pure-epimorphism topology. This map is a connective cover before sheafification, and since sheafification is t-exact (Remark 3.3.2), it remains a connective cover after sheafification. Thus we only have to check that \(G(A)\simeq\operatorname{map}_{\operatorname{SH}(k)}(-,A)\) is connective as a sheaf. Suppose that \(B\) is perfect pure and that we have a class \(g\in\pi_{k}G(A)(B)\) for \(k<0\), which we can identify with a homotopy class of maps \[g\colon\Sigma^{k}B\to A\.\] By Lemma 3.3.4, we deduce that there exists a pure epimorphism \(B^{\prime}\to B\) such that \(g|_{B^{\prime}}=0\). It follows that \(\operatorname{map}_{\operatorname{SH}(k)}(-,A)\) is connective as a sheaf, as needed.

Both functors preserve filtered colimits, \(F\) as it is a left adjoint and \(G\) as every perfect pure is compact. As both are also exact, we deduce that the subcategory of those \(X\in\operatorname{Sh}_{\Sigma}(\operatorname{Pure}(k);\operatorname{Sp})\) such that the unit map is an equivalence is closed under colimits and desuspensions. As \(\operatorname{Sh}_{\Sigma}(\operatorname{Pure}(k);\operatorname{Sp})\) is generated under these by \(y(A)\) for \(A\in\operatorname{Pure}(k)\), we deduce that the unit map is an equivalence for any \(X\), so that \(F\) is fully faithful. Since the essential image of \(F\) is closed under colimits and desuspensions and contains every \(A\in\operatorname{Pure}(k)\), which generates \(\operatorname{SH}(k)\) under colimits and desuspensions by Remark 3.1.5, we deduce that \(F\) is an equivalence, as needed.

**3.3.6 Corollary**.: _Let \(\mathcal{D}\) be a stable \(\infty\)-category which admits small colimits. Restriction along the inclusion \(\operatorname{Pure}(k)\subseteq\operatorname{SH}(k)\) defines an equivalence of \(\infty\)-categories_ \[\operatorname{Fun}^{\operatorname{L}}(\operatorname{SH}(k),\mathcal{D})\to\operatorname{Fun}^{\operatorname{cofib}}(\operatorname{Pure}(k),\mathcal{D})\.\] _Here, the right-hand side is the full subcategory of \(\operatorname{Fun}(\operatorname{Pure}(k),\mathcal{D})\) spanned by the functors that preserve cofiber sequences._

**3.3.7 Remark** (MGL-modules).: As a consequence of Theorem 3.3.5, one can deduce a presheaf description of the \(\infty\)-category of MGL-modules. This description was already known and is a consequence of the existence of Bondarko's weight structure on MGL-modules; see the work of Elmanto-Sosnilo [23, Theorem 2.2.9].

## 4.
The weight filtration on complex oriented homology

Let \(A\) be an \(\mathbf{E}_{1}\)-ring spectrum. In this section, we show that if \(A\) is _complex orientable_, then the \(A\)-linearized Betti realization functor \(A\otimes\operatorname{Be}(-)\colon\operatorname{SH}(\mathbb{C})\to\operatorname{Mod}_{A}\) refines to a left adjoint \[\operatorname{W}_{*}\!\operatorname{Be}(-;A)\colon\operatorname{SH}(\mathbb{C})\to\operatorname{Mod}_{\tau_{\geq*}(A)}(\operatorname{FilSp})\] valued in modules in filtered spectra over the _Postnikov filtration_ on \(A\). We refer to \(\operatorname{W}_{*}\!\operatorname{Be}(-;A)\) as the _filtered Betti realization_ functor. Note that if \(A\) is an ordinary ring, then \(\operatorname{Mod}_{\tau_{\geq*}(A)}(\operatorname{FilSp})\) coincides with the filtered derived \(\infty\)-category of \(A\) (see Proposition 4.1.7); hence for a complex variety \(X\), the filtered Betti realization \(\operatorname{W}_{*}\!\operatorname{Be}(\Sigma_{+}^{\infty}X;A)\) defines a filtration on the complex \(\operatorname{C}^{*}(X(\mathbb{C});A)\). In § 5, we explain how to use filtered Betti realization to recover the Deligne-Gillet-Soulé weight filtration on the compactly supported integral Betti cohomology of a complex variety.

In § 4.1, we recall some background on filtered objects. In § 4.2 we set up an abstract framework for using Corollary 3.3.6 to equip the (\(A\)-linear) Betti realization of a motivic spectrum with a filtration. In § 4.3, we construct the filtered Betti realization functor \(\operatorname{W}_{*}\!\operatorname{Be}(-;A)\); see Corollaries 4.3.13 and 4.3.15. In § 4.4, we unpack our construction in the case of an ordinary ring. In § 4.5, we explain how filtered Betti realization interacts with changing the coefficient ring \(A\). In § 4.6, we use the general setup explained in § 4.2 to construct a filtered refinement of the \(\ell\)-adic etale realization functor \[\operatorname{Re}_{\ell}\colon\operatorname{SH}(k)\to\operatorname{Sh}_{\operatorname{\acute{e}t}}^{\operatorname{hyp}}(\operatorname{\acute{E}t}_{k};\operatorname{Sp})^{\wedge}_{\ell}\.\] In § 4.7, we discuss how one can use filtered Betti realization to construct _virtual Euler characteristics_ associated to Morava K-theories.

**4.0.1 Notation**.: Let \(k\) be a field of exponential characteristic \(e\). Throughout this section, we keep the notational convention \(\operatorname{SH}(k):=\operatorname{SH}(k)[\sfrac{1}{e}]\) introduced in Notation 3.0.1.

### Background on filtered objects

We begin by reviewing some background on filtered objects in stable \(\infty\)-categories.

**4.1.1 Notation**.: Let \(\mathcal{C}\) be a stable \(\infty\)-category which admits small colimits. We write \[\operatorname{Fil}(\mathcal{C}):=\operatorname{Fun}(\mathbb{Z}^{\operatorname{op}},\mathcal{C})\] for the \(\infty\)-category of _filtered objects_ in \(\mathcal{C}\). Here we regard \(\mathbb{Z}\) as a poset with the usual partial order, so our filtrations are _decreasing_. The colimit functor defines a left adjoint \(\operatorname{colim}\colon\operatorname{Fil}(\mathcal{C})\to\mathcal{C}\). If \(\mathcal{C}\) has a t-structure, then there is a functor \(\tau_{\geq*}\colon\mathcal{C}\to\operatorname{Fil}(\mathcal{C})\) given by sending an object \(X\in\mathcal{C}\) to its _Postnikov filtration_ \[\cdots\xrightarrow{}\tau_{\geq n+1}X\xrightarrow{}\tau_{\geq n}X\xrightarrow{}\cdots\,\] see [56, Construction 3.3.7]. Moreover:

1.
If the t-structure is right complete, then \(\operatorname{colim}\tau_{\geq*}\simeq\operatorname{id}_{\mathcal{C}}\), so that the Postnikov filtration is _exhaustive_. 2. If the t-structure is left complete, then \(\lim\tau_{\geq*}\simeq 0\), so that the Postnikov filtration is _complete_. Note that the functor \(\tau_{\geq*}\colon\mathcal{C}\to\operatorname{Fil}(\mathcal{C})\) is additive, but generally _not_ exact. #### 4.1.2. Notation Via Day convolution, the addition on \(\mathbb{Z}^{\operatorname{op}}\) and the tensor product of spectra assemble into a symmetric monoidal structure \[\otimes\colon\operatorname{FilSp}\times\operatorname{FilSp}\to\operatorname{ FilSp}\] defined by \[(X_{*}\otimes Y_{*})_{n}:=\operatorname*{colim}_{a+b\geq n}X_{a}\otimes Y_{b}\.\] #### 4.1.3. With respect to the Day convolution symmetric monoidal structure, the functor \[\tau_{\geq*}\colon\operatorname{Sp}\to\operatorname{FilSp}\] is lax symmetric monoidal. In particular, for any \(\mathbf{E}_{n}\)-ring spectrum \(A\), the filtered spectrum \(\tau_{\geq*}(A)\) acquires a natural \(\mathbf{E}_{n}\)-ring structure. Moreover, the functor \(\tau_{\geq*}\colon\operatorname{Sp}\to\operatorname{FilSp}\) refines to a functor \[\operatorname{Mod}_{A}=\operatorname{Mod}_{A}(\operatorname{Sp})\to \operatorname{Mod}_{\tau_{\geq*}(A)}(\operatorname{FilSp})\,\] which we also denote by \(\tau_{\geq*}\). We also write \[\operatorname{colim}\colon\operatorname{Mod}_{\tau_{\geq*}(A)}(\operatorname{ FilSp})\to\operatorname{Mod}_{A}\] for the induced functor. #### 4.1.4. Definition We say a filtered spectrum \(F_{*}X\) is _diagonal connective_ if for all \(n\in\mathbb{Z}\) we have \(F_{n}X\in\operatorname{Sp}_{\geq n}\). This determines a unique t-structure on filtered spectra which we call the _diagonal t-structure_. #### 4.1.5. Remark The diagonal t-structure is compatible with the symmetric monoidal structure on filtered spectra. Since any filtered spectrum of the form \(\tau_{\geq*}A\) is diagonal connective, the \(\infty\)-category \[\operatorname{Mod}_{\tau_{\geq*}A}(\operatorname{FilSp})\] of modules in filtered spectra inherits a unique t-structure for which the forgetful functor is t-exact. We also refer to this t-structure as the _diagonal t-structure_. When \(A=\operatorname{H}\!R\) is the Eilenberg-MacLane spectrum associated to an ordinary commutative ring, \(\operatorname{Mod}_{\tau_{\geq*}(\operatorname{H}\!R)}(\operatorname{FilSp})\) recovers the filtered derived \(\infty\)-category of \(R\): #### 4.1.6. Notation Let \(R\) be an ordinary commutative ring. We write \(\mathcal{D}^{\operatorname{fil}}(R)\) for the \(\infty\)-categorical enhancement of the filtered derived category of \(R\). **4.1.7 Proposition**.: _Let \(R\) be an ordinary commutative ring. 
There are natural symmetric monoidal equivalences_ \[\operatorname{Mod}_{\tau_{\geq*}(\operatorname{H}\!R)}(\operatorname{FilSp})\simeq\operatorname{Fil}(\mathcal{D}(R))\simeq\mathcal{D}^{\operatorname{fil}}(R)\.\]

Proof sketch.: Note that since \(\operatorname{H}\!R\) only has a nontrivial homotopy group in degree \(0\), the filtered spectrum \(\tau_{\geq*}(\operatorname{H}\!R)\) is given by \[\cdots\longrightarrow 0\longrightarrow 0\longrightarrow\operatorname{H}\!R\xrightarrow{=}\operatorname{H}\!R\xrightarrow{=}\operatorname{H}\!R\longrightarrow\cdots\,\] with \(\operatorname{H}\!R\) in filtration degrees \(n\leq 0\). In other words, \(\tau_{\geq*}(\operatorname{H}\!R)\) is the image of \(\operatorname{H}\!R\) under the unique symmetric monoidal colimit-preserving functor \(\operatorname{Sp}\to\operatorname{FilSp}\), which embeds spectra as filtrations concentrated in nonpositive degrees. One then checks that a module over this filtered ring is the same datum as a filtered object of \(\operatorname{Mod}_{\operatorname{H}\!R}(\operatorname{Sp})\simeq\mathcal{D}(R)\), giving the first equivalence; the second is the standard identification of \(\operatorname{Fil}(\mathcal{D}(R))\) with the filtered derived \(\infty\)-category.

### Weight contexts

We now describe a general method of equipping a colimit-preserving functor defined on the stable motivic category with additional structure.

#### 4.2.1. Definition Let \(k\) be a field. A _weight context_ consists of the following data:

1. Stable \(\infty\)-categories \(\mathcal{C}\) and \(\mathcal{D}\) which admit small colimits.
2. A colimit-preserving functor \(U\colon\mathcal{D}\to\mathcal{C}\).
3. An additive functor \(T\colon\mathcal{C}\to\mathcal{D}\) along with an equivalence \(U\circ T\simeq\operatorname{id}_{\mathcal{C}}\).
4. A colimit-preserving functor \(\mathbb{M}\colon\operatorname{SH}(k)\to\mathcal{C}\).

A _solution_ to a weight context is a functor \(W\mathbb{M}\colon\operatorname{SH}(k)\to\mathcal{D}\) making the triangle formed by \(\mathbb{M}\), \(W\mathbb{M}\), and \(U\) commute; that is, a functor \(W\mathbb{M}\) together with an equivalence \(U\circ W\mathbb{M}\simeq\mathbb{M}\) of functors \(\operatorname{SH}(k)\to\mathcal{C}\).

### Filtered Betti realization

#### 4.3.3. **Recollection** (complex orientations).: Let \(A\) be an \(\mathbf{E}_{1}\)-ring spectrum. A _complex orientation_ of \(A\) is a morphism \(\operatorname{MU}\to A\) of associative algebras in the homotopy category \(\operatorname{hSp}\) of spectra. We say that \(A\) is _complex orientable_ if there exists a complex orientation of \(A\). We refer the reader to [49; 50; 57, § 4.1] for more background on complex orientations.

#### 4.3.4. **Example** (1) If \(R\) is an ordinary ring, then there is a natural map of \(\mathbf{E}_{\infty}\)-rings \(\operatorname{MU}\to\operatorname{H}\!R\). In particular, \(\operatorname{H}\!R\) is complex orientable. (2) The complex K-theory spectrum \(\operatorname{KU}\) has a canonical complex orientation. (3) For each prime \(p\) and integer \(n\geq 0\), the height \(n\) Morava K-theory \(\operatorname{K}(n)\) has a canonical complex orientation.

In order to check the hypotheses of Theorem 4.2.3 for \(\operatorname{Be}(-;A)\), we need the following lemma.

**4.3.5 Lemma**.: _Let \(A\) be a complex orientable \(\mathbf{E}_{1}\)-ring and let \(f\colon X\to Y\) be a map of spectra such that \(\operatorname{MU}\otimes f\) is zero.
Then \(A\otimes f\) is zero as a map of \(A\)-modules._

Proof.: By the extension of scalars adjunction, it is enough to show that the composite map of spectra \(X\xrightarrow{f}Y\to A\otimes Y\) is zero. Since \(A\) is complex orientable, the unit \(\operatorname{S}^{0}\to A\) factors through \(\operatorname{MU}\) in the homotopy category of spectra, so this composite is homotopic to the composite \[X\simeq\operatorname{S}^{0}\otimes X\longrightarrow\operatorname{MU}\otimes X\xrightarrow{\operatorname{MU}\otimes f}\operatorname{MU}\otimes Y\longrightarrow A\otimes Y\, \tag{4.3.6}\] which is zero because \(\operatorname{MU}\otimes f\) is zero.

As a consequence, for a complex orientable connective \(\mathbf{E}_{1}\)-ring \(A\), the weight context of (4.3.2) has a solution. More generally, any weight context based on \(A\)-linear Betti realization has a solution.

**4.3.12 Proposition**.: _Let \(A\) be a complex orientable \(\mathbf{E}_{1}\)-ring. Then any weight context whose underlying motivic functor \(\mathbb{M}\) is the \(A\)-linear Betti realization \(\operatorname{Be}(-;A)\colon\operatorname{SH}(\mathbb{C})\to\operatorname{Mod}_{A}\) has a unique solution \(W\mathbb{M}\colon\operatorname{SH}(\mathbb{C})\to\mathcal{D}\) satisfying the following properties:_ 1. _The functor_ \(W\mathbb{M}\) _preserves colimits._ 2.
_If_ \(X\in\operatorname{SH}(\mathbb{C})\) _is perfect pure, then_ \(W\mathbb{M}(X)\simeq T(\operatorname{Be}(X;A))\)_._

Proof.: By Theorem 4.2.3, it suffices to show that if \(X\to Y\to Z\) is a cofiber sequence in \(\operatorname{Pure}(\mathbb{C})\), then \[T(\operatorname{Be}(X;A))\to T(\operatorname{Be}(Y;A))\to T(\operatorname{Be}(Z;A))\] is a cofiber sequence in \(\mathcal{D}\). By 4.3.11, the cofiber sequence \[\operatorname{Be}(X;A)\to\operatorname{Be}(Y;A)\to\operatorname{Be}(Z;A)\] in \(\operatorname{Mod}_{A}\) is split. Since \(T\colon\operatorname{Mod}_{A}\to\mathcal{D}\) is additive, \(T\) preserves this split cofiber sequence.

**4.3.13 Corollary**.: _Let \(A\) be an \(\mathbf{E}_{1}\)-ring spectrum. If \(A\) is complex orientable, then there exists a unique left adjoint_ \[\operatorname{W}_{*}\operatorname{Be}(-;A)\colon\operatorname{SH}(\mathbb{C})\to\operatorname{Mod}_{\tau_{\geq*}(A)}(\operatorname{FilSp})\] _such that for \(X\in\operatorname{Pure}(\mathbb{C})\), we have_ \[\operatorname{W}_{*}\operatorname{Be}(X;A)\simeq\tau_{\geq*}\operatorname{Be}(X;A)\.\]

Proof.: Apply Proposition 4.3.12 to the weight context (4.3.2).

**4.3.14 Definition** (filtered Betti realization).: Let \(A\) be a complex orientable \(\mathbf{E}_{1}\)-ring spectrum. We call the functor \[\operatorname{W}_{*}\operatorname{Be}(-;A)\colon\operatorname{SH}(\mathbb{C})\to\operatorname{Mod}_{\tau_{\geq*}(A)}(\operatorname{FilSp})\] of Corollary 4.3.13 the _\(A\)-linear filtered Betti realization_ functor.

Pleasantly, this filtration is exhaustive:

**4.3.15 Corollary**.: _Let \(A\) be a complex orientable \(\mathbf{E}_{1}\)-ring spectrum. Then the triangle of \(\infty\)-categories and left adjoints formed by \(\operatorname{W}_{*}\operatorname{Be}(-;A)\), the colimit functor \(\operatorname{colim}\colon\operatorname{Mod}_{\tau_{\geq*}(A)}(\operatorname{FilSp})\to\operatorname{Mod}_{A}\), and \(\operatorname{Be}(-;A)\) canonically commutes; that is,_ \[\operatorname{colim}\circ\operatorname{W}_{*}\operatorname{Be}(-;A)\simeq\operatorname{Be}(-;A)\.\]

Proof.: Both of the functors \(\operatorname{SH}(\mathbb{C})\to\operatorname{Mod}_{A}\) in the triangle preserve colimits. Moreover, by Corollary 4.3.13 they agree on \(\operatorname{Pure}(\mathbb{C})\subseteq\operatorname{SH}(\mathbb{C})\). Thus the conclusion follows from Corollary 3.3.6.

We conclude by recording that the filtered Betti realization is compatible with t-structures. The relevant t-structure on the motivic side is the Chow-Novikov t-structure of [7], and on the filtered module side is the diagonal t-structure:

**4.3.16 Lemma**.: _Let \(A\) be a complex orientable \(\mathbf{E}_{1}\)-ring spectrum. The filtered Betti realization_ \[\operatorname{W}_{*}\operatorname{Be}(-;A)\colon\operatorname{SH}(\mathbb{C})\to\operatorname{Mod}_{\tau_{\geq*}(A)}(\operatorname{FilSp})\] _is right \(\operatorname{t}\)-exact with respect to the Chow-Novikov \(\operatorname{t}\)-structure on motivic spectra and the diagonal \(\operatorname{t}\)-structure on filtered spectra; that is, filtered Betti realization preserves connectivity._

Proof.: By definition, the connective part of the Chow-Novikov \(\operatorname{t}\)-structure is generated under colimits and extensions by perfect pure motivic spectra. Thus, it is enough to show that for \(X\) perfect pure \[\operatorname{W}_{*}\operatorname{Be}(X;A)\simeq\tau_{\geq*}(\operatorname{Be}(X;A))\] is connective, which is clear.

For the next result, recall that an object \(X\) of a stable \(\infty\)-category with \(\operatorname{t}\)-structure \(\mathcal{C}\) is _\(\infty\)-connective_ if \(X\in\bigcap_{n\in\mathbb{Z}}\mathcal{C}_{\geq n}\). Also recall that the \(\operatorname{t}\)-structure on \(\mathcal{C}\) is _left separated_ if \(\bigcap_{n\in\mathbb{Z}}\mathcal{C}_{\geq n}=0\).

**4.3.17 Corollary**.: _Let \(A\) be a complex orientable \(\mathbf{E}_{1}\)-ring spectrum.
The filtered Betti realization_ \[\operatorname{W}_{*}\operatorname{Be}(-;A)\colon\operatorname{SH}(\mathbb{C})\to\operatorname{Mod}_{\tau_{\geq*}(A)}(\operatorname{FilSp})\] _inverts maps of motivic spectra which are \(\infty\)-connective with respect to the Chow-Novikov \(\operatorname{t}\)-structure._

Proof.: Since \(\operatorname{W}_{*}\operatorname{Be}(-;A)\) is exact, it is enough to show that if \(X\) is \(\infty\)-connective with respect to the Chow-Novikov \(\operatorname{t}\)-structure, then \(\operatorname{W}_{*}\operatorname{Be}(X;A)=0\). By Lemma 4.3.16, we deduce that \(\operatorname{W}_{*}\operatorname{Be}(X;A)\) is \(\infty\)-connective with respect to the diagonal \(\operatorname{t}\)-structure, so that \(\operatorname{W}_{*}\operatorname{Be}(X;A)\) is levelwise \(\infty\)-connective. Since the standard \(\operatorname{t}\)-structure on spectra is left separated, it follows that \(\operatorname{W}_{*}\operatorname{Be}(X;A)=0\).

### The case of an ordinary ring

We now unpack the filtered Betti realization in the case of an ordinary ring.

**4.4.1 Notation**.: If \(R\) is an ordinary commutative ring, we simply write \[\operatorname{Be}(-;R)\colon\operatorname{SH}(\mathbb{C})\to\mathcal{D}(R)\] for \(\operatorname{Be}(-;\operatorname{H}R)\). Note that the functor \(\operatorname{Be}(-;R)\) is the unique symmetric monoidal left adjoint with the property that for any smooth \(\mathbb{C}\)-scheme \(X\), we have \[\operatorname{Be}(\Sigma_{+}^{\infty}X;R)\simeq\operatorname{C}_{*}(X(\mathbb{C});R)\.\]

An important feature is that Betti realization with coefficients in an ordinary ring factors through modules over motivic cohomology:

**4.4.2 Observation** (\(\operatorname{Be}(-;R)\) factors through \(\operatorname{M}\!R\)-modules).: Let \(R\) be an ordinary commutative ring. Since Betti realization \(\operatorname{Be}\colon\operatorname{SH}(\mathbb{C})\to\operatorname{Sp}\) is symmetric monoidal and \(\operatorname{Be}(\operatorname{M}\!R)\simeq\operatorname{H}\!R\), the \(R\)-linear Betti realization functor factors through \(\operatorname{M}\!R\)-modules in \(\operatorname{SH}(\mathbb{C})\). That is, \(R\)-linear Betti realization refines to a unique symmetric monoidal left adjoint \[\operatorname{Mod}_{\operatorname{M}\!R}(\operatorname{SH}(\mathbb{C}))\to\operatorname{Mod}_{\operatorname{H}\!R}(\operatorname{Sp})\simeq\mathcal{D}(R)\] fitting into a commutative square whose horizontal functors are the extensions of scalars \(\operatorname{M}\!R\otimes(-)\colon\operatorname{SH}(\mathbb{C})\to\operatorname{Mod}_{\operatorname{M}\!R}(\operatorname{SH}(\mathbb{C}))\) and \(\operatorname{H}\!R\otimes(-)\colon\operatorname{Sp}\to\mathcal{D}(R)\), and whose vertical functors are \(\operatorname{Be}\) and this refinement. We also denote the refinement by \(\operatorname{Be}(-;R)\colon\operatorname{Mod}_{\operatorname{M}\!R}(\operatorname{SH}(\mathbb{C}))\to\mathcal{D}(R)\).

In this case, Definition 4.3.14 specializes to the following:

#### 4.4.3. Example Let \(R\) be an ordinary commutative ring. Since the Eilenberg-MacLane spectrum \(\mathrm{H}R\) admits a canonical complex orientation, there is a filtered Betti realization functor \[\mathrm{SH}(\mathbb{C})\xrightarrow{\mathrm{W}_{*}\mathrm{Be}(-;R)}\mathrm{Mod}_{\tau_{\geq*}(\mathrm{H}R)}(\mathrm{FilSp})\xrightarrow{\sim}\mathrm{Fil}(\mathcal{D}(R))\.\] Here the right-hand equivalence is provided by Proposition 4.1.7.

Again, filtered Betti realization with coefficients in an ordinary ring factors through modules over motivic cohomology:

#### 4.4.4. Observation (\(\mathrm{W}_{*}\mathrm{Be}(-;R)\) factors through \(\mathrm{M}R\)-modules).: Let \(R\) be an ordinary commutative ring.
In light of Observation 4.4.2, the filtered \(R\)-linear Betti realization functor \(\mathrm{W}_{*}\mathrm{Be}(-;R)\) refines to a unique left adjoint \[\mathrm{Mod}_{\mathrm{M}R}(\mathrm{SH}(\mathbb{C}))\to\mathrm{Fil}(\mathcal{D}(R))\] fitting into a commutative triangle with the extension of scalars \(\mathrm{M}R\otimes(-)\colon\mathrm{SH}(\mathbb{C})\to\mathrm{Mod}_{\mathrm{M}R}(\mathrm{SH}(\mathbb{C}))\) and with \(\mathrm{W}_{*}\mathrm{Be}(-;R)\colon\mathrm{SH}(\mathbb{C})\to\mathrm{Fil}(\mathcal{D}(R))\). We also denote this refinement by \(\mathrm{W}_{*}\mathrm{Be}(-;R)\colon\mathrm{Mod}_{\mathrm{M}R}(\mathrm{SH}(\mathbb{C}))\to\mathrm{Fil}(\mathcal{D}(R))\).

### Changing the coefficients of filtered Betti realization

Let \(\phi\colon A\to B\) be a morphism of complex orientable \(\mathbf{E}_{1}\)-rings. In this subsection, we produce a comparison natural transformation \[\tau_{\geq*}(B)\underset{\tau_{\geq*}(A)}{\otimes}\mathrm{W}_{*}\mathrm{Be}(-;A)\to\mathrm{W}_{*}\mathrm{Be}(-;B)\] and show that this natural transformation is an equivalence if \(\phi\) is flat (Corollary 4.5.4). To start, we need to analyze the interaction between Postnikov filtrations and tensor products.

#### 4.5.1. Observation Let \(\phi\colon A\to B\) be a morphism of \(\mathbf{E}_{1}\)-rings. Then the square formed by the Postnikov filtration functors \(\tau_{\geq*}\colon\operatorname{Mod}_{B}\to\operatorname{Mod}_{\tau_{\geq*}(B)}(\operatorname{FilSp})\) and \(\tau_{\geq*}\colon\operatorname{Mod}_{A}\to\operatorname{Mod}_{\tau_{\geq*}(A)}(\operatorname{FilSp})\) commutes, where the horizontal functors are the forgetful functors along \(\phi\) and \(\tau_{\geq*}(\phi)\). Passing to horizontal left adjoints \(B\otimes_{A}(-)\colon\operatorname{Mod}_{A}\to\operatorname{Mod}_{B}\) and \(\tau_{\geq*}(B)\otimes_{\tau_{\geq*}(A)}(-)\), there is an exchange transformation \[\operatorname{Ex}_{\phi}\colon\tau_{\geq*}(B)\underset{\tau_{\geq*}(A)}{\otimes}\tau_{\geq*}(-)\longrightarrow\tau_{\geq*}(B\otimes_{A}(-))\] filling the resulting square of left adjoints.

#### 4.5.2. Construction (comparison morphism) Let \(\phi\colon A\to B\) be a morphism of complex orientable connective \(\mathbf{E}_{1}\)-rings. Define a natural transformation \[c_{\phi}\colon\tau_{\geq*}(B)\underset{\tau_{\geq*}(A)}{\otimes}\mathrm{W}_{*}\mathrm{Be}(-;A)\longrightarrow\mathrm{W}_{*}\mathrm{Be}(-;B)\] of functors \(\operatorname{SH}(\mathbb{C})\to\operatorname{Mod}_{\tau_{\geq*}(B)}(\operatorname{FilSp})\) as follows. Note that since \(\tau_{\geq*}(B)\otimes_{\tau_{\geq*}(A)}\operatorname{W}_{*}\operatorname{Be}(-;A)\) and \(\operatorname{W}_{*}\operatorname{Be}(-;B)\) are both left adjoints, by the equivalence \[\operatorname{Fun}^{\operatorname{L}}(\operatorname{SH}(\mathbb{C}),\operatorname{Mod}_{\tau_{\geq*}(B)}(\operatorname{FilSp}))\xrightarrow{\sim}\operatorname{Fun}^{\operatorname{cofib}}(\operatorname{Pure}(\mathbb{C}),\operatorname{Mod}_{\tau_{\geq*}(B)}(\operatorname{FilSp}))\] of Corollary 3.3.6, it suffices to construct the restriction \(c_{\phi}|_{\operatorname{Pure}(\mathbb{C})}\) to perfect pure motivic spectra. For this, we take the natural transformation \[\tau_{\geq*}(B)\mathop{\otimes}_{\tau_{\geq*}(A)}\tau_{\geq*}(\operatorname{Be}(-;A))\xrightarrow{\operatorname{Ex}_{\phi}}\tau_{\geq*}(B\otimes_{A}\operatorname{Be}(-;A))\simeq\tau_{\geq*}(\operatorname{Be}(-;B))\] induced by the exchange transformation.

For flat ring maps, the exchange transformation is an equivalence:

**4.5.3 Lemma**.: _Let \(\phi\colon A\to B\) be a morphism of \(\mathbf{E}_{1}\)-rings. If \(\phi\) is flat, then the exchange transformation_ \[\operatorname{Ex}_{\phi}\colon\tau_{\geq*}(B)\mathop{\otimes}_{\tau_{\geq*}(A)}\tau_{\geq*}(-)\longrightarrow\tau_{\geq*}(B\otimes_{A}(-))\] _is an equivalence of functors \(\operatorname{Mod}_{A}\to\operatorname{Mod}_{\tau_{\geq*}(B)}(\operatorname{FilSp})\)._

Proof.: Since \(\phi\) is flat, the left adjoint \(B\otimes_{A}(-)\colon\operatorname{Mod}_{A}\to\operatorname{Mod}_{B}\) is t-exact [51, Theorem 7.2.2.15]. Hence for each \(M\in\operatorname{Mod}_{A}\) and \(n\in\mathbb{Z}\), the natural map \[B\otimes_{A}\tau_{\geq n}(M)\longrightarrow\tau_{\geq n}(B\otimes_{A}M)\] is an equivalence.

#### 4.5.4.
**Corollary**.: _Let \(\phi\colon A\to B\) be a morphism of complex orientable \(\mathbf{E}_{1}\)-rings. If \(\phi\) is flat, then the comparison natural transformation_ \[c_{\phi}\colon\tau_{\geq*}(B)\mathop{\otimes}_{\tau_{\geq*}(A)}\operatorname{W}_{*}\operatorname{Be}(-;A)\longrightarrow\operatorname{W}_{*}\operatorname{Be}(-;B)\] _is an equivalence of functors \(\operatorname{SH}(\mathbb{C})\to\operatorname{Mod}_{\tau_{\geq*}(B)}(\operatorname{FilSp})\)._

Proof.: Since both \(\tau_{\geq*}(B)\otimes_{\tau_{\geq*}(A)}\operatorname{W}_{*}\operatorname{Be}(-;A)\) and \(\operatorname{W}_{*}\operatorname{Be}(-;B)\) are left adjoints, by Corollary 3.3.6 it suffices to show that \(c_{\phi}\) is an equivalence when restricted to \(\operatorname{Pure}(\mathbb{C})\). The claim now follows from the definitions of \(\operatorname{W}_{*}\operatorname{Be}(-;A)\) and \(\operatorname{W}_{*}\operatorname{Be}(-;B)\) combined with Lemma 4.5.3.

#### 4.5.5. The comparison natural transformation \[\mathbb{Q}\otimes_{\mathbb{Z}}\operatorname{W}_{*}\operatorname{Be}(-;\mathbb{Z})\to\operatorname{W}_{*}\operatorname{Be}(-;\mathbb{Q})\] is an equivalence of functors \(\operatorname{SH}(\mathbb{C})\to\operatorname{Fil}(\mathcal{D}(\mathbb{Q}))\).

### Filtered etale realization

Let \(k\) be a field and \(\ell\neq\operatorname{char}(k)\) a prime. In Definition 2.4.6, we recalled Bachmann's construction of an \(\ell\)-adic etale realization functor \[\operatorname{Re}_{\ell}\colon\operatorname{SH}(k)\to\operatorname{Sh}_{\operatorname{\acute{e}t}}^{\operatorname{hyp}}(\operatorname{\acute{E}t}_{k};\operatorname{Sp})_{\ell}^{\wedge}\.\] In this subsection, we show that the complex orientable variants of this functor have a canonical lift to filtered sheaves.

**4.6.1 Definition**.: Let \(k\) be a field and \(\ell\neq\operatorname{char}(k)\) a prime. We say that \(A\in\operatorname{Alg}(\operatorname{Sh}_{\operatorname{\acute{e}t}}^{\operatorname{hyp}}(\operatorname{\acute{E}t}_{k};\operatorname{Sp})_{\ell}^{\wedge})\) is _complex orientable_ if there exists a map of associative algebras \[\operatorname{Re}_{\ell}(\operatorname{MGL})\to A\] in the homotopy category of \(\operatorname{Sh}_{\operatorname{\acute{e}t}}^{\operatorname{hyp}}(\operatorname{\acute{E}t}_{k};\operatorname{Sp})_{\ell}^{\wedge}\).

#### 4.6.2. Remark Write \[R\colon\operatorname{Sh}_{\operatorname{\acute{e}t}}^{\operatorname{hyp}}(\operatorname{\acute{E}t}_{k};\operatorname{Sp})^{\wedge}_{\ell}\to\operatorname{SH}(k)\] for the right adjoint to \(\operatorname{Re}_{\ell}\). The condition that \(A\) is complex orientable in the sense of Definition 4.6.1 is equivalent to the condition that the motivic spectrum \(R(A)\in\operatorname{Alg}(\operatorname{SH}(k))\) representing \(A\)-linear etale cohomology is orientable as a motivic spectrum.

#### 4.6.3. Recall that one says that \(X\in\operatorname{Sh}_{\operatorname{\acute{e}t}}^{\operatorname{hyp}}(\operatorname{\acute{E}t}_{k};\operatorname{Sp})\) is _coconnective_ if for every \(E\in\operatorname{\acute{E}t}_{k}\), the spectrum \(X(E)\) is coconnective. This is the coconnective part of a unique t-structure which we call the _standard t-structure_; see [52, § 1.3.2].
The heart can be described as the category \[\operatorname{Sh}_{\operatorname{\acute{e}t}}^{\operatorname{hyp}}(\operatorname{\acute{E}t}_{k};\operatorname{Sp})^{\heartsuit}\simeq\operatorname{Sh}_{\operatorname{\acute{e}t}}^{\operatorname{hyp}}(\operatorname{\acute{E}t}_{k};\operatorname{Ab})\simeq\operatorname{Sh}_{\operatorname{\acute{e}t}}(\operatorname{\acute{E}t}_{k};\operatorname{Ab})\] of etale sheaves of abelian groups on \(k\). A map \(X\to Y\) of hypercomplete sheaves is an equivalence if and only if for each \(i\in\mathbb{Z}\), the induced map \(\pi_{i}^{\heartsuit}X\to\pi_{i}^{\heartsuit}Y\) is an isomorphism.

#### 4.6.4. Definition Let \(k\) be a field of exponential characteristic \(e\), let \(\ell\neq e\) be a prime, and let \(A\in\operatorname{Alg}(\operatorname{Sh}_{\operatorname{\acute{e}t}}^{\operatorname{hyp}}(\operatorname{\acute{E}t}_{k};\operatorname{Sp})^{\wedge}_{\ell})\). The _\(A\)-linear etale realization_ functor is the composite \[\operatorname{Re}_{\ell}(-;A)\colon\operatorname{SH}(k)\xrightarrow{\operatorname{Re}_{\ell}}\operatorname{Sh}_{\operatorname{\acute{e}t}}^{\operatorname{hyp}}(\operatorname{\acute{E}t}_{k};\operatorname{Sp})^{\wedge}_{\ell}\xrightarrow{A\otimes(-)}\operatorname{Mod}_{A}(\operatorname{Sh}_{\operatorname{\acute{e}t}}^{\operatorname{hyp}}(\operatorname{\acute{E}t}_{k};\operatorname{Sp})^{\wedge}_{\ell})\.\]

#### 4.6.5. Proposition _Let \(k\) be a field of exponential characteristic \(e\) and \(\ell\neq e\) a prime. Let \(A\in\operatorname{Alg}(\operatorname{Sh}_{\operatorname{\acute{e}t}}^{\operatorname{hyp}}(\operatorname{\acute{E}t}_{k};\operatorname{Sp})^{\wedge}_{\ell})\) be complex orientable. Then there exists a unique left adjoint_ \[\operatorname{W}_{*}\operatorname{Re}_{\ell}(-;A)\colon\operatorname{SH}(k)\longrightarrow\operatorname{Fil}(\operatorname{Sh}_{\operatorname{\acute{e}t}}^{\operatorname{hyp}}(\operatorname{\acute{E}t}_{k};\operatorname{Sp})^{\wedge}_{\ell})\] _such that for \(X\in\operatorname{Pure}(k)\) we have_ \[\operatorname{W}_{*}\operatorname{Re}_{\ell}(X;A)\simeq(\tau_{\geq*}\operatorname{Re}_{\ell}(X;A))^{\wedge}_{\ell}\,\] _the \(\ell\)-completion of the Whitehead cover of \(\operatorname{Re}_{\ell}(X;A)\) with respect to the standard t-structure._

Proof.: By Theorem 4.2.3, it suffices to show that if \(X\to Y\to Z\) is a cofiber sequence in \(\operatorname{Pure}(k)\), then \[\tau_{\geq*}(\operatorname{Re}_{\ell}(X;A))\to\tau_{\geq*}(\operatorname{Re}_{\ell}(Y;A))\to\tau_{\geq*}(\operatorname{Re}_{\ell}(Z;A))\] is a cofiber sequence in \(\operatorname{Fil}(\operatorname{Sh}_{\operatorname{\acute{e}t}}^{\operatorname{hyp}}(\operatorname{\acute{E}t}_{k};\operatorname{Sp})^{\wedge}_{\ell})\). Since there exists a map \(\operatorname{Re}_{\ell}(\operatorname{MGL})\to A\) of algebras in the homotopy category of \(\operatorname{Sh}_{\operatorname{\acute{e}t}}^{\operatorname{hyp}}(\operatorname{\acute{E}t}_{k};\operatorname{Sp})^{\wedge}_{\ell}\), the same argument as in Lemma 4.3.5 shows that \[\operatorname{Re}_{\ell}(X;A)\to\operatorname{Re}_{\ell}(Y;A)\to\operatorname{Re}_{\ell}(Z;A)\] is a split cofiber sequence, hence preserved by all additive functors, such as \(\tau_{\geq*}\).

#### 4.6.6.
#### 4.6.6. Definition We call the left adjoint functor \[\operatorname{W}_{*}\operatorname{Re}_{\ell}(-;A)\colon\operatorname{SH}(k)\to\operatorname{Fil}(\operatorname{Sh}_{\operatorname{\acute{e}t}}^{\operatorname{hyp}}(\operatorname{\acute{E}t}_{k};\operatorname{Sp})^{\wedge}_{\ell})\] of Proposition 4.6.5 the _filtered etale realization_. #### 4.6.7. Remark Since the standard t-structure on _hypercomplete_ sheaves is left separated, the same argument as in the Betti case covered in Corollary 4.3.17 shows that filtered etale realization inverts Chow-Novikov \(\infty\)-connective maps. ### Virtual Euler characteristics An old conjecture of Serre, first solved by Deligne using the weight filtration, is the existence of _virtual Euler characteristics_. These are invariants \[a_{i}(X;\mathbb{Q})\in\mathbb{Z}\] of a complex variety \(X\) uniquely determined by the following properties: 1. If \(X\) is smooth and proper, then \[a_{i}(X;\mathbb{Q})=\dim_{\mathbb{Q}}\operatorname{H}^{i}(X(\mathbb{C});\mathbb{Q})\.\] 2. If \(X\) is a variety with an open subvariety \(U\subseteq X\) with closed complement \(Z\subseteq X\), then \[a_{i}(X;\mathbb{Q})=a_{i}(U;\mathbb{Q})+a_{i}(Z;\mathbb{Q})\.\] Over a field of characteristic zero, these virtual Euler characteristics can be defined using Bittner's presentation of the Grothendieck ring of varieties [8]. In terms of the weight filtration on compactly supported cochains \(\operatorname{C}^{*}_{\operatorname{c}}(X(\mathbb{C});\mathbb{Q})\), the virtual Euler characteristic is given by the explicit formula \[a_{i}(X;\mathbb{Q})=(-1)^{i}\chi_{\mathbb{Q}}(\operatorname{gr}_{-i}\operatorname{C}^{*}_{\operatorname{c}}(X(\mathbb{C});\mathbb{Q}))\.\] Here, \[\operatorname{gr}_{i}\operatorname{C}^{*}_{\operatorname{c}}(X(\mathbb{C});\mathbb{Q}):=\operatorname{cofib}\left(\operatorname{W}_{i+1}\operatorname{C}^{*}_{\operatorname{c}}(X(\mathbb{C});\mathbb{Q})\to\operatorname{W}_{i}\operatorname{C}^{*}_{\operatorname{c}}(X(\mathbb{C});\mathbb{Q})\right)\] is the \(i\)-th graded piece of the weight filtration, and \(\chi_{\mathbb{Q}}\) denotes the Euler characteristic of a perfect \(\mathbb{Q}\)-module in spectra defined by the difference between the dimensions in even and odd degrees: \[\chi_{\mathbb{Q}}(P):=\dim_{\mathbb{Q}}\pi_{2*}(P)-\dim_{\mathbb{Q}}\pi_{2*+1}(P)\.\] Thus, analogous to the way that Khovanov homology categorifies the Jones polynomial [43], the weight filtration can be thought of as the "geometry" behind the virtual Euler characteristics.
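As a simple illustration of properties (1) and (2), consider \(\mathbb{G}_{m}=\mathbb{P}^{1}\setminus\{0,\infty\}\). Since \(\dim_{\mathbb{Q}}\operatorname{H}^{i}(\mathbb{P}^{1}(\mathbb{C});\mathbb{Q})=1\) for \(i\in\{0,2\}\) and vanishes otherwise, additivity gives \[a_{i}(\mathbb{G}_{m};\mathbb{Q})=a_{i}(\mathbb{P}^{1};\mathbb{Q})-a_{i}(\{0,\infty\};\mathbb{Q})=\begin{cases}1-2=-1&i=0\\ 1&i=2\\ 0&\text{otherwise.}\end{cases}\] In particular, the virtual Euler characteristics of a non-proper variety can be negative, so they are not dimensions of any cohomology groups of that variety.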
Besides ordinary cohomology, there are other complex oriented cohomology theories which behave like fields, known as the _Morava \(\operatorname{K}\)-theories_. For each prime \(p\) and integer \(n\geq 1\), we write \(\operatorname{K}(n)\) for the _height \(n\) Morava \(\operatorname{K}\)-theory_ at the (implicit) prime \(p\). In many ways, despite the fact that their ring of coefficients \[\operatorname{K}(n)_{*}\simeq\mathbb{F}_{p}[v_{n}^{\pm 1}]\quad\text{with}\quad\deg(v_{n})=2p^{n}-2\] is of positive characteristic, these cohomology theories behave like objects of characteristic zero; see [11, 32]. This makes Morava \(\operatorname{K}\)-theories useful, for example, in problems involving orientations of orbifolds, as in Abouzaid and Blumberg's breakthrough work on the Arnold conjecture in symplectic geometry [1]. Since the ring of coefficients \(\operatorname{K}(n)_{*}\) forms a graded field and is concentrated in even degrees, analogously to the case of rational cohomology one can define an Euler characteristic of a perfect \(\operatorname{K}(n)\)-module \(P\) by the formula \[\chi_{\operatorname{K}(n)}(P):=\dim_{\operatorname{K}(n)_{*}}(\pi_{2*}(P))-\dim_{\operatorname{K}(n)_{*}}(\pi_{2*+1}(P))\.\] When applied to \(\operatorname{K}(n)\)-cohomology of spaces, these Morava-Euler characteristics satisfy a host of useful properties, and at odd primes can be used to recover an interesting invariant of spaces called _homotopy cardinality_; see the work of Yanovski [64]. Since the \(\mathbf{E}_{1}\)-ring spectra \(\operatorname{K}(n)\) are complex orientable, one can show that the Euler characteristics defined by \[a_{i}(X;\operatorname{K}(n)):=\dim_{\mathbb{F}_{p}}\operatorname{K}(n)^{i}(X(\mathbb{C}))\] when \(X\) is smooth and proper satisfy Bittner's relation. It follows that they extend to a _virtual Morava-Euler characteristic_ defined on all complex varieties. We now show that the weight filtration on \(\operatorname{K}(n)\)-cohomology provided by Corollary 4.3.13 can be thought of as the "geometry" behind these virtual Morava-Euler characteristics. This also has the advantage of applying to etale cohomology, including in positive characteristic, where Bittner's theorem is not known to hold; see Remark 4.7.4. #### 4.7.1. Notation To keep the notation similar to the rational case, we write \[\operatorname{W}_{*}\operatorname{C}^{*}_{\operatorname{c}}(X(\mathbb{C});\operatorname{K}(n)):=\operatorname{W}_{*}\operatorname{Be}(\operatorname{M}_{\operatorname{c}}(X);\operatorname{K}(n))\] for the weight filtration on compactly supported \(\operatorname{K}(n)\)-linear cochains, by which we mean the filtered \(\operatorname{K}(n)\)-linear Betti realization of the compactly supported motive \(\operatorname{M}_{\operatorname{c}}(X)\) introduced in Definition 2.2.1. This is a \(\tau_{\geq*}\operatorname{K}(n)\)-module in filtered spectra. #### 4.7.2. Definition Let \(X\) be a complex variety. The _virtual Morava-Euler characteristic_ of \(X\) is defined by \[a_{i}(X;\mathrm{K}(n))=(-1)^{i}\chi_{\mathbb{F}_{p}}\big(\operatorname{gr}_{-i}\operatorname{C}_{\mathrm{c}}^{*}(X(\mathbb{C});\mathrm{K}(n))\big)\,\] the \(\mathbb{F}_{p}\)-Euler characteristic of the \(i\)-th graded piece of the weight filtration on compactly supported \(\mathrm{K}(n)\)-linear cochains. Note that since the associated graded of \(\tau_{\geq*}\mathrm{K}(n)\) is given by the homotopy groups \(\pi_{*}\mathrm{K}(n)\simeq\mathbb{F}_{p}[v_{n}^{\pm 1}]\), each graded piece of the weight filtration on \(\mathrm{K}(n)\)-linear cohomology is in particular a module over \(\pi_{0}\mathrm{K}(n)\simeq\mathbb{F}_{p}\), so that Definition 4.7.2 is well-defined.
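Granting Theorem 4.7.3 below, the computation of the virtual Euler characteristics of \(\mathbb{G}_{m}\) carries over to Morava K-theory. The complex orientation makes \(\operatorname{K}(n)^{*}(\mathbb{P}^{1}(\mathbb{C}))\) a free \(\operatorname{K}(n)^{*}\)-module on generators in degrees \(0\) and \(2\), so that for \(2p^{n}-2>2\) (i.e., excluding \(p=2\), \(n=1\)) we have \(a_{0}(\mathbb{P}^{1};\operatorname{K}(n))=a_{2}(\mathbb{P}^{1};\operatorname{K}(n))=1\), and additivity yields \[a_{0}(\mathbb{G}_{m};\operatorname{K}(n))=1-2=-1\qquad\text{and}\qquad a_{2}(\mathbb{G}_{m};\operatorname{K}(n))=1\,\] in agreement with the rational computation above.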
**4.7.3 Theorem**.: _The virtual Morava-Euler characteristic of Definition 4.7.2 has the following properties:_ 1. _If_ \(X\) _is smooth and proper, then_ \[a_{i}(X;\mathrm{K}(n))=\dim_{\mathbb{F}_{p}}\mathrm{K}(n)^{i}(X(\mathbb{C}))\.\] 2. _If_ \(X\) _is a variety with an open subvariety_ \(U\subseteq X\) _with closed complement_ \(Z\subseteq X\)_, then_ \[a_{i}(X;\mathrm{K}(n))=a_{i}(U;\mathrm{K}(n))+a_{i}(Z;\mathrm{K}(n))\.\] Proof.: If \(X\) is smooth and proper, then as observed in Example 2.2.2, the motive of \(X\) can be identified with the Thom spectrum \(\operatorname{Th}_{X}(-\mathrm{T}_{X})\) of the negative tangent bundle. It follows from the definition of the filtered Betti realization on pure motives as the Postnikov filtration that \[\operatorname{gr}_{-i}\operatorname{C}_{\mathrm{c}}^{*}(X(\mathbb{C});\mathrm{K}(n))\simeq\Sigma^{-i}\pi_{-i}(\operatorname{Be}(\operatorname{Th}_{X}(-\mathrm{T}_{X}))\otimes\mathrm{K}(n)).\] It follows that if \(X\) is smooth and proper, then \[a_{i}(X;\mathrm{K}(n))=\dim_{\mathbb{F}_{p}}\mathrm{K}(n)_{-i}(\operatorname{Be}(\operatorname{Th}_{X}(-\mathrm{T}_{X})))=\dim_{\mathbb{F}_{p}}\mathrm{K}(n)_{-i}(\operatorname{Th}_{X(\mathbb{C})}(-\mathrm{T}_{X(\mathbb{C})}))=\dim_{\mathbb{F}_{p}}\mathrm{K}(n)^{i}(X(\mathbb{C}))\,\] where the second equality is the fact that the Betti realization takes Thom spectra to Thom spectra, and the last one is Atiyah duality. This gives the first claimed property. The second property is an immediate consequence of the localization cofiber sequence \[\operatorname{M}_{\mathrm{c}}(U)\to\operatorname{M}_{\mathrm{c}}(X)\to\operatorname{M}_{\mathrm{c}}(Z)\] of Lemma 2.2.10, exactness of filtered Betti realization, and the fact that the Euler characteristic is additive in cofiber sequences. #### 4.7.4. Remark (etale Morava-Euler characteristics) One can also define analogues of Morava K-theories in the context of etale realization; for example, as etale realizations of Voevodsky's algebraic Morava K-theories. These will also be complex orientable, and a variation on Definition 4.7.2 will also yield a Morava-Euler characteristic in the context of etale cohomology. Since etale Morava K-theories have received comparatively little attention in the literature compared to their topological cousins, we decided against writing this section at this level of generality. ## 5. Descent and the Gillet-Soule filtration In this section, we show that the filtration on compactly supported cohomology given by the filtered Betti realization functor can be calculated through an appropriate hypercover. As a consequence, we deduce that our filtration on integral cohomology of a complex variety agrees with the one constructed by Gillet-Soule in [26]. The key geometric inputs needed to establish the hypercover formula are Kelly's \(\ell\)dh-topology on schemes [40] and a result of Geisser on \(\ell\)dh-hypercovers [24, Theorem 1.2]. In §5.1, we review background on the \(\ell\)dh-topology. In §5.2, we prove that Borel-Moore homology with coefficients in an orientable motivic spectrum satisfies \(\ell\)dh-hyperdescent; see Theorem 5.2.3. In §5.3, we use \(\ell\)dh-hypercovers to calculate the weight filtration on Borel-Moore homology; see Theorem 5.3.4. In §5.4, we use our perspective on filtrations to recover the Gillet-Soule weight filtration on the compactly supported integral cochains on a complex variety; see Theorem 5.4.8. ### Background on the cdh-topology and \(\ell\)dh-topology We briefly review the necessary background on the cdh- and \(\ell\)dh-topologies. For more background, see [20, §2; 41] and [40], respectively. #### 5.1.1. **Recollection** (cdp- and cdh-topologies) 1. A family of morphisms of schemes \(\{p_{i}\colon X_{i}^{\prime}\to X\}_{i\in I}\) is _completely decomposed_ if for each \(x\in X\) there exists an \(i\in I\) and a point \(x^{\prime}\in p_{i}^{-1}(x)\) such that the induced map of residue fields \(\kappa(x)\to\kappa(x^{\prime})\) is an isomorphism.
2. The _cdp-topology_ on the category of qcqs schemes is defined as follows: a sieve on a qcqs scheme \(X\) is a _cdp-covering sieve_ if and only if it contains a completely decomposed family \(\{p_{i}\colon X_{i}^{\prime}\to X\}_{i\in I}\) where each \(p_{i}\) is proper and of finite presentation. 3. The _cdh-topology_ is the topology generated by the cdp-topology and the Nisnevich topology. Also recall that every motivic spectrum satisfies cdh-descent [36, Corollary 6.25]. Moreover, for a field \(k\), every cdh-sheaf over \(k\) is automatically a cdh-hypersheaf [20, Corollary 2.4.16]. #### 5.1.2. **Recollection** (\(\ell\)dh-topology) Let \(\ell\) be a prime number. 1. A morphism of schemes \(p\colon X^{\prime}\to X\) is an _fps\(\ell^{\prime}\)-cover_ if \(p\) is finite flat and surjective, and \(p_{*}\mathcal{O}_{X^{\prime}}\) is a free \(\mathcal{O}_{X}\)-module of rank prime to \(\ell\). 2. The _\(\ell\)dh-topology_ is the topology generated by the cdh-topology and fps\(\ell^{\prime}\)-covers. #### 5.1.3. **Definition** Let \(X\) be a scheme and let \(p\colon\Delta^{\mathrm{op}}\to\mathrm{Sch}_{/X}\) be a simplicial \(X\)-scheme. We say that \(p\) is a _cdh-hypercover_ (respectively, an _\(\ell\)dh-hypercover_) if for each \(i\geq 0\), the induced map \[X_{i}\to(\operatorname{cosk}_{i-1}^{X}X_{\bullet})_{i}\] is a cdh-cover (respectively, an \(\ell\)dh-cover). #### 5.1.4. **Remark** Unwrapping the definition of the coskeleton, we see that \(p\) is a hypercover if and only if for each \(i\geq 0\), the matching maps \[X_{0}\to X\,\quad X_{1}\to X_{0}\times_{X}X_{0}\,\quad X_{2}\to\cdots\] are coverings. ### Hyperdescent for orientable Borel-Moore homology In this subsection, we show that Borel-Moore homology with respect to an orientable motivic spectrum satisfies \(\ell\)dh-hyperdescent. #### 5.2.1. **Notation** Throughout this subsection, we fix a base field \(k\) of exponential characteristic \(e\) and a prime \(\ell\neq e\). #### 5.2.2. Notation In Definition 2.2.1 we attached to a variety \(p\colon X\to\operatorname{Spec}(k)\) a motivic spectrum \[\operatorname{M_{c}}(X):=p_{!}(\mathbf{1}_{X})\.\] By Corollary 2.2.12, this motivic spectrum is dualizable away from the characteristic, and we write \[\operatorname{M_{c}}(X)^{\vee}_{(\ell)}\in\operatorname{SH}(k)_{(\ell)}\] for the \(\ell\)-local monoidal dual. The rest of this subsection is devoted to the proof of the following result. **5.2.3 Theorem**.: _If \(X_{\bullet}\to X\) is an \(\ell\)dh-hypercover of \(k\)-schemes, then the natural map_ \[\operatorname*{colim}_{\Delta^{\operatorname{op}}}\operatorname{M_{c}}(X_{\bullet})^{\vee}_{(\ell)}\to\operatorname{M_{c}}(X)^{\vee}_{(\ell)} \tag{5.2.4}\] _is an \(\operatorname{MGL}\)-local equivalence; that is, the map (5.2.4) becomes an equivalence after tensoring with \(\operatorname{MGL}\). In particular, the map (5.2.4) is \(\infty\)-connective with respect to the Chow-Novikov \(\operatorname{t}\)-structure._ **5.2.5 Remark**.: Note that a cdh-hypercover is an \(\ell\)dh-hypercover for all \(\ell\). Hence, if \(X_{\bullet}\to X\) is a cdh-hypercover, then the \(\ell\)-localization in Theorem 5.2.3 can be replaced by localization away from the exponential characteristic \(e\). That is, the map \[\operatorname*{colim}_{\Delta^{\operatorname{op}}}\operatorname{M_{c}}(X_{\bullet})[\nicefrac{{1}}{{e}}]^{\vee}\to\operatorname{M_{c}}(X)[\nicefrac{{1}}{{e}}]^{\vee}\] is also an \(\operatorname{MGL}\)-local equivalence.
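To unwind the statement in the simplest nontrivial case, let \(p\colon X^{\prime}\to X\) be a single fps\(\ell^{\prime}\)-cover. Its Čech nerve \(X_{n}=X^{\prime\times_{X}(n+1)}\) is an \(\ell\)dh-hypercover of \(X\), and Theorem 5.2.3 then asserts that the canonical map \[\operatorname*{colim}_{[n]\in\Delta^{\operatorname{op}}}\operatorname{M_{c}}\big(X^{\prime\times_{X}(n+1)}\big)^{\vee}_{(\ell)}\longrightarrow\operatorname{M_{c}}(X)^{\vee}_{(\ell)}\] becomes an equivalence after tensoring with \(\operatorname{MGL}\); in other words, \(\operatorname{MGL}\)-linear Borel-Moore homology of \(X\) can be computed from that of the fiber powers of \(X^{\prime}\).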
The proof of Theorem 5.2.3 is somewhat involved and occupies the remainder of this subsection. Our argument can be informally divided into three parts: 1. First, we show that Theorem 5.2.3 follows from an \(\ell\)dh-hyperdescent statement in Borel-Moore \(\operatorname{MGL}\)-homology. This is Lemma 5.2.8. 2. We then use the homotopy \(\operatorname{t}\)-structure to prove connectivity estimates on Borel-Moore homology of varieties with respect to a connective, orientable homology theory. This is Lemma 5.2.14. 3. Finally, we use Spitzweck's calculation of the slices of \(\operatorname{MGL}\) and our connectivity estimates to show that \(\ell\)dh-hyperdescent for motivic cohomology implies \(\ell\)dh-hyperdescent for \(\operatorname{MGL}\). For motivic cohomology the needed hyperdescent statement was proven by Geisser [24, Theorem 1.2], and later generalized by Kelly [40, Theorem 4.0.13]. **5.2.6 Convention**.: For the rest of this section, we work \(\ell\)-locally, and all motivic spectra are implicitly localized at \(\ell\). We begin with part (1), where it is convenient to employ the following notation. **5.2.7 Notation**.: If \(E\) is an \(\ell\)-local motivic spectrum, we write \[E_{X}^{\operatorname{BM}}:=E\otimes\operatorname{M_{c}}(X)^{\vee}_{(\ell)}.\] This is justified by Observation 2.2.6, since we have equivalences \[\pi_{p,q}(E_{X}^{\operatorname{BM}})\simeq[\operatorname{S}^{p,q},E\otimes\operatorname{M_{c}}(X)^{\vee}_{(\ell)}]\simeq[\Sigma^{p,q}\operatorname{M_{c}}(X),E]\simeq E_{p,q}^{\operatorname{BM}}(X)\.\] Note that if \(X\) is smooth and projective, then Observation 2.2.3 shows that \[E_{X}^{\operatorname{BM}}\simeq E\otimes\Sigma_{+}^{\infty}X\.\] **5.2.8 Lemma**.: _Assume that the following condition is satisfied:_ \((*)\) _For any \(k\)-scheme \(X\), any \(\ell\)dh-hypercover \(X_{\bullet}\to X\), and any \(s\in\mathbb{Z}\), the canonical comparison map of spectra_ \[\operatorname{colim}\operatorname{map}_{\operatorname{SH}(k)}(\operatorname{S}^{2s,s},\operatorname{MGL}_{X_{\bullet}}^{\operatorname{BM}})\to\operatorname{map}_{\operatorname{SH}(k)}(\operatorname{S}^{2s,s},\operatorname{MGL}_{X}^{\operatorname{BM}})\] _is an equivalence. Then Theorem 5.2.3 holds._ Proof.: In terms of Notation 5.2.7, Theorem 5.2.3 is equivalent to showing that the natural map of \(\operatorname{MGL}\)-modules \[\operatorname{colim}\operatorname{MGL}^{\operatorname{BM}}_{X_{\bullet}}\to\operatorname{MGL}^{\operatorname{BM}}_{X} \tag{5.2.9}\] is an equivalence. Since all \(\operatorname{MGL}\)-local equivalences are Chow-Novikov \(\infty\)-connective [7, Corollary 3.17], the second part of Theorem 5.2.3 follows from the first. By [23, Theorem 2.2.9], the spectral Yoneda embedding induces an equivalence between the \(\infty\)-category of \(\operatorname{MGL}\)-modules and spectral presheaves on the thick subcategory generated by modules of the form \(\operatorname{MGL}\otimes S\), where \(S\in\operatorname{Pure}(k)\). Thus, (5.2.9) is an equivalence if and only if for any \(S\in\operatorname{Pure}(k)\), the map \[\operatorname{colim}\operatorname{map}_{\operatorname{MGL}}(\operatorname{MGL}\otimes S,\operatorname{MGL}^{\operatorname{BM}}_{X_{\bullet}})\to\operatorname{map}_{\operatorname{MGL}}(\operatorname{MGL}\otimes S,\operatorname{MGL}^{\operatorname{BM}}_{X}) \tag{5.2.10}\] is an equivalence.
Since \(\operatorname{MGL}\) is orientable, \(\operatorname{MGL}\)-linear perfect pure motives are generated as a thick subcategory by modules of the form \[\Sigma^{2(d+s),d+s}\operatorname{MGL}^{\operatorname{BM}}_{Y}\,\] where \(Y\) is a smooth projective variety of dimension \(d\) and \(s\in\mathbb{Z}\). For any variety \(Z\), we then have \[\operatorname{map}_{\operatorname{MGL}}(\Sigma^{2(d+s),d+s}\operatorname{MGL}^{\operatorname{BM}}_{Y},\operatorname{MGL}^{\operatorname{BM}}_{Z})\simeq\operatorname{map}_{\operatorname{MGL}}(\Sigma^{2s,s}\operatorname{MGL},\operatorname{MGL}^{\operatorname{BM}}_{Y\times Z})\simeq\operatorname{map}_{\operatorname{SH}(k)}(\operatorname{S}^{2s,s},\operatorname{MGL}^{\operatorname{BM}}_{Y\times Z})\.\] Thus, to show that (5.2.10) is an equivalence it is enough to show that for each smooth projective variety \(Y\) and integer \(s\), the map \[\operatorname{colim}\operatorname{map}_{\operatorname{SH}(k)}(\operatorname{S}^{2s,s},\operatorname{MGL}^{\operatorname{BM}}_{Y\times X_{\bullet}})\to\operatorname{map}_{\operatorname{SH}(k)}(\operatorname{S}^{2s,s},\operatorname{MGL}^{\operatorname{BM}}_{Y\times X}) \tag{5.2.11}\] is an equivalence. Since \(Y\times X_{\bullet}\to Y\times X\) is again an \(\ell\)dh-hypercover, the conclusion follows from assumption \((*)\). We now proceed with the second step of the proof, which is a vanishing result for Borel-Moore homology of varieties. The vanishing holds for motivic spectra that are connective with respect to the _homotopy \(\operatorname{t}\)-structure_, which we now recall. **5.2.12**.: **Recollection** (homotopy \(\operatorname{t}\)-structure). Write \[\operatorname{SH}(k)_{\geq 0}\subseteq\operatorname{SH}(k)\] for the full subcategory generated under colimits and extensions by \(\Sigma^{p,q}\Sigma_{+}^{\infty}X\) for \(X\in\operatorname{Sm}_{k}\) and \(p>q\). The subcategory \(\operatorname{SH}(k)_{\geq 0}\) defines the connective part of a unique \(\operatorname{t}\)-structure on \(\operatorname{SH}(k)\) called the _homotopy \(\operatorname{t}\)-structure_. This \(\operatorname{t}\)-structure has the following two properties, both proven in [35, Corollary 2.4]: 1. The homotopy \(\operatorname{t}\)-structure is left complete. That is, the natural functor \[\operatorname{SH}(k)\to\lim\Big(\cdots\xrightarrow{\tau_{\leq 2}}\operatorname{SH}(k)_{\leq 2}\xrightarrow{\tau_{\leq 1}}\operatorname{SH}(k)_{\leq 1}\xrightarrow{\tau_{\leq 0}}\operatorname{SH}(k)_{\leq 0}\Big)\] is an equivalence. Hence the homotopy \(\operatorname{t}\)-structure is left separated, i.e., \(\bigcap_{d\in\mathbb{Z}}\operatorname{SH}(k)_{\geq d}=0\). 2. If \(E\) is connective, then for any smooth variety \(X\), for \(p>q+\dim(X)\) we have (5.2.13) \[E^{p,q}(X)\simeq[\Sigma^{-p,-q}\Sigma_{+}^{\infty}X,E]=0\.\] **5.2.14 Lemma**.: _Let \(E\in\operatorname{SH}(k)[\nicefrac{{1}}{{e}}]\) be a connective motivic spectrum that admits a structure of an \(\operatorname{MGL}\)-module. Then for any variety \(X\) and integers \(p<q\), we have_ \[E^{\operatorname{BM}}_{p,q}(X)=0\.\] Proof.: Let us first assume that \(k\) is perfect. Recall that \(E^{\operatorname{BM}}_{p,q}(X)\simeq[\Sigma^{p,q}\mathrm{M}_{\mathrm{c}}(X),E]\) and write \(\mathcal{C}\subseteq\operatorname{SH}(k)[\nicefrac{{1}}{{e}}]\) for the full subcategory of motivic spectra \(A\) such that for all \(p<q\), we have \[[\Sigma^{p,q}A,E]=0\.\] It suffices to show that \(\mathcal{C}\) satisfies the hypotheses of Lemma 2.2.11.
Since \(\mathcal{C}\) is closed under extensions, fibers, and retracts, it is enough to show that if \(X\) is a smooth projective \(k\)-scheme, then \(\mathrm{M}_{\mathrm{c}}(X)[\nicefrac{{1}}{{e}}]\in\mathcal{C}\). In this case, Example 2.2.2 shows that \[\mathrm{M}_{\mathrm{c}}(X) \simeq\mathrm{Th}_{X}(-\mathrm{T}_{X})\.\] Hence we have a string of isomorphisms \[E^{\operatorname{BM}}_{p,q}(X) \simeq[\Sigma^{p,q}(\mathrm{Th}_{X}(-\mathrm{T}_{X})),E]\] \[\simeq[\operatorname{MGL}\otimes\Sigma^{p,q}(\mathrm{Th}_{X}(- \mathrm{T}_{X})),E]_{\operatorname{MGL}}\,\] where the final term denotes homotopy classes of maps of \(\operatorname{MGL}\)-modules. Write \(d:=\dim(X)\); using the Thom isomorphism we can further rewrite the right-hand side as \[[\operatorname{MGL}\otimes\Sigma^{p-2d,q-d}(\Sigma_{+}^{\infty}X),E]_ {\operatorname{MGL}} \simeq[\Sigma^{p-2d,q-d}(\Sigma_{+}^{\infty}X),E]\] \[\simeq E^{2d-p,d-q}(X)\.\] As observed in Recollection 5.2.12, the right-hand side vanishes when \(2d-p>d-q+d\), which translates to \(p<q\), as needed. If \(k\) is not perfect, then write \(k\to k^{\prime}\) for the perfection of \(k\). As in the proof of Corollary 2.2.12, we reduce to the perfect case by using the equivalence \(\operatorname{SH}(k)[\nicefrac{{1}}{{e}}]\simeq\operatorname{SH}(k^{\prime})[ \nicefrac{{1}}{{e}}]\) of [21, Corollary 2.1.7]. We now proceed with the third step of the proof, which reduces from \(\operatorname{MGL}\)-homology to motivic cohomology. We need to make use of the slice tower, which we now recall. **5.2.15 Recollection** (effective covers & slice filtration).: Let \(E\) be a motivic spectrum and \(r\in\mathbb{Z}\). We write \(\mathrm{f}_{r}E\) for the _\(r\)-th effective cover_ of \(E\). These effective covers give rise to a functorial filtration \[\cdots\to\mathrm{f}_{1}E\to\mathrm{f}_{0}E\to\mathrm{f}_{-1}E\to\cdots\to E\.\] We write \[\mathrm{s}_{r}E:=\operatorname{cofib}(\mathrm{f}_{r+1}E\to\mathrm{f}_{r}E)\] for the _\(r\)-th slice_. We also write \[\mathrm{c}_{r}E:=\operatorname{cofib}(\mathrm{f}_{r+1}E\to E)\.\] **5.2.16 Recollection** (slices of \(\operatorname{MGL}\)).: The spectrum \(\operatorname{MGL}\) is \(0\)-effective, i.e., \(\mathrm{f}_{0}\mathrm{MGL}\simeq\operatorname{MGL}\)[61, Corollary 3.2]. Assuming the Hopkins-Morel equivalence, Spitzweck calculated the slices of \(\operatorname{MGL}\) as \[s_{r}\mathrm{MGL}\simeq\mathrm{M}(\pi_{2r}\mathrm{MU})\, \tag{5.2.17}\] where on the right-hand side we have the motivic cohomology spectrum associated to \(\pi_{2r}\mathrm{MU}\), which is a free abelian group of finite rank [61, Theorem 4.7]. The Hopkins-Morel equivalence was subsequently proven by Hoyois away from the characteristic [35], showing that (5.2.17) holds \(\ell\)-locally. **5.2.18 Lemma**.: _Let \(X\) be a \(k\)-variety. Then for \(p<q+r\), the canonical map_ \[(\operatorname{MGL}_{(\ell)})^{\operatorname{BM}}_{p,q}(X)\to(\mathrm{c}_{r} \mathrm{MGL}_{(\ell)})^{\operatorname{BM}}_{p,q}(X)\] _is an isomorphism._ Proof.: By a result of Spitzweck [61, Proof of Theorem 4.7], the \((r+1)\)-st effective cover \(\mathrm{f}_{r+1}\mathrm{MGL}_{(\ell)}\) is a colimit of spectra of the form \(\Sigma^{2(r+1),r+1}\mathrm{MGL}_{(\ell)}\). In particular, \(\mathrm{f}_{r+1}\mathrm{MGL}_{(\ell)}\) is \((r+1)\)-connective in the homotopy t-structure. The desired result now follows from the cofiber sequence \[\mathrm{f}_{r+1}\mathrm{MGL}_{(\ell)}\to\mathrm{MGL}_{(\ell)}\to\mathrm{c}_{r} \mathrm{MGL}_{(\ell)}\] and Lemma 5.2.14. We now complete the promised argument. 
Proof of Theorem 5.2.3.: Throughout the proof, we implicitly work \(\ell\)-locally and drop the \(\ell\)-localization from notation. By Lemma 5.2.8, it is enough to show that if \(X_{\bullet}\to X\) is an \(\ell\)dh-hypercover and \(s\in\mathbb{Z}\), then the natural map \[\mathrm{colim}\,\mathrm{map}_{\mathrm{SH}(k)}(\mathrm{S}^{2s,s},\mathrm{MGL}_{X_{\bullet}}^{\mathrm{BM}})\to\mathrm{map}_{\mathrm{SH}(k)}(\mathrm{S}^{2s,s},\mathrm{MGL}_{X}^{\mathrm{BM}})\] is an equivalence. As the standard t-structure on spectra is right complete, a diagram \(F\colon\mathcal{C}^{\triangleright}\to\mathrm{Sp}\) is a colimit diagram if and only if for each \(m\in\mathbb{Z}\), the diagram \[(\tau_{\geq m}\circ F)\colon\mathcal{C}^{\triangleright}\to\mathrm{Sp}_{\geq m}\] of \(m\)-connective spectra is a colimit diagram. Thus, the map \[\mathrm{colim}\,\mathrm{map}_{\mathrm{SH}(k)}(\mathrm{S}^{2s,s},\mathrm{MGL}_{X_{\bullet}}^{\mathrm{BM}})\to\mathrm{map}_{\mathrm{SH}(k)}(\mathrm{S}^{2s,s},\mathrm{MGL}_{X}^{\mathrm{BM}})\] is an equivalence if and only if for each \(m\in\mathbb{Z}\), the induced map of spectra \[\mathrm{colim}(\tau_{\geq m}\,\mathrm{map}_{\mathrm{SH}(k)}(\mathrm{S}^{2s,s},\mathrm{MGL}_{X_{\bullet}}^{\mathrm{BM}}))\to\tau_{\geq m}\,\mathrm{map}_{\mathrm{SH}(k)}(\mathrm{S}^{2s,s},\mathrm{MGL}_{X}^{\mathrm{BM}}) \tag{5.2.19}\] has an \((m+1)\)-connective cofiber. By Lemma 5.2.18, for all \(k\)-varieties \(Z\) and integers \(j<r-s\), the map \[\pi_{j}\,\mathrm{map}_{\mathrm{SH}(k)}(\mathrm{S}^{2s,s},\mathrm{MGL}_{Z}^{\mathrm{BM}})\to\pi_{j}\,\mathrm{map}_{\mathrm{SH}(k)}(\mathrm{S}^{2s,s},\mathrm{c}_{r}\mathrm{MGL}_{Z}^{\mathrm{BM}})\] is an isomorphism. Thus, if \(r>m+s\), then the map (5.2.19) is equivalent to the map \[\mathrm{colim}(\tau_{\geq m}\,\mathrm{map}_{\mathrm{SH}(k)}(\mathrm{S}^{2s,s},(\mathrm{c}_{r}\mathrm{MGL})_{X_{\bullet}}^{\mathrm{BM}}))\to\tau_{\geq m}\,\mathrm{map}_{\mathrm{SH}(k)}(\mathrm{S}^{2s,s},(\mathrm{c}_{r}\mathrm{MGL})_{X}^{\mathrm{BM}})\.\] Thus it suffices to show that for each \(r\in\mathbb{Z}\) the map \[\mathrm{colim}\,\mathrm{map}_{\mathrm{SH}(k)}(\mathrm{S}^{2s,s},(\mathrm{c}_{r}\mathrm{MGL})_{X_{\bullet}}^{\mathrm{BM}})\to\mathrm{map}_{\mathrm{SH}(k)}(\mathrm{S}^{2s,s},(\mathrm{c}_{r}\mathrm{MGL})_{X}^{\mathrm{BM}})\] is an equivalence. In other words, we have to show \(\ell\)dh-hyperdescent for \(\mathrm{c}_{r}\mathrm{MGL}\)-Borel-Moore homology of varieties. As we observed in Recollection 5.2.16, by a result of Spitzweck the slices of algebraic cobordism are given by suspensions of motivic cohomology associated to finitely generated abelian groups. It follows that \(\mathrm{c}_{r}\mathrm{MGL}\) belongs to the smallest thick subcategory containing the motivic cohomology spectrum \(\mathrm{M}\mathbb{Z}\) and closed under bigraded suspensions. Thus it suffices to show that \[\mathrm{colim}\,\mathrm{map}_{\mathrm{SH}(k)}(\mathrm{S}^{2s,s},(\mathrm{M}\mathbb{Z})_{X_{\bullet}}^{\mathrm{BM}})\to\mathrm{map}_{\mathrm{SH}(k)}(\mathrm{S}^{2s,s},(\mathrm{M}\mathbb{Z})_{X}^{\mathrm{BM}})\] is an equivalence; in other words, that \(\ell\)-localized motivic cohomology of varieties satisfies \(\ell\)dh-hyperdescent. Since \(\ell\)-localized motivic cohomology has transfers along finite flat morphisms, this follows from a theorem of Geisser [24, Theorem 1.2]; see also a generalization due to Kelly [40, Theorem 4.0.13].
### The weight filtration on Borel-Moore homology via \(\ell\)dh-hyperdescent In this subsection, we explain how Theorem 5.2.3 allows one to calculate the weight filtration on Borel-Moore homology using \(\ell\)dh-hypercovers. #### 5.3.1. Recollection If \(X\) is a complex variety and \(A\in\operatorname{Alg}(\operatorname{Sp})\) is an algebra in spectra, then the Borel-Moore homology of the topological space \(X(\mathbb{C})\) with coefficients in \(A\) can be identified with the homotopy of the Betti realization of the monoidal dual of the compactly supported motive \(\operatorname{M_{c}}(X)\) of Definition 2.2.1: \[\operatorname{H_{*}^{BM}}(X(\mathbb{C});A)\simeq\pi_{*}\mathrm{Be}(\operatorname{M_{c}}(X)^{\vee};A)\.\] If \(A\) is complex orientable, then Corollary 4.3.13 gives a canonical lift of \(\mathrm{Be}(\operatorname{M_{c}}(X)^{\vee};A)\) to a filtered spectrum \[\operatorname{W_{*}Be}(\operatorname{M_{c}}(X)^{\vee};A)\in\operatorname{Mod}_{\tau_{\geq*}A}(\operatorname{FilSp})\.\] Hence this filtration induces a _weight filtration_ on the Borel-Moore homology groups \(\operatorname{H_{*}^{BM}}(X(\mathbb{C});A)\). Analogously, if \(k\) is an arbitrary field and if \(A\in\operatorname{Alg}(\operatorname{Sh}_{\operatorname{\acute{e}t}}^{\operatorname{hyp}}(\operatorname{\acute{E}t}_{k};\operatorname{Sp})_{\ell}^{\wedge})\) is complex orientable, then to any \(k\)-variety \(X\) we can associate a hypercomplete etale sheaf of spectra \[\mathrm{Re}_{\ell}(\operatorname{M_{c}}(X)^{\vee}_{(\ell)};A)\.\] This hypersheaf inherits a filtration from the filtered etale realization of Definition 4.6.6. #### 5.3.2. Notation To treat both the Betti and etale cases uniformly, for a variety \(X\) and \(A\) as in Recollection 5.3.1 we write \[\operatorname{C_{*}^{BM}}(X;A):=\begin{cases}\mathrm{Be}(\operatorname{M_{c}}(X)^{\vee};A)&(\text{Betti})\\ \mathrm{Re}_{\ell}(\operatorname{M_{c}}(X)^{\vee}_{(\ell)};A)&(\text{etale})\end{cases}\] and write \(\operatorname{W_{*}C_{*}^{BM}}(X;A)\) for the corresponding filtered object. #### 5.3.3. Recollection Every proper variety over \(k\) admits an \(\ell\)dh-hypercover \(X_{\bullet}\to X\) in which each \(X_{i}\) is smooth and proper. **5.3.4 Theorem**.: _Let \(A\) be as in Recollection 5.3.1 and let \(X_{\bullet}\to X\) be an \(\ell\)dh-hypercover in which each \(X_{i}\) is smooth and proper. Then the canonical map_ \[\operatorname*{colim}_{[i]\in\Delta^{\operatorname{op}}}\tau_{\geq*}\operatorname{C_{*}^{BM}}(X_{i};A)\longrightarrow\operatorname{W_{*}C_{*}^{BM}}(X;A)\] _is an equivalence of filtered objects._ Proof.: Consider the chain of maps \[\operatorname*{colim}_{[i]\in\Delta^{\operatorname{op}}}\tau_{\geq*}\operatorname{C_{*}^{BM}}(X_{i};A)\simeq\operatorname*{colim}_{[i]\in\Delta^{\operatorname{op}}}\operatorname{W_{*}C_{*}^{BM}}(X_{i};A)\longrightarrow\operatorname{W_{*}C_{*}^{BM}}(X;A)\.\] The left-hand equivalence holds because each \(X_{i}\) is smooth and proper, so that \(\operatorname{M_{c}}(X_{i})^{\vee}\) is perfect pure and the filtered realization restricts to the Whitehead filtration. For the right-hand map, note that by Theorem 5.2.3 the comparison map \(\operatorname*{colim}_{\Delta^{\operatorname{op}}}\operatorname{M_{c}}(X_{\bullet})^{\vee}_{(\ell)}\to\operatorname{M_{c}}(X)^{\vee}_{(\ell)}\) is Chow-Novikov \(\infty\)-connective; since the filtered realization is a left adjoint which inverts Chow-Novikov \(\infty\)-connective maps (Corollary 4.3.17 and Remark 4.6.7), it takes this map to an equivalence. Hence the right-hand equivalence follows. **5.3.6 Corollary**.: _Let \(X\) be a proper complex variety, let \(X_{\bullet}\to X\) be as in Theorem 5.3.4, and let \(A\in\operatorname{Alg}(\operatorname{Sp})\) be complex orientable. Then the filtration on_ \[\operatorname{H}_{*}^{\operatorname{BM}}(X(\mathbb{C});A)\simeq\pi_{*}\mathrm{C}_{*}^{\operatorname{BM}}(X;A)\] _induced from the weight filtration on the left-hand side coincides with the filtration induced by the hypercover spectral sequence_ \[\mathrm{E}_{s,t}^{1}:=\operatorname{H}_{t}^{\operatorname{BM}}(X_{s}(\mathbb{C});A)\Rightarrow\operatorname{H}_{s+t}^{\operatorname{BM}}(X(\mathbb{C});A)\.\] Proof.: The filtered spectrum \(\operatorname{colim}_{[i]\in\Delta^{\mathrm{op}}}\tau_{\geq*}\mathrm{C}_{*}^{\operatorname{BM}}(X_{i};A)\) appearing in Theorem 5.3.4 can be identified with Deligne's decalage of the simplicial spectrum \(\mathrm{C}_{*}^{\operatorname{BM}}(X_{\bullet};A)\). See [2, §9; 51, §1.2.4]. By a result of Levine [46, Proposition 6.3], the resulting filtration on the homotopy groups of the colimit coincides with the one induced by the spectral sequence of geometric realization. As observed in Recollection 5.3.3, any proper variety admits an \(\ell\)dh-hypercover by smooth varieties. Hence Theorem 5.3.4 provides a way to explicitly calculate the weight filtration on Borel-Moore homology. If \(U\) is not necessarily proper, then the weight filtration can be calculated as follows. **5.3.7 Proposition**.: _Let \(X\) be a proper variety and \(Z\subseteq X\) a closed subvariety with open complement \(U\)._
_Then the induced maps on Borel-Moore homology form a canonical cofiber sequence_ \[\mathrm{W}_{*}\mathrm{C}_{*}^{\operatorname{BM}}(Z;A)\to\mathrm{W}_{*}\mathrm{C}_{*}^{\operatorname{BM}}(X;A)\to\mathrm{W}_{*}\mathrm{C}_{*}^{\operatorname{BM}}(U;A).\] _In particular, the weight filtration on \(\mathrm{C}_{*}^{\operatorname{BM}}(U;A)\) is canonically determined by the weight filtrations on \(\mathrm{C}_{*}^{\operatorname{BM}}(X;A)\) and \(\mathrm{C}_{*}^{\operatorname{BM}}(Z;A)\)._ Proof.: Immediate from the localization sequence of Lemma 2.2.10 and the fact that the filtered realization is exact. ### Filtration on cohomology and the comparison with the Gillet-Soule filtration In this subsection, we apply Theorem 5.2.3 to compare the filtration on compactly supported integral cohomology of a complex variety with the _Gillet-Soule filtration_ introduced in [26]. Recall that the Gillet-Soule filtration refines Deligne's weight filtration on rational cohomology [17]. **5.4.1 Warning** (there are two different filtrations).: There are _two_ filtrations one can construct on cohomology using the filtered realization functors introduced in this paper. To avoid complicating notation, let us focus on the Betti case; the discussion applies equally well to the filtered etale realization. If \(A\in\operatorname{CAlg}(\operatorname{Sp})\) is complex orientable and \(X\) is a complex variety, then we have an identification \[\operatorname{H}_{\mathrm{c}}^{*}(X(\mathbb{C});A)\simeq\pi_{-*}\mathrm{Be}(\operatorname{M}_{\mathrm{c}}(X);A)\.\] A natural way to lift the right-hand side to a filtered object is to consider \[\mathrm{W}_{*}\mathrm{Be}(\operatorname{M}_{\mathrm{c}}(X);A). \tag{5.4.2}\] However, an alternative is to observe that by Corollary 2.2.12, the motivic spectrum \(\operatorname{M}_{\mathrm{c}}(X)\) is dualizable; hence we can also consider the dual \[\operatorname{map}_{\tau_{\geq*}A}(\mathrm{W}_{*}\mathrm{Be}(\operatorname{M}_{\mathrm{c}}(X)^{\vee};A),\tau_{\geq*}A)\, \tag{5.4.3}\] of \(\mathrm{W}_{*}\mathrm{Be}(\operatorname{M}_{\mathrm{c}}(X)^{\vee};A)\) inside \(\operatorname{Mod}_{\tau_{\geq*}(A)}(\operatorname{FilSp})\). Recall that filtered Betti realization is not symmetric monoidal, but only _lax_ symmetric monoidal. Hence due to the failure of the universal coefficient theorem, (5.4.2) and (5.4.3) need not coincide. This failure can already be observed when \(X\) is smooth and proper, in which case: 1. The filtered object (5.4.2) can be identified with the Whitehead filtration on cochains. 2. The filtered object (5.4.3) can be identified with the dual of the Whitehead filtration on chains. When \(A\) is ordinary cohomology with coefficients in a field, these two coincide; in general, they do not. #### 5.4.4. Note that out of the two ways of filtering cochains described in Warning 5.4.1, it is the _first_ one which is preferable. Indeed, if \(X\) is a proper variety, then the diagonal map \(X\to X\times X\) equips \(\mathrm{M}_{\mathrm{c}}(X)\) with a canonical structure of a commutative algebra in \(\mathrm{SH}(\mathbb{C})\). Since \(\mathrm{W}_{*}\mathrm{Be}(-;A)\) is lax symmetric monoidal, it follows that \[\mathrm{W}_{*}\mathrm{Be}(\mathrm{M}_{\mathrm{c}}(X);A)\] canonically inherits the structure of a commutative algebra in filtered \(\tau_{\geq*}A\)-modules6. Footnote 6: Dually, \(\mathrm{M}_{\mathrm{c}}(X)^{\vee}\) is a cocommutative coalgebra; however, lax monoidal functors need not preserve coalgebras. This is why (5.4.2) is preferable to (5.4.3). This is the same reason why cohomology groups of a topological space form a commutative algebra, but homology groups need not form a coalgebra unless we have some further flatness assumption.
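To make the failure in Warning 5.4.1 concrete, take \(A=\mathbb{Z}\) and \(X\) smooth and proper. The graded pieces of (5.4.2) are the Eilenberg-MacLane objects \(\Sigma^{-n}\operatorname{H}^{n}(X(\mathbb{C});\mathbb{Z})\), while the graded pieces of (5.4.3) involve the \(\mathbb{Z}\)-linear duals of the chains, whose homotopy is governed by the universal coefficient sequence \[0\to\operatorname{Ext}^{1}_{\mathbb{Z}}\big(\operatorname{H}_{n-1}(X(\mathbb{C});\mathbb{Z}),\mathbb{Z}\big)\to\operatorname{H}^{n}(X(\mathbb{C});\mathbb{Z})\to\operatorname{Hom}_{\mathbb{Z}}\big(\operatorname{H}_{n}(X(\mathbb{C});\mathbb{Z}),\mathbb{Z}\big)\to 0\.\] Whenever \(\operatorname{H}_{*}(X(\mathbb{C});\mathbb{Z})\) has torsion, the Hom- and Ext-terms contribute in different filtration weights on the two sides, so (5.4.2) and (5.4.3) have different associated graded objects.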
#### 5.4.5. Notation Let \(X\) be a complex variety. We write \[\mathrm{C}_{\mathrm{c}}^{*}(X(\mathbb{C});\mathbb{Z})\in\mathcal{D}(\mathbb{Z})\] for the complex of compactly supported integral cochains on \(X(\mathbb{C})\), considered as an object of the derived \(\infty\)-category. We recall the definition of the Gillet-Soule filtration. **5.4.6 Recollection** (the Gillet-Soule filtration).: If \(X\) is a proper complex variety, then using resolution of singularities we can construct a cdh-hypercover \(X_{\bullet}\to X\) by smooth proper varieties. The _Gillet-Soule filtration_ on the Betti cohomology of \(X\) is the filtration associated to the spectral sequence \[\mathrm{H}^{s}(X_{t}(\mathbb{C});\mathbb{Z})\Rightarrow\mathrm{H}^{s+t}(X(\mathbb{C});\mathbb{Z})\.\] Turning this into a filtered spectrum using decalage yields the definition \[\mathrm{W}_{*}^{\mathrm{GS}}\mathrm{C}^{*}(X(\mathbb{C});\mathbb{Z}):=\lim_{[n]\in\Delta}\tau_{\geq*}\mathrm{C}^{*}(X_{n}(\mathbb{C});\mathbb{Z})\in\mathrm{Fil}(\mathcal{D}(\mathbb{Z}))\.\] If \(X\) is not necessarily proper, we embed \(X\) as an open subvariety \(X\subseteq\overline{X}\) of a proper variety \(\overline{X}\) with closed complement \(Z\) and define \[\mathrm{W}_{*}^{\mathrm{GS}}\mathrm{C}_{\mathrm{c}}^{*}(X(\mathbb{C});\mathbb{Z}):=\mathrm{fib}\left(\mathrm{W}_{*}^{\mathrm{GS}}\mathrm{C}_{\mathrm{c}}^{*}(\overline{X}(\mathbb{C});\mathbb{Z})\to\mathrm{W}_{*}^{\mathrm{GS}}\mathrm{C}_{\mathrm{c}}^{*}(Z(\mathbb{C});\mathbb{Z})\right)\.\] The results of [26] show that, as objects of the filtered derived \(\infty\)-category, these filtrations neither depend on the choice of the hypercover \(X_{\bullet}\) nor on the choice of the compactification \(\overline{X}\). We refer to the filtered object \(\mathrm{W}_{*}^{\mathrm{GS}}\mathrm{C}_{\mathrm{c}}^{*}(X(\mathbb{C});\mathbb{Z})\) as the _Gillet-Soule filtration_ on \(\mathrm{C}_{\mathrm{c}}^{*}(X(\mathbb{C});\mathbb{Z})\). #### 5.4.7. **Notation** (filtered Betti realization & compactly supported cochains).: If \(X\) is a complex variety, we have an identification \[\mathrm{C}_{\mathrm{c}}^{*}(X(\mathbb{C});\mathbb{Z})\simeq\mathrm{Be}(\mathrm{M}_{\mathrm{c}}(X);\mathbb{Z})\] of objects in the derived \(\infty\)-category \(\mathcal{D}(\mathbb{Z})\). We write \[\mathrm{W}_{*}\mathrm{C}_{\mathrm{c}}^{*}(X(\mathbb{C});\mathbb{Z}):=\mathrm{W}_{*}\mathrm{Be}(\mathrm{M}_{\mathrm{c}}(X);\mathbb{Z})\] for the filtration induced by the filtered Betti realization of Definition 4.3.14. The filtration on compactly supported integral cochains inherited from the filtered Betti realization coincides with the Gillet-Soule filtration: **5.4.8 Theorem**.: _Let \(X\) be a complex variety. Then there exists an equivalence_ \[\mathrm{W}_{*}\mathrm{C}_{\mathrm{c}}^{*}(X(\mathbb{C});\mathbb{Z})\simeq\mathrm{W}_{*}^{\mathrm{GS}}\mathrm{C}_{\mathrm{c}}^{*}(X(\mathbb{C});\mathbb{Z}) \tag{5.4.9}\] _of objects of the filtered derived \(\infty\)-category of \(\mathbb{Z}\)._ Before proceeding with the proof, let us remark that the main difficulty lies in the fact that the Gillet-Soule filtration is defined as a _limit_, whereas filtered Betti realization is a left adjoint, hence preserves _colimits_.
Since we are in the stable context, finite limits can be expressed as finite colimits and vice versa, but the limit defining the Gillet-Soule filtration is a totalization of a cosimplicial object and hence is not finite. To prove Theorem 5.4.8, we will show that after passing to the associated graded, the cdh-hypercover can be replaced by a suitable chain complex in effective Chow motives. Gillet and Soule's work [26] then shows that this complex of effective Chow motives can be chosen to be bounded. The key step in the proof of Theorem 5.4.8 is to argue that the associated graded of the filtered Betti realization is defined on \(\operatorname{M\mathbb{Z}}_{c=0}\)-modules. This takes some preparation. #### 5.4.10. **Recollection** We write \[\operatorname{Gr}(\operatorname{Sp}):=\operatorname{Fun}(\mathbb{Z}^{\rm disc},\operatorname{Sp})\] for the \(\infty\)-category of _graded spectra_. Given a filtered spectrum \(F_{*}S\), the _associated graded_ of \(F_{*}S\) is the graded spectrum defined by \[\operatorname{gr}_{k}(F_{*}S):=\operatorname{cofib}(F_{k+1}S\to F_{k}S)\.\] A filtered spectrum \(F_{*}S\) is _complete_ if \(\lim_{n\in\mathbb{Z}}F_{n}S=0\). We write \[\operatorname{Fil}^{\wedge}(\operatorname{Sp})\subseteq\operatorname{FilSp}\] for the full subcategory spanned by the complete filtered spectra. On this subcategory, the associated graded functor \[\operatorname{gr}_{*}\colon\operatorname{Fil}^{\wedge}(\operatorname{Sp})\to\operatorname{Gr}(\operatorname{Sp})\] is conservative. #### 5.4.11. **Notation** Let \(\operatorname{M\mathbb{Z}}\in\operatorname{SH}(\mathbb{C})\) denote the motivic cohomology spectrum and \(\operatorname{M\mathbb{Z}}_{c=0}\simeq\operatorname{M\mathbb{Z}}_{c\leq 0}\) its truncation in the Chow-Novikov t-structure. The first observation is that the associated graded of the \(\mathbb{Z}\)-linear filtered Betti realization factors through \(\operatorname{M\mathbb{Z}}_{c=0}\)-modules: **5.4.12**.: **Lemma**.: _There exists a left adjoint functor_ \[\operatorname{gr}_{*}\operatorname{Be}_{c=0}(-;\mathbb{Z})\colon\operatorname{Mod}_{\operatorname{M\mathbb{Z}}_{c=0}}(\operatorname{SH}(\mathbb{C}))\to\operatorname{Gr}(\mathcal{D}(\mathbb{Z}))\] _such that there is an equivalence_ \[\operatorname{gr}_{*}\operatorname{Be}_{c=0}(\operatorname{M\mathbb{Z}}_{c=0}\otimes S;\mathbb{Z})\simeq\operatorname{gr}_{*}(\operatorname{W}_{*}\operatorname{Be}(S;\mathbb{Z}))\] _natural in \(S\in\operatorname{SH}(\mathbb{C})\)._ Proof.: Write \(\operatorname{Chow}(\mathbb{C})\) for the additive \(1\)-category of pure Chow motives over \(\mathbb{C}\). By [7, §4.2], there is an equivalence of \(\infty\)-categories \[\operatorname{Mod}_{\operatorname{M\mathbb{Z}}_{c=0}}(\operatorname{SH}(\mathbb{C}))\simeq\operatorname{PSh}_{\Sigma}(\operatorname{Chow}(\mathbb{C});\operatorname{Sp})\] between \(\operatorname{M\mathbb{Z}}_{c=0}\)-modules and spectral presheaves on \(\operatorname{Chow}(\mathbb{C})\). It follows that any additive functor on \(\operatorname{Chow}(\mathbb{C})\) valued in a cocomplete stable \(\infty\)-category extends uniquely to a colimit-preserving functor on all \(\operatorname{M\mathbb{Z}}_{c=0}\)-modules.
The needed functor \(\operatorname{gr}_{*}\operatorname{Be}_{c=0}(-;\mathbb{Z})\) is defined as the unique colimit-preserving functor such that \[\operatorname{gr}_{n}\operatorname{Be}_{c=0}(M;\mathbb{Z}):=\Sigma^{-n}\mathrm{H}^{n}_{\operatorname{Be}}(M;\mathbb{Z})\] for any \(M\in\operatorname{Chow}(\mathbb{C})\), where the right-hand side is the homological Betti realization of a Chow motive. If \(S\in\operatorname{SH}(\mathbb{C})\) is perfect pure, then we have \(\operatorname{M\mathbb{Z}}_{c=0}\otimes S\in\operatorname{Chow}(\mathbb{C})\), so that for each \(n\in\mathbb{Z}\) \[\operatorname{gr}_{n}\operatorname{Be}_{c=0}(\operatorname{M\mathbb{Z}}_{c=0}\otimes S;\mathbb{Z})\simeq\Sigma^{-n}\mathrm{H}^{n}_{\operatorname{Be}}(\operatorname{M\mathbb{Z}}_{c=0}\otimes S;\mathbb{Z})\simeq\operatorname{gr}_{n}(\operatorname{W}_{*}\operatorname{Be}(S;\mathbb{Z}))\.\] Since both sides preserve colimits, Corollary 3.3.6 implies that this natural equivalence defined on perfect pures extends to an equivalence on all of \(\operatorname{SH}(\mathbb{C})\). Proof of Theorem 5.4.8.: By Lemma 2.2.10, the left-hand filtration takes open-closed decompositions to fiber sequences, and by definition the Gillet-Soule filtration takes open-closed decompositions to fiber sequences. Hence we can assume that \(X\) is proper. Using resolution of singularities, we can choose a cdh-hypercover \(X_{\bullet}\to X\) such that \(X_{i}\) is smooth and proper for each \(i\geq 0\). Functoriality of the filtered Betti realization applied to \(X_{\bullet}\to X\) gives a canonical comparison map \[\operatorname{W}_{*}\!\operatorname{C}^{*}(X(\mathbb{C});\mathbb{Z})\to\lim_{[i]\in\Delta}\operatorname{W}_{*}\!\operatorname{C}^{*}(X_{i}(\mathbb{C});\mathbb{Z})\simeq\lim\tau_{\geq*}\!\operatorname{C}^{*}(X_{i}(\mathbb{C});\mathbb{Z})\simeq\operatorname{W}_{*}^{\operatorname{GS}}\!\operatorname{C}^{*}(X(\mathbb{C});\mathbb{Z}). \tag{5.4.13}\] We will prove that (5.4.13) is an equivalence. First, we claim that both the source and target of (5.4.13) are complete. Indeed, the target is a limit of Whitehead filtrations, which are complete, and hence is complete itself. On the other hand, since the subcategory of those motivic spectra \(S\) such that \(\operatorname{W}_{*}\!\operatorname{Be}(S;\mathbb{Z})\) is complete is thick and contains motives of all smooth and proper varieties, Lemma 2.2.11 implies that the source is also complete. We deduce that it is enough to show that (5.4.13) is an equivalence after passing to associated graded objects. By Lemma 5.4.12, the map between associated graded objects can be identified with the comparison map \[\operatorname{gr}_{*}\!\operatorname{Be}_{c=0}(\operatorname{M}\mathbb{Z}_{c=0}\otimes\operatorname{M}_{\operatorname{c}}(X);\mathbb{Z})\to\lim_{[i]\in\Delta}\operatorname{gr}_{*}\!\operatorname{Be}_{c=0}(\operatorname{M}\mathbb{Z}_{c=0}\otimes\operatorname{M}_{\operatorname{c}}(X_{i});\mathbb{Z})\.\]
Since \(X_{\bullet}\to X\) is a cdh-hypercover and \(\operatorname{M}\mathbb{Z}_{c=0}\) is an \(\operatorname{MGL}\)-module, by Theorem 5.2.3 and Remark 5.2.5, we have \[\operatorname{M}\mathbb{Z}_{c=0}\otimes\operatorname{M}_{\operatorname{c}}(X)^{\vee}\simeq\operatorname{colim}_{[i]\in\Delta^{\operatorname{op}}}\operatorname{M}\mathbb{Z}_{c=0}\otimes\operatorname{M}_{\operatorname{c}}(X_{i})^{\vee}\.\] Passing to monoidal duals, this shows that the map \[\operatorname{M}\mathbb{Z}_{c=0}\otimes\operatorname{M}_{\operatorname{c}}(X)\to\lim_{[i]\in\Delta}\operatorname{M}\mathbb{Z}_{c=0}\otimes\operatorname{M}_{\operatorname{c}}(X_{i}) \tag{5.4.14}\] is an equivalence. We have to show that this limit is preserved by the functor \(\operatorname{gr}_{*}\!\operatorname{Be}_{c=0}(-;\mathbb{Z})\) of Lemma 5.4.12. Through the Dold-Kan correspondence, the cosimplicial object \(\operatorname{M}\mathbb{Z}_{c=0}\otimes\operatorname{M}_{\operatorname{c}}(X_{\bullet})\colon\Delta\to\operatorname{Chow}(\mathbb{C})\) determines a chain complex of pure Chow motives; this complex can be identified with the weight complex of [26, p. 137-138]. By [26, p. 137, Theorem 2], this chain complex is homotopy equivalent to a bounded one. Using the Dold-Kan correspondence, this homotopy equivalence of chain complexes determines a map \(\operatorname{M}\mathbb{Z}_{c=0}\otimes\operatorname{M}_{\operatorname{c}}(X_{\bullet})\to C_{\bullet}\) of cosimplicial Chow motives which is a cosimplicial homotopy equivalence. The fact that the chain complex associated to \(C_{\bullet}\) is bounded implies that \(C_{\bullet}\) is \(n\)-coskeletal for some \(n\). We thus have a commutative square comparing \(\operatorname{M}\mathbb{Z}_{c=0}\otimes\operatorname{M}_{\operatorname{c}}(X)\) with the totalizations of \(\operatorname{M}\mathbb{Z}_{c=0}\otimes\operatorname{M}_{\operatorname{c}}(X_{\bullet})\) and of \(C_{\bullet}\). Since the horizontal map is induced by a cosimplicial homotopy equivalence, it is an equivalence, and similarly \(\operatorname{gr}_{*}\!\operatorname{Be}_{c=0}(\operatorname{M}\mathbb{Z}_{c=0}\otimes\operatorname{M}_{\operatorname{c}}(X_{\bullet});\mathbb{Z})\simeq\lim\operatorname{gr}_{*}\!\operatorname{Be}_{c=0}(C_{\bullet};\mathbb{Z})\). Thus, it is enough to show that \[\operatorname{gr}_{*}\!\operatorname{Be}_{c=0}(\operatorname{M}\mathbb{Z}_{c=0}\otimes\operatorname{M}_{\operatorname{c}}(X);\mathbb{Z})\to\lim\operatorname{gr}_{*}\!\operatorname{Be}_{c=0}(C_{\bullet};\mathbb{Z})\] is an equivalence. However, as the right-hand side is a totalization of an \(n\)-coskeletal cosimplicial object, it can be identified with a finite limit. As \(\operatorname{gr}_{*}\!\operatorname{Be}_{c=0}(-;\mathbb{Z})\) is exact, it preserves finite limits, ending the argument. #### 5.4.15. Remark A key step in the proof of Theorem 5.4.8 is the boundedness result for the Gillet-Soule weight complex, which implies that the infinite cdh-hypercover \(X_{\bullet}\to X\) can be replaced by an object of finitary nature. On the other hand, as a consequence of Lemma 2.2.11, the filtered spectrum \[\mathrm{W}_{*}\mathrm{Be}(\mathrm{M}_{\mathrm{c}}(X);A)\] can _always_ be obtained from the Whitehead filtration on \(A\)-linear cochains of smooth, proper varieties using only finite limits and colimits. Unlike in the case of Borel-Moore homology covered by Theorem 5.3.4, for general \(A\) we do not know whether the filtration on cochains satisfies cdh-descent; although Theorem 5.4.8 shows that it does when \(A=\mathbb{Z}\).
#### 5.4.16. Remark (Kuijper's work) In the case of a field of characteristic zero, a weight filtration on compactly supported cohomology with coefficients in a complex orientable ring spectrum \(A\) can also be constructed using the recent work of Kuijper [44]. We claim that this filtration agrees with the filtered realization introduced in this work applied to \(\mathrm{M}_{\mathrm{c}}(X)\). For simplicity, let us consider the complex Betti case. We have the association \[X\mapsto\tau_{\geq*}\mathrm{C}_{\mathrm{c}}^{*}(X(\mathbb{C});A)\in\mathrm{FilSp}\,\] which we think of as a presheaf defined on smooth and proper varieties. As observed in [44, 8.3], if \(A\)-cohomology admits Gysin maps, then this presheaf satisfies descent for blow-up squares. Moreover, if \(A\) is complex orientable then \(A\)-cohomology admits Gysin maps. Thus, by [44, Theorem 1.1], this presheaf uniquely extends to one defined on all varieties, giving the sought-after weight filtration on compactly supported cohomology. Note that the two filtrations \[X\mapsto\tau_{\geq*}\mathrm{C}_{\mathrm{c}}^{*}(X(\mathbb{C});A)\qquad\text{and}\qquad X\mapsto\mathrm{W}_{*}\mathrm{Be}(\mathrm{M}_{\mathrm{c}}(X);A)\] agree on smooth and proper varieties, have the localization property, and satisfy descent for blow-up squares. Hence the uniqueness part of Kuijper's result shows that these filtrations necessarily agree on all complex varieties. ## 6. Synthetic Betti realization Write \(\mathrm{Syn}_{\mathrm{MU}}\) for the \(\infty\)-category of _\(\mathrm{MU}\)-based synthetic spectra_ introduced by the second-named author in [55]. The goal of this section is to show that the Betti realization functor \(\mathrm{Be}\colon\mathrm{SH}(\mathbb{C})\to\mathrm{Sp}\) refines to a lax symmetric monoidal left adjoint \[\mathrm{Be}_{\mathrm{syn}}\colon\mathrm{SH}(\mathbb{C})\to\mathrm{Syn}_{\mathrm{MU}}\] as well as to explore its basic properties. We refer to this refinement as _synthetic Betti realization_. In §6.1, we recall the background on synthetic spectra necessary to understand the construction of the synthetic Betti realization functor. In §6.2, we give an alternative description of synthetic spectra as modules in filtered spectra over the filtration on the sphere given by descent along the faithfully flat map \(\mathrm{S}^{0}\to\mathrm{MU}\). This description is later used to compare synthetic Betti realization with filtered Betti realization. In §6.3, we construct the functor \(\mathrm{Be}_{\mathrm{syn}}\); see Theorem 6.3.3. In §6.4, we explain the relationship between synthetic Betti realization and filtered Betti realization. In particular, if \(A\) is a Landweber exact complex oriented \(\mathbf{E}_{1}\)-ring, then the filtered Betti realization \(\mathrm{W}_{*}\mathrm{Be}(-;A)\) can be recovered from synthetic Betti realization; see Theorem 6.4.6. In §6.5, we give a conjectural description of a synthetic lift of a general motivic realization functor, such as etale realization. ### Recollection on synthetic spectra Initiated by Quillen, _chromatic homotopy theory_ studies the relationship between stable homotopy theory and the arithmetic of formal groups. An important aspect of this relationship is the _Adams-Novikov spectral sequence_ \[\mathrm{H}^{s}(\mathcal{M}_{\mathrm{fg}};\omega^{t/2})\Rightarrow\pi_{t-s}\mathrm{S}^{0}\] relating the cohomology of the moduli stack of formal groups to stable homotopy groups. _Synthetic spectra_ can be informally thought of as a categorification of this spectral sequence.
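To make this slogan precise, recall that the unit of \(\operatorname{Syn}_{\operatorname{MU}}\) carries a canonical endomorphism \(\tau\) (see 6.1.2 below). Up to the indexing conventions of [55], the bigraded homotopy groups of the cofiber of \(\tau\) recover the \(\operatorname{E}_{2}\)-page of the Adams-Novikov spectral sequence, \[\pi_{*,*}\big(\mathbf{1}_{\operatorname{Syn}}/\tau\big)\cong\operatorname{Ext}^{*,*}_{\operatorname{MU}_{*}\operatorname{MU}}(\operatorname{MU}_{*},\operatorname{MU}_{*})\cong\operatorname{H}^{*}(\mathcal{M}_{\operatorname{fg}};\omega^{*/2})\,\] and the \(\tau\)-Bockstein spectral sequence of \(\mathbf{1}_{\operatorname{Syn}}\) recovers the Adams-Novikov spectral sequence itself.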
The purpose of this subsection is to briefly review what we need about synthetic spectra for this paper; we refer the reader to [55] for more details. We first recall the construction of \(\operatorname{MU}\)-based synthetic spectra from [55, §4]. We say that a spectrum \(A\) is _finite \(\operatorname{MU}\)-projective_ if \(A\) is a compact object of \(\operatorname{Sp}\) and \(\operatorname{MU}\otimes A\) is free as an \(\operatorname{MU}\)-module; that is, there exist integers \(d_{1},\dots,d_{n}\) and an equivalence of \(\operatorname{MU}\)-modules \[\operatorname{MU}\otimes A\simeq\Sigma^{d_{1}}\operatorname{MU}\oplus\dots\oplus\Sigma^{d_{n}}\operatorname{MU}\.\] Equivalently, \(A\) is compact and \(\operatorname{MU}_{*}(A)\) is free as an \(\operatorname{MU}_{*}\)-module. We write \[\operatorname{Sp}_{\operatorname{MU}}^{\operatorname{fp}}\subseteq\operatorname{Sp}\] for the full subcategory spanned by the finite \(\operatorname{MU}\)-projective spectra. We say that a map \(f\colon A\to B\) of finite \(\operatorname{MU}\)-projectives is an \(\operatorname{MU}\)_-epimorphism_ if \(f\) becomes a split epimorphism after tensoring with \(\operatorname{MU}\); equivalently, if \(\operatorname{MU}_{*}A\to\operatorname{MU}_{*}B\) is surjective. This notion of a covering equips the site \(\operatorname{Sp}_{\operatorname{MU}}^{\operatorname{fp}}\) with a Grothendieck topology. **6.1.1 Definition**.: The \(\infty\)-category of \(\operatorname{MU}\)_-based synthetic spectra_ is given by \[\operatorname{Syn}_{\operatorname{MU}}:=\operatorname{Sh}_{\Sigma}(\operatorname{Sp}_{\operatorname{MU}}^{\operatorname{fp}};\operatorname{Sp})\,\] the \(\infty\)-category of additive sheaves of spectra on the site \(\operatorname{Sp}_{\operatorname{MU}}^{\operatorname{fp}}\) of finite \(\operatorname{MU}\)-projective spectra. **6.1.2** (\(\operatorname{Syn}_{\operatorname{MU}}\) as a deformation of \(\operatorname{Sp}\)).: The \(\infty\)-category \(\operatorname{Syn}_{\operatorname{MU}}\) is stable and presentable. Moreover, through left Kan extension it inherits a symmetric monoidal tensor product from that of finite spectra. As we briefly explained in the introduction, the \(\infty\)-category \(\operatorname{Syn}_{\operatorname{MU}}\) is best understood as an \(\infty\)-categorical deformation of spectra in the following sense. Its monoidal unit has a canonical endomorphism \[\tau\colon\mathbf{1}_{\operatorname{Syn}}\to\mathbf{1}_{\operatorname{Syn}}\] which should be thought of as a formal parameter. Moreover, there is an equivalence \[\operatorname{Syn}_{\operatorname{MU}}^{\tau=1}\simeq\operatorname{Sp}\] between the generic fiber and spectra. The special fiber is related to arithmetic. Write \(\mathcal{M}_{\operatorname{fg}}^{\delta}\) for the _Dirac_ moduli stack of formal groups (that is, a sheaf on the category of graded-commutative rings) as defined in [31, §5.2]. Then there is an equivalence \[\operatorname{Syn}_{\operatorname{MU}}^{\tau=0}\simeq\operatorname{IndCoh}(\mathcal{M}_{\operatorname{fg}}^{\delta})\] between the special fiber and \(\operatorname{Ind}\)-coherent sheaves on \(\mathcal{M}_{\operatorname{fg}}^{\delta}\).
One can describe this \(\infty\)-category of \(\operatorname{Ind}\)-coherent sheaves on \(\mathcal{M}_{\operatorname{fg}}^{\delta}\) in terms of \(\operatorname{Ind}\)-coherent sheaves on the usual moduli stack of formal groups as follows: **6.1.3 Remark** (Dirac moduli of formal groups and its classical counterpart).: The \(\infty\)-category of \(\operatorname{Ind}\)-coherent sheaves on \(\mathcal{M}_{\operatorname{fg}}^{\delta}\) admits a fully faithful embedding \[i\colon\operatorname{IndCoh}(\mathcal{M}_{\operatorname{fg}})\hookrightarrow\operatorname{IndCoh}(\mathcal{M}_{\operatorname{fg}}^{\delta})\] from \(\operatorname{Ind}\)-coherent sheaves on the moduli stack \(\mathcal{M}_{\operatorname{fg}}\) of formal groups in classical algebraic geometry. The target is obtained from the source by attaching an anti-symmetric square root \(\omega^{\sfrac{1}{2}}\) of the Lie algebra line bundle \(\omega\in\operatorname{IndCoh}(\mathcal{M}_{\operatorname{fg}})\) in the sense that any \(\mathcal{F}\in\operatorname{IndCoh}(\mathcal{M}_{\operatorname{fg}}^{\delta})\) can be uniquely written in the form \[\mathcal{F}\simeq(i(\mathcal{F}_{0}))\oplus(\omega^{\sfrac{1}{2}}\otimes i(\mathcal{F}_{1}))\] for \(\mathcal{F}_{0},\mathcal{F}_{1}\in\operatorname{IndCoh}(\mathcal{M}_{\operatorname{fg}})\). Informally, the additional root arises from the fact that in spectra, the Betti realization \(\operatorname{Be}(\mathbb{P}_{\mathbb{C}}^{1})\simeq\operatorname{S}^{2}\) of the Tate motive has a tensor square root, given by the \(1\)-sphere \(\operatorname{S}^{1}\). This situation is quite special to complex Betti realization. #### 6.1.4. Remark The embedding \(i\colon\operatorname{IndCoh}(\mathcal{M}_{\mathrm{fg}})\hookrightarrow\operatorname{IndCoh}(\mathcal{M}_{\mathrm{fg}}^{\delta})\) mentioned in Remark 6.1.3 can be identified with the embedding of special fibers \[(\operatorname{Syn}_{\mathrm{MU}}^{\mathrm{ev}})^{\tau=0}\hookrightarrow\operatorname{Syn}_{\mathrm{MU}}^{\tau=0}\] induced by the inclusion of _even synthetic spectra_ of [55, §5.2] into all synthetic spectra. #### 6.1.5. Remark (Ind-coherent sheaves and Hovey's stable \(\infty\)-category) In terms of Hopf algebroids, we have a canonical equivalence \[\operatorname{IndCoh}(\mathcal{M}_{\mathrm{fg}}^{\delta})\simeq\operatorname{Stable}_{\operatorname{MU}_{*}\operatorname{MU}}\] between sheaves on the Dirac moduli of formal groups and Hovey's stable \(\infty\)-category of \(\operatorname{MU}_{*}\operatorname{MU}\)-comodules as in [33]. Under this equivalence, the subcategory \[\operatorname{IndCoh}(\mathcal{M}_{\mathrm{fg}})\subseteq\operatorname{IndCoh}(\mathcal{M}_{\mathrm{fg}}^{\delta})\] of sheaves on the classical moduli stack corresponds to the stable \(\infty\)-category of \(\operatorname{MU}_{*}\operatorname{MU}\)-comodules concentrated in even degrees. #### 6.1.6. (synthetic analogues) The \(\infty\)-category of synthetic spectra is equipped with a fully faithful embedding \(\nu\colon\operatorname{Sp}\hookrightarrow\operatorname{Syn}_{\mathrm{MU}}\), called the _synthetic analogue_, which fits into a commutative triangle (6.1.7): the composite of \(\nu\) with the equivalence \(\operatorname{Syn}_{\mathrm{MU}}^{\tau=1}\simeq\operatorname{Sp}\) of 6.1.2, that is, with inverting \(\tau\), is naturally equivalent to the identity of \(\operatorname{Sp}\). The functor \(\nu\) is additive, but it is _not exact_. However, one can show that a cofiber sequence \[A\to B\to C\] of spectra is preserved by \(\nu\) if and only if \[0\to\operatorname{MU}_{*}A\to\operatorname{MU}_{*}B\to\operatorname{MU}_{*}C\to 0\] is short exact [55, Lemma 4.23]. In particular, \(\nu\colon\operatorname{Sp}\hookrightarrow\operatorname{Syn}_{\mathrm{MU}}\) preserves \(\operatorname{MU}\)-split cofiber sequences; this is the crucial property we need to construct the synthetic lift of the Betti realization functor.
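As a standard illustration of the failure of exactness, consider the Hopf map \(\eta\colon\operatorname{S}^{1}\to\operatorname{S}^{0}\). Since \(\operatorname{MU}_{*}(\eta)=0\), the sequence \[0\to\operatorname{MU}_{*}\operatorname{S}^{1}\to\operatorname{MU}_{*}\operatorname{S}^{0}\to\operatorname{MU}_{*}(\operatorname{C}\eta)\to 0\] is not short exact (the first map is zero), so the cofiber sequence \(\operatorname{S}^{1}\xrightarrow{\eta}\operatorname{S}^{0}\to\operatorname{C}\eta\) is not preserved by \(\nu\): the canonical comparison map \(\operatorname{cofib}(\nu\eta)\to\nu(\operatorname{C}\eta)\) is not an equivalence.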
In particular, \(\nu\colon\operatorname{Sp}\hookrightarrow\operatorname{Syn}_{\mathrm{MU}}\) preserves MU-split cofiber sequences; this is the crucial property we need to construct the synthetic lift of the Betti realization functor. ### Synthetic spectra as filtered spectra We now explain an alternative presentation of synthetic spectra in terms of filtered spectra. There are two relevant filtrations on the sphere that come from descent along the faithfully flat map \(\operatorname{S}^{0}\to\operatorname{MU}\). #### 6.2.1. Notation 1. Write \(\operatorname{fil}_{\mathrm{ev}}^{*}(\operatorname{S}^{0})\) for the commutative algebra in filtered spectra defined by the limit \[\operatorname{fil}_{\mathrm{ev}}^{*}(\operatorname{S}^{0}):=\lim_{[n]\in\Delta}\tau_{\geq 2*}(\operatorname{MU}^{\otimes[n]})\.\] Here, the limit is taken over the diagram given by applying the _double-speed_ Postnikov filtration to the cobar construction of the unit \(\operatorname{S}^{0}\to\operatorname{MU}\). This filtration on the sphere is the Adams-Novikov filtration; it can also be identified with the _even filtration_ of [30, 54]. 2. Write \(\operatorname{fil}^{*}(\operatorname{S}^{0})\) for the commutative algebra in filtered spectra defined by the limit \[\operatorname{fil}^{*}(\operatorname{S}^{0}):=\lim_{[n]\in\Delta}\tau_{\geq*}(\operatorname{MU}^{\otimes[n]})\.\] We refer to \(\operatorname{fil}^{*}(\operatorname{S}^{0})\) as the _MU-descent filtration_ on \(\operatorname{S}^{0}\). The filtration \(\operatorname{fil}^{*}(\operatorname{S}^{0})\) agrees with the _half-weight even filtration_ of [54, Remark 2.26]. The following description of synthetic spectra in terms of filtered spectra is due to Gheorghe-Isaksen-Krause-Ricka [25]. See also [27, SS1.3; 54, SS3.2]. #### 6.2.2. Proposition (synthetic spectra as filtered spectra) 1. _There is an equivalence of symmetric monoidal \(\infty\)-categories_ \[\Gamma^{*}\colon\operatorname{Syn}_{\operatorname{MU}}\xrightarrow{\sim}\operatorname{Mod}_{\operatorname{fil}^{*}(\operatorname{S}^{0})}(\operatorname{FilSp})\.\] 2. _The equivalence \(\Gamma^{*}\) restricts to an equivalence of symmetric monoidal \(\infty\)-categories_ \[\operatorname{Syn}_{\operatorname{MU}}^{\operatorname{ev}}\xrightarrow{\sim}\operatorname{Mod}_{\operatorname{fil}_{\mathrm{ev}}^{*}(\operatorname{S}^{0})}(\operatorname{FilSp})\.\] 3. 
_The triangle_ [diagram not reproduced] _commutes; in particular, for every spectrum \(M\) which is a retract of an \(\operatorname{MU}\)-module, there is a natural equivalence \(\Gamma^{*}\nu(M)\simeq\tau_{\geq*}(M)\) with the Postnikov filtration of \(M\)._ 
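As a consistency check (a standard unwinding under the conventions of [55], where the synthetic analogue of the sphere, \(\nu(\operatorname{S}^{0})\), is the monoidal unit of \(\operatorname{Syn}_{\operatorname{MU}}\)): since \(\Gamma^{*}\) is symmetric monoidal, it sends the unit to the unit, so that \[\Gamma^{*}\nu(\operatorname{S}^{0})\simeq\operatorname{fil}^{*}(\operatorname{S}^{0})\,\] the MU-descent filtration itself. Note that \(\operatorname{S}^{0}\) is not a retract of an \(\operatorname{MU}\)-module (for instance, \(\eta\) acts as zero on the homotopy of any \(\operatorname{MU}\)-module), so this is genuinely different from the Postnikov filtration \(\tau_{\geq*}(\operatorname{S}^{0})\); indeed, the associated graded of \(\operatorname{fil}_{\operatorname{ev}}^{*}(\operatorname{S}^{0})\) computes \(\operatorname{Ext}_{\operatorname{MU}_{*}\operatorname{MU}}(\operatorname{MU}_{*},\operatorname{MU}_{*})\), the \(\operatorname{E}_{2}\)-page of the Adams-Novikov spectral sequence.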
**6.3.2 Example**.: Let \(X\to Y\to Z\) be a cofiber sequence in \(\operatorname{Pure}(\mathbb{C})\). Combining Proposition 3.2.6 and Lemma 6.3.1 shows that \[\nu(\operatorname{Be}(X))\to\nu(\operatorname{Be}(Y))\to\nu(\operatorname{Be}(Z))\] is again a cofiber sequence in \(\operatorname{Syn}_{\operatorname{MU}}\). For a complex orientable \(\mathbf{E}_{1}\)-ring \(A\), write \(\operatorname{Re}_{A}\colon\operatorname{Syn}_{\operatorname{MU}}\to\operatorname{Mod}_{\tau_{\geq*}(A)}(\operatorname{FilSp})\) for the functor corresponding, under the equivalence \(\Gamma^{*}\), to tensoring with \(\nu(A)\). For any spectrum \(X\), the lax monoidal structure map \(\nu(A)\otimes\nu(X)\to\nu(A\otimes X)\) then induces a natural comparison map \[\operatorname{Re}_{A}(\nu(X))\longrightarrow\tau_{\geq*}(A\otimes X), \tag{6.4.3}\] where we use that \(A\otimes X\) is a retract of an MU-module to identify \(\Gamma^{*}\nu(A\otimes X)\) with \(\tau_{\geq*}(A\otimes X)\). **6.4.4 Remark**.: If \(f\colon A\to B\) is a map of complex orientable \(\mathbf{E}_{1}\)-ring spectra, then the induced map \(\nu(A)\to\nu(B)\) of \(\mathbf{E}_{1}\)-algebras in synthetic spectra gives rise to a natural transformation \[\operatorname{Re}_{A}(-)\to\operatorname{Re}_{B}(-)\.\] This natural transformation is adjoint to a comparison morphism, which we denote by \[\operatorname{Re}(f)\colon\tau_{\geq*}B\underset{\tau_{\geq*}A}{\otimes}\operatorname{Re}_{A}(-)\longrightarrow\operatorname{Re}_{B}(-)\.\] In fact, \(\operatorname{Re}(f)\) is an equivalence, as for \(X\in\operatorname{Syn}_{\operatorname{MU}}\) it can be identified with the canonical map \[\nu(B)\underset{\nu(A)}{\otimes}\nu(A)\otimes X\to\nu(B)\otimes X\.\] As a consequence of Corollary 3.3.6, we can make the following definition. **6.4.5 Definition**.: We write \[\phi_{A}\colon\operatorname{Re}_{A}(\operatorname{Be}_{\operatorname{syn}}(-))\to\operatorname{W}_{*}\operatorname{Be}(-;A)\] for the unique natural transformation of colimit-preserving functors \[\operatorname{SH}(\mathbb{C})\to\operatorname{Mod}_{\tau_{\geq*}(A)}(\operatorname{FilSp})\] such that for every perfect pure \(S\in\operatorname{Pure}(\mathbb{C})\) it can be identified with the map \[\operatorname{Re}_{A}(\operatorname{Be}_{\operatorname{syn}}(S))\simeq\operatorname{Re}_{A}(\nu(\operatorname{Be}(S)))\to\tau_{\geq*}(A\otimes\operatorname{Be}(S))\simeq\operatorname{W}_{*}\operatorname{Be}(S;A)\] of (6.4.3). For the following result, recall that a complex oriented ring spectrum \(A\) is said to be _Landweber exact_ if the map \(\operatorname{Spec}(A_{*})\to\mathcal{M}_{\operatorname{fg}}\) classifying the Quillen formal group is flat. For example, this is true if \(\pi_{*}(A)\) is a rational vector space. **6.4.6 Theorem**.: _Let \(X\in\operatorname{SH}(\mathbb{C})\) and let \(A\) be a complex oriented \(\mathbf{E}_{1}\)-ring. Assume that one of the following conditions holds:_ 1. _The motivic spectrum_ \(X\) _is cellular._ 2. _The complex oriented_ \(\mathbf{E}_{1}\)_-ring_ \(A\) _is Landweber exact._ _Then the map_ \[\phi_{A}\colon\operatorname{Re}_{A}(\operatorname{Be}_{\operatorname{syn}}(X))\to\operatorname{W}_{*}\operatorname{Be}(X;A)\] _is an equivalence._ Proof.: Suppose first that \(X\) is cellular. Since both functors preserve colimits, it suffices to show that \(\phi_{A}\) is an equivalence for motivic spectra of the form \[X\simeq\operatorname{S}^{2n,n}\simeq(\mathbb{P}^{1})^{\otimes n}\.\] Since \(\operatorname{S}^{2n,n}\) is perfect pure, we see that \(\phi_{A}\) can be identified with the canonical map \[\nu(A)\otimes\nu(\operatorname{Be}(\operatorname{S}^{2n,n}))\simeq\nu(A)\otimes\nu(\operatorname{S}^{2n})\to\nu(A\otimes\operatorname{S}^{2n})\simeq\nu(A\otimes\operatorname{Be}(\operatorname{S}^{2n,n}))\.\] Since \(\operatorname{S}^{2n}\) is finite \(\operatorname{MU}\)-projective, [55, Lemma 4.24] implies that this map is an equivalence. In the Landweber exact case, [34, Propositions 2.12 & 2.13] shows that \(A\) is a filtered colimit of finite MU-projectives. The proof is now the same as the proof in the cellular case. 
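To indicate the scope of the theorem, let us list some standard instances of condition (2) (the particular examples are ours): \(A=\operatorname{MU}\) itself, complex K-theory \(A=\operatorname{KU}\), and any complex orientable \(A\) with \(\pi_{*}(A)\) rational, such as \(A=\operatorname{H}\mathbb{Q}\), are all Landweber exact. For such \(A\), the comparison map \[\phi_{A}\colon\operatorname{Re}_{A}(\operatorname{Be}_{\operatorname{syn}}(X))\xrightarrow{\ \sim\ }\operatorname{W}_{*}\operatorname{Be}(X;A)\] is therefore an equivalence for _every_ \(X\in\operatorname{SH}(\mathbb{C})\), cellular or not.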
**6.4.7 Remark**.: If \(f\colon A\to B\) is a morphism of complex oriented \(\mathbf{E}_{1}\)-rings, then the comparison map \[c_{f}\colon\tau_{\geq*}B\underset{\tau_{\geq*}A}{\otimes}\operatorname{W}_{*}\operatorname{Be}(-;A)\to\operatorname{W}_{*}\operatorname{Be}(-;B)\] of Construction 4.5.2 is compatible with those of Definition 6.4.5, in the sense that we have a commutative diagram of functors \(\operatorname{SH}(\mathbb{C})\to\operatorname{Mod}_{\tau_{\geq*}B}(\operatorname{FilSp})\) and natural transformations. To see this, note that all these functors preserve colimits, and so to give such a square it is enough to define it on perfect pures. If \(S\in\operatorname{Pure}(k)\), then the above square reduces to the corresponding square of comparison maps from (6.4.3). ### Synthetic real Betti realization and synthetic etale realization In this section, highly inspired by the work of Burklund-Hahn-Senger on the \(\infty\)-category of Artin-Tate real motivic spectra [10], we give a conjectural description of a synthetic lift of a general motivic realization functor, such as etale realization. We first describe the main difference which makes the general case more interesting than the complex one. Notice that the complex Betti realization is valued in the \(\infty\)-category of spectra, and the synthetic lift of Theorem 6.3.3 shows that it can be naturally lifted to the \(\infty\)-category of synthetic spectra, which was constructed previously in [55]. However, for both the real Betti realization \[\operatorname{Be}_{\operatorname{C}_{2}}\colon\operatorname{SH}(\mathbb{R})\to\operatorname{Sp}^{\operatorname{C}_{2}}\] and the etale realization \(\operatorname{Re}_{\ell}\) defined on \(\operatorname{SH}(k)\), whose target is an \(\infty\)-category of sheaves of spectra, the realization functor is no longer valued in the \(\infty\)-category of spectra itself, so that one first has to construct a suitable synthetic deformation of the target. We now fix an abstract realization functor \(\mathrm{Re}\colon\mathrm{SH}(k)\to\mathcal{C}\). To motivate the following definition, we recall [54, Proposition 3.6]. Write \(\mathrm{Perf}(\mathrm{Sp})_{\mathrm{ev}}\subseteq\mathrm{Sp}\) for the \(\infty\)-category of finite spectra with an even cell decomposition. An (MU-based) _even synthetic spectrum_ can be identified with an additive sheaf \[X\colon(\mathrm{Perf}(\mathrm{Sp})_{\mathrm{ev}})^{\mathrm{op}}\to\mathrm{Sp}\] on \(\mathrm{Perf}(\mathrm{Sp})_{\mathrm{ev}}\) with respect to the topology of \(\mathrm{MU}_{*}\)-epimorphisms (equivalently, with respect to the topology where coverings are maps whose fiber is again even). Since the even cells can be identified as \(\mathrm{S}^{2k}\simeq\mathrm{Be}((\mathbb{P}^{1})^{\otimes k})\), this suggests the following notions. **6.5.3 Definition**.: Let \(\mathrm{Re}\colon\mathrm{SH}(k)\to\mathcal{C}\) be an abstract realization functor. The _\(\mathcal{C}\)-Tate motive_ is \[L_{\mathcal{C}}:=\mathrm{Re}(\mathbb{P}^{1})\.\] **6.5.4 Definition**.: Let \(\mathrm{Re}\colon\mathrm{SH}(k)\to\mathcal{C}\) be an abstract realization functor. We say that an object of \(\mathcal{C}\) is _perfect even_ if it belongs to the smallest subcategory \[\mathrm{Perf}(\mathcal{C})_{\mathrm{ev}}\subseteq\mathrm{Perf}(\mathcal{C})\] containing \(L_{\mathcal{C}}^{\otimes n}\) for all \(n\in\mathbb{Z}\) and closed under retracts and extensions. 
**6.5.5**.: Since \(\mathrm{Re}(\mathrm{MGL})\) is a filtered colimit of perfect evens, arguing as in Proposition 3.2.6, one shows that the following two conditions are equivalent for a map \(f\colon c\to d\) between perfect evens of Definition 6.5.4: 1. \(\mathrm{fib}(f)\in\mathcal{C}\) is perfect even. 2. \(\mathrm{Re}(\mathrm{MGL})\otimes c\to\mathrm{Re}(\mathrm{MGL})\otimes d\) admits a section. We say that a map \(f\) of perfect evens is an _even epimorphism_ if \(f\) satisfies these two equivalent conditions. **6.5.6 Definition**.: Let \(\mathrm{Re}\colon\mathrm{SH}(k)\to\mathcal{C}\) be an abstract realization functor. The _even synthetic deformation_ of \(\mathcal{C}\) is the \(\infty\)-category \[\mathrm{Syn}^{\mathrm{ev}}(\mathcal{C}):=\mathrm{Sh}_{\Sigma}(\mathrm{Perf}( \mathcal{C})_{\mathrm{ev}};\mathcal{C})\] of \(\mathcal{C}\)-valued additive sheaves with respect to the even epimorphism topology. **6.5.7 Remark**.: Since the inclusion \(\mathrm{Perf}(\mathcal{C})_{\mathrm{ev}}\hookrightarrow\mathcal{C}\) preserves cofiber sequences, its left Kan extension gives a localization functor \[\mathrm{Syn}^{\mathrm{ev}}(\mathcal{C})\to\mathcal{C}\.\] This localization should be informally thought of as expressing the target as the generic fiber of the source. **6.5.8 Remark**.: As an \(\infty\)-category, the even synthetic deformation depends only on \(\mathcal{C}\) and on the invertible object \(L_{\mathcal{C}}\). However, to define the synthetic analogue functor \(\nu\colon\mathcal{C}\to\mathrm{Syn}^{\mathrm{ev}}(\mathcal{C})\) we use more information about the functor \(\mathrm{Re}\). Recall that in the classical case, the synthetic analogue \(\nu\colon\mathrm{Sp}\hookrightarrow\mathrm{Syn}^{\mathrm{ev}}_{\mathrm{MU}}\) is given by the spectral Yoneda embedding followed by taking connective covers. The work of Burklund-Hahn-Senger suggests that in the general case, the right replacement for connectivity of spectra is that of effectivity. **6.5.9 Definition**.: Let \(\mathrm{Re}\colon\mathrm{SH}(k)\to\mathcal{C}\) be an abstract realization functor. We say that an object \(c\in\mathcal{C}\) is _effective_ if \(c\) belongs to the smallest subcategory \[\mathcal{C}^{\mathrm{eff}}\subseteq\mathcal{C}\] which contains \(\mathrm{Re}(\Sigma^{-n}\Sigma_{+}^{\infty}X)\) for \(X\in\mathrm{Sm}_{k}\) and \(n\geq 0\) and is closed under colimits. For an integer \(q\in\mathbb{Z}\), we say that an object \(c\in\mathcal{C}\) is _\(q\)-effective_ if \(c\) belongs to the smallest subcategory \[\mathcal{C}^{\mathrm{eff}}(q)\subseteq\mathcal{C}\] which contains \(L_{\mathcal{C}}^{\otimes q}\otimes E\) for \(E\in\mathcal{C}^{\mathrm{eff}}\) effective. #### 6.5.10. By construction \(\mathcal{C}^{\mathrm{eff}}(q)\) is presentable and the inclusion \(\mathcal{C}^{\mathrm{eff}}(q)\subseteq\mathcal{C}\) admits a right adjoint \[\mathrm{f}_{q}\colon\mathcal{C}\to\mathcal{C}^{\mathrm{eff}}(q)\] which we call the \(q\)-th _effective cover_. As \[\mathcal{C}^{\mathrm{eff}}(q+1)\subseteq\mathcal{C}^{\mathrm{eff}}(q)\,\] we have canonical natural transformations \(\mathrm{f}_{q+1}(-)\to\mathrm{f}_{q}(-)\) which assemble into the _slice tower_ \[\cdots\to\mathrm{f}_{q+1}(-)\to\mathrm{f}_{q}(-)\to\mathrm{f}_{q-1}(-)\to\cdots. \tag{6.5.11}\] #### 6.5.12. 
**Remark**.: Since \(L_{\mathcal{C}}\otimes\mathcal{C}^{\mathrm{eff}}(q)=\mathcal{C}^{\mathrm{eff}}(q+1)\), for any \(c\in\mathcal{C}\) we have \[L_{\mathcal{C}}\otimes\mathrm{f}_{q}(c)\simeq\mathrm{f}_{q+1}(L_{\mathcal{C}}\otimes c)\.\] Informally, if we think of the slice tower as the variant of the Postnikov tower, tensoring with the Tate motive plays the role of the suspension. **6.5.13 Definition**.: If \(c\in\mathcal{C}\), the _synthetic analogue_ \(\nu(c)\in\mathrm{Syn}^{\mathrm{ev}}(\mathcal{C})\) is given by the sheafification of the presheaf \[\mathrm{f}_{0}\operatorname{Map}_{\mathcal{C}}(-,c)\colon\operatorname{Perf}(\mathcal{C})_{\mathrm{ev}}^{\mathrm{op}}\to\mathcal{C}\,\] where \(\operatorname{Map}_{\mathcal{C}}\) is the internal mapping object of \(\mathcal{C}\). The existence of the synthetic lift of \(\mathrm{Re}\) relies on the following conjecture. #### 6.5.14. **Conjecture**.: _The functor \(\nu\colon\mathcal{C}\to\mathrm{Syn}^{\mathrm{ev}}(\mathcal{C})\) preserves \(\mathrm{Re}(\mathrm{MGL})\)-split cofiber sequences._ #### 6.5.15. **Remark**.: Notice that if Conjecture 6.5.14 holds, then using Corollary 3.3.6 we can define the synthetic lift \[\mathrm{Re}^{\mathrm{syn}}\colon\mathrm{SH}(k)\to\mathrm{Syn}^{\mathrm{ev}}(\mathcal{C})\] as the unique colimit-preserving functor such that \[\mathrm{Re}^{\mathrm{syn}}(S)\simeq\nu(\mathrm{Re}(S))\] for any perfect pure \(S\). As we mentioned at the beginning of this section, our approach is inspired by the work of Burklund-Hahn-Senger, who instead of additive sheaves work with filtered objects. We now explain how the synthetic deformation presented here should conjecturally be related to the filtered object perspective of [10]. Using Remark 6.5.12, the slice tower of \(\operatorname{Map}(-,L^{\otimes 0})\) induces a filtered object \[\nu(L^{\otimes*})\in\operatorname{Fil}(\mathrm{Syn}^{\mathrm{ev}}(\mathcal{C}))\] of the form \[\cdots\to\nu(L^{\otimes 1})\to\nu(L^{\otimes 0})\to\nu(L^{\otimes-1})\to\cdots\.\] This object is the slice analogue of the Postnikov tower of [53, SS5.2], which is shown in op. cit. to encode the Adams filtration. We conjecture that \(\nu(L^{\otimes*})\) has an analogous property in the context of motivic realizations, at least for Artin-Tate objects. #### 6.5.16. **Conjecture**.: _We have that:_ 1. _The internal mapping object functor_ \[\operatorname{Map}_{\mathcal{C}}(\nu(L^{\otimes*}),-)\colon\operatorname{Syn}^{\mathrm{ev}}(\mathcal{C})\to\operatorname{Fil}(\mathcal{C})\] _can be promoted to an equivalence_ \[\operatorname{Syn}^{\mathrm{ev}}(\mathcal{C})\simeq\operatorname{Mod}_{\operatorname{fil}^{*}(L^{\otimes 0})}(\operatorname{Fil}(\mathcal{C}))\] _between the synthetic deformation and modules over_ \[\operatorname{fil}^{*}(L^{\otimes 0})\simeq\operatorname{End}_{\mathcal{C}}(\nu(L^{\otimes*}))\.\] 2. _Through the equivalence of (_1_), the synthetic realization functor_ \[\operatorname{Re}^{\operatorname{syn}}\colon\operatorname{SH}(k)\to\operatorname{Fil}(\mathcal{C})\] _can be identified on Artin-Tate objects with the functor sending_ \(X\in\operatorname{SH}^{\operatorname{AT}}(k)\) _to_ \[\cdots\to\operatorname{Re}(\operatorname{f}_{1}X)\to\operatorname{Re}(\operatorname{f}_{0}X)\to\operatorname{Re}(\operatorname{f}_{-1}X)\to\cdots\ \text{,}\] _the realization of its tower of effective covers (see_ Recollection_ 5.2.15_)._
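To make the abstract setup concrete in the motivating real case (a sketch under standard equivariant conventions; the identifications below are classical and are not statements from this text): for the real Betti realization \(\operatorname{Be}_{\operatorname{C}_{2}}\colon\operatorname{SH}(\mathbb{R})\to\operatorname{Sp}^{\operatorname{C}_{2}}\), the \(\mathcal{C}\)-Tate motive is the regular representation sphere \[L_{\operatorname{Sp}^{\operatorname{C}_{2}}}=\operatorname{Be}_{\operatorname{C}_{2}}(\mathbb{P}^{1})\simeq\operatorname{S}^{\rho}=\operatorname{S}^{1+\sigma}\,\] where \(\sigma\) denotes the sign representation of \(\operatorname{C}_{2}\), while \(\operatorname{Be}_{\operatorname{C}_{2}}(\operatorname{MGL})\) is the Real bordism spectrum \(\operatorname{MU}_{\mathbb{R}}\). The even epimorphisms of 6.5.5 are then the maps of perfect evens that split after tensoring with \(\operatorname{MU}_{\mathbb{R}}\), which recovers the setting studied in [10].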
2309.03610
Banach spaces with small weakly open subsets of the unit ball and massive sets of Daugavet and $Δ$-points
We prove that there exists an equivalent norm $\Vert\vert\cdot\vert\Vert$ on $L_\infty[0,1]$ with the following properties: (1) The unit ball of $(L_\infty[0,1],\Vert\vert\cdot\vert\Vert)$ contains non-empty relatively weakly open subsets of arbitrarily small diameter; (2) The set of Daugavet points of the unit ball of $(L_\infty[0,1],\Vert\vert\cdot\vert\Vert)$ is weakly dense; (3) The set of ccw $\Delta$-points of the unit ball of $(L_\infty[0,1],\Vert\vert\cdot\vert\Vert)$ is norming. We also show that there are points of the unit ball of $(L_\infty[0,1],\Vert\vert\cdot\vert\Vert)$ which are not $\Delta$-points, meaning that the space $(L_\infty[0,1],\Vert\vert\cdot\vert\Vert)$ fails the diametral local diameter 2 property. Finally, we observe that the space $(L_\infty[0,1],\Vert\vert\cdot\vert\Vert)$ provides both alternative and new examples that illustrate the differences between the various diametral notions for points of the unit ball of Banach spaces.
Christian Cobollo, Daniel Isert, Ginés López-Pérez, Miguel Martín, Yoël Perreau, Alicia Quero, Andrés Quilis, Daniel L. Rodríguez-Vidanes, Abraham Rueda Zoca
2023-09-07T10:06:24
http://arxiv.org/abs/2309.03610v2
Banach spaces with small weakly open subsets of the unit ball and massive sets of Daugavet and \(\Delta\)-points ###### Abstract. We prove that, for every perfect compact Hausdorff space \(\Omega\), there exists an equivalent norm \(\left\|\cdot\right\|\) on \(C(\Omega)\) with the following properties: 1. The unit ball of \((C(\Omega),\left\|\cdot\right\|)\) contains non-empty relatively weakly open subsets of arbitrarily small diameter; 2. The set of Daugavet points of the unit ball of \((C(\Omega),\left\|\cdot\right\|)\) is weakly dense; 3. The set of ccw \(\Delta\)-points of the unit ball of \((C(\Omega),\left\|\cdot\right\|)\) is norming. We also show that there are points of the unit ball of \((C(\Omega),\left\|\cdot\right\|)\) which are not \(\Delta\)-points, meaning that the space \((C(\Omega),\left\|\cdot\right\|)\) fails the diametral local diameter \(2\) property. Finally, we observe that the space \((C(\Omega),\left\|\cdot\right\|)\) provides both alternative and new examples that illustrate the differences between the various diametral notions for points of the unit ball of Banach spaces. Key words and phrases: Daugavet points; \(\Delta\)-points; points of continuity; renormings; spaces of continuous functions 2020 Mathematics Subject Classification: 46B03, 46B20, 46B22 ## 1. Introduction Recall that a Banach space \(X\) is said to have the _Daugavet property_ if every rank one bounded operator \(T:X\longrightarrow X\) satisfies the _Daugavet equation_ (DE) \[\|I+T\|=1+\|T\|,\] where \(I:X\longrightarrow X\) stands for the identity operator. Furthermore, if \(X\) has the Daugavet property, then every weakly compact operator \(T:X\to X\) satisfies (DE). Since the Daugavet equation is an extremal case of the triangle inequality for the operator norm, it is natural to expect that it imposes severe restrictions on the underlying operator. As a matter of fact, if \(\|T\|\) is an eigenvalue of \(T\), then \(T\) satisfies the Daugavet equation, and the converse holds true if the space \(X\) is uniformly convex [7, Lemma 11.3 and Theorem 11.10]. Actually, the Daugavet property puts very strong constraints on the structure of the underlying Banach space. An old result in this line is that a Banach space with the Daugavet property cannot be linearly embedded into any Banach space with an unconditional basis (cf. e.g. [24, Theorem 3.2]). Another restriction, this time of isometric nature, is the celebrated geometric characterisation of the Daugavet property exhibited in [18, Lemma 2.1], stated as follows: a Banach space \(X\) has the Daugavet property if, and only if, every point \(x\in S_{X}\) satisfies the following condition: given any slice \(S\) of \(B_{X}\) and any \(\varepsilon>0\), there exists \(y\in S\) such that \[\|x-y\|>2-\varepsilon.\] The latter characterisation, which can also be proved to hold true replacing slices with non-empty relatively weakly open subsets (resp. convex combinations of slices) [22, Lemma 3], shows that spaces with the Daugavet property live in the part of the universe of Banach spaces far away from Asplundness and the Radon-Nikodym property. Indeed, the above characterisation allows one to prove that if \(X\) has the Daugavet property, then \(X\) contains an isomorphic copy of \(\ell_{1}\), and every slice, every weakly open subset and every convex combination of slices of \(B_{X}\) has diameter two. Very recently, local versions of the Daugavet property have been considered in the following sense. **Definition 1.1**.: Let \(X\) be a Banach space and let \(x\in S_{X}\). 
We say that \(x\) is 1. a _Daugavet point_ if, for every slice \(S\) of \(B_{X}\) and every \(\varepsilon>0\), there exists \(y\in S\) such that \(\|y-x\|>2-\varepsilon\), 2. a _super Daugavet point_ if, for every non-empty relatively weakly open subset \(W\) of \(B_{X}\) and every \(\varepsilon>0\), there exists \(y\in W\) such that \(\|y-x\|>2-\varepsilon\), 3. a _ccs Daugavet point_ if, for every convex combination of slices \(C\) of \(B_{X}\) and every \(\varepsilon>0\), there exists \(y\in C\) such that \(\|y-x\|>2-\varepsilon\). A classical result, often known as Bourgain's lemma, establishes that every non-empty relatively weakly open subset of \(B_{X}\) contains a convex combination of slices of \(B_{X}\) (cf. e.g. [14, Lemma II.1]). As an immediate consequence we infer that every ccs Daugavet point is a "ccw Daugavet point", meaning that the property of the definition actually holds for every convex combination of non-empty relatively weakly open subsets of \(B_{X}\). In particular, every ccs Daugavet point is a super Daugavet point. Furthermore, it is known that the mere existence of a ccs Daugavet point implies that every convex combination of slices (and of weak open subsets) of the unit ball of the underlying space has diameter \(2\) [21, Proposition 3.12]. Apart from finite dimensional considerations, this is surprisingly the only known isomorphic obstruction to the existence of diametral points, see below for more details. Variants of the above notions, restricted to slices, weak open sets and convex combinations of slices and weak open sets containing the given point, have also been considered. **Definition 1.2**.: Let \(X\) be a Banach space and let \(x\in S_{X}\). We say that \(x\) is 1. a _\(\Delta\)-point_ if, for every slice \(S\) of \(B_{X}\) with \(x\in S\) and every \(\varepsilon>0\), there exists \(y\in S\) such that \(\|y-x\|>2-\varepsilon\), 2. a _super \(\Delta\)-point_ if, for every non-empty relatively weakly open subset \(W\) of \(B_{X}\) with \(x\in W\) and every \(\varepsilon>0\), there exists \(y\in W\) such that \(\|y-x\|>2-\varepsilon\), 3. a _ccs \(\Delta\)-point_ if, for every convex combination of slices \(C\) of \(B_{X}\) with \(x\in C\) and every \(\varepsilon>0\), there exists \(y\in C\) such that \(\|y-x\|>2-\varepsilon\); 4. a _ccw \(\Delta\)-point_ if, for every convex combination of non-empty relatively weakly open subsets \(D\) of \(B_{X}\) with \(x\in D\) and every \(\varepsilon>0\), there exists \(y\in D\) such that \(\|y-x\|>2-\varepsilon\). The notions of Daugavet and \(\Delta\)-points were introduced in [4, Section 1], whereas the rest of the notions go back to [21, Definitions 2.4 and 2.5]. See [1, 2, 16, 19, 21, 23] for further research on these notions. In particular, note that it is still unknown whether every ccs \(\Delta\)-point has to be a super \(\Delta\)-point, and whether the notions of ccs and ccw \(\Delta\)-points are different. This is due to the subtle failure of a localization of Bourgain's lemma (cf. e.g. [21, Remark 2.3]). However, all the other notions are known to be different, and can even present extreme differences, see [21] for more details. In view of the fact that the Daugavet property imposes strong restrictions on the geometric structure of the given space, a natural question is how the mere presence of Daugavet or \(\Delta\)-points affects the geometric structure of the underlying Banach space. 
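To anchor these definitions in examples, let us recall the two extremes of the spectrum (standard facts, consistent with the characterisations quoted above; the computation below is elementary and is not taken from the references). If \(X\) has the Daugavet property, for instance \(X=C[0,1]\), then every \(x\in S_{X}\) is simultaneously a Daugavet, super Daugavet and ccs Daugavet point. At the opposite extreme, no point of the unit sphere of a Hilbert space is even a \(\Delta\)-point: for \(x\in S_{\ell_{2}}\) and the slice \(S_{\delta}:=\{y\in B_{\ell_{2}}\colon\langle y,x\rangle>1-\delta\}\), every \(y\in S_{\delta}\) satisfies \[\|x-y\|^{2}=\|x\|^{2}+\|y\|^{2}-2\langle y,x\rangle\leqslant 2-2(1-\delta)=2\delta,\] so \(x\) lies in slices of arbitrarily small diameter (that is, \(x\) is a denting point), which is incompatible with being a \(\Delta\)-point.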
Yet, although it was proved in [5] that finite dimensional spaces contain no \(\Delta\)-points, and that the notion strongly negates some isometric properties of Banach spaces (asymptotic uniform smoothness and weak\({}^{*}\) asymptotic uniform convexity [2, 5], or the existence of subsymmetric bases [6] as well as of unconditional bases with small constants of unconditionality which are either shrinking or boundedly complete [2]), surprising examples have recently shown the strong isometric flavour of these notions. To name a few, there exists a space with a \(1\)-unconditional basis and a weakly dense subset of Daugavet points [6], there exists a Lipschitz-free space with the RNP and a Daugavet point which is both isomorphic to \(\ell_{1}\) and isometric to a dual space [2, 23], and there exists an equivalent norm on \(\ell_{2}\) for which the unit vector basis \(e_{1}\) is simultaneously a super Daugavet point and a ccw \(\Delta\)-point [15]. Actually, every infinite dimensional Banach space can be renormed with a \(\Delta\)-point [2], and every Banach space with a weakly null unconditional Schauder basis can be renormed with a super Daugavet point [15]. The various \(\Delta\)-notions can be seen as extreme opposites to the classical notions of denting points, points of continuity and points of strong regularity (also see [12] for precise quantitative formulations of this statement). They are localized versions of the so called _diametral diameter 2 properties_ (_DLD2P_, _DD2P_ and _DSD2P_) that have previously appeared in the literature under various names, but that were formally introduced in [11]. The DLD2P (resp. DD2P) was precisely defined there by asking all the elements of the unit sphere of a Banach space to be \(\Delta\)-points (resp. super \(\Delta\)-points). The DSD2P was originally defined by asking all the points inside the unit ball of a Banach space to be ccs \(\Delta\)-points, but it turned out to be equivalent to the Daugavet property [17]. On the other hand, its restricted version (the _restricted DSD2P_ [21]), as well as the DLD2P and the DD2P, are known to be strictly weaker properties. Yet, although the Daugavet property can be characterized by Daugavet, super Daugavet or ccs Daugavet points, it is currently unknown whether the three remaining diametral properties are equivalent. Furthermore, it is unknown whether the DLD2P forces all the weakly open subsets of the unit ball to have diameter 2 (but note that there exists a space with the DD2P, the restricted DSD2P and convex combinations of slices of arbitrarily small diameter in its unit ball [3]). The example from [6] provides an interesting insight into this question. Indeed, the space that was constructed there with a weakly dense subset of Daugavet points and a 1-unconditional basis admits non-empty relatively weakly open subsets of arbitrarily small diameter in its unit ball. In fact, each of the Daugavet points in the considered weakly dense set is a point of continuity for the identity mapping \(I:(B_{X},w)\to(B_{X},\|\cdot\|)\) (in other words, it has relative weak open neighborhoods of arbitrarily small diameter). However, this space cannot contain any point satisfying a stronger diametral condition, as it was proved in [6] (resp. [21]) that spaces with a 1-unconditional basis contain neither super \(\Delta\)-points nor ccs \(\Delta\)-points. 
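Summarising the diametral landscape just described (all implications are contained in the references quoted above; the schematic presentation is ours): \[\text{Daugavet property}\;(=\text{DSD2P})\;\Longrightarrow\;\text{DD2P}\;\Longrightarrow\;\text{DLD2P},\] where the first implication is strict by the example of [3] mentioned above (a space with the Daugavet property cannot have convex combinations of slices of small diameter), while the possible equivalence of the DLD2P, the DD2P and the restricted DSD2P remains open.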
Thus, at this point, a natural question is how big the set of stronger notions than Daugavet and \(\Delta\)-points can be in a Banach space where there are non-empty relatively weakly open subsets of arbitrarily small diameter. In view of this fact, during the last week of June 2023, in the framework of the 2023 ICMAT-IMAG DocCourse in Functional Analysis, a supervised research programme was held at IMAG (Granada), where we considered the following question: How massive can the sets of Daugavet, super \(\Delta\), super Daugavet and ccs/ccw \(\Delta\)-points be in a Banach space having non-empty relatively weakly open subsets of arbitrarily small diameter in its unit ball? The main goal of the project was to study the renorming techniques from [9], where it is proved that every Banach space containing \(c_{0}\) can be renormed in such a way that all the slices of the new unit ball have diameter 2, whereas it admits weakly open subsets of arbitrarily small diameter, and to try to build similar renormings in a more suitable context for our study, namely spaces of continuous functions. The idea is also inspired by the construction from [21, Section 4.6], where similar techniques were used in order to produce an example of a super Daugavet point which is not a ccs \(\Delta\)-point. The main aim of the present paper is to present the results obtained in this workshop. We prove that for every perfect compact Hausdorff space \(\Omega\), the space \(C(\Omega)\) admits an equivalent renorming such that the new unit ball contains non-empty relatively weakly open subsets of arbitrarily small diameter and such that the sets of Daugavet points and super \(\Delta\)-points are as big as they can be, taking into account that its unit ball contains non-empty weak open sets of small diameter. This is a big difference with the above-mentioned example of [6], where the set of super \(\Delta\)-points is empty. Furthermore, we show that this space also contains points which are simultaneously super Daugavet and ccw \(\Delta\), which is the strongest diametral notion we can get in this context. We collect the results in the following theorem. **Theorem 1.3**.: _Let \(\Omega\) be a perfect compact Hausdorff space. For every \(\varepsilon\in(0,1)\), there exists an equivalent norm \(\left\|\cdot\right\|_{\varepsilon}\) on \(C(\Omega)\) with the following properties:_ 1. _For every_ \(f\in C(\Omega)\)_,_ \(\left\|f\right\|_{\infty}\leqslant\left\|f\right\|_{\varepsilon}\leqslant\frac{1}{1-\varepsilon}\left\|f\right\|_{\infty}\)_;_ 2. _The unit ball of_ \((C(\Omega),\left\|\cdot\right\|_{\varepsilon})\) _contains non-empty relatively weakly open subsets of arbitrarily small diameter;_ 3. _The set of Daugavet points of the unit ball of_ \((C(\Omega),\left\|\cdot\right\|_{\varepsilon})\) _is weakly dense;_ 4. _The set of ccw_ \(\Delta\)_-points of the unit ball of_ \((C(\Omega),\left\|\cdot\right\|_{\varepsilon})\) _is norming (in other words, every slice of the unit ball contains a ccw_ \(\Delta\)_-point);_ 5. _There are points of the unit ball of_ \((C(\Omega),\left\|\cdot\right\|_{\varepsilon})\) _which are:_ 1. _Simultaneously super Daugavet points and ccw_ \(\Delta\)_-points;_ 2. _Simultaneously Daugavet points and preserved extreme points (hence also ccw_ \(\Delta\)_-points), but not super Daugavet points;_ 3. _Simultaneously Daugavet points and points of continuity;_ 6. 
_There are points of the unit ball of_ \((C(\Omega),\left\|\cdot\right\|_{\varepsilon})\) _which are not_ \(\Delta\)_-points (in other words,_ \((C(\Omega),\left\|\cdot\right\|_{\varepsilon})\) _fails the DLD2P)._ In particular, in the above renorming there are Daugavet points which are not super \(\Delta\)-points and there are ccw \(\Delta\)-points which are not super Daugavet points. Even though it was already known that these notions are not equivalent (see [21] for references), the various counterexamples from the literature were obtained with very different techniques. Theorem 1.3 shows that such counterexamples may live in the same Banach space. Furthermore, it is, to our knowledge, the first example of a Banach space which contains points which are both Daugavet and ccw \(\Delta\), but not super Daugavet. ## 2. Notation and preliminary results Given a Banach space \(X\), \(B_{X}\) (resp. \(S_{X}\)) stands for the closed unit ball (resp. the unit sphere) of \(X\). We denote by \(X^{*}\) the topological dual of \(X\). By a slice of \(B_{X}\), we mean any non-empty subset of \(B_{X}\) given as the intersection of \(B_{X}\) with an open half-space. If \(A\) is a subset of a Banach space \(X\), we denote by \(\operatorname{co}A\) (resp. \(\overline{\operatorname{co}}\,A\)) the convex hull (resp. the closure of the convex hull) of \(A\). Recall that a subset \(A\) in the unit ball of a Banach space \(X\) is said to be _norming_ if \(\left\|x^{*}\right\|=\sup_{x\in A}\left|x^{*}(x)\right|\) for every \(x^{*}\in X^{*}\). In particular, if \(A\) is a symmetric subset of \(B_{X}\), then this property is equivalent to \(A\) satisfying \(B_{X}=\overline{\operatorname{co}}\,A\) (in other words, to every slice of \(B_{X}\) containing an element of \(A\)). We deal with real Banach spaces only. Given a compact Hausdorff topological space \(\Omega\), \(C(\Omega)\) stands for the classical space of scalar-valued continuous functions under the maximum norm, which will be denoted by \(\|\cdot\|_{\infty}\). A well known consequence of Rainwater's theorem [13, Corollary 3.61] is that, given \(f\in C(\Omega)\) and a bounded sequence \((f_{n})\subseteq C(\Omega)\), \((f_{n}(t))\to f(t)\) for every \(t\in\Omega\) implies that \(f_{n}\stackrel{{ w}}{{\to}}f\). Let us also recall that the space \(C(\Omega)\) has the Daugavet property if, and only if, \(\Omega\) is _perfect_ (i.e. has no isolated points) [24, p. 78, Example (a)]. This property will be essential in our construction, as it allows us, together with the usual separation property, to produce nice families of non-empty open subsets with pairwise disjoint closures. These arguments are standard, and we will make use of them throughout the text with no further explanation nor any explicit reference. We then recall some classical definitions from Banach space geometry. Given a convex set \(A\) in a vector space \(X\), a point \(x_{0}\in A\) is said to be _extreme_ if the condition \(x_{0}=\frac{y+z}{2}\) for \(y,z\in A\) forces \(y=z=x_{0}\). Given a bounded closed and convex subset \(C\) of a Banach space \(X\), a point \(x_{0}\in C\) is a _preserved extreme point_ if \(x_{0}\) is an extreme point in \(\overline{C}^{w^{*}}\), where the closure is taken in the \(w^{*}\)-topology of \(X^{**}\). For easy reference, let us point out the following characterisation of preserved extreme points (whose proof can be found, for instance, in [20, Proposition 0.1.3]). 
**Proposition 2.1**.: _Let \(X\) be a Banach space and let \(C\subseteq X\) be a bounded closed and convex set. Let \(x_{0}\in C\). The following are equivalent:_ 1. \(x_{0}\) _is a preserved extreme point of_ \(C\)_;_ 2. _The slices of_ \(C\) _containing_ \(x_{0}\) _form a neighbourhood basis of_ \(x_{0}\) _in_ \(C\) _for the weak topology;_ 3. _For every pair of nets_ \((y_{s})\) _and_ \((z_{s})\) _in_ \(C\) _such that_ \(\frac{y_{s}+z_{s}}{2}\to x_{0}\) _weakly, we have_ \(y_{s}\to x_{0}\) _weakly._ Given a Banach space \(X\) and a subset \(A\subseteq X\), a point \(x_{0}\in A\) is said to be a _point of continuity_ if, for every \(\varepsilon>0\), there exists a relatively weakly open subset \(W\subseteq A\) with \(x_{0}\in W\) and \(\operatorname{diam}\left(W\right)<\varepsilon\). Observe that this means that the identity mapping \(I:(A,w)\longrightarrow(A,\|\cdot\|)\) is continuous at \(x_{0}\). In turn, this is equivalent to the fact that if a net \((a_{s})\) of elements of \(A\) satisfies \(a_{s}\xrightarrow{w}x_{0}\), then \(\|a_{s}-x_{0}\|\to 0\). A closed and bounded set \(B\) (resp. a closed convex and bounded set \(C\)) in a Banach space \(X\) is said to have the _point of continuity property (PCP)_ (resp. _convex point of continuity property (CPCP)_) if every closed subset \(A\) of \(B\) (resp. every closed and convex subset \(A\) of \(C\)) contains a point of continuity. We finally recall the definition of the "Summing Tree Simplex" from [8] that was constructed in order to distinguish between the CPCP and the PCP for subsets of Banach spaces. This set will be the stepping stone for our renormings of spaces of continuous functions. Let \(\mathbb{N}^{<\omega}\) be the set of all ordered finite sequences of positive integers including the empty sequence denoted by \(\emptyset\). If \(\alpha=(\alpha_{1},\ldots,\alpha_{n})\in\mathbb{N}^{<\omega}\), the length of \(\alpha\) is \(|\alpha|=n\) and \(|\emptyset|=0\). We use the natural order in \(\mathbb{N}^{<\omega}\) given by: \[\alpha\preceq\beta\text{ if }|\alpha|\leq|\beta|\text{ and }\alpha_{i}=\beta_{i}\text{ for all }i\in\{1,\ldots,|\alpha|\},\] and \(\emptyset\preceq\alpha\) for any \(\alpha\in\mathbb{N}^{<\omega}\). We denote by \(\alpha\sim i\) the finite sequence resulting from the concatenation of an element \(\alpha\in\mathbb{N}^{<\omega}\) with the sequence \((i)\) with only one element \(i\in\mathbb{N}\). Let \((e_{\alpha})_{\alpha\in\mathbb{N}^{<\omega}}\) be the unit vector basis of \(c_{00}(\mathbb{N}^{<\omega})\) and \((e_{\alpha}^{*})_{\alpha\in\mathbb{N}^{<\omega}}\) be the sequence of biorthogonal functionals. For a given \(\alpha\in\mathbb{N}^{<\omega}\), let \[x_{\alpha}:=\sum_{\beta\preceq\alpha}e_{\beta}.\] We consider the set \[K:=\overline{\operatorname{co}}\{x_{\alpha}\}_{\alpha\in\mathbb{N}^{<\omega}}\subset S_{c_{0}}^{+}.\] Some properties of the set \(K\) are given in [8, Theorem 1.1]. In particular, it is proved there that \(K\) has the CPCP and fails the PCP. We end the present section by providing a few more properties of \(K\). **Lemma 2.2**.: _For every \(x\in K\) and for every slice \(S\) of \(K\), \(\sup_{y\in S}\|x-y\|=1\)._ Proof.: Observe that for every \(z\in\operatorname{co}\{x_{\alpha}\}_{\alpha\in\mathbb{N}^{<\omega}}\) and for every \(\alpha\in\mathbb{N}^{<\omega}\), we have \(\lim_{n}\|z-x_{\alpha\sim n}\|=1\). Thus, since every slice of \(K\) contains some \(x_{\alpha}\), and since \(x_{\alpha\sim n}\to x_{\alpha}\) weakly, the conclusion follows from an easy density argument. 
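To see concretely the phenomenon exploited in Lemma 2.2 (an elementary verification, implicit in the proof above, which we record for later use): for every \(\alpha\in\mathbb{N}^{<\omega}\) and every \(n\in\mathbb{N}\) we have \(x_{\alpha\sim n}=x_{\alpha}+e_{\alpha\sim n}\), hence \[\|x_{\alpha\sim n}-x_{\alpha}\|=\|e_{\alpha\sim n}\|=1\ \text{ for every }n,\qquad\text{while}\qquad x_{\alpha\sim n}\xrightarrow{\ w\ }x_{\alpha},\] since \((e_{\alpha\sim n})_{n}\) is part of the unit vector basis of \(c_{0}\) and is therefore weakly null. Thus each \(x_{\alpha}\) is the weak limit of elements of \(K\) at distance exactly \(1\) from it.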
Since \(K\) has the CPCP, it is immediate that \(K\) contains non-empty relatively weakly open subsets of arbitrarily small diameter. However, we will describe a particular family of non-empty relatively weakly open subsets of small diameter, because they will be useful in order to localise ccw \(\Delta\)-points which are not super Daugavet points in the final renorming of \(C(\Omega)\) (see Remark 3.9). **Lemma 2.3**.: _For \(n\in\mathbb{N}\) and \(\rho\in(0,1/n)\), let_ \[V_{n,\rho}:=\bigcap_{i=1}^{n}\{z\in K\colon\ e_{i}^{*}(z)>1/n-\rho\}.\] _Then \(V_{n,\rho}\) is a non-empty relatively weakly open subset of \(K\) with diameter smaller than \(2/n+2n\rho\)._ Proof.: For \(i\in\{1,\ldots,n\}\), let \(x_{i}:=x_{(i)}\). Then \(x_{0}:=\frac{1}{n}\sum_{i=1}^{n}x_{i}\in V_{n,\rho}\). Clearly, it is enough to prove that for every \(z\in\operatorname{co}\{x_{\alpha}\}_{\alpha\in\mathbb{N}^{<\omega}}\cap V_{n,\rho}\), \(\|x_{0}-z\|\leqslant 1/n+n\rho\). Fix such a \(z\), and write \(z=\sum_{l=1}^{L}\lambda_{l}x_{\alpha_{l}}\) with \(\lambda_{l}>0\), \(\sum_{l=1}^{L}\lambda_{l}=1\), and \(\alpha_{l}\in\mathbb{N}^{<\omega}\). For every \(i\in\mathbb{N}\), let \[A_{i}:=\{l\colon(i)\preceq\alpha_{l}\}.\] Since \(z\in V_{n,\rho}\), we have \(e_{i}^{*}(z)=\sum_{l\in A_{i}}\lambda_{l}>1/n-\rho\) for every \(i\leqslant n\). So observe that for any given \(j\in\{1,\ldots,n\}\), we have \[\sum_{l\in A_{j}}\lambda_{l}=\sum_{i=1}^{n}\sum_{l\in A_{i}}\lambda_{l}-\sum_{\begin{subarray}{c}i=1\\ i\neq j\end{subarray}}^{n}\sum_{l\in A_{i}}\lambda_{l}\leqslant 1-(n-1)(1/n-\rho)=1/n+(n-1)\rho.\] In the same way, \[\sum_{i>n}\sum_{l\in A_{i}}\lambda_{l}\leqslant\sum_{i\in\mathbb{N}}\sum_{l\in A_{i}}\lambda_{l}-\sum_{i=1}^{n}\sum_{l\in A_{i}}\lambda_{l}\leqslant 1-n(1/n-\rho)=n\rho.\] Now let us define \(v=x_{0}-z=\frac{1}{n}\sum_{i=1}^{n}x_{i}-\sum_{l=1}^{L}\lambda_{l}x_{\alpha_{l}}\) and let us fix \(\beta\in\mathbb{N}^{<\omega}\). We want to evaluate \(|v(\beta)|\). There are three cases to consider. **Case 1.** If \(\beta=\emptyset\), then \(v(\beta)=0\), so there is nothing to do. **Case 2.** If \(|\beta|>1\), take \(j_{\beta}\in\mathbb{N}\) such that \((j_{\beta})\preceq\beta\). Then, either there is no \(l\in\{1,\ldots,L\}\) such that \((j_{\beta})\preceq\alpha_{l}\), in which case \(v(\beta)=0\), or there is an \(l\in\{1,\ldots,L\}\) such that \((j_{\beta})\preceq\alpha_{l}\), and \(|v(\beta)|=\left|\sum_{l=1}^{L}\lambda_{l}x_{\alpha_{l}}(\beta)\right|\leqslant\sum_{l\in A_{j_{\beta}}}\lambda_{l}\). Hence \(|v(\beta)|\leqslant\max\{1/n+(n-1)\rho,n\rho\}\leqslant 1/n+n\rho\). **Case 3.** If \(\beta=(j_{\beta})\) for some \(j_{\beta}\in\mathbb{N}\), then either \(j_{\beta}>n\), and \(|v(\beta)|\leqslant\sum_{l\in A_{j_{\beta}}}\lambda_{l}\leqslant n\rho\), or \(j_{\beta}\leqslant n\), and \(|v(\beta)|=\left|1/n-\sum_{l\in A_{j_{\beta}}}\lambda_{l}\right|\leqslant n\rho\) because \(1/n-\rho<\sum_{l\in A_{j_{\beta}}}\lambda_{l}\leqslant 1/n+(n-1)\rho\). It follows that \(\|x_{0}-z\|=\sup_{\beta\in\mathbb{N}^{<\omega}}|v(\beta)|\leqslant 1/n+n\rho\), as we wanted. We end the section with the following lemma, whose proof follows from [8, p. 82], but which we establish for easy future reference. **Lemma 2.4**.: _Every \(x_{\alpha}\) is a preserved extreme point of \(K\)._ ## 3. Main result The aim of the section is to prove Theorem 1.3. Let \(\Omega\) be a perfect compact Hausdorff space. 
Then we can find a point \(t_{0}\in\Omega\), open subsets \(U_{0},V_{0}\) of \(\Omega\), a family \((t_{\alpha})_{\alpha\in\mathbb{N}^{<\omega}}\) of points of \(\Omega\) and families \((U_{\alpha})_{\alpha\in\mathbb{N}^{<\omega}},(V_{\alpha})_{\alpha\in\mathbb{N}^{<\omega}}\) of open subsets of \(\Omega\) such that \[t_{0}\in U_{0}\subset\overline{U_{0}}\subset V_{0},\ t_{\alpha}\in U_{\alpha}\subset\overline{U_{\alpha}}\subset V_{\alpha}\text{ and }\overline{V_{0}}\cap\overline{V_{\alpha}}=\emptyset\text{ for every }\alpha\in\mathbb{N}^{<\omega};\] and \[\overline{V_{\alpha}}\cap\overline{V_{\beta}}=\emptyset\text{ for every distinct }\alpha,\beta\in\mathbb{N}^{<\omega}.\] By Tietze-Urysohn's lemma, we can find, for every \(\alpha\in\mathbb{N}^{<\omega}\), a continuous function \(f_{\alpha}\in S^{+}_{C(\Omega)}\) such that \[f_{\alpha}(t):=\begin{cases}1\text{ if }t\in\overline{U_{\alpha}};\\ 0\text{ if }t\notin V_{\alpha}.\end{cases}\] We will use the functions \(f_{\alpha}\) to construct a positive isometric embedding of \(c_{0}(\mathbb{N}^{<\omega})\) into \(C(\Omega)\). For every \(z:=(z_{\alpha})_{\alpha\in\mathbb{N}^{<\omega}}\in c_{00}(\mathbb{N}^{<\omega})\), we define \[\Phi(z):=\sum_{\alpha\in\mathbb{N}^{<\omega}}z_{\alpha}f_{\alpha}.\] By the disjointness of the supports of the functions \(f_{\alpha}\), this map is clearly isometric and sends positive sequences to positive functions. Thus it can be extended by density to a positive isometric embedding \(\Phi\colon c_{0}(\mathbb{N}^{<\omega})\to C(\Omega)\). **Remark 3.1**.: By construction, we also have \[\Phi(z)(t_{\beta})=z_{\beta}=e_{\beta}^{*}(z)\] for every \(\beta\in\mathbb{N}^{<\omega}\) and for every \(z:=(z_{\alpha})_{\alpha\in\mathbb{N}^{<\omega}}\in c_{0}(\mathbb{N}^{<\omega})\). Let \(K_{0}:=\Phi(K)\) be the image of the subset \(K\) of \(c_{0}(\mathbb{N}^{<\omega})\) from the preliminary section, and fix \(\varepsilon\in(0,1)\). We consider the equivalent norm \(\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{\varepsilon}\) on \(C(\Omega)\) given by the Minkowski functional of the set \[B_{\varepsilon}:=\overline{\operatorname{co}}\left((2K_{0}-\mathds{1})\cup(-2K_{0}+\mathds{1})\cup\big{(}(1-\varepsilon)B_{C(\Omega)}+\varepsilon B_{\ker(\delta_{0})}\big{)}\right),\] where \(\delta_{0}\) is the evaluation functional at the point \(t_{0}\). Observe that \((1-\varepsilon)B_{C(\Omega)}\subset B_{\varepsilon}\subset B_{C(\Omega)}\), which means that \[\left\|\cdot\right\|_{\infty}\leqslant\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{\varepsilon}\leqslant\frac{1}{1-\varepsilon}\left\|\cdot\right\|_{\infty}.\] We will prove that this renorming of \(C(\Omega)\) satisfies all the properties of Theorem 1.3. Let \(A:=(2K_{0}-\mathds{1})\), \(B:=(1-\varepsilon)B_{C(\Omega)}+\varepsilon B_{\ker(\delta_{0})}\) and \(A_{\varepsilon}:=A\cup-A\cup B\). For every \(\alpha\in\mathbb{N}^{<\omega}\), let \(h_{\alpha}:=\sum_{\beta\preceq\alpha}f_{\beta}\) and \(u_{\alpha}:=2h_{\alpha}-\mathds{1}\). Observe that \(A=\overline{\operatorname{co}}\{u_{\alpha}\}_{\alpha\in\mathbb{N}^{<\omega}}\). We will start by proving that our renorming produces weakly open subsets of arbitrarily small diameter in the new unit ball. 
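Before doing so, let us record two elementary observations about \(A\) and \(B\) that will be used repeatedly in what follows (they follow at once from the construction; we state them explicitly for convenience). Every \(h\in K_{0}\) satisfies \(0\leqslant h\leqslant\mathds{1}\) and \(h(t_{0})=0\), since its support is contained in \(\bigcup_{\alpha}V_{\alpha}\), which is disjoint from \(V_{0}\ni t_{0}\). Consequently, \[u(t_{0})=-1\quad\text{and}\quad\left\|u\right\|_{\infty}=1\qquad\text{for every }u\in A=2K_{0}-\mathds{1},\] while every \(\varphi=(1-\varepsilon)f+\varepsilon g\in B\) satisfies \(|\varphi(t_{0})|=(1-\varepsilon)|f(t_{0})|\leqslant 1-\varepsilon\), because \(g(t_{0})=0\).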
**Proposition 3.2**.: _The set \(B_{\varepsilon}\) admits non-empty relatively weakly open subsets of arbitrarily small diameter._ Proof.: For \(n\in\mathbb{N}\) and \(\rho>0\), consider \[\tilde{V}_{n,\rho}:=\left\{f\in B_{\varepsilon}:f(t_{0})<-1+\rho,\text{ and }f(t_{i})>2\left(\frac{1}{n}-\rho\right)-1\text{ for every }i\in\{1,...,n\}\right\},\] where \(t_{i}\) denotes \(t_{(i)}\), the point associated to the sequence \((i)\). Notice that \(f_{0}:=2\Phi(x_{0})-\mathds{1}\in\tilde{V}_{n,\rho}\), where \(x_{0}:=\frac{1}{n}\sum_{i=1}^{n}x_{i}\). By density, it is enough to find an upper bound for the distance of \(f\) to \(f_{0}\) for every \(f\in\tilde{V}_{n,\rho}\cap\operatorname{co}A_{\varepsilon}\). So pick such an \(f\), and write \[f=\lambda_{1}f_{1}+\lambda_{2}f_{2}+\lambda_{3}f_{3}\] with \(\lambda_{1},\lambda_{2},\lambda_{3}\in[0,1]\), \(\sum_{i=1}^{3}\lambda_{i}=1\), \(f_{1}\in A\), \(f_{2}\in-A\) and \(f_{3}\in B\). By assumption, we have \[-1+\rho>f(t_{0})\geq-\lambda_{1}+\lambda_{2}-(1-\varepsilon)\lambda_{3}\geq-\lambda_{1}-(1-\varepsilon)\lambda_{3},\] hence \[1-\rho<\lambda_{1}+(1-\varepsilon)\lambda_{3}.\] In particular, \[1-\rho<\lambda_{1}+\lambda_{3}=1-\lambda_{2},\] and we get \(\lambda_{2}<\rho\). Furthermore, since \[\lambda_{1}=1-\lambda_{2}-\lambda_{3}\leq 1-\lambda_{3},\] we have \[1-\rho<1-\lambda_{3}+(1-\varepsilon)\lambda_{3},\] and thus \(\lambda_{3}<\rho/\varepsilon\). Finally \[\lambda_{1}=1-\lambda_{2}-\lambda_{3}>1-(1+1/\varepsilon)\rho.\] It follows that \[\left|\!\left|\!\left|f-f_{1}\right|\!\right|\!\right|_{\varepsilon}\leq(1-\lambda_{1})\left|\!\left|\!\left|f_{1}\right|\!\right|\!\right|_{\varepsilon}+\lambda_{2}\left|\!\left|\!\left|f_{2}\right|\!\right|\!\right|_{\varepsilon}+\lambda_{3}\left|\!\left|\!\left|f_{3}\right|\!\right|\!\right|_{\varepsilon}<2(1+1/\varepsilon)\rho.\] Write \(f_{1}:=2\Phi(z)-\mathds{1}\) with \(z\in K\). For every \(i\in\{1,\ldots,n\}\), we have \[2(1/n-\rho)-1<f(t_{i})=\lambda_{1}f_{1}(t_{i})+\lambda_{2}f_{2}(t_{i})+\lambda_{3}f_{3}(t_{i})<f_{1}(t_{i})+2(1+1/\varepsilon)\rho,\] thus \[z_{i}=\Phi(z)(t_{i})=\frac{f_{1}(t_{i})+1}{2}>\frac{1}{n}-(2+1/\varepsilon)\rho,\] which means \(z\in V_{n,\tilde{\rho}}\) for \(\tilde{\rho}=(2+1/\varepsilon)\rho\). So by Lemma 2.3 (applicable once \(\rho\) is small enough that \(\tilde{\rho}<1/n\)), we get \(\left\|x_{0}-z\right\|\leqslant 2/n+2n\tilde{\rho}\), and hence, since \(\Phi\) is an isometry, \[\left\|f_{0}-f_{1}\right\|_{\infty}=2\left\|\Phi(x_{0})-\Phi(z)\right\|_{\infty}=2\left\|x_{0}-z\right\|\leqslant 4/n+4n\tilde{\rho}.\] Therefore \(\left|\!\left|\!\left|f-f_{0}\right|\!\right|\!\right|_{\varepsilon}\leqslant\left|\!\left|\!\left|f-f_{1}\right|\!\right|\!\right|_{\varepsilon}+\frac{1}{1-\varepsilon}\left\|f_{1}-f_{0}\right\|_{\infty}\), which can be made arbitrarily small by first taking \(n\) large and then \(\rho\) small. The conclusion follows. Actually, we can say a bit more in that regard: the set \(B_{\varepsilon}\) admits points of continuity. Indeed, the latter claim immediately follows from the fact that the set \(K\) itself admits points of continuity (it has the CPCP, see [8, Theorem 1.1(c)]) together with the following transfer result. **Proposition 3.3**.: _Let \(z\) be a point of continuity of \(K\). Then \(2\Phi(z)-\mathds{1}\) is a point of continuity of \(B_{\varepsilon}\)._ Proof.: Let \(z\in K\) be a point of continuity. To show that \(f:=2\Phi(z)-\mathds{1}\) is a point of continuity of \(B_{\varepsilon}\), it is enough by density to prove that for every net \((f_{s})\) in \(\operatorname{co}A_{\varepsilon}\), we have that \(f_{s}\to f\) weakly if, and only if, \(\left|\!\left|\!\left|f-f_{s}\right|\!\right|\!\right|_{\varepsilon}\to 0\). So consider such a net, and for every \(s\), write \[f_{s}=\lambda_{s}^{1}f_{s}^{1}+\lambda_{s}^{2}f_{s}^{2}+\lambda_{s}^{3}f_{s}^{3}\] with \(\lambda_{s}^{1},\lambda_{s}^{2},\lambda_{s}^{3}\in[0,1]\), \(\sum_{i=1}^{3}\lambda_{s}^{i}=1\), \(f_{s}^{1}\in A\), \(f_{s}^{2}\in-A\) and \(f_{s}^{3}\in B\). 
If \(f_{s}\to f\) weakly, then \[f_{s}(t_{0})\to f(t_{0})=-1.\] Now since \(f_{s}^{1}(t_{0})=-1\), \(f_{s}^{2}(t_{0})=1\) and \(f_{s}^{3}(t_{0})\in[-1+\varepsilon,1-\varepsilon]\) for every \(s\), it immediately follows that \[\lambda_{s}^{1}\to 1\text{ and }\lambda_{s}^{2},\lambda_{s}^{3}\to 0.\] From the above we conclude that \(f_{s}^{1}\to f\) weakly. For every \(s\), pick \(z_{s}\in K\) such that \(f_{s}^{1}=2\Phi(z_{s})-\mathds{1}\). Then \(\Phi(z_{s})\to\Phi(z)\) weakly, and since \(\Phi\) is a linear isometry, we get that \(z_{s}\to z\) weakly. But \(z\) is a point of continuity of \(K\), so it follows that \(\left\|z-z_{s}\right\|\to 0\). Going back to \(C(\Omega)\), we get that \(\left|\!\left|\!\left|\Phi(z)-\Phi(z_{s})\right|\!\right|\!\right|_{\varepsilon}\to 0\), and thus \(\left|\!\left|\!\left|f-f_{s}\right|\!\right|\!\right|_{\varepsilon}\to 0\), as we wanted. We will now prove that the set of Daugavet points of \(B_{\varepsilon}\) is weakly dense. For this purpose, we will need the following crucial approximation lemma. **Lemma 3.4**.: _Let \(\varphi\in B\), let \((W_{i})_{i\in I}\) be a collection of pairwise disjoint non-empty open subsets of \(\Omega\backslash V_{0}\), and for every \(i\in I\), let \((p_{i}^{n}),(q_{i}^{n})\) be sequences of distinct points of \(W_{i}\). Then there exists a sequence \((\varphi_{n})\subset B\) such that \(\varphi_{n}\to\varphi\) weakly and \(\varphi_{n}(p_{i}^{n})=-\varphi_{n}(q_{i}^{n})=1\) for every \(i,n\)._ Proof.: Write \(\varphi=(1-\varepsilon)f+\varepsilon g\) with \(f\in B_{C(\Omega)}\) and \(g\in B_{\ker{(\delta_{0})}}\). For every \(i\in I\), we can find sequences \((U^{n}_{i}),(\tilde{U^{n}_{i}}),(V^{n}_{i}),(\tilde{V^{n}_{i}})\) of open subsets of \(\Omega\) such that 1. The \(U^{n}_{i}\)'s and \(V^{n}_{i}\)'s have pairwise disjoint closures; 2. \(p^{n}_{i}\in\tilde{U^{n}_{i}}\subset\overline{\tilde{U^{n}_{i}}}\subset U^{n}_{i}\subset\overline{U^{n}_{i}}\subset W_{i}\); 3. \(q^{n}_{i}\in\tilde{V^{n}_{i}}\subset\overline{\tilde{V^{n}_{i}}}\subset V^{n}_{i}\subset\overline{V^{n}_{i}}\subset W_{i}\). By Tietze-Urysohn's lemma, we can find continuous functions \(b^{n}_{i}\) and \(c^{n}_{i}\) such that \(0\leqslant b^{n}_{i},c^{n}_{i}\leqslant 1\) and \[b^{n}_{i}(t):=\begin{cases}1\text{ if }t\in\tilde{U^{n}_{i}};\\ 0\text{ if }t\in\Omega\backslash U^{n}_{i};\end{cases}\text{ and }c^{n}_{i}(t):=\begin{cases}1\text{ if }t\in\tilde{V^{n}_{i}};\\ 0\text{ if }t\in\Omega\backslash V^{n}_{i}.\end{cases}\] Let \[f_{n}:=f\cdot\left(\mathds{1}-\sum_{i\in I}(b^{n}_{i}+c^{n}_{i})\right)+\sum_{i\in I}(b^{n}_{i}-c^{n}_{i})\text{ and }g_{n}:=g\cdot\left(\mathds{1}-\sum_{i\in I}(b^{n}_{i}+c^{n}_{i})\right)+\sum_{i\in I}(b^{n}_{i}-c^{n}_{i}).\] Note that these functions are well defined and continuous by the disjointness of the closures of the \(U^{n}_{i}\)'s and \(V^{n}_{i}\)'s. Furthermore, we have that \(f_{n}=f\) and \(g_{n}=g\) on \(\Omega\backslash\left(\bigcup_{i\in I}(U^{n}_{i}\cup V^{n}_{i})\right)\), \(\left\lVert f_{n}\right\rVert_{\infty},\left\lVert g_{n}\right\rVert_{\infty}\leqslant 1\) and \(f_{n}(p^{n}_{i})=g_{n}(p^{n}_{i})=-f_{n}(q^{n}_{i})=-g_{n}(q^{n}_{i})=1\). In particular, \(f_{n}\in B_{C(\Omega)}\), \(f_{n}\to f\) weakly, \(g_{n}\in B_{\ker{(\delta_{0})}}\) and \(g_{n}\to g\) weakly. So \(\varphi_{n}=(1-\varepsilon)f_{n}+\varepsilon g_{n}\) does the job. 
**Proposition 3.5**.: _Let \(E\) be the set of all functions \(u\) in \(B_{\varepsilon}\) of the form \(u=\theta\lambda\psi+(1-\lambda)\varphi\), where \(\theta,\lambda,\psi,\varphi\) satisfy the following conditions:_ 1. \(\theta\in\{-1,1\}\) _and_ \(\lambda\in[0,1]\)_;_ 2. \(\psi=2h-\mathds{1}\)_, where_ \(h\in K_{0}\) _is the image of a finitely supported element of_ \(K\)_;_ 3. \(\varphi\in B\) _takes values_ \(-1\) _and_ \(1\) _on every_ \(U_{\alpha}\)_._ _Then \(E\) is weakly dense in \(B_{\varepsilon}\). Furthermore, every \(u\in E\) is a Daugavet point in \((C(\Omega),\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{\varepsilon})\)._ Proof.: Let us first prove that the set \(E\) is weakly dense in \(B_{\varepsilon}\). Since the sets \(A\) and \(B\) are convex, and since \(\frac{A-A}{2}\subset B_{\ker(\delta_{0})}\subset B\), it follows from [10, Lemma 2.4] that \[\operatorname{co}\left(A\cup-A\cup B\right)=\operatorname{co}\left(A\cup B \right)\cup\operatorname{co}\left(-A\cup B\right).\] Hence, it is sufficient to prove that every element of \(\operatorname{co}\left(A\cup B\right)\), resp. of \(\operatorname{co}\left(-A\cup B\right)\), is the weak limit of a sequence in \(E\). We will do the proof for \(\operatorname{co}\left(A\cup B\right)\), the other case being analogous. Let \(u=\lambda\psi+(1-\lambda)\varphi\) with \(\lambda\in[0,1]\), \(\psi\in A\) and \(\varphi\in B\). First, write \(\psi=2h-\mathds{1}\) where \(h\in K_{0}\). Since the set of all finitely supported elements of \(K\) is dense in \(K\), and since \(K\) is mapped isometrically onto \(K_{0}\), we can find a sequence \((h_{n})\) in \(K_{0}\) which converges in norm to \(h\) and such that every \(h_{n}\) is the image of a finitely supported element of \(K\). We define \(\psi_{n}:=2h_{n}-\mathds{1}\). Next, for every \(\alpha\in\mathbb{N}^{<\omega}\), pick a sequence \((W^{n}_{\alpha})_{n\in\mathbb{N}}\) of non-empty open subsets of \(U_{\alpha}\) with pairwise disjoint closures (i.e. \(\overline{W^{n}_{\alpha}}\cap\overline{W^{m}_{\alpha}}=\emptyset\) for every \(m\neq n\)). Then pick arbitrary pairs of points \(\{p^{n}_{\alpha},q^{n}_{\alpha}\}\subset W^{n}_{\alpha}\). By Lemma 3.4, we can find a sequence \((\varphi_{n})\) in \(B\) which converges weakly to \(\varphi\) and such that \(\varphi_{n}(p^{n}_{\alpha})=-\varphi_{n}(q^{n}_{\alpha})=1\) for every \(\alpha,n\). Clearly, \(u_{n}=\lambda\psi_{n}+(1-\lambda)\varphi_{n}\) belongs to \(E\) and converges weakly to \(u\), so we are done. Now let us prove that every element of \(E\) is a Daugavet point. Since \(E\) is symmetric, it is sufficient to show that every element of the form \(u=\lambda\psi+(1-\lambda)\varphi\), where \(\psi\) and \(\varphi\) are as in the statement of the proposition, is a Daugavet point. So take such a function \(u\), and pick, for every \(\alpha\in\mathbb{N}^{<\omega}\), a point \(q_{\alpha}\in U_{\alpha}\) such that \(\varphi(q_{\alpha})=-1\). Notice that by assumption, \(\psi\) takes value \(-1\) on all but finitely many \(U_{\beta}\)'s, which means \(u(q_{\beta})=-1\) for all but finitely many \(\beta\). Let \(S\) be a slice of \(B_{\varepsilon}\). Then \(S\) has non-empty intersection with \(A\), \(-A\) or \(B\), so there are three cases to consider. **Case 1.** If \(A\cap S\) is non-empty, then \(S\) must contain one of the functions \(u_{\alpha}\) that generate \(A\) by closed convex hull.
By assumption, \(\psi\) takes value \(-1\) on all but finitely many \(U_{\beta}\)'s, so since \(u_{\alpha\sim n}\to u_{\alpha}\) weakly, we can find \(n_{0}\in\mathbb{N}\) such that \(u_{\alpha\sim n_{0}}\in S\) and \(\psi\) is equal to \(-1\) on \(U_{\alpha\sim n_{0}}\). Hence \[\left|\!\left|\!\left|u-u_{\alpha\sim n_{0}}\right|\!\right|\!\right|_{ \varepsilon}\geq\left\|u-u_{\alpha\sim n_{0}}\right\|_{\infty}\geq\left|(u-u_{ \alpha\sim n_{0}})(q_{\alpha\sim n_{0}})\right|=2.\] **Case 2.** If \(-A\cap S\) is non-empty, then \(S\) must contain one of the \(-u_{\alpha}\)'s. Since those take value \(1\) on all except finitely many \(U_{\beta}\)'s, it suffices to compute the value of \(u+u_{\alpha}\) at \(q_{\beta}\), where \(\beta\) is such that \(\psi\) is equal to \(-1\) on \(U_{\beta}\) and \(-u_{\alpha}\) is equal to \(1\) on \(U_{\beta}\). **Case 3.** Assume that \(B\cap S\) is non-empty, and pick \(\tilde{\varphi}\) in this set. By assumption, there exists \(\alpha\in\mathbb{N}^{<\omega}\) such that \(u(q_{\alpha})=-1\). Fix \(\delta>0\). By continuity, we can find an open neighbourhood \(W_{\alpha}\subset U_{\alpha}\) of \(q_{\alpha}\) such that \(u\) is smaller than \(-1+\delta\) on \(W_{\alpha}\). Appealing to Lemma 3.4, we can then approximate \(\tilde{\varphi}\) by a sequence of elements of \(B\) which attain value \(1\) in \(W_{\alpha}\). Hence, without loss of generality, \(\tilde{\varphi}\) satisfies this latter property, and it follows that \(\left|\!\left|\!\left|u-\tilde{\varphi}\right|\!\right|\!\right|_{\varepsilon} \geq\left|(u-\tilde{\varphi})(t)\right|\geq 2-\delta\) for any \(t\in W_{\alpha}\) with \(\tilde{\varphi}(t)=1\). The conclusion follows by letting \(\delta\) go to \(0\). As a direct consequence, we get: **Corollary 3.6**.: _Every function \(u\in 2K_{0}-\mathds{1}\) is a Daugavet point in \((C(\Omega),\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{\varepsilon})\)._ **Proposition**.: _The functions \(u_{\alpha}\) are not super Daugavet points in \((C(\Omega),\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{\varepsilon})\)._ Proof.: For every \(n\in\mathbb{N}\), let \(u_{\alpha}^{n}:=2\left(h_{\alpha}+\frac{1}{n}\sum_{i=1}^{n}f_{\alpha\sim i}\right)-\mathds{1}\).
We have \[\left|\!\left|\!\left|u_{\alpha}-u_{\alpha}^{n}\right|\!\right|\!\right|_{ \varepsilon}\leq\frac{1}{1-\varepsilon}\left\|u_{\alpha}-u_{\alpha}^{n}\right\| _{\infty}=\frac{1}{1-\varepsilon}\left\|\Phi\left(\frac{1}{n}\sum_{i=1}^{n}x_{ \alpha\sim i}\right)\right\|_{\infty}=\frac{1}{n(1-\varepsilon)}.\] Consider \[\tilde{W}_{n,\rho}:=\left\{f\in B_{\varepsilon}:f(t_{0})<-1+\rho,\text{ and }f(t_{\alpha\sim i})>2\left(\frac{1}{n}-\rho\right)-1\text{ for every }i\in\{1,...,n\}\right\}.\] Then we can show as in the proof of Proposition 3.2 that the diameter of \(\tilde{W}_{n,1/n^{2}}\) goes to \(0\) as \(n\) goes to infinity. Since \(u_{\alpha}^{n}\in\tilde{W}_{n,1/n^{2}}\) and since the distance from \(u_{\alpha}\) to \(u_{\alpha}^{n}\) goes to \(0\), we get that \(u_{\alpha}\) is not a super Daugavet point. We will now show that \(B_{\varepsilon}\) contains points satisfying stronger diametral notions. We start with the following easy observation. **Lemma 3.10**.: _Let \(\varphi\in B_{\varepsilon}\) be a function taking value \(1\) and value \(-1\) on every \(U_{\alpha}\). Then \(\varphi\) belongs to \(B\)._ Proof.: Fix \(\delta\in(0,1)\). We can find \(h_{1},h_{2}\in K_{0}\), \(\psi\in B\) and \(\lambda_{1},\lambda_{2},\lambda_{3}\geq 0\) such that \(\lambda_{1}+\lambda_{2}+\lambda_{3}=1\) and \(\left\|\varphi-(\lambda_{1}(2h_{1}-\mathds{1})+\lambda_{2}(-2h_{2}+\mathds{1}) +\lambda_{3}\psi)\right\|_{\infty}<\delta\). Also, we may assume without loss of generality that \(h_{1}\) and \(h_{2}\) are the images of finitely supported elements of \(K\). So choose \(\alpha\in\mathbb{N}^{<\omega}\) belonging to neither of these supports. Then \(2h_{1}-\mathds{1}=-1\) and \(-2h_{2}+\mathds{1}=1\) on \(U_{\alpha}\). Pick \(p_{\alpha},q_{\alpha}\in U_{\alpha}\) such that \(\varphi(p_{\alpha})=-\varphi(q_{\alpha})=1\). Then \[1-\delta<-\lambda_{1}+\lambda_{2}+\lambda_{3}\psi(p_{\alpha})<-\lambda_{1}+ \lambda_{2}+\lambda_{3}=1-2\lambda_{1},\] which implies \(\lambda_{1}<\delta/2\). Furthermore, \[-1+\delta>-\lambda_{1}+\lambda_{2}+\lambda_{3}\psi(q_{\alpha})>-\lambda_{1}+ \lambda_{2}-\lambda_{3}=-1+2\lambda_{2},\] which implies \(\lambda_{2}<\delta/2\). Hence \(\lambda_{3}>1-\delta\), and it follows that \[\left\|\varphi-\psi\right\|_{\infty} \leq\lambda_{1}\left\|2h_{1}-\mathds{1}\right\|_{\infty}+\lambda_{2}\left\|2h_{2}-\mathds{1}\right\|_{\infty}+(1-\lambda_{3})\left\|\psi\right\|_{\infty}+\left\|\varphi-(\lambda_{1}(2h_{1}-\mathds{1})+\lambda_{2}(-2h_{2}+\mathds{1})+\lambda_{3}\psi)\right\|_{\infty}<3\delta.\] Since \(B\) is closed, the conclusion follows. **Proposition 3.11**.: _Let \(\varphi\in B_{\varepsilon}\) be a function taking value \(1\) and value \(-1\) on every \(U_{\alpha}\). Then \(\varphi\) is simultaneously a super Daugavet point and a ccw \(\Delta\)-point in \((C(\Omega),\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{\varepsilon})\)._ Proof.: Let us assume that for every \(\alpha\in\mathbb{N}^{<\omega}\), there exist \(p_{\alpha},q_{\alpha}\in U_{\alpha}\) such that \(\varphi(p_{\alpha})=-\varphi(q_{\alpha})=1\). We first prove that \(\varphi\) is a super Daugavet point. Let \(W\) be a non-empty relatively weakly open subset of \(B_{\varepsilon}\). By Proposition 3.5 there exists a function \(u\in E\) which belongs to the weakly open set \(W\).
Write \(u=\theta\lambda u_{1}+(1-\lambda)u_{2}\) with \(\theta\in\{-1,1\}\), \(\lambda\in[0,1]\), \(u_{1}\in A\) and \(u_{2}\in B\). We will assume that \(\theta=1\), the other case being analogous. By continuity of \(\varphi\), we can find for every \(\delta>0\) an open neighbourhood \(W_{\alpha}\subset U_{\alpha}\) of \(p_{\alpha}\) such that \(\varphi\) is greater than \(1-\delta\) on \(W_{\alpha}\). Up to approximating \(u_{2}\) as in Lemma 3.4, we may assume without loss of generality that \(u_{2}\) takes value \(-1\) on every \(W_{\alpha}\). Hence \(u\) takes value \(-1\) on all but finitely many \(W_{\alpha}\)'s, and it follows that \(\left|\!\left|\!\left|\varphi-u\right|\!\right|\!\right|_{\varepsilon}\geq \left\|\varphi-u\right\|_{\infty}>2-\delta\). The conclusion follows by letting \(\delta\) go to \(0\). Next, let us prove that \(\varphi\) is a ccw \(\Delta\)-point. Assume that \(\varphi\in\sum_{i=1}^{n}\lambda_{i}W_{i}\), where the \(W_{i}\)'s are non-empty relatively weakly open subsets of \(B_{\varepsilon}\), \(\lambda_{i}>0\), and \(\sum_{i=1}^{n}\lambda_{i}=1\). Then write \(\varphi=\sum_{i=1}^{n}\lambda_{i}\varphi_{i}\) with \(\varphi_{i}\in W_{i}\). Since \(\left\|\varphi_{i}\right\|_{\infty}\leq 1\) for every \(i\), it follows that \(\varphi_{i}(p_{\alpha})=-\varphi_{i}(q_{\alpha})=1\). In particular, every \(\varphi_{i}\) belongs to \(B\) by Lemma 3.10. Fix \(\delta\in(0,1)\), and pick a non-empty open set \(U\subset\Omega\backslash V_{0}\) such that \(\varphi_{i}>1-\delta\) on \(U\) for every \(i\). Then fix a sequence \((t_{k})\) of distinct points of \(U\). By Lemma 3.4, we can find for every \(i\) a sequence of functions \((\varphi_{i}^{k})\) in \(B_{\varepsilon}\) which converges weakly to \(\varphi_{i}\) and such that \(\varphi_{i}^{k}(t_{k})=-1\) for every \(i,k\). Then we get that for large enough \(k\)'s, each \(\varphi_{i}^{k}\) belongs to the corresponding \(W_{i}\). Hence \(\sum_{i=1}^{n}\lambda_{i}\varphi_{i}^{k}\in\sum_{i=1}^{n}\lambda_{i}W_{i}\). Finally, notice that \[\left\|\varphi-\sum_{i=1}^{n}\lambda_{i}\varphi_{i}^{k}\right\|_{\varepsilon} \geq\left\|\varphi-\sum_{i=1}^{n}\lambda_{i}\varphi_{i}^{k}\right\|_{\infty} \geq\left|\varphi(t_{k})-\sum_{i=1}^{n}\lambda_{i}\varphi_{i}^{k}(t_{k}) \right|>2-\delta,\] since \(t_{k}\in U\). The conclusion follows by letting \(\delta\) go to \(0\). **Corollary 3.12**.: _The set of all points of \(B_{\varepsilon}\) which are simultaneously Daugavet points and ccw \(\Delta\)-points is norming._ Proof.: Every slice \(S\) of \(B_{\varepsilon}\) intersects either \(A\), \(-A\) or \(B\). But since every slice of \(A\) must contain one of the \(u_{\alpha}\)'s, and since functions taking value \(1\) and \(-1\) on every \(U_{\alpha}\) are weakly dense in \(B\), the result immediately follows from Propositions 3.8 and 3.11. Finally, let us prove that there are points of \(B_{\varepsilon}\) which are not \(\Delta\)-points. We first provide an easy criterion for membership in \(B\). **Lemma 3.13**.: _Let \(f\in C(\Omega)\). Then, \(f\in B\) if, and only if,_ \[\max\left\{\frac{|f(t_{0})|}{1-\varepsilon},\left\|f\right\|_{\infty}\right\} \leqslant 1.\] Proof.: First, assume that \(f=(1-\varepsilon)g+\varepsilon h\) with \(\left\|g\right\|_{\infty},\left\|h\right\|_{\infty}\leqslant 1\) and \(h(t_{0})=0\).
Then \[\left|f(t_{0})\right| =\left|(1-\varepsilon)g(t_{0})+\varepsilon h(t_{0})\right| =\left|(1-\varepsilon)g(t_{0})\right| \leqslant(1-\varepsilon)\|g\|_{\infty}\leqslant(1-\varepsilon).\] Moreover, \[\|f\|_{\infty}\leqslant(1-\varepsilon)\|g\|_{\infty}+\varepsilon\|h\|_{\infty} \leqslant 1-\varepsilon+\varepsilon=1.\] For the other direction, assume that \(f\) satisfies that \(\max\left\{\frac{|f(t_{0})|}{1-\varepsilon},\|f\|_{\infty}\right\}\leqslant 1\). Then, let \[g:=\frac{(f\wedge(1-\varepsilon))\vee(-1+\varepsilon)}{1-\varepsilon}\text{ and }h:=\frac{f-(1-\varepsilon)g}{\varepsilon}.\] We have that \(\|g\|_{\infty}\leqslant 1\) and \(g(t_{0})=f(t_{0})/(1-\varepsilon)\). Moreover, if \(t\in\Omega\) is such that \(|f(t)|\leqslant 1-\varepsilon\), then \(h(t)=0\) (in particular, \(h(t_{0})=0\)). Otherwise, if \(t\) is such that \(|f(t)|>1-\varepsilon\), then \(g(t)=\operatorname{sign}(f(t))\) and, using that \(\|f\|_{\infty}\leqslant 1\), we get \[\left|h(t)\right| =\left|\frac{f(t)-(1-\varepsilon)g(t)}{\varepsilon}\right| =\left|\frac{f(t)-(1-\varepsilon)\operatorname{sign}(f(t))}{\varepsilon}\right| \leqslant\left|\frac{\varepsilon}{\varepsilon}\right|=1.\] Indeed, if \(f(t)>0\), then \(f(t)\in(1-\varepsilon,1]\) and \(\operatorname{sign}(f(t))=1\), so \(f(t)-(1-\varepsilon)\operatorname{sign}(f(t))=f(t)-(1-\varepsilon)\in(0,\varepsilon]\); if \(f(t)<0\), then \(f(t)\in[-1,-1+\varepsilon)\) and \(\operatorname{sign}(f(t))=-1\), thus \(f(t)+1-\varepsilon\in[-\varepsilon,0)\). Hence, we conclude that \(h\in B_{\ker(\delta_{0})}\), and so \(f=(1-\varepsilon)g+\varepsilon h\in B\), which finishes the proof. **Proposition 3.14**.: _The space \((C(\Omega),\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{\varepsilon})\) fails the DLD2P._ Proof.: Pick any \(s\in U_{0}\) distinct from \(t_{0}\). Appealing to the Tietze-Urysohn lemma, we can construct \(f\in(1-\varepsilon)B_{C(\Omega)}\subset B_{\varepsilon}\) such that \(f(t_{0})=1-\varepsilon\) and \(f(s)=-1+\varepsilon\). We will show that \(\tilde{f}:=\dfrac{f}{\left|\!\left|\!\left|f\right|\!\right|\!\right|_{\varepsilon}}\) is not a \(\Delta\)-point. To do so, pick \(0<\eta<2-3\varepsilon\), and consider \[S:=\{\varphi\in B_{\varepsilon}\colon\varphi(t_{0})-\varphi(s)>2-2\varepsilon -\eta\}.\] We have \[\tilde{f}(t_{0})-\tilde{f}(s)=\dfrac{f(t_{0})-f(s)}{\left|\!\left|\!\left|f \right|\!\right|\!\right|_{\varepsilon}}\geqslant 2-2\varepsilon,\] so \(S\) is a slice of \(B_{\varepsilon}\). Also, notice that \(\tilde{f}\in S\). Since \(B_{\varepsilon}=\overline{\operatorname{co}}\,A_{\varepsilon}\), it suffices, by [16, Lemma 2.1], to provide a uniform bound away from \(2\) for the distance from \(\tilde{f}\) to elements of \(S\cap A_{\varepsilon}\). Now observe that for every \(\varphi\in S\), we have \(\varphi(t_{0})>1-2\varepsilon-\eta\) and \(\varphi(s)<-1+2\varepsilon+\eta\), hence \(S\cap A=\emptyset\) and \(S\cap-A=\emptyset\). So it suffices to deal with elements of \(B\). Pick \(\varphi\in S\cap B\). On the one hand, we have that \[1\geqslant\left|\!\left|\!\left|f\right|\!\right|\!\right|_{\varepsilon} \geqslant\dfrac{(\delta_{0}-\delta_{s})(f)}{2-\varepsilon}=\dfrac{f(t_{0})-f( s)}{2-\varepsilon}=\dfrac{2-2\varepsilon}{2-\varepsilon}=1-\dfrac{\varepsilon}{2- \varepsilon}.\] On the other hand, we have, by Lemma 3.13,
\[\left|\!\left|\!\left|f-\varphi\right|\!\right|\!\right|_{\varepsilon} \leqslant\max\left\{\dfrac{|f(t_{0})-\varphi(t_{0})|}{1-\varepsilon},\left\|f- \varphi\right\|_{\infty}\right\}\leqslant 2-\varepsilon,\] since \(\dfrac{|f(t_{0})-\varphi(t_{0})|}{1-\varepsilon}\leqslant\dfrac{\varepsilon+ \eta}{1-\varepsilon}\leqslant 1-\varepsilon\) and \(\left\|f-\varphi\right\|_{\infty}\leqslant\left\|f\right\|_{\infty}+\left\| \varphi\right\|_{\infty}\leqslant 2-\varepsilon\). Therefore, \[\left|\!\left|\!\left|\tilde{f}-\varphi\right|\!\right|\!\right|_{\varepsilon} \leqslant\left|\!\left|\!\left|\tilde{f}-f\right|\!\right|\!\right|_{ \varepsilon}+\left|\!\left|\!\left|f-\varphi\right|\!\right|\!\right|_{ \varepsilon}\leqslant 1-\left|\!\left|\!\left|f\right|\!\right|\!\right|_{ \varepsilon}+2-\varepsilon\leqslant\dfrac{\varepsilon}{2-\varepsilon}+2- \varepsilon=2-\dfrac{\varepsilon(1-\varepsilon)}{2-\varepsilon}.\] Consequently, \(\sup_{\varphi\in S\cap B}\left|\!\left|\!\left|\tilde{f}-\varphi\right|\! \right|\!\right|_{\varepsilon}<2\), and \(\tilde{f}\) is not a \(\Delta\)-point. ## Acknowledgements We thank the organisers of the 2023 ICMAT-IMAG Doc-Course in Functional Analysis for the support and the hospitality during the development of the supervised research program that resulted in this paper. This work was supported by MCIN/AEI/10.13039/501100011033: grant PID2021-122126NB-C31 (López-Pérez, Martín, Quero and Rueda Zoca), grant PID2021-122126NB-C33 (Cobollo and Quilis) and grant PID2019-105011GB-I00 (Cobollo); Junta de Andalucía: Grants FQM-0185 (López-Pérez, Martín, Quero and Rueda Zoca); Grant PGC2018-097286-B-I00 by MCIU/AEI/FEDER, UE (Rodríguez-Vidanes). The research of Ch. Cobollo was also supported by Generalitat Valenciana (through Project PROMETEU/2021/070 and the predoctoral contract CIACIF/2021/378), and by Universitat Politècnica de València. The research of G. López-Pérez and M. Martín was also supported by MICINN (Spain) Grant CEX2020-001105-M (MCIU, AEI). The work of Y. Perreau was supported by the Estonian Research Council grant SJD58. The research of A. Quero was also supported by the Spanish Ministerio de Universidades through a predoctoral contract FPU18/03057. The research of A. Quilis was also supported by GA23-04776S and project SGS23/056/OHK3/1T/13. The research of D.L. Rodríguez-Vidanes was also supported by MCIU and the European Social Fund through a "Contrato Predoctoral para la Formación de Doctores, 2019" (PRE2019-089135) and by the "Instituto de Matemática Interdisciplinar" (IMI). The research of A. Rueda Zoca was also funded by Fundación Séneca: ACyT Región de Murcia grant 21955/PI/22 and by Generalitat Valenciana project CIGE/2022/97.
2309.13403
Game of Travesty: Decoy-based Psychological Cyber Deception for Proactive Human Agents
The concept of cyber deception has been receiving emerging attention. The development of cyber defensive deception techniques requires interdisciplinary work, among which cognitive science plays an important role. In this work, we adopt a signaling game framework between a defender and a human agent to develop a cyber defensive deception protocol that takes advantage of the cognitive biases of human decision-making using quantum decision theory to combat insider attacks (IA). The defender deceives an inside human attacker by luring him to access decoy sensors via generators producing perceptions of classical signals to manipulate the human attacker's psychological state of mind. Our results reveal that even without changing the classical traffic data, strategically designed generators can result in a worse performance for defending against insider attackers in identifying decoys than the ones in the deceptive scheme without generators, which generate random information based on input signals. The proposed framework leads to fundamental theories in designing more effective signaling schemes.
Yinan Hu, Quanyan Zhu
2023-09-23T15:27:26
http://arxiv.org/abs/2309.13403v1
# Game of travesty: decoy-based psychological cyber deception for proactive human agents ###### Abstract The concept of cyber deception has been receiving emerging attention. The development of cyber defensive deception techniques requires interdisciplinary work, among which cognitive science plays an important role. In this work, we adopt a signaling game framework between a defender and a human agent to develop a cyber defensive deception protocol that takes advantage of the cognitive biases of human decision-making using quantum decision theory to combat insider attacks (IA). The defender deceives an inside human attacker by luring him to access decoy sensors via generators producing perceptions of classical signals to manipulate the human attacker's psychological state of mind. Our results reveal that even without changing the classical traffic data, strategically designed generators can result in a worse performance for defending against insider attackers in identifying decoys than the ones in the deceptive scheme without generators, which generate random information based on input signals. The proposed framework leads to fundamental theories in designing more effective signaling schemes. ## I Introduction Cyber deception has been a growing class of proactive defense techniques over the past several decades that contribute to combating increasingly intelligent, stealthy, and sophisticated attacks. Important cyber deception technologies, including moving target defense [1] and honey-x [2] (such as honeypots and honeytokens), help defenders reach a better security outcome against increasingly sophisticated attacks and threats, among which advanced persistent threats (APT) and insider threats [3] serve as two typical examples. Reports have revealed that cyber deception technologies have reduced the cost arising from data breaches by 51% in 2022 [4]. Cyber deception techniques take advantage of the human aspects to achieve two-fold purposes: one is to conceal the truth, and the other is to reveal the false. The ultimate goal of applying defensive cyber deception techniques is to delay, stop, or interrupt attacks. Many techniques can achieve the concept of deception: dazzling, mimicking [5], inventing, decoying [6]. Useful defensive deception protocols characterize the strategic interactions among three classes of agents: defenders, users, and adversaries. A useful framework to design cyber deception mechanisms needs to capture several main features. First, the defender must strategically treat users and adversaries with different purposes. In general, the defender should enhance the efficacy of access for a normal user and reduce the efficacy of access for adversaries. In addition, a sophisticated adversary behaves intelligently but also suffers from limitations arising from human aspects. Interdisciplinary work is needed to help develop next-generation deception techniques incorporating psychological models to characterize the behaviors of human attackers and system users. The interdisciplinary nature of the concept of deception constitutes a major challenge for researchers in building cyber deceptive defense systems. Many game-theoretical models [7] characterize the methods and mechanisms of deception in detection frameworks in cyber security.
One major limitation of applying game-theoretical formulations alone to model threats is that such models often assume all agents are fully rational, while in real practice the behaviors of attackers and defenders often deviate from rationality [7], in part because devices in the network are operated by humans. One avenue for making breakthroughs in research on deception techniques is to adopt more accurate models of cognition to form more accurate predictions of the human attacker's behaviors. Such a direction is called cyber-psychology. Studies have shown that humans reveal bounded rationality in decision-making due to a variety of cognitive biases. As a result, biases have been a cornerstone component of successful deception mechanisms not only in cyber security but also in the social sciences. There are other phenomena in cognitive science, such as the order effect, the disjunction effect, and violation of the law of total probability, that are missed by previous deception mechanisms. New models need to be raised to characterize those phenomena. Game-theoretical models [7] assume that both the sensor and the receiver present full rationality and may lead to a more conservative strategy for the defensive systems that manipulate data incoming to the sensors. One difference between human decision-making theories and general decision theories is that humans suffer from cognitive biases of various kinds, such as the marginal effect and the order effect, that lead human agents to choices with suboptimal outcomes. There is literature capturing the cognitive biases of humans arising from risk preferences and applying alternative frameworks such as quantal response [8] or prospect theory [9]. In behavioral economics [10], humans' bounded rationality is presented in a variety of ways. Recently, there have been experimental studies [11] where experts play attackers who aim to hack into systems to gather information while avoiding decoys, and the defense system adopts cyber deception and cyber psychology techniques to prevent real systems from being attacked and credentials from being stolen. The goal of the experimental study is to verify or invalidate two hypotheses: one, defensive cyber tools and psychological deception impede attackers who seek to penetrate computer systems and exfiltrate information; and two, defensive deception tools are effective even if an attacker is aware of their use. Experimental data essentially show that both hypotheses are true, but there is a lack of theories explaining why they are true. Constructing theories characterizing human agents' behaviors that take advantage of bounded rationality is beneficial in understanding human behavior in order to counteract it. Quantum decision theories [12] capture the bounded rationality arising from the order effect, the disjunction effect, and violation of the law of total probability. We are not arguing that the human brain acts like a quantum computer in the physical sense. Instead, we argue that quantum decision theory functions as a _parsimonious generative black-box model_ of humans' decision-making processes that has been corroborated by experiments such as [13]. In this paper, we consider a scenario where sensors generate manipulated data for receivers, who are human agents. We assume the sensors constitute part of the defense system and the human agents want to attack sensors. The defensive systems aim at deceiving the human agents to mislead them into attacking the wrong nodes.
Such a system is called a human-sensor system in cybersecurity. The purpose of this paper is to develop an appropriate framework, with decoying as a method of cyber deception, to characterize the sensor's manipulation of the traffic data and the attacker's strategies for attacking the sensors. The challenge is that the receivers are human agents who make decisions under various forms of bounded rationality arising from cognitive biases such as the marginal effect, the order effect, and violation of the law of total probability. In this paper, we propose the 'travesty game' (TG) framework, a signaling game framework based on quantum information, for constructing a cyber defensive deception that brings forth a desirable security scenario where the defender interacts with human adversaries to reduce the efficacy of the attacks by taking advantage of bounded rationality in humans' decision-making processes. The defender, or the deceptive defensive system, has a private type that characterizes the state of the system. That is, what connects the network and the human agent could be a regular network sensor or a decoy. It is common knowledge that a normal sensor and a decoy produce traffic data whose message volumes obey different distributions. The defensive system contains a sensor and a generator. The sensor collects original data traffic from the network and distorts the data traffic. The generator is a mechanism that produces verbal messages to manipulate the human agent's perception of the classical traffic data. The cyber deceptive defensive system associates the classical (possibly distorted) traffic data with the manipulated perception of the traffic data to deliver to human agents composite signals, which are characterized as 'prospect states' [14] in quantum decision theory [12]. Upon receiving the prospect states, the human agent (receiver) formulates the true state of the defensive system (a normal sensor or a decoy) as a quantum hypothesis testing problem and designs optimal prospect operator-valued measurements to minimize his weighted risk. The human agent then decides whether to access the system or not. The goal of the human agent, no matter his type, is to access sensors and avoid decoys. We thus formulate the human agent's objective as the weighted Bayesian risk that depends on the misdetection error rate and the false alarm error rate. After the generator is implemented, both the defensive system and the human agent update their intelligence about each other through Bayes' rule. The optimal behavior of the defensive deceptive system is to guide the human agent to access the true sensor while preventing him from accessing the decoy if the type were normal, and vice versa. Correspondingly, the optimal behavior of the human agent is to access the system if he were to find out the defensive system was likely normal, and vice versa. Furthermore, we adopt the concept of repeated games with incomplete information [15][16] to study the temporal evolution of the strategies of both the defender and the human agent as both parties gather more and more information about each other. We formulate the decision problem for the human agent and derive that, under mild assumptions, the anticipated behavior of the human attacker resembles a quantum likelihood ratio test (QLRT). In the meantime, we formulate the defender's problem of designing optimal mixed type-dependent prospect states as a mixed-integer programming problem.
We characterize how defense systems can exploit the weaknesses of attackers as human agents by taking advantage of their bounded rationality. In particular, we adopt the concept of prospect probabilities [14], where the likelihood consists of two terms: the utility factor and the attraction factor [17]. The utility factor represents the probability that arises from the classical signals, while the attraction factor does not arise from the actual data traffic but from the perception of the data traffic, due to quantum interference of different psychological states corresponding to the same classical signal. The attraction factor can lead the human agent towards (or away from) a certain choice. The main contribution of this work is two-fold. First, we develop a holistic framework to capture cyber-psychology techniques, specifying how a defender can implement cyber deception techniques by manipulating perceptions of signals to mislead an inside human attacker using his bounded rationality. Second, we illustrate and analyze the human attacker's performance in detecting decoys to show how strategically designed perceptions can influence human decision-making and thus mitigate insider attacks. Our analytical and numerical results provide hope for building next-generation deception mechanisms to combat human-related insider attacks in network security. The rest of the paper is organized as follows. In section II we formulate the human-sensor system in cyber deception as a signaling game. In section II-E we characterize the optimal behavior of the human agent and the cyber defensive system using the concept of equilibrium. In section III we extend our signaling game formulation into a dynamic scenario, studying how the efficacy of the attacks evolves through time and how the defensive system and the human agent can change their strategies as they both gather more intelligence about each other. In section IV we provide a numerical case study on honeypots to illustrate our proposed framework. Finally, we conclude in section V. ### _Related work:_ Game theory for cyber deception. In network security, game-theoretic frameworks have been widely applied for building proactive defense, particularly defensive cyber deception [18], to enhance the security and privacy of network users. Games with incomplete information [15] provide a generic protocol to characterize the asymmetry of information induced by deception. Typical game frameworks that have been applied in network security include zero-sum games [7], Bayesian Stackelberg security games [19], and partially observable stochastic games [20]. These complete or incomplete information game frameworks capture and interpret the attackers' and defenders' behaviors by computing appropriate concepts of equilibrium, depending on the information structure. In this work, we adopt the framework of a signaling game to characterize the relationship between the defensive deception system and the human attacker, yet introduce quantum decision theory and the concept of quantum information to exploit the cognitive biases of human attackers. Cyber deception through manipulating psychological states. There have been surging studies in formulating cyber deception techniques via psychological manipulation. Authors in [11] have experimentally verified that it is not only the messages from the defensive system but also the perception of the messages that will influence the human attacker's behavior.
Authors in [21] propose an instance-based-learning (IBL) model of a human insider attacker using adaptive-control-of-thought-rational (ACT-R) [22] as the cognitive architecture, which takes into consideration features in cognitive science: forgetting, the power law of practice, the partial matching process, etc. The IBL model also formulates how memory retrieval dynamics lead to cognitive biases. Our proposed framework adopts quantum decision theory, a parsimonious generative model, to capture other biases in humans' decision-making processes [12] such as the order effect, the disjunction fallacy, etc. In addition, our work focuses on how the defense system can design strategic generators in decoy systems that take advantage of the human attacker's biases to combat insider attacks. Insider threat/attack mitigation designs. Several works have proposed guidelines for adopting honeypots in insider-threat mitigation programs [23][24]. Game-theoretical frameworks have been adopted for formulating insider threats. Authors in [25] use game-theoretical frameworks to develop detection mechanisms for insider threats. Authors in [26] adopt risk measures and extend risk analysis to incorporate organizational culture. These works seek to contribute to an assessable understanding of the behaviors of adversarial insiders to develop more accurate best-responding strategies to combat insider threats, but they ignore the human aspects that lead to non-compliance with fully rational behaviors, such as cognitive biases of various kinds in the human decision-making process. The authors in [27] have adopted the framework of mechanism design to address compliance and non-compliance for selfish and adversarial insiders. Our work adopts the concept of decoys to detect and monitor the behavior of insider attackers, deterring them from accessing normal sensors by taking advantage of the cognitive biases of insiders to strategically design different perceptions of messages to influence their decision-making. ### _Notations_ Throughout the paper, we use the following notations; we may introduce new notations in specific paragraphs later. \(\mathcal{H}_{\mathcal{C}}\): the (Hilbert) space over all signals; \(|s\rangle\in\mathcal{H}_{\mathcal{C}}\): a generic state associated with signal \(s\in S\); \(\mathcal{H}_{\mathcal{M}}\): the (Hilbert) space over all states of mind; \(|\varphi\rangle\in\mathcal{H}_{\mathcal{M}}\): a generic state of mind; \(\mathcal{H}=\mathcal{H}_{\mathcal{C}}\otimes\mathcal{H}_{\mathcal{M}}\): the Hilbert space (over the set of real numbers \(\mathbb{R}\)) of all 'prospects'; \(\mathcal{H}^{*}\): the dual space of \(\mathcal{H}\); \(\mathcal{S}\): the subset of positive, Hermitian, bounded operators on \(\mathcal{H}\) whose trace is \(1\); \(S\): the space of signals; \(\Delta(\cdot)\): the set of probability measures over the given space; \(\mathbf{1}\): the identity operator, whose domain and range depend on the context; \(p_{k}\in\Delta(X)\): the common prior/common posterior belief of the true state after \(k-1\) observations have been generated; \(X=\{1,2\}\): the state space of the system. A generic state is denoted as \(x\): \(x=1,2\) represents the system being abnormal and normal, respectively; we denote \(\dim(X)=M=2\). \(a,b\in\mathbb{R}^{S\times K}\): generic perception matrices from the defender based on its true type \(0,1\). In addition, for any operator \(A\in B(\mathcal{H})\), we denote its conjugate transpose as \(A^{\dagger}\).
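To fix ideas, the following minimal Python sketch (ours, not from the paper; the dimensions and variable names such as `S_DIM` and `K_DIM` are illustrative assumptions) realizes the prospect basis \(|s\varphi_{k}\rangle\) as Kronecker products and checks the defining properties of an element of \(\mathcal{S}\).

```python
import numpy as np

# Illustrative toy dimensions: |S| classical signals, K fundamental states of mind.
S_DIM, K_DIM = 3, 2

def prospect_basis(s: int, k: int) -> np.ndarray:
    """Prospect basis vector |s phi_k> = |s> (x) |phi_k>."""
    return np.kron(np.eye(S_DIM)[s], np.eye(K_DIM)[k])

# A rank-one density operator: positive, Hermitian, and of unit trace.
v = np.random.default_rng(1).random(S_DIM * K_DIM)
rho = np.outer(v, v) / (v @ v)
assert np.allclose(rho, rho.T)                     # Hermitian (real symmetric here)
assert np.isclose(np.trace(rho), 1.0)              # trace one
assert np.all(np.linalg.eigvalsh(rho) >= -1e-12)   # positive semidefinite
```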
## II The Formulation of the Travesty Game ### _Purpose of formulation_ Insider threats [3] have long been an important issue in cyber-security because, in contrast to external attackers, insiders are aware of the structure of the defensive system, know about its vulnerabilities, and are more likely to launch strategic attacks to destroy the system effectively. Thus defensive deception techniques such as decoy systems have been implemented for the detection and mitigation of insider threats [24].

Fig. 1: The human-sensor system in a network security scheme. The defensive cyber deception system consists of a normal sensor and a decoy, each of which is cascaded with a generator. The human agent is a receiver taking manipulated network traffic associated with perception messages. The normal sensor and the decoy produce manipulated traffic data obeying different distributions. The location of the decoy is a private type for the defensive system. The receiver is also associated with three private types: user, prospect attacker, and quantum attacker. The goal of the defensive system is to lure the human attacker to access the decoy rather than the normal sensor. The goal of a human attacker is to recognize and avoid the decoy while making use of normal sensors.

The goal of designing a defensive deception mechanism is to expose the vulnerabilities of decoys to attract adversaries to access honeypots/decoys, trigger alerts, and thus allow the defensive system to gather their information. To address the challenge, we need a novel configuration of decoy sensors and normal sensors to develop a next-generation defensive cyber deception system that exploits human biases to attract human attackers to focus on the decoy. Previous literature has pointed out [7] that future defensive systems in network security must also consider human factors when predicting attackers' behaviors. Human agents are subject to decision-making biases and exhibit other non-rational behavior. To this end, it is effective to introduce cyber-psychological techniques. Cyber-psychology [28] is the scientific field that integrates human behavior and decision-making into the cyber domain, allowing us to understand, anticipate, and further influence attackers' behaviors. Experimental studies [11] have shown that by providing information to the human factor, their mental model of the cyber defensive system was influenced and their decisions changed accordingly. To the best of our knowledge, there is still a lack of theoretical frameworks to interpret and address how those cyber-psychological methods can work effectively to mitigate attacks. ### _The game formulation_ In this section, we propose a game-theoretical framework for cyber defensive deception systems that mitigates insider threats by adopting cyber-psychological techniques. We show that cyber-psychological techniques can demonstrate a better deterrence of insider threats than their classical counterparts. We consider the protocols whose scheme is depicted in Figure 1. In short, the defensive deception system (she) and the receiver (human agent, he) play a signaling game \(\mathcal{G}\). The defensive system consists of two sensors: one normal, and one decoy, each of which is cascaded with a generator that generates psychological messages reflecting the perception of the manipulated data traffic. The defensive system connects one of the sensors to the human agent.
The (human) receiver knows there is a decoy sensor, but the placement of the decoy is unknown and serves as the defensive system's private type \(x\in\{0,1\}\). He can only make decisions based on the classical traffic data and the perception messages associated with the data. The normal sensor accepts observations obeying distribution \(g_{0}\), while the decoy accepts observations obeying \(g_{1}\). Denote \(\hat{s}\) as the random variable characterizing the random observations, with \(s\) as the corresponding realizations. We say that the human agent faces a hypothesis testing problem: \[H_{1}:\hat{s}\sim g_{1}(s),\ \ H_{0}:\hat{s}\sim g_{0}(s).\] The goal of the defense system is to strategically configure the normal sensor, the decoy sensor, as well as the generators to attract human attackers to access the decoy. The defensive system earns a reward when the adversarial agent accesses the decoy, since such access provides information and triggers the alert of the defensive system [11]. Meanwhile, the cyber deception system obtains observations \(y\) from the network traffic (see figure 1). Both the normal and decoy sensors produce manipulated observations \(s^{\prime}\) and pass them into the cascading generators. Based on the manipulated observations \(s^{\prime}\) and the private type \(x\), the connecting generator produces psychological signals characterized by a set of coefficients \(\{a_{sk},b_{sk}\}_{s,k}\). The distorted observations together with the psychological signals constitute the prospect states \(|\Phi_{1}\rangle,|\Phi_{0}\rangle\) in the following way: \[|\Phi_{1}(s)\rangle=\sum_{k}a_{sk}|s\varphi_{k}\rangle,\ |\Phi_{0}(s)\rangle= \sum_{k}b_{sk}|s\varphi_{k}\rangle, \tag{1}\] where we inherit Dirac's notations as introduced in section I. Such a quantum state can be interpreted as messages announced to change the human agents' perceptions. Such a generator produces stochastic messages manipulating the user's perception of messages. One example is an announcement like 'the message comes from a real sensor'. For instance, authors in [29] conducted experiments where the test takers are informed of the existence of a decoy system. The quantum-mechanical representation of messages can be referred to in [12]. Upon observing \(|\Phi\rangle\), the human agent randomly measures the prospect using one of the prospect basis vectors \(|s\varphi_{k}\rangle\) and updates his prior belief on the defender's type. His mind is a composite prospect state [14]. The human agent arrives at a decision \(\alpha=\delta(|s\varphi_{k}\rangle)\in[0,1]\), indicating the probability that the attacker thinks the hypothesis \(H_{1}\) holds true. Sensor's (defender's) problem. The defender strategically designs manipulated classical observations from both sensors to mislead or promote human judgment. In the meantime, the defender creates type-dependent perceptions \(a=(a_{sk})_{s,k},b=(b_{sk})_{s,k}\in\mathbb{R}^{S\times K}\) regarding every signal \(s\in S\), corresponding to the types \(x=1\) and \(x=0\) accordingly. The defender will earn a positive reward when the normal user accesses the normal sensor and a negative one when the attacker accesses the normal sensor or avoids accessing the decoy. Sensor's actions and strategies. Depending on the true type \(x\) of the deception system as well as the classical signal \(s\), a generic defender's action involves a pair of prospect states \(|\Phi_{x}(s)\rangle\in\mathcal{H}\). We may also equivalently characterize the sensor's actions as two matrices \((a_{sk},b_{sk})\) since they can be written as in (1).
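As a concrete toy instance of (1), the sketch below (continuing the earlier illustrative snippet; the coefficient matrices are randomly generated placeholders, not data from the paper) assembles the type-dependent prospect states from \(a,b\in\mathbb{R}^{S\times K}\).

```python
def prospect_state(coeffs: np.ndarray, s: int) -> np.ndarray:
    """|Phi_x(s)> = sum_k coeffs[s, k] |s phi_k>, cf. Eq. (1)."""
    return sum(coeffs[s, k] * prospect_basis(s, k) for k in range(K_DIM))

# Hypothetical perception matrices for types x = 1 and x = 0; rows are
# normalized so that each prospect state is a unit vector.
rng = np.random.default_rng(2)
a = rng.random((S_DIM, K_DIM)); a /= np.linalg.norm(a, axis=1, keepdims=True)
b = rng.random((S_DIM, K_DIM)); b /= np.linalg.norm(b, axis=1, keepdims=True)

Phi1 = prospect_state(a, s=0)   # perception attached to signal s under type x = 1
Phi0 = prospect_state(b, s=0)   # perception attached to signal s under type x = 0
assert np.isclose(Phi1 @ Phi1, 1.0) and np.isclose(Phi0 @ Phi0, 1.0)
```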
If we consider that the defensive system adopts mixed strategies, we can characterize the mixed strategies as density operators \(\rho_{1},\rho_{0}\) as follows: \[\rho_{1}=\sum_{s,k,k^{\prime}}f_{1}(s)a_{sk}a_{sk^{\prime}}|s\varphi_{k} \rangle\langle s\varphi_{k^{\prime}}|, \tag{2}\] \[\rho_{0}=\sum_{s,k,k^{\prime}}f_{0}(s)b_{sk}b_{sk^{\prime}}|s\varphi_{k} \rangle\langle s\varphi_{k^{\prime}}|, \tag{3}\] where \(f_{1},f_{0}\) are probability density functions over \(S\). Another way to characterize the sensor's actions is via the utility factor and the attraction factor. Denote \[\langle\Phi_{1}|P|\Phi_{1}\rangle=\sum_{s,k,k^{\prime}}a_{sk}a_{sk^{\prime}} \langle s\varphi_{k}|P|s\varphi_{k^{\prime}}\rangle=\sum_{s,k}a_{sk}^{2} \langle s\varphi_{k}|P|s\varphi_{k}\rangle+\sum_{s,k\neq k^{\prime}}a_{sk}a_{ sk^{\prime}}\langle s\varphi_{k}|P|s\varphi_{k^{\prime}}\rangle\equiv u_{1}(s)+q_{1}(s),\] \[\langle\Phi_{0}|P|\Phi_{0}\rangle=\sum_{s,k,k^{\prime}}b_{sk}b_{sk^{\prime}} \langle s\varphi_{k}|P|s\varphi_{k^{\prime}}\rangle=\sum_{s,k}b_{sk}^{2} \langle s\varphi_{k}|P|s\varphi_{k}\rangle+\sum_{s,k\neq k^{\prime}}b_{sk}b_{ sk^{\prime}}\langle s\varphi_{k}|P|s\varphi_{k^{\prime}}\rangle\equiv u_{0}(s)+q_{0}(s),\] where \(u\) is the utility factor and \(q\) is the attraction factor of the prospect state upon the decision operator \(P\). We here define \[u_{1}(s)=\sum_{|s\varphi_{k}\rangle\in\mathcal{R}}a_{sk}^{2}, \tag{4}\] \[q_{1}(s)=\sum_{|s\varphi_{k}\rangle,|s\varphi_{k^{\prime}}\rangle\in\mathcal{ R},\,k\neq k^{\prime}}a_{sk}a_{sk^{\prime}}, \tag{5}\] \[u_{0}(s)=\sum_{|s\varphi_{k}\rangle\in\mathcal{R}}b_{sk}^{2}, \tag{6}\] \[q_{0}(s)=\sum_{|s\varphi_{k}\rangle,|s\varphi_{k^{\prime}}\rangle\in\mathcal{ R},\,k\neq k^{\prime}}b_{sk}b_{sk^{\prime}}. \tag{7}\] According to [30], we adopt calibration rules to construct the attraction factor \(q\) so that it is related to the utility factor \(u\) as \[q_{j}(s)=\zeta\min\{u_{j}(s),1-u_{j}(s)\},\ j=0,1,\] where \(\zeta\in[-1,1]\); notice again that \(u_{j}(s)\in[0,1]\). Furthermore, \(u_{j}(s)=1\) only when \(|s\varphi_{k}\rangle\in\mathcal{R}\) for all \(k\in K\) for the given \(s\), and the opposite goes with \(u_{j}(s)=0\). Here we use the parameter \(\zeta\) to simplify the hyperbolic tangent function used in [17]. We introduce the following assumption: **Assumption 1**.: _The coefficients \(a,b\in\mathbb{R}^{S\times K}\) as in (1) exist for every \(u_{1},u_{0}\in[0,1]^{S}\)._ Assumption 1 guarantees that we can construct \(a_{sk},b_{sk}\) using these equalities (of course, there may be more than one choice of \(a,b\) reaching the same utility factor and attraction factor). It is now equivalent to use the quantities defined in (4)(5)(6)(7) to characterize the defense system's behavior. Defender's utility/loss function. We now formulate the defender's loss function to minimize. The goal of the defender is to degrade the human attacker's performance in identifying decoys, so her objective function is the genuine detection rate introduced in (16). This is because every time the human attacker commits an error, or equivalently, accesses the decoy sensor, an alert will be triggered and the defensive system can gather intelligence from the human agent [18].
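Continuing the toy sketch, the snippet below evaluates the utility factor of (4) for a given rejection region \(\mathcal{R}\) together with the \(\zeta\)-calibrated attraction factor, and assembles the mixed-strategy density operator of (2). The chosen region and the value of \(\zeta\) are illustrative assumptions, and the calibration rule is used in place of the raw interference sums (5)(7).

```python
def factors(coeffs: np.ndarray, s: int, in_R, zeta: float):
    """Utility factor u_j(s) over the basis vectors |s phi_k> lying in R,
    with the calibrated attraction factor q_j(s) = zeta * min(u, 1 - u)."""
    u = sum(coeffs[s, k] ** 2 for k in range(K_DIM) if in_R(s, k))
    return u, zeta * min(u, 1.0 - u)

in_R = lambda s, k: k == 0          # hypothetical rejection region R
u1, q1 = factors(a, s=0, in_R=in_R, zeta=0.25)

# Mixed strategy rho_1 of Eq. (2) for a density f1 over classical signals.
f1 = np.full(S_DIM, 1.0 / S_DIM)
rho1 = sum(f1[s] * a[s, k] * a[s, kp]
           * np.outer(prospect_basis(s, k), prospect_basis(s, kp))
           for s in range(S_DIM) for k in range(K_DIM) for kp in range(K_DIM))
assert np.isclose(np.trace(rho1), 1.0)   # holds because the rows of a are normalized
```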
The defensive deception system designs type-dependent distributions \(\rho_{1},\rho_{0}\) under the type \(x\) by minimizing the following objective \(J_{S}^{x}:X\times\Delta(M)\times\mathcal{B}(\mathcal{H})\rightarrow\mathbb{R}\): \[J_{S}^{x}(x,\rho_{x},P_{1}^{*})=\text{Tr}(\rho_{x}P_{1}^{*}),\] where \(P_{1}^{*}\in B(\mathcal{H})\) denotes the optimal prospect-projection-based decision policy for the human agent. Using the theory of potential games [31], we know that the defender's problem is equivalent to minimizing the following objective \(J_{D}:\Delta(M)\times\mathcal{B}(\mathcal{H})\rightarrow\mathbb{R}\): \[\min_{a,b,f_{1},f_{0}}J_{D}(a,b,P_{1}^{*})=J_{S}^{1}(1,\rho_{1},P_{1}^{*})+J_{S }^{0}(0,\rho_{0},P_{1}^{*})\Leftrightarrow\min_{\rho_{1},\rho_{0}}\sum_{(s,k): \,\delta^{*}(|s\varphi_{k}\rangle)>0}\langle s\varphi_{k}|\rho_{1}|s\varphi_{k }\rangle+1, \tag{8}\] where we compute the trace using the prospect basis \(\{|s\varphi_{k}\rangle\}_{s,k}\). If we adopt \(u_{1},u_{0},f_{1},f_{0}\) as the defender's type-dependent strategy, we can introduce the objective function \(F:[0,1]\times[0,1]\times L^{1}(S)\times L^{1}(S)\rightarrow\mathbb{R}\) as follows: \[\min_{u_{1},u_{0},f_{1},f_{0}}F(u_{1},u_{0},f_{1},f_{0})\Leftrightarrow\min_{ f_{1},f_{0}}\sum_{s\in\mathcal{H}_{s}}f_{1}(s)u_{1}(s), \tag{9}\] with \(\mathcal{H}_{s}:=\{s:\exists k,\ |s\varphi_{k}\rangle\in\mathcal{R}\}\). **Proposition 1**.: _Let \((a^{*},b^{*})\) be an optimal solution for the optimization problem (8). Let \(u_{1}^{*},u_{0}^{*}:S\rightarrow[0,1],q_{1}^{*},q_{0}^{*}:S\rightarrow[-1,1]\) be the optimal solution for the optimization problem (9). Then the two solutions are related through (4)(5)(6)(7)._ The proof can be viewed in appendix VI-A. The belief updates. Upon receiving the prospect state \(|\Phi\rangle\in\mathcal{H}\), the human agent first updates his prior belief regarding the defender's type into the posterior belief: \[p(H_{x}|\,|s\varphi_{k}\rangle)=\frac{p(H_{x})\text{Tr}(P_{sk}\rho_{x}P_{sk}^{ \dagger})}{p(H_{1})\text{Tr}(P_{sk}\rho_{1}P_{sk}^{\dagger})+p(H_{0})\text{Tr} (P_{sk}\rho_{0}P_{sk}^{\dagger})},\ x=0,1, \tag{10}\] where \(P_{sk}\in B(\mathcal{H})\) is the projection operator onto the specific prospect basis vector \(|s\varphi_{k}\rangle\): that is, \(P_{sk}=|s\varphi_{k}\rangle\langle s\varphi_{k}|\). Human's actions. The human agent first estimates the defender's strategies, characterized as mixed prospect states \(\rho_{1}^{*},\rho_{0}^{*}\) at equilibrium. Thus he can construct two density operators under each hypothesis as psychological prospects as in (2)(3). The human's action \(\alpha\in[0,1]\) characterizes the probability that the human agent thinks the traffic data come from a decoy (and therefore does not access). The human agent arrives at a decision rule \(\delta:\mathcal{H}\rightarrow[0,1],\ \alpha=\delta(|s\varphi_{k}\rangle)\) upon receiving the prospect state \(|s\varphi_{k}\rangle\in\mathcal{H}\) from the deceptive defense system through a measurement operator \(P\in B(\mathcal{H})\) as follows: \[\delta(|s\varphi_{k}\rangle)=\langle s\varphi_{k}|P|s\varphi_{k}\rangle. \tag{11}\] Equivalently, the human agent's strategy space is the space of all projective operator-valued measurements (POVM).
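A sketch of the update (10) in the running toy example (again our illustration, not the authors' code): for the projector \(P_{sk}=|s\varphi_{k}\rangle\langle s\varphi_{k}|\), the trace \(\text{Tr}(P_{sk}\rho_{x}P_{sk}^{\dagger})\) collapses to the diagonal element \(\langle s\varphi_{k}|\rho_{x}|s\varphi_{k}\rangle\), which the code exploits.

```python
def posterior_h1(prior1: float, rho1, rho0, s: int, k: int) -> float:
    """Posterior p(H_1 | |s phi_k>) via Eq. (10)."""
    v = prospect_basis(s, k)
    lik1 = v @ rho1 @ v      # <s phi_k| rho_1 |s phi_k>
    lik0 = v @ rho0 @ v      # <s phi_k| rho_0 |s phi_k>
    return prior1 * lik1 / (prior1 * lik1 + (1.0 - prior1) * lik0)

# Density operator rho_0 of Eq. (3), mirroring rho_1 from the previous sketch.
f0 = np.full(S_DIM, 1.0 / S_DIM)
rho0 = sum(f0[s] * b[s, k] * b[s, kp]
           * np.outer(prospect_basis(s, k), prospect_basis(s, kp))
           for s in range(S_DIM) for k in range(K_DIM) for kp in range(K_DIM))

p1_post = posterior_h1(0.5, rho1, rho0, s=0, k=1)  # update a flat prior
```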
The human agent applies the concept of the Neyman-Pearson hypothesis testing scenario [32]: that is, the human agent aims at maximizing the probability of detection (accessing the normal sensor) while constraining the probability of false alarm (choosing to access while the target sensor is a decoy). Based on the \(|s\varphi_{k}\rangle\), the human attacker's empirical false alarm rate is \(p(H_{0}|\,|s\varphi_{k}\rangle)\), so we can express his strategy space \(A_{H}\) as follows: \[A_{H}=\{\delta:\mathcal{H}\rightarrow[0,1]\,:\,\delta(|s\varphi_{k}\rangle)\,p (H_{0}|\,|s\varphi_{k}\rangle)<\beta\},\] where \(\beta\) is the tolerance that the human agent has regarding his false alarm rate. The posterior belief \(p(H_{j}|\,|s\varphi_{k}\rangle)\) is expressed in (10). Human agent's type-dependent utility/loss function. The human attacker wants to avoid decoys and access normal sensors. The human agent also suffers from cognitive biases characterized by quantum decision theory. Now the prior belief \(p\) is constructed and updated in terms of the defense system's type \(x\). We assume that the human agent arrives at a decision based on the posterior belief \(p(H_{1}|\Phi)\): if it is too high, then the human agent will choose not to access, to avoid the cost of accessing a decoy. The human's optimization problem can be expressed as \[\max_{\delta\in A_{H}}\delta(|s\varphi_{k}\rangle)p(H_{1}|\,|s\varphi_{k} \rangle).\] ### _Game elements_ We can now summarize our discussions in the previous section and propose our novel protocol for the game in the following definition. **Definition 1** ('Travesty game' (TG)).: _We define the 'game of travesty', a signaling game_ \[\mathcal{G}=\langle\mathcal{I},X,A_{S},A_{H},F_{S},J_{H},p\rangle,\] _where \(\mathcal{I}=\{\text{defender},\text{human attacker}\}\) represents the set of players; \(x\in X=\{0,1\}\) is the defender's type (normal or decoy); \(A_{S}=M\times\mathcal{H}\) is the message space of the defender; \(A_{H}\subset[0,1]\) represents the human agent's action space; \(\mathcal{H}\) is the space of perceptual messages from the generator; \(F_{S}:[0,1]^{2}\times[L^{1}(S)]^{2}\times B(\mathcal{H})\rightarrow\mathbb{R}\) is the defender's objective function; \(J_{H}:M\times\mathcal{H}\times A_{H}\rightarrow\mathbb{R}\) is the human agent's type-dependent objective function; \(p\in\Delta(X)\) is the common prior belief of the private types of the defender and the human agent._ ### _Relation to classical signaling games_ The proposed travesty game can be considered as a generalization of the hypothesis testing game raised in [33], with two-sided incomplete information, heterogeneous receivers, and the adoption of a quantum probabilistic model. The framework in [33] consolidates the hypothesis testing formulation into a signaling game framework [34] where one party, upon knowing the true hypothesis, can strategically manipulate observations to undermine the detection performance. If the defender cannot design perceptions of classical messages using generators, then the travesty game framework reduces to the hypothesis testing game framework in [33]. The adoption of the prospect state enhances the cyber deception design by taking advantage of humans' bounded rationality to provide the defender extra degrees of freedom. Such degrees of freedom characterize how the human agents' perceptions of classical messages can contribute to their decision-making process.
There are several scenarios where the defender's strategies reduce to classical counterparts. Denote \(a,b\) as the matrices of coefficients of the defender in (1). When \(a=R_{1}I,b=R_{0}I\), where \(R_{0},R_{1}\) are column permutation matrices and \(I\) is an identity matrix, the 'quantum effect' vanishes, as the defender associates a unique fundamental 'mindset' with every classical signal \(s\). ### _Equilibrium Analysis_ We aim at computing the perfect Bayesian Nash equilibrium [35] (PBNE) to characterize the behaviors of the defense system and the human agents. We can define the PBNE of the game \(\mathcal{G}\) as follows: **Definition 2** (Perfect Bayesian Nash Equilibrium for the game \(\mathcal{G}\)).: _We define the perfect Bayesian Nash equilibrium (PBNE) of the signaling game \(\mathcal{G}\) as the following tuple \((u_{1}^{*},u_{0}^{*},\delta^{*},p)\) meeting the following requirements:_ 1. _(Human agent's sequential rationality)_ \[\delta^{*}(|s\varphi_{k}\rangle)\in\arg\min_{\delta\in A_{H}}J_{H}(|s\varphi_{ k}\rangle,u_{1}^{*},u_{0}^{*},\delta),\] (12) 2. _(Defensive system's sequential rationality)_ \[(u_{1}^{*},u_{0}^{*})\in\arg\min_{u_{0},u_{1}}F(u_{1},u_{0},f_{1},f_{0},\delta ^{*}),\;x\in\{0,1\},\] (13) 3. _(Belief consistency) The belief is updated according to Bayes' rule:_ \[p(H_{j}|\,|s\varphi_{k}\rangle)=\frac{p(H_{j})\langle s\varphi_{k}|\rho_{j}^{* }|s\varphi_{k}\rangle}{\sum_{j^{\prime}=0,1}p(H_{j^{\prime}})\langle s\varphi_ {k}|\rho_{j^{\prime}}^{*}|s\varphi_{k}\rangle},\ \ j=0,1.\] We can derive the human agent's optimal decision rule as follows. **Proposition 2**.: _Consider \(\mathcal{G}\) to be the travesty game in Definition 1. Let \((a_{sk}^{*},b_{sk}^{*})_{s\in S,k\in K}\) be the defender's coefficients of optimal type-dependent prospect states satisfying the utility factors \(u_{1}^{*},u_{0}^{*}\) in (4)(6), which are characterized as the defender's strategies at equilibrium (13). Then the human attacker's optimal decision rule \(\delta^{*}:\mathcal{H}\rightarrow[0,1]\) at equilibrium defined in (12), upon receiving the prospect state \(|s\varphi_{k}\rangle\) reduced from the superposition state \(|\Phi\rangle\), can be derived as_ \[\delta^{*}(|s\varphi_{k}\rangle)=\begin{cases}1&\frac{f_{1}(s)(a^{*}_{sk})^{2} }{f_{0}(s)(b^{*}_{sk})^{2}}>\left(\frac{1}{\beta}-1\right)\frac{p(H_{0})}{p(H_ {1})},\\ 0&\text{otherwise.}\end{cases} \tag{14}\] Proof.: See the appendix. The optimal decision rule \(\delta^{*}\) decomposes the space of prospect states \(\mathcal{H}\) into the region of rejection \(\mathcal{R}\) and the region of acceptance \(\mathcal{R}^{\perp}\) as follows: \[\mathcal{R}=\text{span}\{|s\varphi_{k}\rangle\}_{\delta^{*}(|s\varphi_{k} \rangle)=1},\ \mathcal{R}^{\perp}=\text{span}\{|s\varphi_{k}\rangle\}_{\delta^{*}(|s \varphi_{k}\rangle)=0}.\] Referring to the definition of the decision rule (11), we notice that the diagonal elements of \(P_{1}\) have been specified. We now assume the off-diagonal elements as \[\langle s\varphi_{k^{\prime}}|P_{1}|s\varphi_{k}\rangle=\begin{cases}\frac{1} {N_{s}}&|s\varphi_{k}\rangle,|s\varphi_{k^{\prime}}\rangle\in\mathcal{R},\\ 0&\text{otherwise},\end{cases} \tag{15}\] where \(N_{s}\) is the number of vectors among \(\{|s\varphi_{k}\rangle\}\) that lie in \(\mathcal{R}\). **Proposition 3**.: _The operator \(P_{1}\) defined in (15) and (14) is a projection operator._ Proof.: It is clear that \(P_{1}\geqslant 0\) from Proposition 2. From (15) we know \(P_{1}\) is symmetric.
In addition, \(P_{1}^{2}|s\varphi_{k}\rangle=P_{1}(\sum_{|s\varphi_{k}\rangle\in\mathcal{R}}\frac{1}{N_{s}}|s\varphi_{k}\rangle)=P_{1}|s\varphi_{k}\rangle\), so \(P_{1}^{2}=P_{1}\). Thus \(P_{1}\) is a projection operator [36].

**Assumption 2** (No change of classical message).: _We assume that the defensive deception system does not change the classical message. That is, \(g_{1}=f_{1},\,g_{0}=f_{0}\)._

Equipped with the human agent's optimal decision rule \(\delta^{*}\) in (14), we can simplify (13) and derive the following.

**Proposition 4**.: _Let Assumption 2 hold. Let \(\mathcal{G}\) be the signaling game in Definition 1. Let \(\delta^{*}\) be the human attacker's optimal decision rule defined in (12) upon receiving prospect states with coefficients \(a^{*},b^{*}\) defined in (1). Denoting \(\tau_{s}=\frac{p(H_{0})f_{0}(s)}{p(H_{1})f_{1}(s)}\left(\frac{1}{\beta}-1\right)\), we derive the defender's strategies \(u^{*}_{1}(s),u^{*}_{0}(s)\) at equilibrium defined in (13) through the following cases for every \(s\in S\):_

1. _When_ \(\tau_{s}>1\)_, we pick_ \(u^{*}_{1}(s)=0\) _and thus_ \(u^{*}_{0}(s)=0\)_;_
2. _When_ \(0<\tau_{s}<1\)_, we pick the region of acceptance until_ \[1-u_{1}(s)=\tau_{s},\] _so_ \(1-u^{*}_{0}(s)=1\)_, or equivalently_ \(u^{*}_{0}(s)=0\)_. Then_ \(u^{*}_{1}(s)=1-\tau_{s}\)_._

_The corresponding region of classical rejection can be written as \(\mathcal{R}_{s}=\{s:\,0<\tau_{s}<1\}\)._

Proof.: The proof is provided in the appendix VI-C.

After obtaining \(u^{*}_{1}(s),u^{*}_{0}(s),\,s\in S\), we can reconstruct the optimal prospect states \(a^{*},b^{*}\) by solving (7)(5)(6)(4). To measure the efficacy of cyber deception systems in counteracting the attacks, we can define the genuine detection rate and false alarm rate

\[P_{D}(\tau)=\text{Tr}(\rho^{*}_{1}P^{*}_{1}(\tau)),\,P_{F}(\tau)=\text{Tr}(\rho^{*}_{0}P^{*}_{1}(\tau)). \tag{16}\]

As a comparison, we denote the vanilla detection rate and false alarm rate of the insider attack (IA) as

\[\bar{P}_{D}(\tau)=\sum_{s:\delta^{*}(s;\tau)=1}f_{1}(s),\,\bar{P}_{F}(\tau)=\sum_{s:\delta^{*}(s;\tau)=1}f_{0}(s). \tag{17}\]

We now show that the role of the generator is to create more room for the defender to deceive the human attacker, lowering the latter's probability of identifying the decoy system.

**Remark:** We also find that when \(\tau\rightarrow\infty\) (the whole of \(S\) is the classical acceptance region) or \(\tau\to 0\) (the whole of \(S\) is the classical rejection region), the detection rate \(P_{D}(\tau)\) is close to \(\bar{P}_{D}(\tau)\). That is, the quantum effect in decision-making vanishes when the prospect probability is close to 1 or 0, which is consistent with the discussion in Vincent's work [30] on quantum prospect theory.

### _Some metrics evaluating the quantum advantage/disadvantage_

_Quantum advantage and quantum disadvantage:_ We can define the quantum advantage metric as follows.

**Definition 3** (Quantum advantage/disadvantage).: _We define the quantum advantage as_

\[QA(\tau)=P_{D}(\tau)/\bar{P}_{D}(\tau),\]

_where \(P_{D}:\mathbb{R}\rightarrow\mathbb{R}\) is the detection rate for the human attacker under manipulation defined in (16) and \(\bar{P}_{D}:\mathbb{R}\rightarrow\mathbb{R}\) is the counterpart for a non-adversarial human attacker without bounded rationality defined in (17)._

The quantum advantage (QA) is a crucial evaluation of the effect of introducing the generator into the defender system.
It depends on the threshold \(\tau\) as well as on the calibration parameter \(\zeta\). It measures the impact of the manipulation of mind states on the human attacker's performance in detecting decoys. We say that the human attacker gains a quantum advantage in identifying decoys if \(QA(\tau)>1\) and suffers a quantum disadvantage if \(QA(\tau)<1\).

**Proposition 5**.: _Let \(QA\) be the quantum advantage in Definition 3. Then for all choices of \(\tau>0\) and for all choices of \(f_{1},f_{0}\in L^{1}(S)\), we have \(P_{D}(\tau)\leqslant\bar{P}_{D}(\tau)\) and_

\[0\leqslant QA(\tau)\leqslant 1+\zeta.\]

Proof.: See the appendix VI-D.

## III Dynamic scenario

We now extend \(\mathcal{G}\) into a multi-stage game \(\mathcal{G}^{N}\) with finite horizon \(N\). For each stage \(k\in[N]\), the sensor system generates manipulated observations and the human agent launches access to one of the sensors. After both the defense system and the human agent take actions, a cost/reward is incurred. The belief on the defender's true type is updated. We assume that the defender never changes her type during the game. Therefore, the defender system exposes more about her type (normal sensor or decoy) as she produces more messages. We introduce the concept of the history of the actions taken by both the sensor and the human agent as follows.

**Definition 4** (History of action profiles).: _We define the history of action profiles up to stage \(j\), denoted as \(h^{(j)}\in\mathcal{H}^{\otimes j}\times[0,1]^{\otimes j},\,j\in[N]\), as follows:_

\[h^{(j)}=(\psi^{(j)},\delta^{(j)}),\]

_where \(\psi^{(j)}=(\psi_{1},\ldots,\psi_{j})\in\mathcal{H}^{\otimes j}\) is a generic history of base vectors from the prospect states up to stage \(j\) and \(\delta^{(j)}(\psi^{(j)})\in[0,1]^{\otimes j}=A^{\otimes j}_{H}\) refers to the history of the human agent's actions up to stage \(j\)._

In general, at the beginning of every stage \(j\), the defender's mixed strategy and the human agent's optimal decision rule should depend on the history \(h^{(j-1)}\). Here we denote by \(\psi\in\{|s\varphi_{k}\rangle\}_{s,k}\) a generic base vector in the prospect state basis. We assume in the following that

**Assumption 3** (Action-independent assumption).: _At every stage \(j\in[N]\), the human attacker's optimal decision rule \(\delta^{(j)*}(\cdot|h^{(j)})\in\bar{\Gamma}^{(j)}\), the defender's optimal mixed strategies of generating manipulated messages \(u^{(j)*}_{1}(\cdot|h^{(j)}),u^{(j)*}_{0}(\cdot|h^{(j)})\in[0,1]^{S}\) and the posterior belief \(p(\cdot|h^{(j)})\) depend only on the history of prospect states. Specifically, we have for \(k\in\{0,1\}\),_

\[\bar{\delta}^{(j)*}(\psi_{j}|h^{(j)}) =\bar{\delta}^{(j)*}(\psi_{j}|\psi^{(j)}), \tag{18}\]
\[u^{(j)*}_{k}(\cdot|\,h^{(j)}) =u^{(j)*}_{k}(\cdot|\,\psi^{(j)}),\] (19)
\[p(H_{k}|h^{(j)}) =p(H_{k}|\,\psi^{(j)}). \tag{20}\]

The three assumptions (18)(19)(20) imply that the only useful information accumulated throughout the stages is the history of prospect states.
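Before turning to how the multi-stage game is played, the following minimal sketch (Python; all distributions, mixtures and the projector are illustrative placeholders, not values from this work) evaluates the single-stage rates (16) and (17) and the quantum advantage of Definition 3.

```python
import numpy as np

def quantum_rates(rho1, rho0, P1):
    """Rates of (16): P_D = Tr(rho_1 P_1), P_F = Tr(rho_0 P_1)."""
    return np.trace(rho1 @ P1).real, np.trace(rho0 @ P1).real

def classical_rates(f1, f0, reject):
    """Vanilla rates of (17): likelihood mass over the classical rejection region."""
    return sum(f1[s] for s in reject), sum(f0[s] for s in reject)

# Hypothetical two-signal example S = {0, 1}; all numbers are placeholders.
f1, f0 = {0: 0.1, 1: 0.9}, {0: 0.8, 1: 0.2}
reject = [1]                                   # classical rejection region
rho1, rho0 = np.diag([0.15, 0.85]), np.diag([0.7, 0.3])
P1 = np.diag([0.0, 1.0])                       # projector onto the rejection span

PD, PF = quantum_rates(rho1, rho0, P1)
PDbar, PFbar = classical_rates(f1, f0, reject)
print("QA =", PD / PDbar)  # < 1 here: the manipulated attacker is at a disadvantage
```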
The multi-stage game \(\mathcal{G}^{N}\) based on the base game \(\mathcal{G}\) is played as follows: before stage 1, the defender observes from Nature her type (\(H_{0}\) or \(H_{1}\)); at the beginning of stage \(j\in[N]\), the sensor observes the sender's message and sends a prospect state \(|\Phi\rangle\in\mathcal{H}\) according to her mixed strategies \(\rho_{1}^{(j)},\rho_{0}^{(j)}\in B(\mathcal{H})\) to the human agent, who makes a decision regarding the defender's type based on the current prospect state and the history of prospect states \(\psi^{(j)}\in\mathcal{H}^{\otimes j}\). At stage \(j\in[N]\), we are now ready to define the human agent's hypothesis testing problem as

\[\max_{\delta^{(j)}\in\bar{\Gamma}^{(j)}} \delta^{(j)}(\psi_{j})p(H_{1}|\;\psi^{(j)}),\] (21) s.t. \[\delta^{(j)}(\psi_{j})p(H_{0}|\psi^{(j)})<\beta^{(j)}.\]

We still inherit the substitutions and characterize the defender's strategies at stage \(j\) as the pairs \(u_{1}^{j},u_{0}^{j}\in\mathbb{R}^{S}\). Then we can equivalently express the defender's problem at stage \(j\), upon knowing \(\delta^{(j)*}\), as follows:

\[\max_{u_{1}^{j},u_{0}^{j}\in\mathbb{R}^{S}}\;\sum_{s\in R_{1}^{j}}f_{1}(s)u_{1}^{j}(s) \tag{22}\]

with \(R_{1}^{j}=\{s:f_{1}(s)u_{1}^{j}(s)>\tau f_{0}(s)u_{0}^{j}(s)\}\). We now derive the sequential perfect Bayesian Nash equilibrium (s-PBNE) by applying the one-shot deviation principle [35] to (22) and (21).

**Proposition 6**.: _Let \(\mathcal{G}^{N}\) be the multistage game of finite horizon \(N\). Let Assumption 2 hold. The samples of signals generated during the \(j\) stages are denoted as \(\{s_{t}\}_{t\leq j}\). Then we derive the sequential perfect Bayesian Nash equilibrium as the tuple \(\langle u_{1}^{j*},u_{0}^{j*},\delta^{(j)*},p\rangle\) with_

\[u_{1}^{j*}(s) =\begin{cases}0&\tau_{s}^{(j)}>1,\\ 1-\tau_{s}^{(j)}&\text{otherwise},\end{cases} \tag{23}\]
\[u_{0}^{j*}(s) =\begin{cases}0&\tau_{s}^{(j)}>1,\\ 1&\text{otherwise},\end{cases}\] (24)
\[\delta^{(j)*}(\psi_{j}|\;h^{(j-1)}) =\begin{cases}1&\prod_{t\leq j-1}\frac{f_{1}(s_{t})(a_{s_{t}k}^{(t)*})^{2}}{f_{0}(s_{t})(b_{s_{t}k}^{(t)*})^{2}}>\left(\frac{1}{\beta^{(j)}}-1\right)\frac{p(H_{0})}{p(H_{1})},\\ 0&\text{otherwise}.\end{cases}\]

Proof.: We can derive the equilibrium by backward induction [15], solving the optimization problem for every stage \(j\in[N]\) in turn.

The equilibrium results in Proposition 6 indicate how the defender should adapt the configuration of the prospect states produced by the generator based on the human attacker's action history and, similarly, how the human attacker adapts her optimal decision threshold based on the history of classical signals received.

## IV Case Study: honeypot detection

In this section, we apply the proposed cyber deception scheme discussed in Section II to implement cyber-psychological techniques to build next-generation honeypots [24] to mitigate insider human attacks. A honeypot is a monitored and regulated decoy disguised as a valuable asset to attract attackers to compromise, so as to detect and deflect cyber attacks in networks and to gather information about them.
According to [37], honeypots can help enhance system security in the following ways: to begin with, honeypots squander the attacker's resources without giving away valuable information in return; also, honeypots serve as intrusion detection nodes, providing warnings for system administrators; last but not least, once compromised, honeypots provide useful information for network administrators to analyze the attacker. However, honeypots can also be identified by proactive attackers and become ineffective, especially when they are at fixed locations and isolated from the network system. Attackers can adopt proactive detection techniques, such as those in [33], to identify honeypots more accurately and further implement anti-honeypot techniques [38]. Here, inspired by the experiments introduced in [11], we undermine the attacker's performance in identifying honeypots using cyber-psychological techniques. Specifically, we adopt generators to produce verbal messages that shape the attacker's perception of the type of the sensors from which they receive traffic data.

### _The dataset_

To simulate normal traffic and honeypot-related traffic, we select a portion of the KDD Cup 1999 dataset [39], which was generated partially for the 1998 DARPA intrusion detection program. The raw dataset is binary, containing five million connection records from 7 weeks of network traffic. There are in total \(N=494021\) connection records in our selected dataset, of which 396093 come from honeypot-related traffic. We assume that all attack traffic is attracted by honeypots and all normal traffic is collected by normal sensors, since regular users have no reason to access honeypots. Thus we can estimate a prior belief regarding the type of sensors as \(p(H_{1})\approx 0.802,\;p(H_{0})\approx 0.198\). The log-in-attempt signal \(s\) is a Bernoulli-distributed feature: \(s=0\) means that the log-in attempt is successful and \(s=1\) means that the log-in attempt is a failure. Honeypots and normal sensors respond with failure/success to the attacker's log-in attempts with different parameters \(\theta_{1},\theta_{0}\), i.e.

\[g_{1}(1) =\theta_{1},\;g_{1}(0)=1-\theta_{1}, \tag{25}\]
\[g_{0}(1) =\theta_{0},\;g_{0}(0)=1-\theta_{0}.\]

In our selected dataset, there are 69939 out of 97928 successful log-in attempts in normal traffic, while there are 3298 out of 396093 successful log-in attempts in honeypot traffic. Thus we select \(\theta_{0}\approx 0.719,\theta_{1}\approx 0.008\). We adopt the scheme in Figure 1 to implement psychological techniques that mislead attackers by creating perceptions of messages based on the traffic data. For every message \(s\), the generator produces verbal messages leading the human attackers to generate perceptions regarding the type of sensor. We denote by \(K\) the number of base states in the mind space, indicating how likely the human attacker thinks the defender is a honeypot: upon receiving the classical signal \(s\), the state \(|s\varphi_{1}\rangle\) indicates that the human thinks the defender is most likely a honeypot, while the state \(|s\varphi_{K}\rangle\) indicates that the human thinks the defender is least likely a honeypot.
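As a sanity check, the prior and the Bernoulli parameters quoted above can be re-derived from the stated counts; the short sketch below does so (note that the quoted \(\theta_{1}\approx 0.008\) coincides with the empirical success rate in honeypot traffic, and \(\theta_{0}\approx 0.719\) is close to the empirical success rate in normal traffic, even though (25) nominally parameterizes the failure event \(s=1\)).

```python
# Empirical estimates from the KDD Cup 1999 counts quoted above.
N, N_honeypot = 494021, 396093
p_H1, p_H0 = N_honeypot / N, 1 - N_honeypot / N
print(round(p_H1, 3), round(p_H0, 3))   # -> 0.802 0.198

succ_norm, tot_norm = 69939, 97928      # successful log-ins, normal traffic
succ_hp, tot_hp = 3298, 396093          # successful log-ins, honeypot traffic
print(round(succ_norm / tot_norm, 3))   # -> 0.714, close to theta_0 ~ 0.719
print(round(succ_hp / tot_hp, 3))       # -> 0.008, matching theta_1 ~ 0.008
```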
If the generator wants to keep the human agent uninformed, as described in [11], he can produce a prospect state as a uniform superposition of all the basis states as follows:

\[|\Phi(s)\rangle=\frac{1}{\sqrt{K}}\sum_{j=1}^{K}|s\varphi_{j}\rangle.\]

For every realization of the message \(s\), the generator designs optimal type-dependent prospect state coefficients \(a^{*},b^{*}\in\mathbb{R}^{S\times K}\) in (4)-(7) via the PBNE in Proposition 4, indicating the perception of the likelihood of a honeypot that the defender imposes on the human agent upon delivering the message \(s\). We also analyze the optimal decision rules of human agents under the verbal messages of generators at equilibrium.

### _Numerical Results_

We select the parameters \(\beta=0.4,\zeta=0.2\) and the number of base states in the mind space \(K=4\). In Figure 2, we plot the cyber defender's optimal strategies \(u_{0}^{*},u_{1}^{*}\) at equilibrium for various choices of \(\beta\). We observe that in the classical rejection region (i.e., the space of signals that causes a human attacker to identify that the sensor is a decoy), the generator in the defender system produces perceptions that lead to a 'reject' only with a certain probability. In Figure 5 we plot the defender's strategies at equilibrium in terms of the coefficients \(a,b\) of the prospect states produced by the generators. The coefficients suggest an optimal way of mixing the weights of the different psychological mind states for every classical signal \(s\). We observe that when \(\beta\) becomes close to 1, the defender's equilibrium strategies are close to \(u_{1}(0)=1,u_{0}(0)=1\). On the other hand, if \(\beta\) is close to 0, the defender's strategies converge to \(u_{1}=0,u_{0}=0\), corresponding to the upper right and lower left corners of the ROC curves (to be described later) characterizing the detection performance. In Figure 3, we plot the receiver operating characteristic (ROC) curve. We observe that, depending on the calibration parameter \(\zeta\), the human agent's detection performance varies, but in general it is always worse than the fully rational counterpart. In particular, higher \(\zeta\) leads to better detection performance, as with higher \(\zeta\) the quantum interference strengthens the probability of correct identification when the decoy sensor is connected to the human agent.

### _Multi-stage analysis_

To better understand the dynamic behavior of the game, in Figure 4 we plot the evolution through time of the defender's optimal strategies \(\{u_{1}^{j*},u_{0}^{j*}\}_{j\in[N]}\) (that is, the optimal type-dependent utility factors) at equilibrium, as introduced in (23)(24). We select the time horizon \(N=30\) and fix the prior beliefs \(p(H_{1}),p(H_{0})\). We observe that the defender's stage equilibrium strategies converge to a pooling strategy \(u_{0}=u_{1}=1\), suggesting that the defender can make the prospect states totally uninformative to the human agent by designing false perceptions upon the signals.

## V Conclusion

In this work, we have proposed the game of travesty (TG) to design a novel defensive deception to combat the proactive detection of decoys by insider human attackers. The defensive deception system is modeled as a signaling game where the defender consists of a sensor or a decoy cascaded with a generator, which converts classical signals into prospect states to manipulate the human attacker's perception of the messages. We have analyzed the behaviors of the insider human attacker as well as the defender by computing the perfect Bayesian Nash equilibrium.
Furthermore, we have analyzed the human attacker's performance in detecting decoys at equilibrium and compared it with that of an attacker whose perceptions of the classical signals are not manipulated. We have illustrated via ROC curves that the insider human attacker performs worse than a fully rational one, giving the defender more room to evade detection when she implements decoys in the network.

Fig. 2: The defender's optimal strategies \(u_{1}^{*}\) (upper figure) and \(u_{0}^{*}\) (lower figure) at PBNE in \(\mathcal{G}\) under different choices of \(\beta\). We set the calibration parameter \(\zeta=0.2\). The classical signal follows the Bernoulli distribution in (25), with a support of size \(|S|=2\). The dimension of the mind states is \(K=4\).

Fig. 3: ROC curves of the human agent's detection performance for different calibration parameters \(\zeta\). The distributions of the classical signals under each hypothesis are \(g_{1},g_{0}\), as given in (25).
The concept of cyber deception has been attracting attention. The development of defensive cyber deception draws on diverse disciplines, among which cognitive science plays an important role. In this work, we adopt a signaling game framework between a defender and a human agent to develop a defensive deception protocol against insider attacks (IA) based on quantum decision theory. The defender aims to lure the insider human attacker into accessing decoy sensors by using a generator that manipulates the attacker's psychological state on top of the classical signals. Our results show that, without altering the classical traffic data, a strategically designed generator can degrade the human attacker's performance in identifying decoys compared with deception schemes without generators. The proposed framework provides a fundamental theory for designing more effective signaling schemes.
2309.03604
Estimating the Coverage Measure and the Area Explored by a Line-Sweep Sensor on the Plane
This paper presents a method for determining the area explored by a line-sweep sensor during an area-covering mission in a two-dimensional plane. Accurate knowledge of the explored area is crucial for various applications in robotics, such as mapping, surveillance, and coverage optimization. The proposed method leverages the concept of coverage measure of the environment and its relation to the topological degree in the plane, to estimate the extent of the explored region. In addition, we extend the approach to uncertain coverage measure values using interval analysis. This last contribution allows for a guaranteed characterization of the explored area, essential considering the often critical character of area-covering missions. Finally, this paper also proposes a novel algorithm for computing the topological degree in the 2-dimensional plane, for all the points inside an area of interest, which differs from existing solutions that compute the topological degree for single points. The applicability of the method is evaluated through a real-world experiment.
Maria Costa Vianna, Eric Goubault, Luc Jaulin, Sylvie Putot
2023-09-07T09:57:26
http://arxiv.org/abs/2309.03604v1
# Estimating the Coverage Measure and the Area Explored by a Line-Sweep Sensor on the Plane

###### Abstract

This paper presents a method for determining the area explored by a line-sweep sensor during an area-covering mission in a two-dimensional plane. Accurate knowledge of the explored area is crucial for various applications in robotics, such as mapping, surveillance, and coverage optimization. The proposed method leverages the concept of coverage measure of the environment and its relation to the topological degree in the plane, to estimate the extent of the explored region. In addition, we extend the approach to uncertain coverage measure values using interval analysis. This last contribution allows for a guaranteed characterization of the explored area, essential considering the often critical character of area-covering missions. Finally, this paper also proposes a novel algorithm for computing the topological degree in the 2-dimensional plane, for all the points inside an area of interest, which differs from existing solutions that compute the topological degree for single points. The applicability of the method is evaluated through a real-world experiment.

Plane exploration; topological degree; robotics; interval analysis.

## I Introduction

Mobile robots are increasingly being used to carry out dangerous tasks that would otherwise put human lives at risk, such as bomb disposal, firefighting, and search and rescue missions. Their use in these situations can considerably reduce the risk to human workers while providing more detailed and accurate information about the situation. Additionally, mobile robots can be equipped with specialized tools, such as cameras, grippers, and cutting devices, that enable them to perform a wide range of tasks that would be difficult or impossible for humans to do. In the context of these operations, the robotic platform often needs to perform an area-covering mission. During these missions, a designated part of the robot's environment is thoroughly searched or monitored to develop a complete understanding of the situation or identify potential threats or opportunities.

Determining the area explored by a mobile robot during an area-covering mission is important to establish whether the mission is successful. It is also essential for validating path-planning algorithms that will lead to complete coverage of an area of interest [1] or complete avoidance of an area of risk. Overall, determining the explored area is essential for ensuring efficient and safe operations, planning future actions, and gaining valuable insights from the acquired data. In addition, we are also interested in determining the coverage measure of a point in the environment. The coverage measure represents how many times this point was covered by the robot's sensors or tools, in other words, how many times it was explored. Counting the number of times an area was explored is of interest for different reasons, for example, when assessing revisiting missions. In these missions the robot is required to come back to a previous point, that is, to revisit it, in order to improve the quality of the information collected around this point through redundancy. Indeed, studies have shown that target classification improves dramatically when a multi-view approach is adopted. Usually, single-view approaches do not provide enough information to make a confident identification with, for example, Synthetic Aperture Sonars (SAS) [2] and Synthetic Aperture Radars [3].
A multi-view method is also essential when recognizing or reconstructing 3-dimensional objects from 2-dimensional data such as camera images [4]. In these examples, counting how many times a point or an area, as a set of points, has already been explored is essential to determine mission completeness. On the contrary, if the robot is not supposed to cover areas previously visited, the coverage measure will be useful for planning optimal paths, reducing unnecessary effort.

In this context, we present a technique for quantifying the extent of coverage achieved by a mobile robot during a sweep exploration in a two-dimensional environment. Sweep exploration refers to missions where the robot uses a line-sweep sensor. Line-sweep sensors are one-dimensional sensors that provide data along a single axis and must sweep the environment in order to create a two-dimensional representation of the robot's surroundings. With this purpose, we establish a relation between the exploration problem and the topological degree and we demonstrate how it can be used to determine the coverage measure. Topological concepts have already been explored for counting [5] and for addressing coverage problems in robotics contexts, e.g. [6, 7]. The main advantage of the approach presented in this paper is that we determine the number of times an area was explored, with the coverage measure, and, unlike more common approaches such as grid-based analysis, our topological method does not require a prior discretization of the environment into fixed cells. We demonstrate that the whole environment can be characterized from very basic information on the robot's state and on the range of visibility of the exploration sensors, resulting in a method of low computational complexity. This approach has already been explored in [8], but here we deepen its mathematical definition and extend it to address previous limitations, such as the coverage measure of points on the maximal range of visibility and of points that are swept in the opposite direction of movement. We also address the crucial issue of uncertainty in a robot's trajectory to achieve a guaranteed estimation of the explored area. In [9], a method to estimate the explored area considering the uncertain state of a robot was presented. We extend their method by introducing the concept of uncertain coverage measure. Our last contribution is an algorithm for computing the winding number of a continuous cycle with respect to all the points in the two-dimensional plane. Algorithms for general topological degree computation have already been proposed in different works [10, 11]. However, methods available in the literature compute the winding number of a cycle with respect to a single point and need to be applied to each point individually for a full characterization of the plane. In this context, we present a set-membership approach that efficiently determines the winding number for a whole area of interest. The resulting algorithm and all the concepts defined in this work are applied to determine the area explored by a real autonomous underwater vehicle performing an exploration mission with two line-sweep sensors.

## II Problem Statement

We are interested in the problem of a mobile robot that explores an unknown planar environment. We assume that the robot's pose can be fully described by a function of time \(\boldsymbol{x}:\mathbb{R}\rightarrow\mathbb{R}^{3}\) that is at least \(C^{2}\).
The robot's visible area at time \(t\) is a subset \(\mathbb{V}(t)\subset\mathbb{R}^{2}\) of the environment that is sensed by the robot's embedded exteroceptive sensors. We define \(\mathbb{V}\) as a set-valued function that depends on the robot's pose and on the geometry and technology of the sensors employed. In this work, we focus on the problem of line-sweep exploration sensors and we treat the example of one that observes the environment on the robot's left side as it moves around the plane, Figure 1. In this context, the robot's pose at instant \(t\) can be represented by the vector

\[\boldsymbol{x}(t)=\begin{pmatrix}x(t)&y(t)&\psi(t)\end{pmatrix}^{T}\]

where the pair \((x,y)\) represents the robot's position in the plane and \(\psi\) its orientation. Let \(L\in\mathbb{R}^{+}\) be the sensor's visible range; the visible set in this configuration can be defined as

\[\mathbb{V}(t)=\{\boldsymbol{p}\in\mathbb{R}^{2}|p_{rx}=0\text{ and }0\leq p_{ry}\leq L\} \tag{1}\]

where

\[\boldsymbol{p}_{r}=\begin{pmatrix}p_{rx}&p_{ry}\end{pmatrix}^{T}=R^{-1}(\psi(t))(\boldsymbol{p}-\begin{pmatrix}x&y\end{pmatrix}^{T}) \tag{2}\]

represents, in the robot's coordinate frame, a point \(\boldsymbol{p}\) in the environment, and \(R(\psi(t))\) is the rotation matrix associated with the robot's orientation angle \(\psi(t)\).

Fig. 1: (a): Mobile robot with a line-sweep exploration sensor on the plane. At instant \(t\) the point \(\boldsymbol{p}\) is sensed by the robot; (b): The point \(\boldsymbol{p}_{r}\) is the representation of point \(\boldsymbol{p}\) in the robot's coordinate frame \(X_{r}Y_{r}\).

The set \(\mathbb{A}_{\mathbb{E}}\) corresponds to the area explored by the robot during a time interval \([0,T]\), for some maximal value \(T>0\). It can be defined as the union of the robot's visible area along its trajectory

\[\mathbb{A}_{\mathbb{E}}=\bigcup_{t\in[0,T]}\mathbb{V}(t) \tag{3}\]

Figure 2 shows the resultant \(\mathbb{A}_{\mathbb{E}}\) if we consider the illustrated robot's trajectory and the visible set function described by (1).

Fig. 2: Area explored by a line-sweep sensor on the robot's left side along its trajectory.

The robot's visibility region in this case can be parameterized by \(u\in U\subseteq\mathbb{R}\). In the considered example, \(U=[0,L]\) represents the lateral distance of a point in the visible area to the robot. We can define the sweep function \(\boldsymbol{f}:U\times[0,T]\rightarrow\mathbb{R}^{2}\) as a continuously differentiable function whose image over \(U\times\{t\}\), with \(t\in[0,T]\), represents the visible area \(\mathbb{V}(t)\),

\[\mathbb{V}(t)=\boldsymbol{f}(U,t) \tag{4}\]

By analogy to a common terminology adopted in sonar imagery [12], we name the space \(W=U\times[0,T]\) the Waterfall Space. Points in \(W\) are of the form \((u,t)\), \(u\) representing the parameterization of the visible area and \(t\) the time of exploration. All points \((u,t)\in W\) are points that were in the robot's visible area at least once and, therefore, points that were explored during the mission. The robot's pose \(\boldsymbol{x}\), its visible area \(\mathbb{V}\) and \(\mathbb{A}_{\mathbb{E}}\) are all defined inside an absolute coordinate system, the Mosaic Space \(M\subseteq\mathbb{R}^{2}\) or the World Frame, as it is usually called in robotics. The sweep function \(\boldsymbol{f}\) maps points from the Waterfall to the Mosaic space, Figure 3.
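A minimal Python sketch of this setup is given below; the circular trajectory is a hypothetical stand-in for \(\boldsymbol{x}(t)\), chosen only because it is \(C^{2}\), and sampling \(\boldsymbol{f}\) over \(W\) yields a discretized picture of \(\mathbb{A}_{\mathbb{E}}\).

```python
import numpy as np

def sweep(u, t, pose):
    """Sweep function f(u, t) of (4): the point at lateral distance u on the
    sensor line at time t, expressed in the world frame (inverting (2))."""
    x, y, psi = pose(t)
    R = np.array([[np.cos(psi), -np.sin(psi)],
                  [np.sin(psi),  np.cos(psi)]])
    return np.array([x, y]) + R @ np.array([0.0, u])  # p_r = (0, u), left side

def circle_pose(t, radius=5.0, omega=0.2):
    """Hypothetical C^2 pose: uniform motion along a circle, heading tangent."""
    return (radius * np.cos(omega * t), radius * np.sin(omega * t),
            omega * t + np.pi / 2.0)

# Sampling f over the waterfall space W = [0, L] x [0, T] discretizes A_E of (3).
L, T = 2.0, 30.0
pts = np.array([sweep(u, t, circle_pose)
                for t in np.linspace(0.0, T, 300)
                for u in np.linspace(0.0, L, 20)])
print(pts.shape)  # (6000, 2): a point cloud covering the explored annulus
```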
The coverage measure, or how many times a point in the environment was explored by the robot during a mission, is given by the function \(c_{m}:M\rightarrow\mathbb{N}_{0}\). A point is considered to be revisited if, once in the robot's visibility range, it goes out of reach and is then sensed again later in time. In Figure 4, for example, point \(\mathbf{p}\) is sensed for the first time at instant \(t_{1}\) and revisited at instant \(t_{2}\); in this case, \(c_{m}(\mathbf{p})=2\). Let \(det\) be the determinant function and let \(J_{\mathbf{f}}\) denote the Jacobian matrix of the sweep function. We adopt the following condition:

\[\forall\mathbf{w}\in W,det(J_{\mathbf{f}}(\mathbf{w}))>0 \tag{5}\]

which implies that the robot is constantly moving and that the sensor sweeps the environment in the same direction as its advancement movement. Assuming this condition is met, we can say that the number of times a point appears in the Waterfall Space corresponds to the number of times this point was explored during a mission. Denoting by \(Ker\ \mathbf{f}\) the kernel of a function \(\mathbf{f}\) and considering the definitions stated in this section, for \(\mathbf{p}\in M\) it can be concluded that

\[c_{m}(\mathbf{p})=\#Ker\ (\mathbf{f}-\mathbf{p}) \tag{6}\]

The explored area \(\mathbb{A}_{\mathbb{E}}\) can be characterized as the set of points that were sensed by the robot at least once and, therefore, in terms of the coverage measure of its points:

\[\mathbb{A}_{\mathbb{E}}=\{\mathbf{p}\in M|c_{m}(\mathbf{p})\geq 1\} \tag{7}\]

Describing the mosaic space using the coverage measure of its points is the method adopted in this work for defining the explored area. To achieve this, the following section establishes a connection between the topological degree and the coverage measure and explores this relation for that purpose.

## III Coverage Measure and Topological Degree

In [8], a relation between the coverage measure of a point in the plane and the topological degree has been explored. Here we give a general axiomatic definition of the notion of topological degree and recap the main properties that we use.

**Definition 1** (Topological degree).: _Let \(D\) be an open subset of \(\mathbb{R}^{n}\) and \(\mathbf{f}\) a continuous function from its closure \(\overline{D}\) to \(\mathbb{R}^{n}\). A degree of \(\mathbf{f}\) is a family of functions \(deg:\ (\mathbf{f},D,\mathbf{p})\rightarrow\mathbb{Z}\) for all \(D\) open subsets of \(\mathbb{R}^{n}\), \(\mathbf{f}\) continuous and \(\mathbf{p}\in\mathbb{R}^{n}\backslash\mathbf{f}(\partial D)\) such that:_

* _(identity)_ \(deg(Id_{D},D,\mathbf{p})=1\) _if_ \(\mathbf{p}\in D\)
* _(excision)_ \(deg(\mathbf{f},D,\mathbf{p})=deg(\mathbf{f},D_{1},\mathbf{p})+deg(\mathbf{f},D_{2},\mathbf{p})\) _where_ \(D_{1}\)_,_ \(D_{2}\) _are open sets in_ \(D\) _with_ \(\mathbf{p}\not\in\mathbf{f}(\overline{D}\backslash(D_{1}\cup D_{2}))\)
* _(homotopy invariance)_ \(deg(\mathbf{h}(\alpha,.),D,\mathbf{p}(\alpha))\) _is independent of_ \(\alpha\) _for any homotopy_ \(\mathbf{h}:\ [0,1]\times\overline{D}\rightarrow\mathbb{R}^{n}\) _with_ \(\mathbf{p}(\alpha)\not\in\mathbf{h}(\alpha,\partial D)\) _for all_ \(\alpha\in[0,1]\)_._

When such a family of functions exists, it is known to be unique [13]. In particular, when \(\mathbf{f}\) is at least continuously differentiable and \(\mathbf{p}\) is a regular value of \(\mathbf{f}\) (i.e.
the determinant of the Jacobian of \(\mathbf{f}\), \(det(J_{\mathbf{f}})\), is non-zero at each \(\mathbf{d}\) with \(\mathbf{f}(\mathbf{d})=\mathbf{p}\)):

\[deg(\mathbf{f},D,\mathbf{p})=\sum_{\mathbf{d}\in\mathbf{f}^{-1}(\mathbf{p})}sign(det(J_{\mathbf{f}}(\mathbf{d}))) \tag{8}\]

As is well known in complex analysis, the topological degree of differentiable functions from the unit ball \(D^{2}\) in \(\mathbb{R}^{2}\) to \(\mathbb{R}^{2}\) is linked to the winding number of \(\mathbf{f}(\partial D^{2})\). We are going to take the homological view on winding numbers in this paper. Let \(S^{1}=\partial D^{2}\) be the 1-sphere and \(\mathbf{p}\) a point in the interior of the image of \(D^{2}\) by \(\mathbf{f}\). Function \(\mathbf{f}\) maps \(S^{1}\) onto a cycle in \(\mathbb{R}^{2}\), and the winding number is the number of times this cycle turns around \(\mathbf{p}\). By convention, counterclockwise turns count positively and clockwise turns negatively.

**Definition 2** (Winding number).: _Let \(\mathbf{f}:\ D^{2}\rightarrow\mathbb{R}^{2}\) be a continuous function and \(\mathbf{p}\in\mathbf{f}(D^{2})\backslash\mathbf{f}(S^{1})\). Consider its restriction \(\mathbf{f}_{|S^{1}}:\ S^{1}\rightarrow\mathbb{R}^{2}\backslash\{\mathbf{p}\}\). It induces a linear map in homology:_

\[\tilde{\mathbf{f}}:\ H_{1}(S^{1})\to H_{1}(\mathbb{R}^{2}\backslash\{\mathbf{p}\})\]

_i.e., from \(\mathbb{Z}\) to \(\mathbb{Z}\), which is of the form \(\tilde{\mathbf{f}}(C)=\eta C\), where \(C\) represents an equivalence class in \(H_{1}(S^{1})\). This \(\eta\) is called the winding number of \(\gamma=\mathbf{f}(S^{1})\) around the point \(\mathbf{p}\in\mathbf{f}(D^{2})\backslash\mathbf{f}(S^{1})\). For all other points in \(\mathbb{R}^{2}\backslash\mathbf{f}(\partial D^{2})\) the winding number is set to zero._

Fig. 3: Waterfall and Mosaic Spaces for the line-sweep sensor example.

We can now state the relation between the topological degree and the winding number:

**Lemma 1**.: _Let \(\mathbf{f}\) be a continuously differentiable map from \(D^{2}\) to \(\mathbb{R}^{2}\) and let \(\mathbf{y}\in\mathbb{R}^{2}\backslash\mathbf{f}(\partial D^{2})\) be such that \(\mathbf{f}^{-1}(\mathbf{y})\) is finite and \(\mathbf{y}\) is a regular point for \(\mathbf{f}\). Then \(deg(\mathbf{f},D^{2},\mathbf{y})\) is equal to the winding number \(\eta(\mathbf{f}(\partial D^{2}),\mathbf{y})\) of \(\mathbf{f}(\partial D^{2})\) at \(\mathbf{y}\)._

Proof.: For all \(\mathbf{y}\in\mathbb{R}^{2}\backslash\mathbf{f}(\partial D^{2})\), either there exists no \(\mathbf{d}\) such that \(\mathbf{y}=\mathbf{f}(\mathbf{d})\), or there exists a finite, non-zero number of points \(\mathbf{d}_{1},\ldots,\mathbf{d}_{m}\) in \(D^{2}\) such that \(\mathbf{f}(\mathbf{d}_{i})=\mathbf{y}\). In the first case, \(deg(\mathbf{f},D^{2},\mathbf{y})\) is zero, \(\mathbf{y}\) is in the complement of \(\mathbf{f}(D^{2})\), and the winding number \(\eta(\mathbf{f}(\partial D^{2}),\mathbf{y})\) is also zero. In the second case, \(\mathbf{y}\) being regular for \(\mathbf{f}\), we have \(deg(\mathbf{f},D,\mathbf{y})=\sum\limits_{i=1}^{m}sign(det(J_{\mathbf{f}}(\mathbf{d}_{i})))\). Take small enough open neighborhoods \(U_{i}\) of \(\mathbf{d}_{i}\) in \(D\) such that the sign of \(det(J_{\mathbf{f}}(\mathbf{d}))\) is the same as the sign of \(det(J_{\mathbf{f}}(\mathbf{d}_{i}))\) for all \(\mathbf{d}\in U_{i}\); this is always possible since \(J_{\mathbf{f}}\) is continuous.
Note that this implies that \(\mathbf{f}\) restricted to \(U_{i}\) induces a homeomorphism onto its image. Also, we can always choose the \(U_{i}\) to have empty pairwise intersections and to have \(\mathbf{f}\) be a homeomorphism from \(\overline{U}_{i}\) onto its image, by taking them small enough (the \(\mathbf{d}_{i}\) are isolated points within \(D\)). Now, the map \(\tilde{\mathbf{f}}\) is the same as the map induced in homology by \(\mathbf{f}:\,D^{2}\backslash\bigcup\limits_{i=1}^{m}U_{i}\rightarrow\mathbb{R}^{2}\backslash\{\mathbf{y}\}\). We note also that within \(D^{2}\backslash\bigcup\limits_{i=1}^{m}U_{i}\), the cycle \(\partial D^{2}\) is homologous to the sum of the \(\partial(U_{i})\), for \(i=1,\ldots,m\). Hence \(\tilde{\mathbf{f}}(\partial D^{2})=\sum\limits_{i=1}^{m}\tilde{\mathbf{f}}(\partial(U_{i}))\). But \(\mathbf{f}(\partial(U_{i}))\) is a Jordan curve homeomorphic (by \(\mathbf{f}\)) to \(\partial(U_{i})\), since we chose \(U_{i}\) such that \(\mathbf{f}\) restricted to \(\overline{U_{i}}\) is a homeomorphism onto its image. Hence \(\tilde{\mathbf{f}}(\partial U_{i})\) is either plus or minus the identity, according to the orientation of \(\tilde{\mathbf{f}}(\partial U_{i})\), i.e., \(\tilde{\mathbf{f}}(\partial U_{i})=sign(det(J_{\mathbf{f}}(\mathbf{d})))\) for any \(\mathbf{d}\in U_{i}\), which we know is equal to \(sign(det(J_{\mathbf{f}}(\mathbf{d}_{i})))\). Hence

\[\eta(\mathbf{f}(\partial D^{2}),\mathbf{y})=\sum\limits_{i=1}^{m}sign(det(J_{\mathbf{f}}(\mathbf{d}_{i})))=deg(\mathbf{f},D^{2},\mathbf{y})\]

Now let \(\mathbf{f}\) represent the sweep function, mapping from the Waterfall Space \(W\), which is homeomorphic to \(D^{2}\), to the Mosaic Space \(M\). According to (8) and under hypothesis (5), for \(\mathbf{p}\in\mathbb{R}^{2}\backslash\mathbf{f}(\partial W)\),

\[deg(\mathbf{f},W,\mathbf{p})=\sum_{\mathbf{w}\in\mathbf{f}^{-1}(\mathbf{p})}+1=\#Ker\ (\mathbf{f}-\mathbf{p}) \tag{9}\]

Finally, from (6), it can be concluded that \(deg(\mathbf{f},W,\mathbf{p})=c_{m}(\mathbf{p})\). Moreover, from Definition 2,

\[\eta(\gamma,\mathbf{p})=c_{m}(\mathbf{p}), \tag{10}\]

where \(\gamma=\mathbf{f}(\partial W)\) represents the sensor's contour, a counter-clockwise oriented closed curve that surrounds all the points that have been explored, Figure 5, and \(\eta(\gamma,\mathbf{p})\) is its winding number with respect to \(\mathbf{p}\). Throughout the remainder of this section, we extend the relation between the coverage measure and the topological degree so that it covers more general scenarios.

### _Coverage Measure for Points with Undefined Winding Numbers_

When the robot's pose and its visible set are well defined, the coverage measure of all the points in the environment during a mission can be uniquely determined. However, if we adopt the method proposed in [8], using relation (10), the coverage measure of a point \(\mathbf{p}\in\gamma\) will be undefined, considering the definition of winding numbers. For example, in Figure 6, point \(\mathbf{p}_{1}\in\gamma\) is the image by \(\mathbf{f}\) of a point \((0,t)\in W\), for some \(t\in[0,T]\). This point is inside the robot's visible area \(\mathbb{V}(t)\) and, according to the definition of the coverage measure in (6), \(c_{m}(\mathbf{p}_{1})=1\) even if \(\eta(\gamma,\mathbf{p}_{1})\) is undefined. In this context, to extend the validity of (10), we define a bounded function \(\overline{\eta}\) as the extension of the winding number function to the full domain \(\mathbf{f}(W)\).
For that, we consider the following definition, adapted from [14]:

**Definition 3** (Limit Superior).: _Let \(M\) be a metric space and \(g\) a function from \(M\) to \(\mathbb{R}\). For any limit point \(\mathbf{y}\in M\) the limit superior, when it exists, is defined as:_

\[\underset{\mathbf{p}\rightarrow\mathbf{y}}{limsup}\ g(\mathbf{p})=\lim\limits_{\epsilon\to 0}\ (sup\{g(\mathbf{p})\ |\ \mathbf{p}\in B(\mathbf{y},\epsilon)\backslash\{\mathbf{y}\}\})\]

_where \(B(\mathbf{y},\epsilon)\) denotes the ball within \(M\), centered at \(\mathbf{y}\), of radius \(\epsilon\)._

The sweep function \(\mathbf{f}\) is a continuous map from a compact subset \(W\) to \(\mathbb{R}^{2}\); therefore \(\mathbf{f}(W)\backslash\mathbf{f}(\partial W)\) is composed of a disjoint union of open sets \(V_{i}\), \(i\in I\), for some index set \(I\). All points of \(\mathbf{f}(\partial W)\) are limits of some sequence of points \(\mathbf{f}(\mathbf{y})\), with \(\mathbf{y}\in\hat{W}\). We can now state:

Fig. 5: The sensor's contour \(\gamma\) for the mission represented in Figure 2.

Fig. 6: The coverage measure of point \(\mathbf{p}_{1}\) is equal to \(1\) and that of point \(\mathbf{p}_{2}\) is equal to \(2\), but the winding number of \(\gamma\) with respect to these points is undefined.

**Lemma 2**.: _Consider a function \(w:\ \bigcup\limits_{i\in I}V_{i}\rightarrow\mathbb{Z}\). Suppose that \(w\) is bounded on \(\bigcup\limits_{i\in I}V_{i}\); then there is an upper semi-continuous extension of \(w\), \(\overline{w}:\ \mathbf{f}(W)\rightarrow\mathbb{Z}\), defined as:_

\[\overline{w}(\mathbf{p})=\left\{\begin{array}{ll}w(\mathbf{p})&\text{ if }\mathbf{p}\in\bigcup\limits_{i\in I}V_{i}\\ \underset{\mathbf{p}^{\prime}\in\bigcup\limits_{i\in I}V_{i}\rightarrow\mathbf{p}}{limsup}w(\mathbf{p}^{\prime})&\text{ otherwise}\end{array}\right.\]

Proof.: This is immediate: the limit superior exists since \(w\) is bounded on \(\bigcup\limits_{i\in I}V_{i}\), and the definition of \(\overline{w}\) precisely imposes that \(\overline{w}\) is upper semi-continuous.

Supposing that the number of connected components of \(\mathbf{f}(W)\backslash\mathbf{f}(\partial W)\) is finite, as the winding number is constant on each component, this defines a bounded function \(\eta\) that we can extend to the full domain \(\mathbf{f}(W)\) by Lemma 2 to obtain \(\overline{\eta}\). Finally, if the condition expressed in (5) is satisfied, we can say that for any \(\mathbf{p}\in M\),

\[\overline{\eta}(\gamma,\mathbf{p})=c_{m}(\mathbf{p}) \tag{11}\]

Considering Definition 3, if \(\mathbf{p}\in\gamma\), its coverage measure will be equal to the coverage measure of the points in the open set \(V_{i}\) with the largest winding number value for which \(\mathbf{p}\) is a limit, as expected from the original definition in (6). This new definition extends the applicability of the method, but condition (5) is still necessary for (11) to be true. The next section introduces new concepts to remove this constraint.

### _Coverage Measure for Points Swept Backwards_

Condition (5) is necessary for (11) to be true. It ensures that the area surrounded by the sensor's contour \(\gamma\) never shrinks during a mission and that \(\gamma\) is indeed an enclosing curve for \(\mathbb{A}_{\mathbb{E}}\). If condition (5) is not satisfied, the inconsistency in the equality (11) is illustrated in Figures 7, 8 and 9. At the beginning of the mission, in Figure 7, the robot moves from its initial state \(\mathbf{x}(0)\) to state \(\mathbf{x}(t_{1})\), \(t_{1}>0\).
During the interval \([0,t_{1}]\), condition (5) is satisfied. Point \(\mathbf{p}\in M\) is sensed for the first time at instant \(\hat{t}_{1}\in[0,t_{1}]\) and this occurrence is represented in the mission's Waterfall Space \(W\) by the point \(\mathbf{w}_{1}\). The sensor's contour associated with this first part of the mission is the closed curve \(\gamma_{1}=\mathbf{f}(\partial([0,L]\times[0,t_{1}]))\), and \(\eta(\gamma_{1},\mathbf{p})=sign(det(J_{\mathbf{f}}(\mathbf{w}_{1})))=1\) is indeed equal to the coverage measure of \(\mathbf{p}\) at \(t_{1}\). The mission continues as the robot advances to state \(\mathbf{x}(t_{2})\), \(t_{2}>t_{1}\), and point \(\mathbf{p}\) is revisited at \(\hat{t}_{2}\). For the time interval \([0,t_{2}]\), we have \(\mathbf{f}^{-1}(\mathbf{p})=\{\mathbf{w}_{1},\mathbf{w}_{2}\}\) and \(\gamma_{2}=\mathbf{f}(\partial([0,L]\times[0,t_{2}]))\) represents the sensor's contour. As illustrated in Figure 8, at \(\hat{t}_{2}\), point \(\mathbf{p}\) is swept in the opposite direction with respect to the robot's advancement movement. In this context, the Jacobian determinant of the function \(\mathbf{f}\) at \(\mathbf{w}_{2}\) is negative and

\[\eta(\gamma_{2},\mathbf{p})=\sum_{i=1}^{2}sign(det(J_{\mathbf{f}}(\mathbf{w}_{i})))=1-1=0\]

although, according to (6), \(c_{m}(\mathbf{p})=2\) at \(t_{2}\). The exploration ends at state \(\mathbf{x}(T)\), \(T>t_{2}\), and the complete mission is represented in Figure 9. Point \(\mathbf{p}\) is sensed for the third and last time at \(\hat{t}_{3}\), and at the end of the mission \(\mathbf{f}^{-1}(\mathbf{p})=\{\mathbf{w}_{1},\mathbf{w}_{2},\mathbf{w}_{3}\}\). At \(\hat{t}_{3}\), point \(\mathbf{p}\) is sensed by a forward movement of the sensor on the plane; therefore,

\[\eta(\gamma,\mathbf{p})=\sum_{i=1}^{3}sign(det(J_{\mathbf{f}}(\mathbf{w}_{i})))=1-1+1=1\]

but \(c_{m}(\mathbf{p})=3\) is expected.

Fig. 7: Mission during the time interval \([0,t_{1}]\); point \(\mathbf{p}\) is sensed for the first time at \(\hat{t}_{1}\) and \(c_{m}(\mathbf{p})=1\).

Fig. 8: The condition established in Equation (5) is not satisfied for all the points in \(W\). At \(t_{2}\), \(c_{m}(\mathbf{p})=2\).

Fig. 9: The mission ends at \(T\) and the point \(\mathbf{p}\) is sensed for the last time at \(\hat{t}_{3}\); the final coverage measure of this point is \(3\) although \(\eta(\gamma,\mathbf{p})=1\).
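The mismatch just illustrated can be checked numerically. In the sketch below (Python; the determinant values are illustrative placeholders), the signed sum of (8) and the preimage count of (6) are compared for the point \(\mathbf{p}\) of Figures 7-9; the decomposition introduced next repairs exactly this discrepancy.

```python
import numpy as np

def degree_and_coverage(jacobian_dets):
    """At a regular value p, (8) sums the signs of det(J_f) over the preimages
    of p, while the coverage measure (6) simply counts those preimages."""
    return int(np.sign(jacobian_dets).sum()), len(jacobian_dets)

# Preimages w_1, w_2, w_3 of p in Figures 7-9; the second sweep runs against
# the direction of motion, so det(J_f(w_2)) < 0 (the magnitudes are made up).
deg, cm = degree_and_coverage([+0.8, -0.5, +1.2])
print(deg, cm)  # -> 1 3 : eta(gamma, p) = 1 even though c_m(p) = 3
```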
To address this problem, we can divide the Waterfall Space \(W\) into two sets, \(\mathbb{S}^{+}\) and \(\mathbb{S}^{-}\),

\[\mathbb{S}^{+} =\{\mathbf{y}\in W|det(J_{\mathbf{f}}(\mathbf{y}))>0\} \tag{12}\]
\[\mathbb{S}^{-} =\{\mathbf{y}\in W|det(J_{\mathbf{f}}(\mathbf{y}))<0\} \tag{13}\]

We define two new positively oriented contours, \(\gamma^{+}\) and \(\gamma^{-}\), as the images by \(\mathbf{f}\) of the boundaries of these sets, as illustrated in Figure 10,

\[\gamma^{+} =\mathbf{f}(\partial\mathbb{S}^{+}) \tag{14}\]
\[\gamma^{-} =\mathbf{f}(\partial\mathbb{S}^{-}) \tag{15}\]

For a regular value \(\mathbf{p}\in M\) we will have \(Ker\ (\mathbf{f}-\mathbf{p})\subset\mathbb{S}^{+}\cup\mathbb{S}^{-}\); furthermore, we can say that

\[Ker\ (\mathbf{f}-\mathbf{p})=Ker(\mathbf{f}-\mathbf{p})_{\mathbb{S}^{+}}\cup Ker(\mathbf{f}-\mathbf{p})_{\mathbb{S}^{-}} \tag{16}\]

and we can rearrange (6):

\[c_{m}(\mathbf{p})=\#Ker\ (\mathbf{f}-\mathbf{p})_{\mathbb{S}^{+}}+\#Ker\ (\mathbf{f}-\mathbf{p})_{\mathbb{S}^{-}} \tag{17}\]
\[c_{m}(\mathbf{p})=\sum_{\mathbf{w}\in\mathbf{f}^{-1}_{\mathbb{S}^{+}}(\mathbf{p})}+1\ \ \ +\sum_{\mathbf{w}\in\mathbf{f}^{-1}_{\mathbb{S}^{-}}(\mathbf{p})}+1 \tag{18}\]

Considering the definitions of the sets \(\mathbb{S}^{+}\) and \(\mathbb{S}^{-}\) in (12) and (13), respectively, and the fact that both \(\gamma^{+}\) and \(\gamma^{-}\) are taken positively oriented, each sum in (18) corresponds to the winding number of the associated contour, so that the coverage measure can be written as

\[c_{m}(\mathbf{p})=\overline{\eta}(\gamma^{+},\mathbf{p})+\overline{\eta}(\gamma^{-},\mathbf{p}) \tag{19}\]
As demonstrated in [16], the winding number \(\eta(\gamma,\mathbf{p})\) of any point \(\mathbf{p}\in\mathbb{R}^{2}\setminus\gamma\) can be calculated using the winding sets of \(\gamma\), \[\eta(\gamma,\mathbf{p})=\sum_{i>0}\chi_{\mathbb{W}_{i}}(\mathbf{p}) \tag{25}\] where \(\chi_{\mathbb{W}_{i}}\) is the characteristic function for the winding set \(\mathbb{W}_{i}\). Equations (24) and (25) are still valid if \(\eta\) is replaced by its extension \(\overline{\eta}\). The algorithm starts by computing all the non-empty winding sets \(\mathbb{W}_{i}\), for \(i\in\mathbb{N}\), associated with the sensor's contour \(\gamma\), through a combinatorial approach. For that, we consider that a self-intersection or vertex of \(\gamma\) is determined by two parameters \(t_{0},t_{1}\in S_{1}\), \(t_{0}\neq t_{1}\) and that it is a point \(\mathbf{p}\) such that \(\mathbf{p}=\gamma(t_{0})=\gamma(t_{1})\). The multiplicity of such a self-intersection is the number, finite or infinite, of distinct \(t\in S_{1}\) such that \(\mathbf{p}=\gamma(t)\) minus one. Then, we make the following assumptions, similar to those of [17], so that the winding number of a point can be easily obtained using (25): * \(\gamma\) has a finite number of self-intersections, each one of them with multiplicity one. * in addition, we assume the two tangent vectors to \(\gamma\) at each vertex to be linearly independent. Such a cycle divides \(\mathbb{R}^{2}\backslash\gamma\) into a finite number of connected open regions, one of which is not compact. Each one of these regions can be seen as a \(2-cell\) of the CW-complex \(C(\gamma)\), constructed from the cycle \(\gamma\). To be fully formal, we would need to use the fact that \(\gamma\) determines a cell decomposition of the one-point compactification of the plane, homeomorphic to the 2-sphere \(S_{2}\), Figure 13. The 0-cells of \(C(\gamma)\) are self-intersections of \(\gamma\), and the 1-cells are parts of the curve separating the 2-cells, connected components of \(\gamma\) minus its self-intersections. Since all open 2-cells are homotopy equivalent to a point within that cell and considering the degree axioms presented in Definition 1, we can conclude that all the points within the same open 2-cell of \(C(\gamma)\) have the same winding number with respect to \(\gamma\). In this context, a correct and coherent numbering of the 2-cells is enough for determining the winding number value of all the points in the plane. For this purpose, we can use a combinatorial rule proposed by Mobius in 1865 [18]. The rule says that two contiguous regions that are separated by a 1-cell are numbered with a value that must differ by exactly 1. The winding number of the region on the left is greater, considering the curve's orientation. This method leads to a unique numbering of the space considering that the winding number in the non-compact region, to whom we will be referring as \(A_{0}\), is known and equal to \(0\) for all of its points. This is true because since is not bounded by \(\mathbf{f}(\partial W)\), differently from the other 2-cells of \(C(\gamma)\), we know that \(A_{0}\subseteq\mathbb{R}^{2}\backslash\mathbf{f}(W)\). This implies, from Definition 2, that for any \(\mathbf{p}\in A_{0}\), \(\eta(\gamma,\mathbf{p})=0\). As a direct application of Mobius rules, a method proposed by Alexander [17] allows a coherent numbering of the regions only through an analysis of the tangent vectors to the curve on its self-intersections. 
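Before detailing Alexander's rules, the following minimal sketch (Python; the disk-shaped regions are hypothetical stand-ins for the winding sets of Figure 12) shows how (25) turns winding sets into winding numbers once the sets are known.

```python
def eta_from_winding_sets(p, winding_sets):
    """Winding number via (25): eta(gamma, p) = sum_i chi_{W_i}(p); each
    winding set W_i is represented here by a membership predicate."""
    return sum(1 for W in winding_sets if W(p))

# Hypothetical nested regions standing in for W_1 and W_2 of Figure 12:
# W_1 a disk of radius 2, W_2 a smaller disk of doubly-swept points inside it.
W1 = lambda p: p[0] ** 2 + p[1] ** 2 <= 4.0
W2 = lambda p: (p[0] - 0.5) ** 2 + p[1] ** 2 <= 0.25
print(eta_from_winding_sets((0.5, 0.0), [W1, W2]))  # -> 2 (explored twice)
print(eta_from_winding_sets((1.5, 0.0), [W1, W2]))  # -> 1
print(eta_from_winding_sets((3.0, 0.0), [W1, W2]))  # -> 0 (outside f(W))
```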
Let \(\mathbf{v}\) be a vertex of \(\gamma\) represented by the pair \((t_{0},t_{1})\). Considering the assumptions adopted for \(\gamma\), a self-intersection \(\mathbf{v}\) will divide the plane locally into four regions. There are only two rules for numbering these four regions, according to whether \(\dot{\gamma}(t_{1})\) goes from the right to the left or from the left to the right with respect to \(\dot{\gamma}(t_{0})\), as illustrated in Figure 14. In Figures 15, 16 and 17 we consecutively apply the Alexander numbering rules to the example considered previously. We start by numbering the regions around \(\mathbf{v}_{0}\), Figure 15. We assume that \(A_{0}\) has a winding number value of \(0\) and that the later self-intersection, represented by the dashed line, crosses the previous one from the left to the right. The same is done around vertices \(\mathbf{v}_{1}\) and \(\mathbf{v}_{2}\) in Figures 16 and 17, respectively, resulting in a complete characterization of the plane in terms of winding number values.

Once a numbering is obtained for all the regions according to Alexander's rules, we can construct the winding sets \(\mathbb{W}_{i}\) of \(\gamma\), for \(i\in\mathbb{N}\), as the closure of the union of the regions with a number greater than or equal to \(i\) [16]. Then, the winding number for a point can be easily computed using (25).

### _Computing the Extended Winding Number of \([\gamma]\)_

If the sensor's contour \(\gamma\) is uncertain, the winding sets associated with the mission will also be uncertain. An uncertain set can be represented as a thick set; the following definition was proposed in [19].

**Definition 4**.: _We denote by \(\llbracket\mathbb{X}\rrbracket\in\mathbb{I}\mathscr{P}(\mathbb{R}^{n})\) a thick set of \(\mathbb{R}^{n}\) if there are two subsets of \(\mathbb{R}^{n}\), called the lower bound \(\mathbb{X}^{-}\) and the upper bound \(\mathbb{X}^{+}\), such that_

\[\llbracket\mathbb{X}\rrbracket=\llbracket\mathbb{X}^{-},\mathbb{X}^{+}\rrbracket=\{\mathbb{X}\in\mathscr{P}(\mathbb{R}^{n})\ |\ \mathbb{X}^{-}\subseteq\mathbb{X}\subseteq\mathbb{X}^{+}\} \tag{26}\]

_A thick set partitions the environment into three zones: the clear zone \(\mathbb{X}^{-}\), the penumbra \(\mathbb{X}^{+}\backslash\mathbb{X}^{-}\) (both illustrated in Figure 18) and the dark zone \(\mathbb{R}^{n}\backslash\mathbb{X}^{+}\)._

Fig. 16: Numbering of regions according to Alexander around \(\mathbf{v}_{1}\).

Fig. 17: Numbering of regions according to Alexander around \(\mathbf{v}_{2}\).

Fig. 18: Representation of thick sets.

Let \(\mathbb{W}_{i}^{\gamma}\), with \(i\in\mathbb{N}\), be a winding set associated with a cycle \(\gamma\). To the set \([\gamma]\) of all the possible sensor's contours we associate \(\llbracket\mathbb{W}_{i}\rrbracket=[\mathbb{W}_{i}^{-},\mathbb{W}_{i}^{+}]\), such that

\[\mathbb{W}_{i}^{-} =\bigcap_{\gamma\in[\gamma]}\mathbb{W}_{i}^{\gamma} \tag{27}\]
\[\mathbb{W}_{i}^{+} =\bigcup_{\gamma\in[\gamma]}\mathbb{W}_{i}^{\gamma} \tag{28}\]

In the exploration context, the clear zone of \(\llbracket\mathbb{W}_{i}\rrbracket\), represented by \(\mathbb{W}_{i}^{-}\), translates into a set of points that were certainly explored at least \(i\) times. Analogously, the dark zone \(\mathbb{R}^{2}\backslash\mathbb{W}_{i}^{+}\) is a set of points that have a coverage measure smaller than \(i\), independently of which of the functions in \([\mathbf{x}]\) is the ground truth.
The penumbra \(\mathbb{W}_{i}^{+}\backslash\mathbb{W}_{i}^{-}\) is a set of points whose coverage measure is equal to \(i\) for some \(\gamma\in[\gamma]\). We redefine the characteristic function to deal with thick sets on the plane: we have \([\chi]:\mathbb{R}^{2}\to\mathbb{IN}_{0}\) and

\[[\chi]_{\llbracket\mathbb{W}_{i}\rrbracket}(\mathbf{p})=\begin{cases}[1,1],&\text{if }\mathbf{p}\in\mathbb{W}_{i}^{-},\\ [0,1],&\text{if }\mathbf{p}\in\mathbb{W}_{i}^{+}\backslash\mathbb{W}_{i}^{-},\\ [0,0],&\text{otherwise}\end{cases} \tag{29}\]

Then, we have

\[\llbracket\overline{\eta}\rrbracket([\gamma],\mathbf{p})=\sum_{i>0}[\chi]_{\llbracket\mathbb{W}_{i}\rrbracket}(\mathbf{p}) \tag{30}\]

In Figure 19 we have an illustration of the thick sets \(\llbracket\mathbb{W}_{1}\rrbracket\) and \(\llbracket\mathbb{W}_{2}\rrbracket\) for the example considered throughout this paper, and in Figure 20 the resultant coverage measure considering these sets. This defines the notion of uncertain winding number (and uncertain coverage measure). Under some assumptions, given below, that are realistic for applications, we need only a slightly generalized Alexander rule to efficiently compute the uncertain coverage measure. As in [20], we will suppose that \([\mathbf{x}]\) is given by two time-varying sets: an outer approximation of the set of the robot's positions, \([\mathbf{s}](t)\), at time \(t\), in the plane, and \([\mathbf{v}](t)\), an outer approximation of the set of linear velocities of the robot, at time \(t\), in the plane. Hence:

\[\begin{array}{cccc}[\mathbf{s}]:&\mathbb{R}&\to&\mathbb{R}^{2}\\ [\mathbf{v}]:&\mathbb{R}&\to&\mathbb{R}^{2}\end{array}\]

Consider the following notion of uncertain self-intersection: these are points \(\mathbf{p}\) in the plane such that \(\mathbf{p}\in[\mathbf{s}](t_{1})\cap[\mathbf{s}](t_{2})\) for some \(t_{1}<t_{2}\). The set of pairs of such times \(t_{1}\), \(t_{2}\), for a given \(\mathbf{p}\), is denoted by \(T_{x}\). Supposing that for every uncertain self-intersection \(\mathbf{p}\), for all \((t_{1},t_{2})\in T_{x}\), for all \(v_{1}\in[\mathbf{v}](t_{1})\), \(v_{2}\in[\mathbf{v}](t_{2})\), \(v_{1}\) is not collinear with \(v_{2}\) (or \(v_{1}\) and \(v_{2}\) are transverse to each other), we get the uncertain Alexander rules illustrated in Figure 21.

### _Implementation_

The method above was numerically implemented using the Codac library [21].1

Footnote 1: The code is available on GitHub: github.com/marialuizacvianna/extended_winding

We assume that the algorithm receives as input a well-defined function or a tube describing the robot's pose \(\mathbf{x}\), speed \(\dot{\mathbf{x}}\) and acceleration \(\ddot{\mathbf{x}}\). From these inputs, the sensor's contour \(\gamma\) is obtained through the concatenation of \(\mathbf{x}=\mathbf{f}(0,[0,T])\) with \(\mathbf{x}_{aux1}=\mathbf{f}([0,L],T)\), \(\mathbf{x}_{R}=\mathbf{f}(L,[0,T])\) and \(\mathbf{x}_{aux2}=\mathbf{f}([0,L],0)\), as illustrated in Figure 3, and we have

\[\gamma=\mathbf{x}*\mathbf{x}_{aux1}*\mathbf{x}_{R}^{-1}*\mathbf{x}_{aux2}^{-1}\]

where \(\mathbf{x}_{R}^{-1}(t)=\mathbf{x}_{R}(T-t)\) and \(\mathbf{x}_{aux2}^{-1}(t)=\mathbf{x}_{aux2}(T-t)\). We parameterize \(\gamma\) with \(\tau\in[0,1]\), which is not a time variable. The speed vector along \(\gamma\) can be computed using \(\dot{\mathbf{x}}\) and \(\ddot{\mathbf{x}}\). The next step in the algorithm is to compute the set of time pairs \(\mathbb{T}\) that represent the self-intersections of \(\gamma\).
\[\mathbb{T}=\{(\tau_{1},\tau_{2})\in[0,1]^{2}\,|\,\tau_{1}<\tau_{2}\text{ and }\gamma(\tau_{1})=\gamma(\tau_{2})\}\] This set can be obtained with the algorithm presented in [22], available in [21]. Fig. 20: Coverage measure considering the uncertain winding sets associated with \([\gamma]\). For the example considered throughout this paper, first presented in Figure 2, we obtain the following set of self-intersections \[\mathbb{T}=\{(\tau_{1},\tau_{4}),(\tau_{2},\tau_{5}),(\tau_{6},\tau_{7}),(\tau_{3},\tau_{8})\}\] where \(0\leq\tau_{1}<\tau_{2}<\ldots<\tau_{8}\leq 1\). These pairs correspond to the vertices illustrated in Figure 13: \(\mathbf{v}_{0}=\gamma(\tau_{3})=\gamma(\tau_{8})\), \(\mathbf{v}_{1}=\gamma(\tau_{6})=\gamma(\tau_{7})\), \(\mathbf{v}_{2}=\gamma(\tau_{1})=\gamma(\tau_{4})\) and \(\mathbf{v}_{3}=\gamma(\tau_{2})=\gamma(\tau_{5})\). Then, the set of 1-cells of \(\gamma\) can be defined as \[\mathbb{E}=\{a_{0},a_{1},a_{2},a_{3},a_{4},a_{5},a_{6},a_{7}\}\] where \(\partial a_{i}=\gamma(\tau_{i+1})-\gamma(\tau_{i})\) for \(i=1,\ldots,\#\mathbb{E}-1\), and \(\partial a_{0}=\gamma(\tau_{1})-\gamma(\tau_{\#\mathbb{E}})\). Determining whether a vector \(\mathbf{a}\) crosses another vector \(\mathbf{b}\) from the right to the left can be mathematically translated as the cross product \(\mathbf{a}\times\mathbf{b}\) being positive. In this way, to each of the vertices represented by a pair \((\tau_{i},\tau_{j})\in\mathbb{T}\) we associate an update value \(u\in\{-1,+1\}\) that indicates whether \(\dot{\gamma}(\tau_{j})\) crosses \(\dot{\gamma}(\tau_{i})\) from the right to the left (\(u=-1\)) or from the left to the right (\(u=+1\)). We use the update value of each edge's initial vertex and the combinatorial method presented in this section to define a winding number value for the areas on its right and left sides. Finally, the winding sets can easily be obtained knowing that \(\partial\mathbb{W}_{i}\) is a concatenation of the edges in \(\mathbb{E}\) for which the value of the area on the left side is greater than or equal to \(i\). We choose to represent sets using interval arithmetic, and we rely on interval analysis tools [23], such as separators and a Set Inversion Via Interval Analysis (SIVIA) algorithm [24], for classifying, in terms of their coverage measure, all the points inside an area of interest. The set inversion algorithm bisects the environment, up to a precision chosen by the user, such that the plane is divided into boxes that do not intersect \(\gamma^{+}\) and \(\gamma^{-}\). The advantage of this method is that, from the properties of the topological degree, all the points belonging to a set in the plane that does not intersect the considered cycles have the same winding number value. Therefore, this method limits the number of computations that have to be done to determine the winding number for all the points inside an area. For boxes \([\mathbf{b}]\in\mathbb{IR}^{2}\) for which \([\mathbf{b}]\cap\gamma^{+}\neq\emptyset\) or \([\mathbf{b}]\cap\gamma^{-}\neq\emptyset\) holds, an uncertain winding number value is computed.
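The crossing test described above reduces to the sign of a 2-D cross product. The following minimal sketch (our own illustrative convention and names, not the library's API) computes the update value \(u\) from the two tangent vectors at a self-intersection; in the uncertain setting, an interval evaluation of the same cross product over the boxes \([\mathbf{v}](t_{1})\) and \([\mathbf{v}](t_{2})\) determines \(u\) whenever the resulting interval does not contain zero, which is exactly the transversality assumption made earlier:

```python
# Sketch only: update value u at a self-intersection (tau_i, tau_j).
def update_value(v_i, v_j):
    """v_i, v_j: tangent vectors of gamma at tau_i and tau_j (tau_i < tau_j).
    v_i x v_j > 0 means the later branch crosses the earlier one from the
    right to the left (u = -1); otherwise it crosses left to right (u = +1)."""
    cross = v_i[0] * v_j[1] - v_i[1] * v_j[0]
    if cross == 0.0:
        raise ValueError("collinear tangents: the Alexander rule does not apply")
    return -1 if cross > 0.0 else +1

print(update_value((1.0, 0.0), (0.0, 1.0)))  # -1: a right-to-left crossing
```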
To compute these uncertain values, we use the following adaptation of the characteristic function for thick sets that accepts boxes of \(\mathbb{R}^{2}\) on the input, \([\chi]:\mathbb{IR}^{2}\to\mathbb{IN}_{0}\): \[[\chi]_{[\![\mathbb{W}_{i}]\!]}([\mathbf{b}])=\begin{cases}[1,1],&\text{if for all }\mathbf{p}\in[\mathbf{b}],\ \mathbf{p}\in\mathbb{W}_{i}^{-},\\ [0,0],&\text{if for all }\mathbf{p}\in[\mathbf{b}],\ \mathbf{p}\notin\mathbb{W}_{i}^{+},\\ [0,1],&\text{otherwise}\end{cases} \tag{31}\] ## V Experiments We apply the method presented in this paper to a dataset acquired during a mission performed by the AUV Daurade, Figure 22, in November 2015. This robot was built by ECA Robotics and used by the Direction Générale de l'Armement - Techniques Navales (DGA - TN) and by the Service Hydrographique et Océanographique de la Marine (SHOM). The mission took place in the roadstead of Brest (Brittany, France); it consists of a 45-minute survey path. Daurade explores using two side-scan sonars, one covering its right side and the other its left side. The visible area of each sensor can be individually modeled as a line-sweep sensor on the plane. Assuming a configuration in which there is no visibility gap and no overlap between the ranges of visibility of the two sensors, the whole can be represented as a single line-sweep sensor. Fig. 21: Uncertain Alexander numbering with \(w\in\mathbb{Z}\): (a): \([\mathbf{v}](t_{2})\) comes from the right; (b): \([\mathbf{v}](t_{2})\) comes from the left. Fig. 22: The AUV Daurade. The robot's pose underwater is estimated by the integration of data acquired by an Inertial Measurement Unit (IMU) coupled with a Doppler Velocity Logger (DVL), and a pressure sensor for depth estimation. Initially, we assume that this estimation \(\tilde{\mathbf{x}}\) is exact, as illustrated in Figure 23, and that the robot maintains a constant depth during the mission, resulting in the sensor's contour \(\tilde{\gamma}\) presented in Figure 24. Figure 25 displays the separation of \(\tilde{\gamma}\) into \(\tilde{\gamma}^{+}\) and \(\tilde{\gamma}^{-}\). The characterization of the explored area is done by calculating the winding numbers \(\eta(\tilde{\gamma}^{+},\mathbf{p})\) and \(\eta(\tilde{\gamma}^{-},\mathbf{p})\) for all \(\mathbf{p}\) inside the area considered of interest. The algorithm proposed in Section IV is used for this purpose. Fig. 23: Estimated robot's trajectory \(\tilde{\mathbf{x}}\) without uncertainty. The robot is represented at its final pose at the end of the mission. In Figure 26 we can see the resulting paving. Uncertain boxes, surrounding the contours \(\tilde{\gamma}^{+}\) and \(\tilde{\gamma}^{-}\), are represented in black. Fig. 26: Result of the SIVIA algorithm for the classification of the explored area. Boxes in black have an uncertain coverage measure value. The uncertain winding number value for each of these boxes can also be determined with the proposed algorithm; in Figure 27 we give an overview of the classification of these boxes for a part of the mission. Fig. 27: Coverage measure for boxes that intersect the sensor's contour. Then, if we take into consideration the uncertainty around the sensors' measurements, propagated through integration during pose estimation, we obtain \([\mathbf{x}]\), Figure 28. We represent the uncertain pose by a guaranteed envelope of the ground truth \(\mathbf{x}^{*}\) using a box-valued function named a tube in the interval analysis literature. Fig. 28: The inclusion function \([\mathbf{x}]\). The sonar's contour \([\gamma]\) will also be uncertain and represented by a tube, as displayed in Figure 29. Fig. 29: \([\gamma]\). In the considered scenario, some self-intersections of \([\gamma]\) do not respect the conditions established by our algorithm, notably the non-collinearity condition that ensures that the
environment is divided into four regions around the self-intersection, so that the Alexander rules can be applied for numbering. As a result, the problem at hand cannot be directly solved using the proposed method. We apply, however, our algorithm around one uncertain self-intersection in \([\gamma]\) that respects our limitation, in order to exemplify the extension of the Alexander algorithm to uncertain curves, as presented in Figure 21. The result is illustrated in Figure 30. One can note that the method presented in this paper can still be used to characterize the whole environment in this situation. For that, the mission must be divided into multiple missions, along the time of exploration, each of which individually respects the required constraints. ## VI Conclusion In conclusion, this article has extended the link between the topological degree and the line-sweep exploration problem, allowing for a characterization of the area explored by a mobile robot in a two-dimensional plane. An interval analysis-based algorithm for computing the winding number for all the points inside a set has also been proposed, and its efficiency and scalability make it suitable for deployment on resource-constrained robotic platforms. A real-world experiment has shown that the proposed algorithm consistently produces reliable characterizations of the explored area, but it has also shown the limitations of the method, which should be addressed by future work. Other future research directions may involve extending the algorithm to three-dimensional environments and exploration sensors with a two-dimensional visible area. Furthermore, the algorithm's applicability in collaborative multi-robot systems and its integration with simultaneous localization and mapping (SLAM) techniques could be explored. For the latter, we could imagine a scenario where the coverage measure is used to reduce the exteroceptive data that has to be compared to find possible feature matches, therefore reducing the complexity of SLAM algorithms. Finally, we will examine the link between uncertain topological degrees and methods based on persistent homology, as in e.g. [7]. ## Acknowledgments We acknowledge the support of the "Engineering of Complex Industrial Systems" Chair Ecole Polytechnique-ENSTA Paris-Telecom Paris, partially funded by DGA/AID, Naval Group, Thales and Dassault Aviation.
This paper presents a method for determining the area explored by a line-sweep sensor during an area-covering mission in a two-dimensional plane. Accurate knowledge of the explored area is important for various applications in robotics (mapping, surveillance, coverage optimization, etc.). The proposed approach relates the coverage measure of the environment to the topological degree in the plane in order to characterize the explored area. Furthermore, the approach is extended using interval analysis to handle uncertain coverage measure values. This last contribution makes it possible to reliably characterize the explored area, which plays an important role given the critical nature of area-covering missions. Finally, the paper proposes a new algorithm that computes the topological degree in the two-dimensional plane for all the points of an area of interest, in contrast to existing solutions that compute the topological degree for a single point.
2309.14378
Randomized term grouping over physical law on digital quantum simulation
We introduce a randomized algorithm based on qDrift to compute Hamiltonian dynamics on digital quantum computers. We frame it as physDrift because conservation laws in physics are obeyed during the evolution of arbitrary quantum states. Empirically we achieved better spectral error reduction with a hydrogen chain model compared to previous protocols. Noisy models are investigated as well, and we characterised them in the circuit with different schemes, i.e. the attenuation of the measured expectation value is held fixed by keeping the circuit depth the same, and depolarising error is simulated with randomly applied Pauli gates. This makes our proposal particularly feasible for implementation and testing on present-day noisy hardware.
Songqinghao Yang
2023-09-24T13:15:39
http://arxiv.org/abs/2309.14378v2
# Randomized term grouping over physical law ###### Abstract We introduce a randomized algorithm based on qDrift to compute Hamiltonian dynamics on digital quantum computers. We frame it as physDrift because conservation laws in physics are obeyed during the evolution of arbitrary quantum states. Empirically we achieved better spectral error reduction with a hydrogen chain model compared to previous protocols. Noisy models are investigated as well, and we characterised them in the circuit with different schemes, i.e. the attenuation of the measured expectation value is held fixed by keeping the circuit depth the same, and depolarising error is simulated with randomly applied Pauli gates. This makes our proposal particularly feasible for implementation and testing on present-day noisy hardware. ## I Introduction One of the advantages of quantum computing over classical computation is the ability to simulate complex quantum systems. This idea was famously proposed by Richard Feynman[1]: "Let the computer itself be built of quantum mechanical elements which obey quantum mechanical laws." In the past few decades, various algorithms have been put forward. Seth Lloyd[2] first formulated the problem for local Hamiltonian simulation, where k-local means that, for \(H=\sum_{j}^{L}H_{j}\), each term acts on at most k qubits rather than all n qubits1: given any Hamiltonian evolution unitary \(U=e^{-iHt}\), we can find a set of quantum gates that approximates the evolution: Footnote 1: In this case the problem is reduced to polynomial space: \(L=\text{poly}(n)\), since \(L\leq\sum_{j=1}^{k}\binom{n}{j}\leq k\binom{n}{k}=\mathcal{O}(n^{k})\) \[U\approx V=V_{1}V_{2}\ldots V_{N} \tag{1}\] He then explicitly used a compilation method called the product formula, or trotterisation, to find out how the error \(\epsilon(U)=\|e^{-iHt}-V\|^{2}\) scales with the system parameters. Consider a many-body Hamiltonian \(H=\sum_{k=1}^{L}h_{k}H_{k}\), where \(H_{k}\) is one sub-term (usually a Pauli string) in the Hamiltonian and \(h_{k}\) is the associated coefficient. The first-order trotterisation (or Lie-Trotter formula) is defined as: \[S_{1}(t)=\prod_{k=1}^{L}\exp(-ih_{k}H_{k}t) \tag{2}\] and the number of Trotter steps required scales as: \[N_{t}=\mathcal{O}(\frac{(tL\Lambda)^{2}}{\epsilon}) \tag{3}\] where \(\Lambda:=\max_{k}\|h_{k}\|\). So to make a better approximation we need to increase the number of time steps \(N_{t}\): \[e^{-i\sum_{k=1}^{L}h_{k}H_{k}t}\approx S_{1}(\frac{t}{N})^{N}=S_{1}(\delta t)S_{1}(\delta t)\ldots S_{1}(\delta t) \tag{4}\] Built upon that, higher-order3 formulae were derived for better scaling[3] in terms of Trotter errors, but they come with a trade-off between the number of gates required and the precision per Trotter step; we demonstrate in Appendix B that the balance is optimised around the \(2^{nd}\) and \(4^{th}\) order: Footnote 3: Suzuki product formulae are symmetric and therefore only defined for even orders. \[S_{2}(t)=\prod_{k=1}^{L}\exp\biggl{(}-ih_{k}H_{k}\frac{t}{2}\biggr{)}\prod_{k=L}^{1}\exp\biggl{(}-ih_{k}H_{k}\frac{t}{2}\biggr{)} \tag{5}\] \[S_{2k}(t)=S_{2k-2}(p_{k}t)^{2}S_{2k-2}\left((1-4p_{k})t\right)S_{2k-2}(p_{k}t)^{2} \tag{6}\] where \(p_{k}=1/(4-4^{1/(2k-1)})\), and they scale as: \[N=\mathcal{O}(\frac{(tL\Lambda)^{1+\frac{1}{2k}}}{\epsilon^{1/2k}}) \tag{7}\] The better accuracy comes from the fact that, with the recursive formula applied in both the forward and backward directions, some of the error terms arising from non-commutativity cancel out.
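As an illustration of Eqs. (2)-(4), the following sketch (our own toy example, not from the paper; it assumes numpy and scipy are available) compares the first-order product formula against exact evolution for a two-term Hamiltonian; per Eq. (3), the spectral error should fall roughly as \(1/N\):

```python
# Sketch only: first-order trotterisation of a toy two-term Hamiltonian.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

H1 = np.kron(Z, Z)                # h_1 = 1.0
H2 = 0.5 * np.kron(X, np.eye(2))  # h_2 = 0.5, does not commute with H1
t = 1.0
exact = expm(-1j * (H1 + H2) * t)

for N in (1, 10, 100):
    step = expm(-1j * H1 * t / N) @ expm(-1j * H2 * t / N)   # S_1(t/N)
    approx = np.linalg.matrix_power(step, N)                 # S_1(t/N)^N
    print(N, np.linalg.norm(exact - approx, ord=2))          # spectral error
```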
More advanced techniques like classical optimization[4], linear combination of unitaries (LCU)[5], quantum signal processing[6] and truncated Taylor series[7] have since demonstrated improvements in different aspects. For example, with the truncation order for LCU the number of gates needed per step is approximately: \[\mathcal{K}=\mathcal{O}(\frac{\log(\frac{1}{\epsilon})}{\log\log(\frac{1}{\epsilon})}) \tag{8}\] while trotterisation has a power-law dependency, e.g. \(\mathcal{O}(\frac{1}{\epsilon^{1/4}})\) at fourth order. Not only does the Taylor series approach give logarithmically better scaling, it also comes with better empirical results[8] (tightening the bound between theoretical predictions and empirical results has been heavily discussed[9]). We categorized these methods in FIG.1 above. But product formulae remain the most popular, particularly for near-term hardware simulation[10; 11]. This is because: 1. The norm of the wavefunction is preserved: errors in expectation values are expected to oscillate, whereas in non-unitary evolution errors accumulate, e.g. \[\|1-iHt\|\approx 1+\|H\|t>1\] with LCU. 2. It has a simple implementation with no overhead (the unitary operation can be implemented deterministically, without ancilla qubits). However, despite tighter error bounds being achieved for trotterisation[8] and progress in tuning the parameters, e.g. \(p_{k}\)[16], to make the simulation more accurate, problems remain with the intrinsic properties of the algorithm: the exponential dependency of the recursive form, \(\mathcal{O}(5^{k})\), and the inclusion of the number of Hamiltonian terms L. The second issue is particularly prominent in large systems like quantum chemistry, where the complex electronic structure involves a lot of Pauli strings after transformation. Consequently, several protocols have focused on non-deterministic constructions. In contrast to deterministic algorithms that produce unitary quantum circuits, randomized algorithms either randomly permute the ordering of unitary gates during simulation[17], which we will refer to as 'random permutation', or utilize the mixing lemma[18; 19] on quantum channels (qDrift[20]) to obtain better asymptotic scaling. In this paper we concentrate on these two major approaches and demonstrate the advantage of our algorithm, which incorporates physical laws, both theoretically and empirically. Specifically we show that simulating the evolution while preserving particle number makes the state vector more accurate, which we refer to as physDrift. We will show subsequently that this can be achieved by term grouping. Deterministic methods are also included for benchmarking. We demonstrate that physDrift has smaller error and significantly reduces the leakage to the unphysical subspace at both short and long times. That is to say, the conservation law is obeyed during evolution. Finally, to implement the circuits on real hardware we need to consider errors. So depolarising error is added and the simulation is again benchmarked for comparison. In the rest of the paper, we first provide some background on the model we use in section II. Then we briefly review qDrift as well as random permutation in section III. In section IV the protocol for physDrift is laid out and we evaluate its theoretical and empirical performance against the other algorithms. In the second part of that section the symmetric protection technique is introduced for deterministic methods, and we benchmark all algorithms on a real chemical system, e.g. hydrogen chains.
We summarize in section V and comment on possible future improvements. ## II Background The Hamiltonian evolution operator is hard to implement directly on quantum circuits. In this section we explain the method used to decompose and transform the unitary \(e^{-iHt}\) and then integrate it with near-term hardware. ### Chemistry model We focus on electronic structure problems, which can be initialized with an electronic model (the Molecular Hamiltonian, or the Fermi-Hubbard model in physics): \[H=\sum_{pq}h_{pq}a_{p}^{\dagger}a_{q}+\frac{1}{2}\sum_{pqrs}h_{pqrs}a_{p}^{\dagger}a_{q}^{\dagger}a_{r}a_{s}+h_{nuc} \tag{9}\] where \(h_{nuc}\) is the nuclear energy; we ignore it in the Born-Oppenheimer approximation, which makes it a constant. Figure 1: Summary of state-of-the-art methods in digital quantum simulation (DQS). We mention alternative approaches briefly. The adaptive Trotter framework[12] was recently proposed with a better empirical error bound; however, this method involves feed-forwarding circuits and is not suitable for near-term application. Yoshida[13] used a leapfrog scheme for higher orders and it has been improved since[14]. With a recent proposal of a specific condition[15] to reduce the exponential parameter dependency of the multiproduct formula[5], the approach of parallelising product formulae has become more practical. \(a_{p},a_{p}^{\dagger}\) are fermionic operators satisfying: \[a_{1}^{\dagger}\left|0\right\rangle_{1}=\left|1\right\rangle_{1},\quad a_{1}^{\dagger}\left|1\right\rangle_{1}=0,\quad a_{1}\left|0\right\rangle_{1}=0,\quad a_{1}\left|1\right\rangle_{1}=\left|0\right\rangle_{1} \tag{10}\] and they anti-commute with operators with different labels. The coefficients are defined as: \[h_{pq}=\int_{-\infty}^{\infty}\psi_{p}^{*}(x_{1})\left(-\frac{\nabla^{2}}{2}+V(x_{1})\right)\psi_{q}(x_{1})\mathrm{d}^{3}x_{1} \tag{11}\] and \[h_{pqrs}=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\psi_{p}^{*}(x_{1})\psi_{q}^{*}(x_{2})\left(\frac{1}{\left|x_{1}-x_{2}\right|}\right)\psi_{r}(x_{2})\psi_{s}(x_{1})\mathrm{d}^{3}x_{1}\mathrm{d}^{3}x_{2} \tag{12}\] where \(V(x)\) is the mean-field potential. In general \(h_{pq}\) represents the integral containing the kinetic energy and the electron-nucleus attraction, while \(h_{pqrs}\) is the two-body interaction term. Together with the creation and annihilation operators, we can classify the terms in the Molecular Hamiltonian as in TABLE.I. In particular, we have integrated \(h_{pqpq}\) into \(h_{pqqp}\). \(h_{pqpq}\) is an exchange coupling - it is only non-zero when the two electrons have the same spin, so that they have an antisymmetric spatial state, which reduces the Coulomb repulsion. This term therefore tends to induce ferromagnetic coupling between neighbouring spins. As we make the separation between sites bigger, the \(h_{pqpq}\) term gets exponentially smaller, and we can consider only the \(h_{pqqp}\) term. Correlated excitation between electrons means that the interaction depends on the location of the electrons, i.e. if electron 1 in r jumps to p, electron 2 in q stays in q.
This can be seen directly from the expansion of the operator using the fermionic anti-commutation relations: \[a_{p}^{\dagger}a_{q}^{\dagger}a_{q}a_{r}=(a_{p}^{\dagger}a_{r}+a_{r}^{\dagger}a_{p})(a_{q}^{\dagger}a_{q}) \tag{13}\] ### Transformation Using the Jordan-Wigner transformation[21], we can map fermionic operators to the space spanned by Pauli operators: \[a_{j}^{\dagger}=\begin{bmatrix}0&0\\ 1&0\end{bmatrix}=\frac{X_{j}-iY_{j}}{2} \tag{14}\] \[a_{j}=\begin{bmatrix}0&1\\ 0&0\end{bmatrix}=\frac{X_{j}+iY_{j}}{2} \tag{15}\] To give a concrete example, consider the excitation term: \[h_{pq}(a_{p}^{\dagger}a_{q}+a_{q}^{\dagger}a_{p})=\frac{h_{pq}}{2}\left(\prod_{j=q+1}^{p-1}Z_{j}\right)(X_{p}X_{q}+Y_{p}Y_{q}) \tag{16}\] Other mappings like the Bravyi-Kitaev transformation[22] and the parity transformation[23] are also available, but here we will stick to the simplest approach and leave improvements for future work. ### Pauli gadgets With the Hamiltonian expressed as Pauli strings, we consider the circuit in FIG.2, which is basically a controlled rotation gate around Z, where the CNOT gates take the role of a parity checker connecting all the qubits together. The factor of 2 in the rotation gate comes from the fact that the Pauli group, which belongs to \(\mathrm{SU}(2)\), is actually a double cover of \(\mathrm{SO}(3)\). \begin{table} \begin{tabular}{c c} Physical meaning & Operator \\ \hline electron number counting & \(h_{pp}a_{p}^{\dagger}a_{p}\) \\ electron excitation & \(h_{pq}a_{p}^{\dagger}a_{q}\) \\ Coulomb repulsion & \(h_{pqqp}a_{p}^{\dagger}a_{q}^{\dagger}a_{q}a_{p}\) \\ correlated excitation & \(h_{pqqr}a_{p}^{\dagger}a_{q}^{\dagger}a_{q}a_{r}\) \\ scatter & \(h_{pqrs}a_{p}^{\dagger}a_{q}^{\dagger}a_{r}a_{s}\) \\ \end{tabular} \end{table} Table 1: The five classes of sub-Hamiltonians with second-quantized operators. In general there are few terms in each class because of symmetric reduction. Figure 2: Quantum circuit realisation of \(U(t)=e^{-itZZZ}\). We just need to use the relations \(HZH=X\) and \(Y=(SH)Z(SH)^{\dagger}\), where \(\mathrm{H}\) is the Hadamard gate and \(\mathrm{S}\) is the phase gate, to compute universal Pauli strings as in FIG.3. To simulate all Pauli strings we need to concatenate our circuits using Theorem 1. **Theorem 1**.: _For any two sub-terms \(H_{i}\), \(H_{j}\) in the Hamiltonian \(H=\sum_{k}^{L}h_{k}H_{k}\), if they satisfy_ \[[H_{i},H_{j}]=0 \tag{17}\] _then we have:_ \[e^{-i(H_{i}+H_{j})t}=e^{-iH_{i}t}e^{-iH_{j}t} \tag{18}\] So if the Pauli strings commute with one another, directly connecting the Pauli gadgets together gives us the exact evolution operator. We also get the intuition that with more commuting terms, the error goes down. Bounds have been proven through both commutator[8] and anti-commutator[24] relations. ## III Randomized protocols In this section we begin by reviewing the density matrix: a classical distribution \(p_{i}\) of quantum states gives a mixed state and must be described using a density matrix, \[\rho=\sum_{i=1}^{m}p_{i}\underbrace{\left|i\right\rangle}_{\text{pure state}}\left\langle i\right|\] If \(m>1\), \(\rho\) is a mixed density matrix. We remind the reader that any \(\rho\) obeys the following properties: * \(Tr(\rho)=1\) * \(\rho\geq 0\) * Hermitian We also clarify some of the notation. All capital calligraphic letters refer to quantum channels, which evolve density matrices by unitary operators, e.g. \(\mathcal{U}=e^{iHt}\rho e^{-iHt}\).
In the remainder of the paper we use \(\mathcal{U}\) as the actual evolution channel and \(\mathcal{V},\mathcal{E}\) as approximations. We will sometimes utilize the Liouvillian representation for a quantum channel: \[e^{iHt}\rho e^{-iHt}=e^{t\mathcal{L}}(\rho)=\sum_{k=0}^{\infty}\frac{t^{k}\mathcal{L}^{k}(\rho)}{k!} \tag{19}\] where \(\mathcal{L}(\rho)=i(H\rho-\rho H)\). Similarly to the decomposition of the Hamiltonian into sub-terms, we have: \[\mathcal{L}(\rho)=\sum_{j}h_{j}\mathcal{L}_{j}(\rho) \tag{20}\] \[\mathcal{L}_{j}(\rho)=i(H_{j}\rho-\rho H_{j}) \tag{21}\] A quantum channel describes how a particular state or, more usefully, a mixture of quantum states evolves to another set of quantum states. The most general 'channel' is a CPTP (completely positive, trace preserving) map, i.e. something that maps a density matrix onto another density matrix: \[\rho\rightarrow\sum_{i}M_{i}^{\dagger}\rho M_{i},\text{ where }\sum_{i}M_{i}^{\dagger}M_{i}=1 \tag{22}\] where \(M_{i}\) is called a Kraus operator[25]. Thus, classically sampling a series of exponentials defines a valid CPTP map with \(M_{i}=\sqrt{p_{i}}U_{i}\): \[\rho\rightarrow\sum_{i}p_{i}U_{i}^{\dagger}\rho U_{i} \tag{23}\] If we start in a pure state, essentially we can generate the following mixed state: \[\left|\psi\right\rangle\left\langle\psi\right|\rightarrow\sum_{i}p_{i}U_{i}^{\dagger}\left|\psi\right\rangle\left\langle\psi\right|U_{i}=\sum_{i}p_{i}\left|\Psi_{i}\right\rangle\left\langle\Psi_{i}\right| \tag{24}\] ### Metric Now we introduce some useful results to help the analysis of randomized algorithms. We use the diamond norm to calculate the difference between the simulated state vector and the actual one: \[d_{\diamondsuit}(\mathcal{E},\mathcal{U})=\frac{1}{2}\|\mathcal{E}-\mathcal{U}\|_{\diamondsuit}=\sup_{\|\rho\|_{1}=1}\frac{1}{2}\|((\mathcal{E}-\mathcal{U})\otimes\mathbb{I})(\rho)\|_{1} \tag{25}\] where \(\mathbb{I}\) is the identity channel of the same size as \(\mathcal{E}\) and \(\|\cdot\|\) is the Schatten-1 norm, or trace norm. The trace norm can be regarded as a distance between two quantum states: \[\|M\|=\mathrm{tr}(\sqrt{MM^{\dagger}}) \tag{26}\] \[M=\left|\psi\right\rangle\left\langle\psi\right|-\left|\phi\right\rangle\left\langle\phi\right| \tag{27}\] Following the above we can show that, given an operator M, \[\left|\operatorname{tr}(M\mathcal{E})-\operatorname{tr}(M\mathcal{U})\right|\leq 2\|M\|d_{\diamond}(\mathcal{E},\mathcal{U}) \tag{28}\] It is important to note that because the diamond norm evaluates the maximum probability that the two channels can be distinguished over _all quantum states_ (meaning that the smaller the value, the closer the simulation is to reality), this is a worst-case bound for measuring the error, because effectively the error is evaluated over the whole state-vector space. Although other spectral error metrics have been assessed intensively[26], we still adopt the diamond norm as it is the most commonly used one in the literature.
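Since the diamond norm of Eq. (25) is built on the trace norm of Eqs. (26)-(27), a small sketch (our own, assuming numpy is available) may help fix the quantities involved; for pure states the trace norm of the difference equals \(2\sqrt{1-|\langle\psi|\phi\rangle|^{2}}\):

```python
# Sketch only: trace (Schatten-1) norm via singular values, Eqs. (26)-(27).
import numpy as np

def trace_norm(M):
    return np.linalg.svd(M, compute_uv=False).sum()

psi = np.array([1.0, 0.0], dtype=complex)
phi = np.array([np.cos(0.1), np.sin(0.1)], dtype=complex)
M = np.outer(psi, psi.conj()) - np.outer(phi, phi.conj())
# Both printed values agree for pure states.
print(trace_norm(M), 2 * np.sqrt(1 - abs(psi.conj() @ phi) ** 2))
```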
As each quantum gate represents a unitary operator, to relate it to a quantum channel we employ the mixing lemma: **Lemma 1**.: _Let \(\mathcal{U}=U\rho U^{\dagger}\) and \(\mathcal{V}_{j}=V_{j}\rho V_{j}^{\dagger}\) be unitary channels, and let \(p_{j}\) be the probability distribution for the randomized protocol. Suppose that:_ \[\|\sum_{j}(p_{j}V_{j})-U\|\leq a\] _Then the mixed channel \(\mathcal{V}:=\sum_{j}p_{j}\mathcal{V}_{j}\) satisfies_ \[\|\mathcal{V}-\mathcal{U}\|\leq 2a\] Note this is an improved version of the original lemma; the proof is given in Appendix A, as well as in Lemma 3.4 of [27]. ### qDrift With Lemma 1, we first choose an importance sampling distribution \(p_{j}\). Here we consider the original construction, i.e. \(p_{j}=\frac{\left|h_{j}\right|}{\lambda}\), where \(\left|\cdot\right|\) means absolute value and \(\lambda=\sum_{j=1}^{L}\left|h_{j}\right|\). More advanced importance sampling techniques can be used to reduce the cost of the circuit[28; 29], see Appendix C. Then for each sample step4 we sample from the pool \(H_{j}\) and implement the Pauli string with the modified coefficient \(\lambda t/N\), with N being the total sample number. Effectively, with probability \(\prod_{k=1}^{N}\left|h_{j_{k}}\right|/\lambda^{N}\), we have constructed a quantum channel: Footnote 4: qDrift does not have the idea of a time step; instead, it is the number of Pauli strings sampled that specifies the interval of evolution. \[\mathcal{V} =\prod_{k=1}^{N}\sum_{j=1}^{L}p_{j}V_{j}\rho V_{j}^{\dagger} \tag{29}\] \[=\prod_{k=1}^{N}\sum_{j=1}^{L}p_{j}e^{-i\lambda t/N\,H_{j}}\rho e^{i\lambda t/N\,H_{j}}\] (30) \[=\prod_{k=1}^{N}\mathcal{V}_{N}=\prod_{k=1}^{N}\sum_{j=1}^{L}p_{j}e^{\tau\mathcal{L}_{j}} \tag{31}\] where the last line is in Liouvillian form with \(\tau=\frac{\lambda t}{N}\). To express it as a unitary, we can think of the expectation value of the resultant operator, \(\mathbb{E}[V_{k}]=\sum_{j}p_{j}e^{-i\lambda t/N\,H_{j}}\), for a single mixed unitary, and then for the whole process we have \(V=\mathbb{E}[V_{N}\dots V_{1}]\). By the telescoping Lemma 2 (also seen as Lemma 3.6 of [27]) we can derive (Appendix A) the scaling for qDrift as: \[N=\mathcal{O}(\frac{2(t\lambda)^{2}}{\epsilon}) \tag{32}\] **Lemma 2**.: _Telescoping: If \(\mathbb{E}[V]\) and U are bounded in operator norm, \(\|\mathbb{E}[V]\|\), \(\|U\|\leq 1\), then:_ \[\|\mathbb{E}[V]^{N}-U^{N}\|\leq N\|\mathbb{E}[V]-U\|\] It is worth noting that the number of gates required for a fixed error threshold with qDrift is independent of L, the number of terms in the Hamiltonian. Considering an electronic system with \(\mathcal{O}(N^{4})\) terms, meaning an \(L^{2}=\mathcal{O}(N^{8})\) scaling for the Suzuki-Trotter formulation, and the fact that quantum advantage usually emerges around N \(>40\), this randomized approach reduces the cost considerably. We summarize this section with a flow chart in FIG.4 before moving on to random permutation. ### Random permutation We first give the scaling relation for the algorithm[17]: **Theorem 2**.: _Given \(H=\sum_{k=1}^{L}h_{k}H_{k}\) as the Hamiltonian and \(U=e^{-iHt}\) the evolution operator for any time \(t\in\mathbb{R}\), let \(S_{1}(t)=\prod_{k=1}^{L}\exp(-ih_{k}H_{k}t)\) denote the forward Lie-Trotter evolution and \(S_{1}^{rev}(t)=\prod_{k=L}^{1}\exp(-ih_{k}H_{k}t)\) the backward evolution.
We have:_ \[d_{\diamond}(\mathcal{U},\frac{1}{2^{N}}(\mathcal{S}(\delta t)+\mathcal{S}^{rev}(\delta t))^{N}) \leq\frac{(\Lambda tL)^{4}}{N^{3}}\exp(\frac{2(\Lambda tL)}{N})+\frac{2(\Lambda tL)^{3}}{3N^{2}}\exp(\frac{\Lambda tL}{N})\] _where \(\delta t=\frac{t}{N}\), with \(N\) being the number of Trotter steps, and \(\Lambda:=\max_{k}\|h_{k}\|\)._ To see it intuitively, we consider the Taylor expansion of the ideal evolution for the Hamiltonian \(H=A+B\): \[U=e^{(A+B)t}=I+(A+B)t+\frac{1}{2}(A^{2}+AB+BA+B^{2})t^{2}+O(t^{3}) \tag{33}\] Then, if we similarly evaluate the expansion of the first-order Lie-Trotter formula: \[S_{1}(t)=e^{At}e^{Bt}=I+(A+B)t+\frac{1}{2}(A^{2}+2AB+B^{2})t^{2}+O(t^{3}) \tag{34}\] \[S_{1}^{rev}(t)=e^{Bt}e^{At}=I+(A+B)t+\frac{1}{2}(A^{2}+2BA+B^{2})t^{2}+O(t^{3}) \tag{35}\] we can easily see that the second-order error terms cancel, and effectively the scheme improves the simulation to third order: \[e^{(A+B)t}=\tfrac{1}{2}e^{At}e^{Bt}+\tfrac{1}{2}e^{Bt}e^{At}+\mathcal{O}\left(t^{3}\right) \tag{36}\] Or, equivalently, the summation forms a CPTP map: \[\rho\rightarrow\tfrac{1}{2}(e^{iBt}e^{iAt}\rho e^{-iAt}e^{-iBt})+\tfrac{1}{2}(e^{iAt}e^{iBt}\rho e^{-iBt}e^{-iAt}) \tag{37}\] But in general the summation of two operations is not unitary, which means we need to use the mixing lemma, i.e. randomization, to sample a permutation sequence from the pool \(\pi_{k}\in Sym(L)\), where the index k indicates a particular sequence. After obtaining the permutation list we then put the operators in the circuit; for example, with time interval \(\tau\) and [3; 1; 2; 4; 6; 5] being chosen, we evolve with: \[V_{circuit}=e^{i\tau H_{5}}e^{i\tau H_{6}}e^{i\tau H_{4}}e^{i\tau H_{2}}e^{i\tau H_{1}}e^{i\tau H_{3}} \tag{38}\] This is applicable to higher-order Suzuki-Trotter formulae[17], and recently a protocol[30] called SparSto, combining qDrift and random permutation, has narrowed the empirical error bound by a significant margin with convex optimization (see the flow chart in FIG.5), which leads directly to extrapolation for large system sizes. But we can see that, though some randomness is added in the random permutation, the dependence on \(L^{2}\) still exists. On the contrary, qDrift, with its weighted sampling distribution, escapes this problem. To better visualize how randomness affects the error reduction, we use the evolution bars in FIG.6 to represent individual Hamiltonian evolutions, as proposed by Campbell[20]: the blue block with the largest coefficient is sampled the most frequently, comprising roughly half the gates in the whole sequence. Because there is no fixed order of gates as in Trotter, we use the wording sample step instead of time step, and we carefully choose the sample number to match the total simulation time, i.e. the same length of the bar. However, the individual durations of the gates might be different. With this implementation we require fewer gates for the same precision. Figure 4: Flow chart for the qDrift protocol. Figure 5: Flow chart for the SparSto protocol. The reason why randomness works better partly lies in the fact that coherent noise effects are washed out into less harmful stochastic noise[31]. ## IV Particle number conservation In this section we introduce our improvements over the original qDrift. The idea behind our proposal is to sample terms over all Pauli strings in the physical (fermionic) space. In particular, we consider the conservation of particle number, i.e. of electrons, throughout the full evolution. We first illustrate with an example how to keep the total number of electrons constant by applying only terms that make physical sense.
Then we show different sampling tricks to alter the probability distribution in ways that make the empirical results better; a minimal sketch of the grouped sampling is given at the end of this subsection. ### _Pauli grouping_ With qDrift we sample a single Pauli string at each sample step \(N_{q_{i}}\). For example, when we compare the spectral error with the actual time evolution operator at the second sample step, we might implement on the circuit the Pauli strings 'ZII' and 'ZZX', with the first string coming from the number counting group or the Coulomb group of TABLE.I, and the second one from the excitation group after the JW transformation. Note that the actual Pauli gates come in pairs, that is to say, for each hydrogen in a chain there is an electron spin up and down: above we have 3 hydrogens in a chain, so we have generalized 'ZIIIII' and 'IZIIII' to just 'ZII', for instance. It is easy to see, using the commutativity Theorem 3, that the _particle number operator_ does not commute here with the simulated unitary, meaning that the particle number is not conserved, by the Ehrenfest theorem: \[\frac{d}{dt}<\hat{P}>=\frac{1}{i\hbar}<[\hat{P},H]> \tag{39}\] where \(\hat{P}=\frac{1}{2}\sum_{p=1}^{N}(I-Z_{p})\). **Theorem 3**.: _For Pauli strings \(\hat{P}=\bigotimes_{i=1}^{n}p_{i}\) of length n, with each \(p_{i}\in\{X,Y,Z,I\}\), commutativity between two strings is conveniently determined by counting the number of positions at which the corresponding Pauli operators both belong to the subset \(\{X,Y,Z\}\) and differ. If the total count is even, the operators commute; otherwise they anticommute._ However, if we sample each of the groups accordingly and apply each of its Pauli strings with a Pauli gadget, then the total number of particles is constant, due to the fact that each individual group represents a physical process. Another advantage of this protocol is that inside each group all Pauli strings commute with one another, as in FIG.7. The ordering of the exponentials within each physical term will not affect the error, which allows us instead to focus on choosing an order that optimises the circuit depth, e.g. by maximising the cancellation of CNOT gates[32]. So as long as the spectral error is taken after the whole group is applied (which can be done, as there are no rules governing the exact sample steps at which to take the error analysis), we can restrict the evolution to the physical space. Figure 6: Each gate is a coloured block with the magnitude of the sub-term represented by the length of the corresponding block. (a) illustrates the meaning of each bar in one block; evolution goes from left to right. (b) demonstrates 10 consecutive Trotter steps during evolution for the first-order product formula and random permutation; note the reverse ordering in the lower panel. (c) shows qDrift, where the lower panel combines identical sampled terms together. With random sampling, 'strong' operations get sampled more often than 'weak' ones. Figure 7: Example of the physDrift sampling scheme, where the Pauli strings comprising the same physical (particle number conserving) term share the term's coefficient and are sampled together. Given enough samples, this process will converge onto a particle conserving unitary. But can we do better? There are only five physical groups in second quantisation, which is quite small compared to the pool of qDrift. So the strong terms with large coefficients (we give the strength of each group in FIG.8) might be averaged out by weak Pauli strings in the same group. If the probabilities do not differ much, this is effectively the random permutation method.
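The following minimal sketch (toy, invented coefficients and group contents) illustrates the grouped sampling just described: a whole particle-number-conserving group is drawn at each sample step, and every Pauli string inside the drawn group is applied, so the sampled unitary stays in the physical space. The particular weighting of the groups used here is only one possible choice; the weighting question is taken up next.

```python
# Sketch only: physDrift-style grouped sampling with hypothetical groups.
import numpy as np

rng = np.random.default_rng(0)

groups = {  # hypothetical physical groups: name -> [(coefficient, string)]
    "number":     [(0.7, "ZII"), (0.6, "IZI")],
    "coulomb":    [(0.3, "ZZI")],
    "excitation": [(0.2, "XXI"), (0.2, "YYI")],
}

names = list(groups)
weights = np.array([sum(abs(h) for h, _ in groups[g]) for g in names])
probs = weights / weights.sum()  # group probability ~ summed |coefficients|

sequence = []
for g in rng.choice(names, size=6, p=probs):
    sequence.extend(groups[g])  # the whole commuting group is applied at once
print(sequence)
```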
We therefore need to split the groups further and rearrange the strings according to each molecular orbital (or site in the Heisenberg model). This basically restricts the evolution to a physical subspace \(\mathcal{H}_{phys}\): \[\mathcal{H}_{phys}=\cap_{i}^{m}\mathcal{K}er(\hat{\mathcal{P}}_{i}) \tag{40}\] where m is the number of molecular orbitals (equal to half the number of qubits) and the kernel space of the particle number operator at each orbital is \(\mathcal{K}er(\hat{\mathcal{P}}_{i})=\{\ket{\psi}:\left(\hat{\mathcal{P}}_{i}-N_{i}\right)\ket{\psi}=0\}\), with \(\hat{\mathcal{P}}_{i}\) the particle counting operator and \(N_{i}\) the associated value. Given the partition scheme for the Hamiltonian, we need to find the optimized probability distribution. We have made a comparison between two proposals: * _Absolute weights_: Assign each of the Pauli strings the weight \(w_{i}=abs(h_{i})\), similarly to qDrift. For each group \(\mathcal{G}_{j}\) sampled, the probability is proportional to \(\mathcal{A}_{j}=\sum_{i}w_{i}\). * _Mean weights_: With \(\mathcal{M}_{j}=\sum_{i}h_{i}\), we take the signs of the coefficients into consideration. Intuitively, with the _Absolute weights_ scheme we are still sampling according to how strong the sub-terms are in the Hamiltonian. This means that for large N, the histogram of Pauli string counts will converge to the one for qDrift (which, in this sense, will also 'drift' towards a particle conserving evolution). We demonstrate this effect with the histograms plotted in FIG.9. With _Mean weights_, however, Pauli strings with similar physical meaning but opposite signs will dilute the overall effect. For example, \(h_{pqqp}\) in the Coulomb group and \(h_{pqpq}\) in the spin exchange group are in the same sample space, as explained above, but their signs will be opposite due to symmetry considerations: \(|h_{pqqp}+h_{pqpq}|\leq|h_{pqqp}|+|h_{pqpq}|\), with equality achieved when \(h_{pqqp}\cdot h_{pqpq}>0\). However, since empirically the spectral error with _Absolute weights_ is much worse than with _Mean weights_, and even worse than the original qDrift, we will use the latter scheme for the rest of the analysis. ### Experiment results In this section we compare our algorithm with the deterministic approaches as well as the randomized ones. But before showing the results, we elaborate on how we integrate a state-of-the-art noise model and relevant mitigation techniques. #### iv.2.1 Noise model We categorize the noise into three types: 1. **Sampling noise**: generated by the order of the unitaries applied. We assume that any coherent errors have been removed with the randomised compilation method, leaving only depolarising errors. 2. **Depolarising noise**: we consider a stochastic error in each gate, which can be simulated by randomly adding a Pauli operator with probability \(p\) after each gate - \(p\) was chosen to reflect current ion-trap hardware. 3. **Shot noise**: the effect of the above is to effectively reduce the amplitude of the measured expectation value by an exponential factor \(e^{-t}\), where \(t\) is proportional to the depth of the circuit (i.e. the number of Pauli exponentials).
We can then assume the simulations run with no noise, but the shot noise \(\epsilon_{\text{shot}}\) becomes correlated with the circuit depth[33], causing the number of shots \(N_{\text{shot}}\) to scale exponentially with the number of exponentials: \[\epsilon_{\text{shot}}\sim\frac{1}{N_{\text{shot}}^{1/2}e^{-t}}\implies N_{\text{shot}}\sim\frac{1}{\epsilon_{\text{shot}}^{2}e^{-2t}} \tag{41}\] So we will track the spectral error for each of the algorithms at the same circuit depth. Note that it is not trivial that we can exchange circuit depth here with the number of Pauli exponentials, or Pauli gadgets; this is true provided no optimization scheme is applied[34; 35]. #### iv.2.2 Symmetric protection We now restrict ourselves to the deterministic picture. By exploiting symmetries of the system, we can substantially reduce the total error \(\epsilon\) of the simulation without significantly increasing the gate count[36]. The Hamiltonian is invariant under the particle number operator: \[[H,\hat{\mathcal{P}}]=0 \tag{42}\] Because the identity operator always commutes with other Pauli strings of the same dimension, here \(\hat{\mathcal{P}}\) can be simplified to \(\{\exp(i\phi Z):\phi\in[0,2\pi)\}\), where \(\phi\) is taken randomly in \([0,2\pi)\) as a multiple of \(\pi\). Consider each Trotter step \(\mathcal{S}_{\delta t}=e^{-iH_{eff}t}\), where \(H_{eff}=H+\delta H\), the second term being the error: \[e^{-iHt}\implies e^{-iH_{eff}t}=e^{-i(H+\delta H)t} \tag{43}\] We then extend the original circuit with the following protection: \[V=\prod_{k=1}^{N_{t}}\hat{\mathcal{P}}^{\dagger}\mathcal{S}_{\delta t}\hat{\mathcal{P}}=\prod_{k=1}^{N_{t}}e^{-i(H+\hat{\mathcal{P}}^{\dagger}\delta H\hat{\mathcal{P}})t} \tag{44}\] The second equality follows from equation 42, and the error term gets reduced, similarly to the averaged distance in a random walk. However, we argue that this improvement is not suitable for the randomized approach, because there is no fixed known unitary in the circuit, which means the error term is different upon sampling. #### iv.2.3 Main result Now we would like to simulate for some time \(t_{max}\), while we can only use \(N\) exponentials (since the noise on the hardware limits the depth); we assume that typical hardware can use \(10^{3}\) or so CNOT gates. We first compare the precision we can get for the algorithms mentioned so far without depolarising error, in FIG.10. Note that, if not specified otherwise, the Hamiltonian is that of a 3-hydrogen chain system. We can see that with our scheme the spectral error is lower, and it performs even better when permutation is applied; however, permutation has a somewhat counterproductive effect on qDrift. Figure 8: Strength ordering of all Hamiltonian sub-groups; the bottom one is the strongest. Figure 10: With \(t_{max}\) taken at 0.5, 1, 2, 5 and 8, the spectral error of the average of 3 qDrift sampling sequences is evaluated in the left panel (blue curve). The red plot indicates the averaged value for physDrift, but note that because there is no one-to-one correspondence between \(t_{max}\) and sample steps, as the number of Pauli exponentials at a particular time changes for each experiment, we average the mean value of the unitary and compare it with the exact evolution. On the right, to combine physDrift with random permutation, we randomly permuted the ordering of the Pauli strings. The top x-axis represents the number of Pauli exponentials in the circuit.
Figure 9: Histograms for the 3-hydrogen chain with 1420 sample steps; the top left is under the mean weight scheme, the top right under the absolute scheme, and the bottom one is qDrift. For qDrift, all the Pauli strings that could be sampled are given on the x-axis; for physDrift, all the physical terms (comprising multiple Pauli strings) that could be sampled are given on the x-axis. The ordering of the terms is the same, so we can directly compare the shapes of the histograms. For a better analysis, we next compare, in FIG.11, the exact evolution with the simulation at the times where each whole group in physDrift has been applied. The result is improved for the permuted variant. In contrast, the variant that just averages the raw experimental results does not predict the structure of the evolution as accurately as qDrift. We add the other algorithms to the comparison in FIG.12. With the following lemma we explain why the empirical result for physDrift is better than for qDrift. **Lemma 3**.: _Given a pool \(\mathcal{R}(\mathcal{L}_{1},\mathcal{L}_{2},\ldots,\mathcal{L}_{L})\) from which to implement random samples of unitaries, the spectral error is always greater than or equal to that of the protocol with a pool \(\hat{\mathcal{R}}(\mathcal{L}_{1},\mathcal{L}_{2},\ldots,\mathcal{L}_{i}+\mathcal{L}_{j},\ldots,\mathcal{L}_{\hat{L}})\), where \(\hat{L}\leq L\)._ Proof.: By Equation (31), \[\|\mathcal{U}_{N}-\mathcal{V}_{N}\|_{\diamond}=\left\|\sum_{n=2}^{\infty}\frac{t^{n}\mathcal{L}^{n}}{n!N^{n}}-\sum_{j}\frac{|h_{j}|}{\lambda}\sum_{n=2}^{\infty}\frac{\lambda^{n}t^{n}\mathcal{L}_{j}^{n}}{n!N^{n}}\right\|_{\diamond}\] \[\leq\sum_{n=2}^{\infty}\frac{t^{n}\left\|\mathcal{L}^{n}\right\|_{\diamond}}{n!N^{n}}+\sum_{j}\frac{|h_{j}|}{\lambda}\sum_{n=2}^{\infty}\frac{\lambda^{n}t^{n}\left\|\mathcal{L}_{j}^{n}\right\|_{\diamond}}{n!N^{n}}\] where the first order cancels with the choice of \(\tau=\lambda t/N\). Looking at the second term, we know from the subadditivity of the diamond norm, \(\|A+B\|_{\diamond}\leq\|A\|_{\diamond}+\|B\|_{\diamond}\), that \[\|\mathcal{L}_{1}^{n}\|_{\diamond}+\|\mathcal{L}_{2}^{n}\|_{\diamond}+\ldots+\|(\mathcal{L}_{i}+\mathcal{L}_{j})^{n}\|_{\diamond}+\cdots+\|\mathcal{L}_{\hat{L}}^{n}\|_{\diamond}\leq\|\mathcal{L}_{1}^{n}\|_{\diamond}+\|\mathcal{L}_{2}^{n}\|_{\diamond}+\cdots+\|\mathcal{L}_{L}^{n}\|_{\diamond}=\sum_{j}^{L}\|\mathcal{L}_{j}^{n}\|_{\diamond}\] This means that by combining more commuting terms together, the theoretical bound becomes tighter. Besides the particle number in FIG.14, we also track the expectation value \(\left\langle H\right\rangle\) in FIG.13, which, under exact evolution, should be conserved, so it is easy to see how the error fluctuates. As we can see, the fluctuation of physDrift concentrates around the expected energy, while the one for qDrift overshoots after a while. This means physDrift actually has a stronger tendency to stay in the physical space \(\mathcal{H}_{phys}\). To our disappointment, it seems that qDrift does slightly better again than physDrift here. We have also tracked the particle number in each orbital for each algorithm, as shown in FIG.15. Figure 11: Now \(t_{max}\) is taken at 0.5, 1, 2 and 5 only. The horizontal error bars come from the fact that the lengths of the physical groups differ; because at each \(t_{max}\) this only differs by a few strings, the error bars are too small to see. The right panel is taken after applying the permutation average. Figure 14: Particle number as time progresses. We started with 3 hydrogens, so there should be 3 electrons in total.
Figure 12: Each method is indicated by a different colour. The left panel has a final \(t_{max}\) of 5 seconds, the right one of 8 seconds. The protected case overlaps with the original algorithm without symmetric protection. The random permutation performs badly, possibly owing to the fact that we are only taking one single circuit with no averaging, so that we can keep the depth and cost of the circuit the same. Figure 13: Evolution of the Hamiltonian expectation value. Because of energy conservation and \(H\left|\psi\right\rangle=E\left|\psi\right\rangle\), the expectation value should stay around E. Note that, generally, the simulation of an expectation value can be good while the simulation of the state itself is bad, but not the other way around. Similar to [17], we first tracked how the spectral error changes with the number of sample steps, as in FIG.16. Then we plotted the variation of the spectral error with system size for the choice of 3-, 4- and 5-hydrogen chains in FIG.17. We should emphasize that each hydrogen requires two qubits, which means the total number of qubits in our experiments is upper bounded by 12. Now we add some depolarising error with a strength parameter around 0.1%[37], and we can see directly that, besides the smaller spectral error in FIG.18, the scheme is more tolerant to errors in a real device. This suggests that physDrift is worth implementing on near-term quantum computers. ## V Discussion We have shown an improved quantum simulation technique with randomness based on qDrift. The basic idea is to restrict the evolution to a physical process. Overall the result is promising, in the sense that the spectral error is reduced and we have demonstrated an advantage under a naive noise model. But we still sometimes see the physical property conserved better in the existing schemes. This might result from the fact that the metrics we use are not accurate and general enough, i.e. minimizing the error or the variance of expectation values does not always result in a decrease of the state-vector error. Fundamentally, unitary evolution is a process of matrix multiplication, so exploiting the mathematics behind random walks in higher dimensions, for example with a Lie group formulation, can also help to better understand the evolution[38]. Figure 16: Plot to show, in a specific case (\(t_{max}=0.5s\) for a 3-hydrogen chain), how the error changes when we increase the number of sample steps. Note the spectral error is expressed on a log scale. Figure 17: With a fixed number of sample steps we extracted the error for varying system size. We come to the conclusion that within this scenario physDrift generally performs better than qDrift. Figure 18: The top left panel shows the error for all algorithms, while the top right shows the comparison between physDrift and qDrift. The bottom panel uses the permutation average. Eventually we want to compare the results on an analog quantum simulator, and to reduce the cost, transformation protocols like parity mapping[22], which eliminates qubits due to intrinsic symmetries in the Hamiltonian, are worth investigating. ###### Acknowledgements. We wish to acknowledge the support of Dr. Alex Thom and Chiara Leadbeater for helpful discussions as well as suggestions on the relevant topics. ## Appendix A Randomized Algorithm **Lemma 1**.
Let \(\mathcal{U}=U\rho U^{\dagger}\) and \(\mathcal{V}_{j}=V_{j}\rho V_{j}^{\dagger}\) be unitary channels, and let \(p_{j}\) be the probability distribution for the randomized protocol; the improvement over the original mixing lemma is: \[\frac{1}{2}\|\mathcal{U}-\mathbb{E}[\mathcal{V}]\|\leq\|U-\mathbb{E}[V]\|\] Proof.: Let us first fix a state \(\left|\psi\right\rangle\), with the following notation: \(\left|u\right\rangle=U\left|\psi\right\rangle\), \(\left|v\right\rangle=V\left|\psi\right\rangle\). Normalization to unity ensures \(\left|\left\langle u,v\right\rangle\right|\leq 1\). With the Fuchs-van de Graaf relations in [[39], Theorem 3.33] we have: \[\frac{1}{2}\|\left|u\right\rangle\left\langle u\right|-\left|v\right\rangle\left\langle v\right|\|_{1}=\sqrt{1-\left|\left\langle u,v\right\rangle\right|^{2}}=\sqrt{(1-\left|\left\langle u,v\right\rangle\right|)(1+\left|\left\langle u,v\right\rangle\right|)}\leq\sqrt{2(1-Re(\left\langle u,v\right\rangle))}=\|\left|u\right\rangle-\left|v\right\rangle\|_{l_{2}}\] In [[40], Sec. 5.3] we have the fact that stabilization is not necessary for computing the diamond distance of two unitary channels: \[d_{\diamond}(\mathcal{U},\mathcal{V})=\max_{\left|\psi\right\rangle\left\langle\psi\right|}\frac{1}{2}\|\mathcal{U}(\left|\psi\right\rangle\left\langle\psi\right|)-\mathcal{V}(\left|\psi\right\rangle\left\langle\psi\right|)\|_{1}\leq\max_{\left|\psi\right\rangle}\|(U-V)\left|\psi\right\rangle\|_{l_{2}}=\|U-V\|\] But this is only the case for a deterministic single unitary. To account for the randomization with probability distribution \(\{p_{k},V_{k}\}\) we use Cauchy-Schwarz: \[|\left\langle\psi\right|U^{\dagger}\mathbb{E}[V]\left|\psi\right\rangle|^{2}=\left|\sum_{k}p_{k}\left\langle\psi\right|U^{\dagger}V_{k}\left|\psi\right\rangle\right|^{2}\leq\left(\sum_{k}p_{k}\right)\sum_{k}p_{k}|\left\langle\psi\right|U^{\dagger}V_{k}\left|\psi\right\rangle|^{2}=\sum_{k}p_{k}|\left\langle\psi\right|U^{\dagger}V_{k}\left|\psi\right\rangle|^{2}\] Similarly to the above, we get the following with Fuchs-van de Graaf: \[\frac{1}{2}\|\mathcal{U}(\left|\psi\right\rangle\left\langle\psi\right|)-\mathcal{V}(\left|\psi\right\rangle\left\langle\psi\right|)\|_{1}\leq\sqrt{1-\sum_{k}p_{k}|\left\langle\psi\right|U^{\dagger}V_{k}\left|\psi\right\rangle|^{2}}\leq\sqrt{1-|\left\langle\psi\right|U^{\dagger}\mathbb{E}[V]\left|\psi\right\rangle|^{2}}\] \[\implies\frac{1}{2}\|\mathcal{U}(\left|\psi\right\rangle\left\langle\psi\right|)-\mathcal{V}(\left|\psi\right\rangle\left\langle\psi\right|)\|_{1}\leq\|(U-\mathbb{E}[V])\left|\psi\right\rangle\|_{l_{2}}\] Because the average of the \(V_{k}\) is not a unitary, \(\mathcal{U}-\mathbb{E}[\mathcal{V}]\) is not a difference of unitary channels, and we need the stabilized form of the diamond norm: \[\frac{1}{2}\|\mathcal{U}-\mathbb{E}[\mathcal{V}]\|_{\diamond}=\max_{\left|\psi\right\rangle\left\langle\psi\right|}\frac{1}{2}\|(U\otimes\mathbb{I})(\left|\psi\right\rangle\left\langle\psi\right|)(U\otimes\mathbb{I})^{\dagger}-\mathbb{E}[(V\otimes\mathbb{I})(\left|\psi\right\rangle\left\langle\psi\right|)(V\otimes\mathbb{I})^{\dagger}]\|_{1}\leq\max_{\left|\psi\right\rangle}\|(U\otimes\mathbb{I}-\mathbb{E}[V\otimes\mathbb{I}])\left|\psi\right\rangle\|_{l_{2}}=\|(U-\mathbb{E}[V])\otimes\mathbb{I}\|\] Finally, we can extract the identity from the product and get \(\|U-\mathbb{E}[V]\|\).
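As a sanity check of the lemma just proven, the following sketch (our own construction, numpy only) draws random qubit unitaries and verifies over random pure states that the state-wise trace distance, which lower-bounds the diamond distance, never exceeds \(\|U-\mathbb{E}[V]\|\):

```python
# Sketch only: numerical check of the improved mixing lemma on a qubit.
import numpy as np

rng = np.random.default_rng(1)

def rand_unitary(d=2):
    q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q

U = rand_unitary()
Vs = [rand_unitary() for _ in range(3)]
p = np.array([0.5, 0.3, 0.2])
rhs = np.linalg.norm(U - sum(pk * Vk for pk, Vk in zip(p, Vs)), ord=2)

worst = 0.0
for _ in range(500):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    rho = np.outer(psi, psi.conj())
    diff = U @ rho @ U.conj().T - sum(
        pk * Vk @ rho @ Vk.conj().T for pk, Vk in zip(p, Vs))
    worst = max(worst, 0.5 * np.linalg.svd(diff, compute_uv=False).sum())
print(worst, "<=", rhs)  # the inequality holds for every sampled state
```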
**Scaling for qDrift in Sec.III.B** **Theorem 4** (Taylor expansion bound).: _The error of a function f under a Taylor expansion approximation to order k can be bounded by the remainder term at order \(k+1\)[41]:_ \[|\mathcal{R}_{k}(e^{\alpha})|\leq\frac{|\alpha|^{k+1}}{(k+1)!}e^{|\alpha|},\forall\alpha\in\mathbb{C}\] _where \(\mathcal{R}_{k}(f)\) is the remainder of the Taylor expansion to order k of the function f; for instance, \(\mathcal{R}_{1}(e^{x})=\mathcal{R}_{1}(\sum_{n=0}^{\infty}\frac{x^{n}}{n!})=\sum_{n=2}^{\infty}\frac{x^{n}}{n!}\)_ We write the ideal channel as \(\mathcal{U}_{N}=e^{\frac{t}{N}\mathcal{L}}\) and the approximation, as in equation 31, as \(\mathcal{V}_{N}=\sum_{j=1}^{L}p_{j}e^{\tau\mathcal{L}_{j}}\) for a single sample step, where \(p_{j}=\frac{|h_{j}|}{\lambda}\). Expanding both channels: \[e^{\frac{t}{N}\mathcal{L}}\approx\mathbb{I}+\frac{t}{N}\mathcal{L}+\frac{1}{2!}\frac{t^{2}\mathcal{L}^{2}}{N^{2}}+\ldots\] \[\sum_{j=1}^{L}p_{j}e^{\tau\mathcal{L}_{j}}\approx\mathbb{I}+\sum_{j=1}^{L}p_{j}\tau\mathcal{L}_{j}+\frac{1}{2!}\sum_{j=1}^{L}p_{j}\tau^{2}\mathcal{L}_{j}^{2}+\ldots\] Now, because we can always choose \(\tau=\lambda t/N\), the first order cancels exactly. \[\|\mathcal{U}_{N}-\mathcal{V}_{N}\|_{\diamond}=\|\sum_{n=2}^{\infty}\frac{t^{n}\mathcal{L}^{n}}{n!N^{n}}-\sum_{j}\frac{|h_{j}|}{\lambda}\sum_{n=2}^{\infty}\frac{\lambda^{n}t^{n}\mathcal{L}_{j}^{n}}{n!N^{n}}\|_{\diamond}\] \[\leq\sum_{n=2}^{\infty}\frac{t^{n}\|\mathcal{L}^{n}\|_{\diamond}}{n!N^{n}}+\sum_{j}\frac{|h_{j}|}{\lambda}\sum_{n=2}^{\infty}\frac{\lambda^{n}t^{n}\|\mathcal{L}_{j}^{n}\|_{\diamond}}{n!N^{n}}\] \[\leq\sum_{n=2}^{\infty}\frac{1}{n!}\left(\frac{2\lambda t}{N}\right)^{n}+\sum_{j}\frac{|h_{j}|}{\lambda}\sum_{n=2}^{\infty}\frac{1}{n!}\left(\frac{2\lambda t}{N}\right)^{n}\] \[=2\sum_{n=2}^{\infty}\frac{1}{n!}\left(\frac{2\lambda t}{N}\right)^{n}\] where in the third line we used the fact that each \(H_{j}\) is unitary to get the bound \(\|\mathcal{L}_{j}\|_{\diamond}\leq 2\|H_{j}\|\leq 2\). Similarly, we have the inequality \(\|\mathcal{L}\|_{\diamond}\leq 2\|H\|\leq 2\lambda\). Applying Theorem 4 with \(k=1\) and \(\alpha=\frac{2\lambda t}{N}\) now: \[d_{\diamond}(\mathcal{U}_{N},\mathcal{V}_{N})\leq\frac{2\lambda^{2}t^{2}}{N^{2}}e^{2\lambda t/N}\] When \(N\gg\lambda t\), the exponential term drops out, and with the help of the telescoping lemma we get: \[d_{\diamond}(\mathcal{U},\mathcal{V})\leq\frac{2\lambda^{2}t^{2}}{N}\] ## Appendix B Optimize Trotter order From equation 7 we have \(N=\mathcal{O}(\frac{(tL\Lambda)^{1+\frac{1}{2k}}}{\epsilon^{1/2k}})\) for the higher-order Trotter scaling. To find the trade-off between accuracy (in terms of order) and the number of gates we need, we consider the total number of gates T, which scales as \(T\propto\frac{5^{k}}{\epsilon^{1/2k}}\), by including the recursive formula (five terms in total) and ignoring other effects like the total time and the number of terms in the system Hamiltonian. It is straightforward to set the first derivative of T to zero to optimize the cost, which yields the condition: \[2k^{2}\log(5)-\log(\frac{1}{\epsilon})\approx 0\] For example, the numerical solution with \(\epsilon\approx 10^{-6}\) is around \(k=2.02\).
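The condition above can be checked numerically with a few lines (our own sketch):

```python
# Sketch only: solve 2 k^2 log(5) = log(1/eps) for the optimal half-order k.
import math

eps = 1e-6
k_opt = math.sqrt(math.log(1.0 / eps) / (2.0 * math.log(5.0)))
print(k_opt)  # ~2.07, close to the k = 2.02 quoted above, i.e. 2nd/4th order
```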
## Appendix C Importance Sampling

Importance sampling is a powerful technique for computing expectation values:

\[\mathbb{E}_{p}[f(x)]=\sum_{x}p(x)f(x)\]

If we want to reduce the variance, we can re-weight accordingly:

\[\mathbb{E}_{p}[f(x)]=\sum_{x}q(x)\frac{p(x)}{q(x)}f(x)=\mathbb{E}_{q}[w(x)f(x)]\]

By the simple weighting scheme

\[q_{c}(j)=\frac{h_{j}}{C_{j}\lambda_{c}},\qquad\lambda_{c}=\sum_{l}\frac{h_{l}}{C_{l}},\]

we can prove with Jensen's inequality that this sampling requires less total simulation cost than the original one:

\[N_{q_{c}}\mathbb{E}_{q_{c}}[C]\leq N_{p_{c}}\mathbb{E}_{p_{c}}[C]\]

for any given precision \(\epsilon\), where \(N_{q_{c}}\) is the number of sample steps for the re-weighted importance sampling and \(N_{p_{c}}\) is that of the original. We first show the error bound for \(q_{c}\) by expressing the Hamiltonian as \(H=\sum_{j}h_{j}H_{j}=\lambda\mathbb{E}_{p}[H_{j}]=\lambda\mathbb{E}_{q_{c}}[w(j)H_{j}]\), where \(w(j)=\frac{h_{j}}{\lambda q(j)}\). Then:

\[U=e^{-itH}=e^{-it\lambda\mathbb{E}_{p}[H_{j}]}=e^{-it\lambda\mathbb{E}_{q}[w(j)H_{j}]}=e^{-i\mathbb{E}_{q}[X_{j}]},\qquad X_{j}=\frac{h_{j}t}{q(j)}H_{j}\]

Note that we can obtain the bound \(\|X_{j}\|=\frac{h_{j}t}{q(j)}\|H_{j}\|=\frac{h_{j}t}{q(j)}\). For a single segment,

\[d_{\diamond}(\mathcal{U}_{N},\mathbb{E}_{q}[\mathcal{V}_{N}])\leq 2\|U_{N}-\mathbb{E}_{q}[V_{N}]\|\]

\[=2\big\|e^{-i\mathbb{E}_{q}[X]}-\mathbb{I}+i\mathbb{E}_{q}[X]+\mathbb{E}_{q}\big[\mathbb{I}-iX-e^{-iX}\big]\big\|\]

\[\leq 2\big\|e^{-i\mathbb{E}_{q}[X]}-\mathbb{I}+i\mathbb{E}_{q}[X]\big\|+2\,\mathbb{E}_{q}\big[\big\|\mathbb{I}-iX-e^{-iX}\big\|\big]\]

\[\leq\|\mathbb{E}_{q}[X]\|^{2}+\mathbb{E}_{q}[\|X\|^{2}]\]

\[\leq(t\lambda)^{2}+\mathbb{E}_{q}\Big[\Big(\frac{h_{j}t}{q(j)}\Big)^{2}\Big]\leq(t\lambda)^{2}\big(1+\mathbb{E}_{p}[w(j)]\big)\]

where \(X=X(t)\) represents the mixed operator, and the triangle inequality was used in the third line. Again, using the telescoping lemma over the \(N\) segments:

\[d_{\diamond}(\mathcal{U},\mathbb{E}_{q}[\mathcal{V}])\leq\frac{(t\lambda)^{2}}{N}\big(1+\mathbb{E}_{p}[w(j)]\big)\]

The total cost of the importance-sampled distribution is:

\[C_{q_{c}}=N_{q_{c}}\mathbb{E}_{q_{c}}[C]=\frac{(t\lambda)^{2}}{\epsilon}\big(1+\mathbb{E}_{p}[w(j)]\big)\mathbb{E}_{q_{c}}[C]\]

\[=\frac{(t\lambda)^{2}}{\epsilon}\Big(1+\sum_{j}\frac{h_{j}}{\lambda}w(j)\Big)\mathbb{E}_{q_{c}}[C]=\frac{(t\lambda)^{2}}{\epsilon}\Big(1+\sum_{j}\lambda_{c}\frac{h_{j}}{\lambda^{2}}C_{j}\Big)\mathbb{E}_{q_{c}}[C]\]

\[=\frac{(t\lambda)^{2}}{\epsilon}\Big(1+\frac{\lambda_{c}}{\lambda}\mathbb{E}_{p}[C]\Big)\mathbb{E}_{q_{c}}[C]=\frac{(t\lambda)^{2}}{\epsilon}\big(1+\mathbb{E}_{p}[1/C]\,\mathbb{E}_{p}[C]\big)\mathbb{E}_{q_{c}}[C]\]

\[=\frac{(t\lambda)^{2}}{\epsilon}\,\frac{1+\mathbb{E}_{p}[1/C]\,\mathbb{E}_{p}[C]}{\mathbb{E}_{p}[1/C]}\]

Comparing this to the qDrift cost \(C_{p}=\frac{2(t\lambda)^{2}}{\epsilon}\mathbb{E}_{p}[C]\), we need:

\[\frac{1+\mathbb{E}_{p}[1/C]\,\mathbb{E}_{p}[C]}{\mathbb{E}_{p}[1/C]}\leq 2\,\mathbb{E}_{p}[C]\implies\mathbb{E}_{p}[1/C]\,\mathbb{E}_{p}[C]\geq 1\]

which is always satisfied by Jensen's inequality, since \(\mathbb{E}_{p}[1/C]\geq 1/\mathbb{E}_{p}[C]\).
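The inequality chain above is easy to check numerically. In the sketch below (our addition; the weights \(h_j\) and per-term gate costs \(C_j\) are randomly generated stand-ins), both distributions are built and the two total costs are compared:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 32
h = rng.uniform(0.1, 1.0, L)               # Hamiltonian term weights h_j
C = rng.integers(1, 50, L).astype(float)   # per-term gate costs C_j
t, eps = 1.0, 1e-3
lam = h.sum()

p = h / lam                                 # plain qDrift distribution
lam_c = (h / C).sum()
q = h / (C * lam_c)                         # cost-aware distribution q_c
w = h / (lam * q)                           # importance weights w(j)

E_p_C, E_q_C = (p * C).sum(), (q * C).sum()
cost_p = 2 * (t * lam) ** 2 / eps * E_p_C
cost_q = (t * lam) ** 2 / eps * (1 + (p * w).sum()) * E_q_C
jensen = (p / C).sum() * E_p_C              # E_p[1/C] E_p[C] >= 1 by Jensen
print(f"E_p[1/C] E_p[C] = {jensen:.3f} (>= 1)")
print(f"cost_p = {cost_p:.3e}, cost_q = {cost_q:.3e}, ratio = {cost_q / cost_p:.3f}")
```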
We introduce a randomized algorithm, dubbed "physDrift", for computing Hamiltonian dynamics on digital quantum computers, characterized by the property that physical conservation laws are satisfied throughout the evolution of arbitrary quantum states. Empirically, using hydrogen-chain models, it outperforms previous protocols in reducing spectral errors. We also investigate noise models and characterize their effects on the circuits in different ways: the decay of measured expectation values is tracked at fixed circuit depth, and depolarizing errors are simulated using randomly applied Pauli gates. This suggests that the proposal is well suited for implementation and testing on current noisy hardware.
2309.13417
A Review on Practical Challenges of Aerial Quantum Communication
The increasing demand for the realization of global-scale quantum communication services necessitates critical investigation of a practical quantum secure communication network that relies on full-time, all-location coverage. In this direction, non-terrestrial quantum key distribution is expected to play an important role in providing agility, maneuverability, relay links, on-demand networks, and last-mile coverage. In this work, we have summarized the research and development that has happened until now in the domain of quantum communication using non-terrestrial platforms, with a specific focus on the associated challenges and the relevant models. Further, to extend the analysis beyond the existing know-how, a hybrid model involving the features of Vasylyev et al.'s model and Liorni et al.'s model is introduced here. The hybrid model entails adapting a spherical-beam to an elliptic-beam approximation and effectively capturing the characteristics of transmittance in densely humid weather conditions and at low altitudes. Further, to understand the potential impact of the weather conditions of a region on atmospheric attenuation, the average monthly visibility of Pune city was analyzed, as an example, for the years 2021 and 2022. In addition, a simulation of a generic model is performed using a software-defined network paradigm, where quantum teleportation is simulated between distant parties using a swarm of drones in NetSquid.
Umang Dubey, Prathamesh Bhole, Arindam Dutta, Dibya Prakash Behera, Vethonulu Losu, Guru Satya Dattatreya Pandeeti, Abhir Raj Metkar, Anindita Banerjee, Anirban Pathak
2023-09-23T16:03:23
http://arxiv.org/abs/2309.13417v1
# A Review on Practical Challenges of Aerial Quantum Communication

###### Abstract

The increasing demand for the realization of global-scale quantum communication services necessitates critical investigation of a practical quantum secure communication network that relies on full-time, all-location coverage. In this direction, non-terrestrial quantum key distribution is expected to play an important role in providing agility, maneuverability, relay links, on-demand networks, and last-mile coverage. In this work, we have summarized the research and development that has happened until now in the domain of quantum communication using non-terrestrial platforms, with a specific focus on the associated challenges and the relevant models. Further, to extend the analysis beyond the existing know-how, a hybrid model involving the features of Vasylyev _et al._'s model and Liorni _et al._'s model is introduced here. The hybrid model entails adapting a spherical-beam to an elliptic-beam approximation and effectively capturing the characteristics of transmittance in densely humid weather conditions and at low altitudes. Further, to understand the potential impact of the weather conditions of a region on atmospheric attenuation, the average monthly visibility of Pune city was analyzed, as an example, for the years 2021 and 2022. In addition, a simulation of a generic model is performed using a software-defined network paradigm, where quantum teleportation is simulated between distant parties using a swarm of drones in NetSquid.

Quantum Key Distribution, Modelling Aerial Quantum Communication, Drone-based QKD, Acquisition-Pointing and Tracking (APT), Atmospheric Turbulence, Quantum Software Defined Networking, Free-space QKD.

## 1 Introduction

Quantum communication offers a fundamentally secure way to establish long-distance communication channels, making it highly relevant for secure communication in critical applications where traditional encryption methods may be vulnerable to future quantum attacks. Quantum communication has many facets, the two most important being secure quantum communication and teleportation. Both are unique in some sense: teleportation has no classical analog, and quantum cryptography can be unconditionally secure, a feature classical cryptography can never achieve. Quantum key distribution (QKD) is one of the cornerstones of quantum cryptography. It is a method of exchanging symmetric keys among parties by leveraging the principles of quantum mechanics to ensure provable security against adversaries.

Fiber and free space are the most commonly used transmission media for QKD. However, there are several challenges in establishing practical and secure networks. These challenges include device imperfections, such as detector noise and limited polarization extinction ratio, as well as signal loss that depends on the transmission medium. In fiber-based QKD, the losses increase significantly with distance, making it infeasible over larger geographical areas. Free-space QKD offers the advantage of extended coverage and flexibility but is susceptible to losses caused by atmospheric turbulence, fog, and other environmental factors in the communication channel [1; 2]. Satellite-based QKD is considered a potential candidate for long-distance communication; however, along with the free-space propagation challenges, it faces a limited operational time window, a lack of agility, and higher infrastructure costs. These factors collectively impede achieving higher key rates in satellite-based QKD systems.
However, to realize a practical quantum secure communication network that would ideally provide full-time, all-location coverage, all the modes of transmission need to function in an integrated fashion. Here, the utilization of aerial platforms [3] may offer a highly flexible, cost-effective, and re-configurable approach for expanding the reach of quantum communications across time and space. In Fig. 1, we have illustrated the concept of aerial quantum communication, with a hierarchical quantum network operating in different atmospheric layers. Aerial quantum nodes such as drones, high-altitude platforms (HAPs), hot-air balloons, unmanned aerial vehicles (UAVs), and aircraft can serve as temporary relays. They can also act as intermediate mobile nodes between terrestrial ground stations and satellites and, owing to their rapid deployment capabilities, can resolve the last-mile quantum key exchange challenge for inner-city or field networks. Moreover, at higher altitudes, low-velocity aircraft can provide longer link durations and broader transmission coverage. Present-day drones, or UAVs, encompass a wide spectrum of capabilities, spanning take-off weights ranging from a few grams to several tons. They can operate at cruising altitudes that vary from a few meters above the ground to altitudes exceeding 20 kilometers. Furthermore, their flight duration can extend up to 25 days. Given these recent advancements, it is natural to consider UAVs for establishing mobile quantum networks (QNs), enabling on-demand and real-time coverage across diverse spatial and temporal scales. This will enable quantum communication [32] over distances from kilometers (local-area networks) to hundreds of kilometers (wide-area networks). This approach represents a flexible and economically viable means of expanding the reach of secure communication while delivering real-time coverage as needed.

Several works have been reported in this area, including an air-to-ground QKD demonstration using the Dornier-228 aircraft by Nauerth _et al._ [19], a downlink QKD demonstration using a hot-air balloon by Wang _et al._ [20], a basis detection and compensation experiment using the Z-9 helicopter by Zhang _et al._ [22], free-space QKD based on a moving pick-up truck by Bourgoin _et al._ [23], an uplink QKD demonstration using the Twin Otter research aircraft by Pugh _et al._ [26], a drone-based QKD test using the DJI S1000+ octocopter by Hill _et al._ [33], and drone-based entanglement distribution using UAVs by Liu _et al._ [34; 32]. The work by Liu _et al._ laid the foundations for establishing re-configurable mobile QNs. Recently, drone-based QKD with an average secure key rate larger than 8 kHz, using the decoy-state BB84 protocol with polarization encoding, was demonstrated [29]. There have also been a few demonstrations of satellite QKD, including a B92 protocol implementation [25] using SOCRATES (Space Optical Communications Research Advanced Technology Satellite) and a 600 km DS-QKD implementation [21] using the QEYSSAT microsatellite. In Table 1, we have summarized the developments in aerial quantum communication to date.

Since aerial QKD is emerging as a potential candidate for the efficient implementation of a practical secure quantum communication network, it is important to address the implementation challenges and their impact on the performance of aerial QKD systems. Consequently, in Section 2, the technological challenges are presented in detail.
Figure 1: (Color online) Concept of aerial quantum communication [3]

\begin{table} \begin{tabular}{l l l l l l l l} \hline **Year** & **Distance** & **Secure** & \(\lambda\) & **Pulse repe-** & **QKD** & **QBER** & **Demonstration** \\ & (km) & **key rate** & (nm) & **-tition rate** & **protocol** & & \\ \hline 1989 & 30cm & - & - & 403 bits & - & 66 bits & On table at IBM [4] \\ \hline 1992 & 32cm & - & - & 217 bits & - & 2-4\% & Free air optical path [5] \\ \hline 1997 & 0.205 & 50 Hz & 772 & - & B92 & 1-6\% & Over indoor paths [6] \\ \hline 1998 & \(\sim 1\) & 3.5-45 KHz & 772 & 10 MHz & B92 & 1.5 \% (D) & Los Alamos (D) \\ & & & & & 2.1\% (N) & National Laboratory (N) [7] \\ \hline 2002 & 9.81 & 50.78 Kb (D) & 772 & - & BB84 & 5\% (D) & Los Alamos Ski Club, \\ & & 118.06 Kb (N) & & & 2.1\% (N) & The National Forest Service [8] \\ \hline 2002 & 23.4 & 1.5-2 Kbps & - & - & BB84 & 5\% & Tx-Zugspitze, South Germany [9] \\ & & & & & & Rx- Mountain of Karwendelspitze \\ \hline 2004 & 0.73 & 1 Mbps & 845 & 250 ps & B92 & 1.1\% & Free-space [10] \\ \hline 2004 & 13 & 10 bps & 702 & - & BB84 & 5.83\% & Tx - Dashu Mountain \\ & & & & & & & Hefei of China (elevation- 281 m) \\ & & & & & & & Alice-West Campus of USTC \\ & & & & & & & Bob-Feixi of Hefei [11] \\ \hline 2006 & 144 & 417 bits & 710 & 249 MHz & BB84 & 4.8\% & La Palma and Tenerife [12] \\ \hline 2006 & 1.5 & 850 bps & 404 & - & BB84 for & 5.4\% & Free-space [13] \\ & & & & & pol. ent. p. & & \\ \hline 2006 & 0.48 & 50 Kbps & 850 & - & BB84 & 3-5\% & Free space, Munich [14] \\ \hline 2007 & 144 & 12.8, 42 bps & 850 & 10 MHz & DS BB84 & 6.48\% & La Palma and Tenerife [15] \\ \hline 2008 & 1.575 & 85 bps & 815 & - & BBM92 & 4.92\% & Free-space [16] \\ \hline 2008 & \(\sim 1.5\) & 300 bps & 407 & - & Modified E91 & \(\sim 3\%\) & Free-space [17] \\ & & & -810 & & & & \\ \hline 2010 & 1.305 & 2.7 Kbps & 404 & - & BBM92 & 2.48\% & Free-space [18] \\ \hline 2013 & 20 & 7.9 bps & 850 & 10 MHz & BB84 & 4.8\% & Dornier 228 turboprop aircraft \\ & & & & & & & and the optical ground station [19] \\ \hline 2013 & \(\sim 96\) & 159.4 bps (MP) & 850 & 100 MHz & DS & 4.04\% & MP: Over a turntable \\ & & & 48 bps (FP) & & & & FP: Hot-air balloon [20] \\ \hline 2014 & 600 & 100 Kb & - & 76 MHz & DS & 4.3-5.51\% & QEYSSAT- 600 km \\ & & & & & & & altitude microsatellite [21] \\ \hline 2014 & 2.5-7.5 & - & 850 & 1 MHz & BB84 & - & Tx: Helicopter (100 kmph) \\ & & & & & & Rx: Top floor of a building \\ & & & & & & & in an airport [22] \\ \hline 2015 & \(\sim 0.650\) & 40 bps & 532, & 80 MHz & DS BB84 & 6.16\% & Pickup truck \\ & & & & & & traveling at 33 kmph \\ & & & 1550 & & & & angular speed [23] \\ \hline 2017 & 1200 & 1.1 Kbps & 850 & 100 MHz & DS BB84 & 1-3\% & Micius- 635 kg satellite [24] \\ \hline 2017 & 802 & \(\sim 10\)-100 bps & 800 & 10 MHz & B92 & \(<5\%\) & SOCRATES- 50 kg \\ & & & & & & microsatellite [25] \\ \hline 2017 & 3-10 & 868 Kb & 785 & 400 MHz & DS BB84 & 3-5\% & Twin Otter- research aircraft [26] \\ \hline 2017 & - & - & 650 & 500 KHz & DS BB84 & - & On table (towards DJI S1000+ \\ & & & & & & octocopter QKD) [27] \\ \hline 2021 & 0-0.04 & - & - & - & BB84 & \(\sim 50\%\) & Amov-lab’s Z410 drone \\ & & & & & & with T-engine 2216 \\ & & & & & & & and Pixhawk flight control QKD \\ \hline 2022 & 30 cm & 4 - 15.3 kbps & 850 & 100 MHz & BB84 & 2.4\% & Hand-held sender [28] \\ \hline 2023 & 0.2 & 8 KHz & 850 & 50 MHz & BB84 & 2.22-2.32\% & Drone-QKD [29] \\ \hline 2021- & \(10^{a}\) & - & 850 & 50 MHz & 3
states & 2.22-2.32\% & a. Drone-Drone: DJI S1000+ \\ & 2023 & & & & & & drone to Alta 8 Pro drone [27; 30] \\ & & & & & & & b. Drone-Car [30; 31] \\ & & & & & & & c. Car-Car [30; 31] \\ \hline \end{tabular} \end{table} Table 1: Developments towards aerial quantum communication around the world, where \(\lambda\): Wavelength, QBER: Quantum bit error rate, D: Day, N: Night, DS: Decoy state, pol. ent. p.: polarization-entangled photons, MP: Moving platform, FP: Floating platform, Tx: Transmitter, Rx: Receiver

In Section 3, we introduce a hybrid model for low-altitude communication that takes into account real-world scenarios. In Section 4, we discuss the link configurations, budgeting, and margin in detail, along with time synchronization. Section 5 presents the simulation of quantum teleportation using a swarm of drones based on a quantum software-defined networking (QSDN)-oriented architecture. Finally, the paper is concluded in Section 6.

## 2 Technological challenges

There are several challenges associated with the implementation of aerial quantum communication. One of the major challenges in achieving long-distance aerial quantum communication is the loss of signal in the transmission medium, which can be caused by various physical mechanisms. Before we describe them, we may note that in an optical fiber the losses increase exponentially with the length of the fiber; the loss rate is characterized by the attenuation coefficient (\(\beta_{\mathrm{a}}\)), expressed in dB/km. It depends on the fiber material, manufacturing tolerances, and wavelength: it is about 2 dB/km at 800 nm, 0.35 dB/km at 1310 nm, and 0.2 dB/km at 1550 nm. Secure quantum communication is usually performed through telecom-grade optical fiber using light of wavelength about 1550 nm, where the attenuation is at its minimum of \(\sim\)0.2 dB/km. It can be slightly reduced further by using ultra-low-loss fiber with a nominal attenuation coefficient of 0.158 dB/km, which can increase the distance for quantum key distribution to some extent. However, to perform secure quantum communication beyond a few hundred km, one would be required to use a free-space route. We may also note that fiber-based optical communication using light of wavelength below 800 nm is impractical, as the attenuation due to Rayleigh scattering increases considerably. Here an interesting point appears: there exists a high-transmission window for free-space communication at around 770 nm, where the atmosphere is weakly dispersive and essentially non-birefringent. This provides a great advantage to free-space communication. However, free-space transmission has some drawbacks, too. In particular, its performance depends on the atmospheric conditions: the transmission of the signal through a turbulent medium may lead to arrival-time jitter, beam wander, beam pointing error, beam divergence, etc. In this section, we will systematically discuss the technological challenges that arise due to these issues, with a specific focus on how to model the effect of atmospheric conditions. To begin with, we discuss the effect of atmospheric turbulence.

### 2.1 Atmospheric turbulence

Air turbulence [35] in the atmosphere plays a significant role in free-space optical (FSO) communication, as it can affect the operating laser beam, leading to beam divergence, beam wandering, scintillation, etc. Several efforts have been made to mathematically describe the effect of atmospheric turbulence on FSO links [36]. One such effort led to the development of energy cascade theory [37].
The energy cascade theory is a fundamental concept in the study of turbulence in the Earth's atmosphere. It explains how energy is transferred from large-scale turbulent motion to smaller and smaller scales. The outer-scale eddies \(L_{o}\) and inner-scale eddies \(l_{o}\) form the bounds of an inertial sub-range. The eddies in the inertial range are statistically homogeneous and isotropic. Within this range, large eddies break into smaller eddies, transferring energy. This process carries on until the inner scale \(l_{o}\) is reached, after which the energy dissipates through viscosity. In the 1940s, Andrey Kolmogorov [38] obtained a beautiful expression for the wavenumber spectrum (now known as the Kolmogorov spectrum) in the turbulence inertial subrange. The Kolmogorov spectrum describes the refractive index fluctuations as

\[\phi_{n}(k)=0.033\,C_{n}^{2}k^{-\frac{11}{3}},\qquad\frac{1}{L_{o}}\ll k\ll\frac{1}{l_{o}} \tag{1}\]

where \(k\) is the wavenumber and \(C_{n}^{2}\) is the refractive index structure parameter. The refractive index variations arise due to changes in temperature and pressure with varying altitude. The refractive index structure constant \(C_{n}^{2}\) characterizes the variations of the refractive index of air and thus the strength of air turbulence. Its values range from \(10^{-17}\,m^{-2/3}\) (weak turbulence) to \(10^{-13}\,m^{-2/3}\) (strong turbulence) [39]. It serves as a valuable tool for assessing both the scintillation index and the Rytov variance. Certain models offer a means to describe the altitude dependence of \(C_{n}^{2}\) [40]. Among these, the Hufnagel-Valley Boundary (HVB) model [41] is used for long-range propagation. The model incorporates various on-site conditions such as wind speed, isoplanatic angle, and altitude. Using the HVB model, \(C_{n}^{2}\) was plotted for different wind velocities, as shown in Fig. 2a; higher wind velocities yield higher \(C_{n}^{2}\) values, indicating a more turbulent atmosphere. Fried [42] proposed another model for determining \(C_{n}^{2}\), valid only for short-range propagation. For the Fried model, \(C_{n}^{2}\) was plotted using turbulence strength parameter \(K_{o}\) values for strong, moderately strong, and moderate conditions, as shown in Fig. 2b; \(C_{n}^{2}\) increases with increasing \(K_{o}\), indicating more turbulent environments. Further, an alternative model used for describing the refractive index structure constant at low altitudes is the SLC model [43], shown in Fig. 2c.

Figure 2: Plot for structure parameter constant \(C_{n}^{2}\) with altitude (a) using HVB model with varying velocities (b) using Fried model for moderate, moderately strong, and strong conditions and (c) using SLC model.
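For concreteness, a short sketch of a standard Hufnagel-Valley-type \(C_{n}^{2}\) profile is given below (our addition; the coefficients are the widely used HV\(_{5/7}\) values, while the HVB variant of [41] adds further boundary-layer terms):

```python
import numpy as np

def hufnagel_valley(h, v=21.0, A=1.7e-14):
    """Standard Hufnagel-Valley C_n^2 profile in m^(-2/3).
    h: altitude [m], v: rms wind speed [m/s], A: ground-level C_n^2 value.
    """
    return (0.00594 * (v / 27.0) ** 2 * (1e-5 * h) ** 10 * np.exp(-h / 1000.0)
            + 2.7e-16 * np.exp(-h / 1500.0)
            + A * np.exp(-h / 100.0))

for h in (10, 100, 1000, 10000):
    print(f"h = {h:6d} m : C_n^2 = {hufnagel_valley(h):.3e} m^(-2/3)")
```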
#### 2.1.1 Scintillation and beam wandering

Atmospheric turbulence affects the propagation of optical beams, leading to wavefront distortions. It can cause fluctuations in the intensity of the beam, such that speckled patterns appear on the beam wavefront at the receiver end. This phenomenon is known as scintillation. It occurs because the turbulent atmosphere causes different parts of the beam to experience varying refractive index gradients. Scintillation causes a loss in signal-to-noise ratio and deep signal fades. Aperture averaging [44] is one of the techniques used to mitigate scintillation. Beam wandering arises from two distinct factors: atmospheric turbulence along the path of the beam and random errors in the transmitter's pointing mechanism. These two factors operate independently, and their effects accumulate over the course of propagation.

When transmitting an optical signal through free space, one observes the random displacement of the instantaneous centroid of the signal, often referred to as the "hot spot" or point of maximum irradiance. This quivering, which is assumed to follow a Gaussian distribution with variance \(\sigma^{2}\), is commonly known as beam or centroid wandering. In essence, this wandering phenomenon is a consequence of both the pointing error, denoted \(\sigma_{pe}^{2}\), stemming from Gaussian jitter and off-target tracking, and atmospheric turbulence, represented by \(\sigma_{tb}^{2}\). These two effects are mutually independent, and their combined effect results in the total variance of wandering, \(\sigma^{2}=\sigma_{pe}^{2}+\sigma_{tb}^{2}\) [45]. The impact of \(\sigma_{pe}^{2}\) and \(\sigma_{tb}^{2}\) varies depending on the weather conditions, the wavelength used, the beam size and shape, etc. In Fig. 3, the variance of the beam centroid wandering resulting from turbulence (\(\sigma_{tb}^{2}\)), the pointing error (\(\sigma_{pe}^{2}\)), and the long-term beam waist (\(w_{lt}^{2}\)) are plotted for \(\lambda=800\) nm and an initial collimated-beam radius of \(w_{0}=5\) cm. It is observed that \(w_{lt}^{2}\gg\sigma_{tb}^{2}\gg\sigma_{pe}^{2}\) for all distances. The parameters \(w_{lt}^{2}\), \(\sigma_{tb}^{2}\), and \(\sigma_{pe}^{2}\) are shown to grow logarithmically with increasing distance. The other parameters are the outer scale of turbulence \(L_{0}=1\) m and \(C_{n}^{2}=1.28\times 10^{-14}\,m^{-2/3}\) (night-time operation).

Figure 3: (Color online) Variance \(\sigma_{pe}^{2},\sigma_{tb}^{2}\) and \(w_{lt}^{2}\) for varying distances.

#### 2.1.2 Atmospheric attenuation

Signal loss and link failure are caused by atmospheric attenuation due to absorption, scattering, and scintillation. All these effects vary with time and depend on the current local conditions, weather, and distance. The atmospheric attenuation \((\tau)\) in dB for a distance \(L\) (km) and attenuation coefficient \(\beta_{\mathrm{a}}\) can be given by:

\[\tau=4.3429\,\beta_{\mathrm{a}}L \tag{2}\]

The absorption loss is mainly due to carbon dioxide molecules and water particles, whereas the scattering loss is due to the snow, fog, clouds, and rain present in the atmosphere. For weather conditions ranging from clear weather to dense fog, the scattering loss varies between 0.21 dB/km and 0.84 dB/km [46]. It can be characterized as follows:

**Attenuation coefficient due to fog and rain:** Attenuation due to scattering of the optical signal depends on the visibility range of the link, and the visibility varies with the weather conditions. The attenuation factors for fog and rain are given by:

\[\beta_{\mathrm{fog}}=\left(\frac{3.91}{V}\right)\left(\frac{\lambda}{550}\right)^{-p} \tag{3}\]

\[\beta_{\rm rain}=\left(\frac{2.8}{V}\right) \tag{4}\]

where \(V\) (km) is the visibility and \(p\) is the size distribution coefficient of scattering. Attenuation for thick fog, light fog, and haze conditions can be modeled by the Kim [47] or Kruse [48] model. The Kim model is able to describe attenuation for visibility of less than 1 km. For thick fog conditions, where the visibility is under 0.5 km, \(p=0\); thus, the attenuation is the same for all operating wavelengths. As visibility increases, the attenuation reduces overall.
Higher wavelengths suffer slightly less attenuation than lower wavelengths; see Fig. 4(a) and Fig. 4(b) to visualize the effect of fog. The size distribution coefficient \(p\) is chosen depending on the visibility range, as defined in the Kim and Kruse models. According to the Kim model,

\[p=\begin{cases}1.6&\text{when }V>50\text{ km}\\ 1.3&\text{when }6\text{ km}<V<50\text{ km}\\ 0.16\,V+0.34&\text{when }1\text{ km}<V<6\text{ km}\\ V-0.5&\text{when }0.5\text{ km}<V<1\text{ km}\\ 0&\text{when }V<0.5\text{ km}.\end{cases} \tag{5}\]

According to the Kruse model,

\[p=\begin{cases}1.6&\text{when }V>50\text{ km}\\ 1.3&\text{when }6\text{ km}<V<50\text{ km}\\ 0.585\,V^{1/3}&\text{when }V<6\text{ km}.\end{cases} \tag{6}\]

Figure 4: Specific attenuation vs visibility using wavelengths 850 nm, 950 nm and 1550 nm, which are frequently used in FSO communication, for (a) thick fog conditions and (b) light fog and haze conditions.
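Putting Eqs. (2), (3), and (5) together, a minimal sketch of the fog-attenuation calculation reads as follows (our addition; it interprets \(\beta_{\rm fog}\) as an extinction coefficient in km\(^{-1}\) that Eq. (2) converts to dB):

```python
import numpy as np

def kim_p(V):
    """Size-distribution coefficient p(V) of the Kim model, Eq. (5); V in km."""
    if V > 50.0:
        return 1.6
    if V > 6.0:
        return 1.3
    if V > 1.0:
        return 0.16 * V + 0.34
    if V > 0.5:
        return V - 0.5
    return 0.0

def fog_attenuation_db_per_km(V, wavelength_nm):
    """Specific attenuation in dB/km from Eqs. (2)-(3)."""
    beta_fog = (3.91 / V) * (wavelength_nm / 550.0) ** (-kim_p(V))  # km^-1
    return 4.3429 * beta_fog                                        # Eq. (2), per km

for V in (0.3, 0.6, 2.0, 10.0):
    a850 = fog_attenuation_db_per_km(V, 850)
    a1550 = fog_attenuation_db_per_km(V, 1550)
    print(f"V = {V:4.1f} km -> 850 nm: {a850:6.2f} dB/km, 1550 nm: {a1550:6.2f} dB/km")
```

Consistent with the discussion above, the printed values are wavelength-independent in thick fog (\(V<0.5\) km, where \(p=0\)) and favor the longer wavelength as visibility improves.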
We have investigated the average visibility of Pune city for the last two years using data collected from the Indian Meteorological Department (IMD) (refer to Fig. 5). Pune city is chosen just as an example, as we plan to perform experimental aerial quantum communication in Pune. We observe that, due to the changing weather conditions of any region, there are variations in the average visibility of the atmosphere. Therefore, the performance of any aerial quantum communication system depends on the date and time of its use. Additionally, the integration of weather monitoring systems and predictive algorithms can aid in optimizing system performance by adjusting parameters in response to changing weather conditions. Overall, understanding and mitigating the effects of weather and visibility is crucial for reliable aerial quantum communication.

Figure 5: (Color online) Comparison of average monthly visibility for Pune city for the years 2021 and 2022.

**Atmospheric extinction**: An additional significant source of signal loss during the free-space transmission of an optical beam is atmospheric extinction. This phenomenon results from the combined impact of aerosol absorption and Mie/Rayleigh scattering. For free-space communication at a constant altitude \(\overline{h}\), it can be quantified using the straightforward Beer-Lambert equation, \(\eta_{\mathrm{atm}}\left(\overline{h}\right)=e^{-\alpha\left(\overline{h}\right)z}\), where \(\alpha\left(\overline{h}\right)\) is the extinction factor, which varies with both the altitude and the wavelength of the signal [49]. Neglecting refraction, and taking into consideration a generic zenith angle (\(\phi\)), the atmospheric transmissivity can be expressed as

\[\eta_{\mathrm{atm}}\left(\overline{h},\phi\right)=\exp\left\{-\int_{0}^{z\left(\overline{h},\phi\right)}dx\,\alpha\left[\overline{h}\left(x,\phi\right)\right]\right\}. \tag{7}\]

**Atmospheric transmittance:** Atmospheric transmittance is a measure of the amount of incoming electromagnetic radiation (such as visible light, infrared, or microwave radiation) that passes through the Earth's atmosphere without being absorbed, scattered, or otherwise attenuated. Different wavelengths of electromagnetic radiation are affected differently as they pass through the Earth's atmosphere. The variation in transmittance with wavelength is primarily due to the absorption and scattering properties of the atmospheric constituents, such as gas molecules and aerosols, at different wavelengths. In Fig. 6, we present a simulation of the atmospheric transmittance for a 1 km FSO link as a function of wavelength along the zenith for the downlink configuration, carried out using the MODTRAN software, developed by Spectral Sciences Inc. (SSI) and the Air Force Research Laboratory of the United States of America (USA), for an urban location with the tropical atmospheric model and 9 km visibility. The results provide an indication of the optimum wavelengths necessary for free-space link establishment, such as the APT coarse- and fine-tracking laser beams, entangled pair distribution, and time synchronization.

Figure 6: Simulated atmospheric transmittance for zenith with different wavelengths.

#### 2.1.3 Beam divergence loss

One of the major sources of loss in establishing a point-to-point link with an accuracy suitable for single mode fiber (SMF) coupling (where the SMFs typically have a mode field diameter of around 5 \(\mu m\)) is diffraction-induced beam broadening. An optical beam propagating through the atmosphere spreads out owing to diffraction, leaving the receiver, with its narrow field of view (FOV), unable to collect a fraction of the transmitted power; this results in the beam divergence loss, also known as the geometric loss.

One may consider the Gaussian beam as a quasi-monochromatic optical mode source with wavelength \(\lambda\) and employ it for achieving free-space quantum communication. If this beam travels a distance \(z\), due to diffraction the spot size of the beam, \(w_{D}\), becomes:

\[w_{D}=w_{0}\sqrt{\left(1-\frac{z}{R_{0}}\right)^{2}+\left(\frac{z}{z_{R}}\right)^{2}} \tag{8}\]

where the initial beam spot size is \(w_{0}\) (smaller than the aperture of the transmitter), \(R_{0}\) is the radius of curvature, and \(z_{R}=\frac{\pi w_{0}^{2}}{\lambda}\) is the Rayleigh length1. Only a fraction of the initial beam is detectable, and this fraction is determined by the diffraction-induced transmissivity,

Footnote 1: For a collimated Gaussian beam \((R_{0}=\infty)\), the spot size reduces to \(w_{D}=w_{0}\sqrt{1+\left(\frac{z}{z_{R}}\right)^{2}}\).

\[\eta_{D}(z)=1-e^{-\frac{2a_{r}^{2}}{w_{D}^{2}}} \tag{9}\]

which, in the far field where \(\frac{2a_{r}^{2}}{w_{D}^{2}}\ll 1\), may be approximated as

\[\eta_{D}\simeq\eta_{D}^{\text{far}}:=\frac{2a_{r}^{2}}{w_{D}^{2}} \tag{10}\]

where \(a_{r}\) is the aperture of the receiving telescope and \(w_{D}\) the spot size of the beam. Employing the PLOB (Pirandola-Laurenza-Ottaviani-Banchi) bound [50] with this transmittance, we can estimate the upper bound of the maximum number of secret bits that can be distributed by a QKD protocol across a free-space communication channel as

\[\mathcal{U}\left(z\right)=\frac{2}{\ln 2}\frac{a_{r}^{2}}{w_{D}^{2}} \tag{11}\]

bits per use. Hence, it is important to choose the optimum transmitter and receiver optics aperture areas for optimal beam diameters and low pointing errors. Therefore, using Eq. (12), a simulation of the beam divergence loss \(L\) (dB) as a function of the diffraction-limited link distance within a local area network, for small transmitting and receiving optics aperture diameters, was carried out (refer to Fig. 7),

\[L\text{(dB)}=-10\left[\left(2\log\frac{4}{\pi}\right)+\log\left(\frac{A_{t}A_{r}}{\lambda^{2}z^{2}}\right)\right] \tag{12}\]

where \(A_{t}\) is the aperture area of the transmitter optics and \(A_{r}\) the aperture area of the receiver optics. Similarly, the beam divergence loss as a function of the transmitter and receiver optics diameters at a 500 m link distance was obtained (see Fig. 8). These results can aid in the identification of the proper transmitter and receiver optics aperture areas for the APT units to achieve longer link coverage, low pointing errors, and low diffraction-induced beam divergence loss.
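The chain from Eq. (8) to Eq. (11) is easy to evaluate numerically. The sketch below (our addition, with illustrative parameter values) prints the diffraction-limited transmissivity of a collimated beam and the corresponding PLOB upper bound \(-\log_2(1-\eta_D)\), which reduces to Eq. (11) in the far field:

```python
import numpy as np

def spot_size(w0, z, wavelength):
    """Collimated Gaussian beam spot size after distance z (Eq. 8, R0 = inf)."""
    zR = np.pi * w0 ** 2 / wavelength   # Rayleigh length
    return w0 * np.sqrt(1.0 + (z / zR) ** 2)

def eta_diffraction(a_r, wD):
    """Diffraction-induced transmissivity through an aperture of radius a_r (Eq. 9)."""
    return 1.0 - np.exp(-2.0 * a_r ** 2 / wD ** 2)

wavelength = 810e-9      # signal wavelength [m]
w0, a_r = 0.05, 0.0264   # assumed beam waist and receiver aperture radius [m]
for z in (100.0, 500.0, 2000.0):
    wD = spot_size(w0, z, wavelength)
    eta = eta_diffraction(a_r, wD)
    plob = -np.log2(1.0 - eta)   # PLOB secret-key upper bound [50], bits per use
    print(f"z = {z:6.0f} m: wD = {wD * 100:5.2f} cm, eta_D = {eta:.3f}, "
          f"PLOB <= {plob:.3f} bits/use")
```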
Similarly, the beam divergence loss as a function of the transmitter and receiver optics diameter at 500 m link distance is obtained (see Fig. 8). These results can aid in the identification of the proper transmitter and receiver optics aperture areas for the APT units to achieve longer link coverage, low pointing errors, and low diffraction-induced beam divergence loss. Figure 6: Simulated atmospheric transmittance for zenith with different wavelengths. It can be observed that the transmitter and receiver optics diameter of up to some centimeters, which can give the Rayleigh lengths of up to some hundreds of meters with low beam divergence loss are sufficient for the free-space communication within a local area mobile network. Further, an increase in the transmitting optics aperture area will effectively reduce the transmitter beamwidth, delivering the signal with more intensity, and hence reducing the beam divergence loss. However, it may lead to tight acquisition, pointing, and tracking requirements and will also increase the overall mass and the cost of the payload. Similarly, increasing the receiving aperture area scales the receiving signal power and reduces the beam divergence loss. However, it will also increase the collection of the amount of background noise by the receiver. Therefore it implies that the effective performance improvement does not always scale linearly with the increasing transmitter and receiver optics aperture areas and an optimum choice needs to be made for the trade-off [51]. Also, for a long-distance link, we can reduce the effects of beam divergence loss by exploiting several shorter link segments and using the optical relay method [34] which is feasible, especially for drone-based platforms. The overall transmissivity includes the multiplication of three types of optical transmissivity [45], \[\eta=\eta_{D}\eta_{eff}\eta_{atm} \tag{13}\] where, \(\eta_{D}\) is turbulence or diffraction-induced transmissivity, \(\eta_{eff}\) is receiver's efficiency and \(\eta_{atm}\) is atmospheric loss. Overall transmissivity reduces with increasing altitude as shown in Fig. 9. Up to this point, we have delved into the significant and inevitable challenges faced in free-space quantum communication. These challenges encompass various factors, including atmospheric turbulence, scintillation, beam wandering, atmospheric attenuation, and beam divergence loss, all of which we have extensively discussed. In addressing these real-world effects, Vasylyev et al. introduced a model utilizing an elliptic beam appro strong compatibility with actual experimental data in their influential paper [52; 53]. Liorni et al. extended this model for broader application in low earth orbit (LEO) satellite-based quantum communication [54] They factored in considerations such as the refractive index structure constant and the density of scattering particles, maintaining consistency with LEO satellite conditions, and evaluated their model under various weather scenarios. Now, our focus shifts to assessing the combined and realistic impact of these factors at lower altitudes, where communication can be facilitated using drones. To do this, we adapt their approach by incorporating the refractive index structure constant applicable to lower altitudes [40; 43] and introduce our hybrid methodology tailored for shorter altitude ranges. 
## 3 A hybrid model for low altitude signal transmission

In this section, we present a hybrid model based on the model that exploits the properties of the Gaussian elliptical beam, proposed by Vasylyev _et al._ [52; 53]. Furthermore, we apply the generalized approach and incorporate day-time and night-time conditions, as introduced by Liorni _et al._ in their seminal paper [54]. Their approach influences the transmittance value significantly, as the transmittance relies on both the beam parameters \(\mathbf{V}\) and the radius of the receiving aperture. In order to enhance the reader's grasp of the elliptic beam approximation and its modified version, we provide a concise elucidation of the fundamental theory.

A Gaussian beam is projected through a link that traverses both the atmosphere and vacuum, originating from either an airborne transmitter (drone) or a ground station. This link is distinguished by its non-uniform characteristics. Typically, the fluctuating intensity transmittance of this signal (the received beam) as it passes through a circular aperture with a radius \(a_{r}\) in the receiving telescope is formulated as follows (see Refs. [53; 55] for details)

\[\eta=\int_{|\mathbf{\rho}|^{2}\leq a_{r}^{2}}\mathrm{d}^{2}\mathbf{\rho}\left|\mathrm{u}\left(\mathbf{\rho},z\right)\right|^{2} \tag{14}\]

In this context, the function \(u\left(\mathbf{\rho},z\right)\) represents the beam envelope at the receiver plane, which is situated at a distance \(z\) from the transmitter. The quantity \(\left|u\left(\mathbf{\rho},z\right)\right|^{2}\) signifies the intensity normalized over the entire \(\mathbf{\rho}\) plane, where \(\mathbf{\rho}\) denotes the position vector within the transverse plane. The vector parameter \(\mathbf{V}\) provides a comprehensive description of the beam's state at the receiver plane (see Fig. 1 in Ref. [52]), and it is described as

\[\mathbf{V}=\left(x_{0},y_{0},W_{1},W_{2},\theta\right), \tag{15}\]

where \(x_{0},y_{0}\) are the coordinates of the beam centroid, \(W_{1}\), \(W_{2}\) are the dimensions of the elliptical beam profile (characterized by its principal semi-axes), and \(\theta\) is the orientation angle of the elliptical beam. The transmittance is influenced by these beam parameters in conjunction with the radius of the receiving aperture (\(a_{r}\)). For an elliptical beam interacting with a circular aperture of radius \(a_{r}\), the transmittance defined by Equation (14) can be articulated as [53]

\[\eta\left(x_{0},y_{0},W_{1},W_{2},\theta\right)=\frac{2\,\chi_{\mathrm{ext}}}{\pi W_{1}W_{2}}\int_{0}^{a_{r}}\rho\,\mathrm{d}\rho\int_{0}^{2\pi}\mathrm{d}\varphi\,\mathrm{e}^{-2\mathrm{A}\left(\rho\cos\varphi-\rho_{0}\right)^{2}}\mathrm{e}^{-2\mathrm{B}\rho^{2}\sin^{2}\varphi}e^{-2\mathrm{C}\left(\rho\cos\varphi-\rho_{0}\right)\rho\sin\varphi} \tag{16}\]

Here, the symbol \(a_{r}\) signifies the aperture's radius, while \(\rho\) and \(\varphi\) are used to express the polar coordinates of the vector \(\mathbf{\rho}\): we may write \(x=\rho\cos\varphi\) and \(y=\rho\sin\varphi\), and \(x_{0}=\rho_{0}\cos\varphi_{0}\) and \(y_{0}=\rho_{0}\sin\varphi_{0}\), where \(\rho_{0}\) and \(\varphi_{0}\) denote the polar coordinates associated with the vector \(\mathbf{\rho}_{0}\).
Additionally, the expressions for the constants are \(\mathrm{A}=\left(\frac{\cos^{2}\left(\theta-\varphi_{0}\right)}{W_{1}^{2}}+\frac{\sin^{2}\left(\theta-\varphi_{0}\right)}{W_{2}^{2}}\right)\), \(\mathrm{B}=\left(\frac{\sin^{2}\left(\theta-\varphi_{0}\right)}{W_{1}^{2}}+\frac{\cos^{2}\left(\theta-\varphi_{0}\right)}{W_{2}^{2}}\right)\), and \(\mathrm{C}=\left(\frac{1}{W_{1}^{2}}-\frac{1}{W_{2}^{2}}\right)\sin 2\left(\theta-\varphi_{0}\right)\). Here, \(\chi_{\mathrm{ext}}\) accounts for the influence of _atmospheric extinction_, which encompasses factors like back-scattering and absorption within the atmosphere [56]. With this elliptic beam approximation method, one can relate the atmospheric effects in a free-space link to the signal at the receiver's end. To make it more applicable to real-life free-space quantum communication, Liorni _et al._ proposed a generalized model [54], generalized in the sense that it involves a non-uniform link between a drone (or satellite) and the ground. To calculate the moments of the distributions of the elliptic Gaussian beam parameters, we adopt the same Heaviside-function link profile as employed in Liorni's model. We evaluate the expressions for the first and second moments of the beam parameters (\(\mathbf{V}\)) by adapting Equations (4) through (9) of Ref. [54] to the conditions specific to drone-based communication. We assume that the orientation angle \(\theta\) of the elliptical profile follows a uniform distribution within the interval \(\left[0,\frac{\pi}{2}\right]\). In the context of up-links, the mean value and variance of the beam's centroid position are the same in the \(x\) and \(y\) directions and are given by \(\left\langle x_{0}\right\rangle=\left\langle y_{0}\right\rangle=0\) and \(\left\langle x_{0}^{2}\right\rangle=\left\langle y_{0}^{2}\right\rangle=0.419\,\sigma_{R}^{2}w_{D}^{2}\Omega^{-\frac{7}{6}}\), where \(\sigma_{R}^{2}=1.23\,C_{n}^{2}k^{\frac{7}{6}}z^{\frac{11}{6}}\) is referred to as the _Rytov parameter_, a useful indicator of the integrated turbulence strength for extended propagation, and \(\Omega=\frac{k\,w_{D}^{2}}{2z}\) represents the Fresnel number, with \(k\) the optical wave number and \(w_{D}\) the beam spot size at the receiver. In the chosen reference frame, the condition is set so that \(\left\langle x_{0}\right\rangle=\left\langle y_{0}\right\rangle=0\). The mean and (co)variance of \(W_{i}^{2}\) can be expressed as

\[\left\langle W_{i}^{2}\right\rangle = \frac{w_{D}^{2}}{\Omega^{2}}\left(1+\frac{\pi}{8}\,zn_{0}w_{D}^{2}+2.6\,\sigma_{R}^{2}\Omega^{\frac{5}{6}}\right),\]

\[\left\langle\Delta W_{i}^{2}\Delta W_{j}^{2}\right\rangle = (2\delta_{ij}-0.8)\,\frac{w_{D}^{4}}{\Omega^{\frac{7}{6}}}\left(1+\frac{\pi}{8}\,zn_{0}w_{D}^{2}\right)\sigma_{R}^{2},\]

where \(n_{0}\) denotes the scattering-particle density2. Similar expressions are relevant for down-links; the position of the elliptic beam centroid satisfies \(\left\langle x_{0}\right\rangle=\left\langle y_{0}\right\rangle=0\) and \(\left\langle x_{0}^{2}\right\rangle=\left\langle y_{0}^{2}\right\rangle=\left(\alpha_{p}\,z\right)^{2}\), and the semi-axes of the elliptic beam profile obey

Footnote 2: To estimate the value of \(n_{0}\), which primarily comprises water droplets, we utilize the atmospheric water vapor content profile.
This profile serves as our basis for estimating the scattering-particle density [57, 58].

\[\left\langle W_{i}^{2}\right\rangle = \frac{w_{D}^{2}}{\Omega^{2}}\left(1+\frac{\pi}{24}\,zn_{0}w_{D}^{2}+1.6\,\sigma_{R}^{2}\Omega^{\frac{5}{6}}\right),\]

\[\left\langle\Delta W_{i}^{2}\Delta W_{j}^{2}\right\rangle = (2\delta_{ij}-0.8)\,\frac{3}{8}\,\frac{w_{D}^{4}}{\Omega^{\frac{7}{6}}}\left(1+\frac{\pi}{24}\,zn_{0}w_{D}^{2}\right)\sigma_{R}^{2},\]

In this context, the symbol \(\alpha_{p}\approx 2\)\(\mu\)rad denotes the approximate angular pointing error. Afterward, we employ the probability distributions of the elliptic beam parameters (as expressed in equation 15) to compute the probability distribution of the transmittance (PDT) using equation 16 through a Monte Carlo random-sampling procedure.

#### 3.0.1 Performance analysis of simulation result

The proposed hybrid approach primarily targets short-altitude communication, employing Gaussian-beam-based quantum communication via drones. To validate the applicability and performance integrity of the proposed model in the context of FSO communication, we analyze the probability distribution of the transmittance (PDT) of this model. In our analysis, we appropriately employ both normal and uniform distributions [59] for the beam parameters (\(\mathbf{V}\)) and incorporate specific optical values [32] to emulate our model (refer to Table 2). To generate the PDT plots, we draw a substantial number (\(10^{6}\)) of random 5-tuples of beam parameters and round the resulting transmittances to five decimal places for the PDT histograms. We present the transmittance performance in various scenarios encompassing both up-link and down-link configurations as well as day and night conditions, at altitudes of \(30\) m and \(220\) m (refer to Fig. 10). Notably, for the down-link configuration, the transmittance probability exhibits similar trends in both day and night conditions (refer to Fig. 10 (a) and 10 (b)). At an altitude of \(30\) m, we observe peak transmittance probability values occurring for transmittance values of about \(0.25\) and \(0.5\). In this scenario, the probability distribution is relatively broad when compared to the \(220\) m altitude scenario. Conversely, at \(220\) m altitude, the peak transmittance probability occurs only in the vicinity of a transmittance value of \(0.5\), with a sharply peaked distribution and higher magnitude, evident in both day and night conditions. In the up-link configuration, peak transmittance values are consistently located near a transmittance value of \(0.5\) for both day and night conditions (see Fig. 10 (c) and 10 (d)). The distribution is broader and slightly lower in value for the \(30\) m altitude compared to the \(220\) m altitude scenario. This observation is attributed to the lower losses incurred at low altitudes (\(30\) m), as there is relatively less interaction with the atmosphere. Conversely, at high altitudes (\(220\) m), the losses are substantial, resulting in a sharper distribution.

Figure 10: (Color online) Plot of PDT variation with different altitude positions at the zenith position for our hybrid model: (a) PDT at day time condition under down-link configuration, (b) PDT at night time condition under down-link configuration, (c) PDT at day time condition under up-link configuration, (d) PDT at night time condition under up-link configuration.

\begin{table} \begin{tabular}{c c c} \hline Parameter & Value & Description \\ \hline \(w_{D}\) & 1.15 cm & Down-link / up-link \\ \(a_{r}\) & 2.64 cm & Down-link / up-link \\ \(\lambda\) & 810 nm & Wavelength of the signal light \\ \(\beta\) & 0.7 & Parameter in \(\chi_{\text{ext}}(\phi)\) \\ \(\alpha_{p}\) & \(2\times 10^{-6}\) rad & Pointing error \\ \(\overline{h}\) & \(18.5\,\text{m}-240\,\text{m}\) & Altitude of drone \\ \(n_{0}\) & 0.61 m\({}^{-3}\) & Night-time condition \\ \(n_{0}\) & 0.01 m\({}^{-3}\) & Day-time condition \\ \(C_{n}^{2}\) & \(\frac{4.008\times 10^{-13}}{h^{0.64}}\) & Night-time condition \\ \(C_{n}^{2}\) & \(\frac{3.13\times 10^{-13}}{h}\) & Day-time condition \\ \hline \end{tabular} \end{table} Table 2: Parameters linked to the optical and technical attributes of the transmission link with weather conditions.
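To make the sampling procedure concrete, the following sketch (our addition) implements a stripped-down version of the PDT computation: it evaluates Eq. (16) on a polar grid and histograms the transmittance over randomly drawn beam-parameter 5-tuples. The aperture and waist are taken from Table 2, but the beam statistics are illustrative stand-ins, not the moments computed by the hybrid model:

```python
import numpy as np

rng = np.random.default_rng(1)

def eta_elliptic(a_r, x0, y0, W1, W2, theta, n=64):
    """Eq. (16) on a midpoint polar grid (chi_ext set to 1 for simplicity)."""
    rho0, phi0 = np.hypot(x0, y0), np.arctan2(y0, x0)
    A = np.cos(theta - phi0) ** 2 / W1 ** 2 + np.sin(theta - phi0) ** 2 / W2 ** 2
    B = np.sin(theta - phi0) ** 2 / W1 ** 2 + np.cos(theta - phi0) ** 2 / W2 ** 2
    C = (1.0 / W1 ** 2 - 1.0 / W2 ** 2) * np.sin(2.0 * (theta - phi0))
    r = (np.arange(n) + 0.5) * (a_r / n)
    p = (np.arange(n) + 0.5) * (2.0 * np.pi / n)
    R, P = np.meshgrid(r, p, indexing="ij")
    x, y = R * np.cos(P) - rho0, R * np.sin(P)
    f = R * np.exp(-2 * A * x ** 2 - 2 * B * y ** 2 - 2 * C * x * y)
    return 2.0 / (np.pi * W1 * W2) * f.sum() * (a_r / n) * (2.0 * np.pi / n)

a_r, wD = 0.0264, 0.0115          # from Table 2 [m]
sig_c = 2e-3                      # assumed centroid-wander std dev [m]
mW2, sW2 = (1.3 * wD) ** 2, 0.3 * wD ** 2  # assumed mean/std of W_i^2

etas = []
for _ in range(5000):
    x0, y0 = rng.normal(0.0, sig_c, size=2)
    W1 = np.sqrt(abs(rng.normal(mW2, sW2)))
    W2 = np.sqrt(abs(rng.normal(mW2, sW2)))
    theta = rng.uniform(0.0, np.pi / 2)   # uniform orientation on [0, pi/2]
    etas.append(eta_elliptic(a_r, x0, y0, W1, W2, theta))

pdt, edges = np.histogram(etas, bins=40, range=(0.0, 1.0), density=True)
print(f"mean transmittance = {np.mean(etas):.3f}")
```

In the full model, the means and (co)variances of \(x_0\), \(y_0\), and \(W_i^2\) quoted above would replace the illustrative values, yielding the PDTs of Fig. 10.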
We have also generated plots illustrating the variation in transmittance with altitude (\(\overline{h}\)) and zenith angle (\(\phi\)), as shown in Fig. 11, for both up-link and down-link configurations, encompassing both day and night conditions. To generate these plots, we used random sets of 5-tuples, each containing \(1000\) samples drawn from the appropriate probability distributions. These random samples of the beam parameters allowed us to simulate the transmittance values across various combinations of altitude and zenith angle. Notably, the curvature of the transmittance values across different combinations of altitude and zenith angle exhibits similar trends in all cases. These findings align with the results obtained from the PDT analysis. It is worth mentioning that, due to the relatively low altitude of the drone-based FSO communication system, the variation in transmission remains nearly consistent across different environmental conditions. However, our hybrid approach can be extended to various values of \(C_{n}^{2}\) for higher altitudes, as detailed in Appendix A, to gain a deeper understanding of its applicability under such conditions.

## 4 Link configuration, budgeting and margin, and time synchronization

### 4.1 Link configuration

For longer link distances, it is assumed that the key generation rate of an uplink configuration is roughly one order of magnitude lower than that of the downlink [23; 60], while in the downlink scenario pointing errors are notably relevant. In the uplink, pointing errors can be mitigated, since ground stations can employ larger and more sophisticated optical systems. However, the turbulence is concentrated near the earth's surface, so for uplink transmission the turbulence-induced distortion at the beginning of the path significantly increases the beam wandering and divergence angle, resulting in a larger channel attenuation compared to the downlink transmission. A comparison of the atmospheric transmittance for a 1 km FSO link as a function of the angle with the zenith, for the uplink and downlink configurations at different wavelengths, was carried out using the MODTRAN software. Figs. 12 and 13 show the simulated atmospheric transmittance for an urban location with the tropical atmospheric model and 9 km visibility. From the simulation results, we observe that for shorter links the transmittance for the uplink and downlink configurations is comparable. Since aerial platforms can fly at much lower altitudes, the total link budget will have only minor deviations between the uplink and downlink in terms of geometric loss, atmospheric turbulence, and other types of attenuation.
Figure 11: (Color online) Variation of transmittance with altitude (\(\overline{h}\)) and zenith angle (\(\phi\)) for our hybrid model: (a) Transmittance at day time condition under down-link configuration, (b) Transmittance at night time condition under down-link configuration, (c) Transmittance at day time condition under up-link configuration, (d) Transmittance at night time condition under up-link configuration.

#### 4.1.1 Integrated acquisition, pointing, and tracking (APT)

For aerial quantum communication, distributing the photons while the platforms move imposes stringent requirements on the dynamically established aerial-vehicle-to-ground-station links and on keeping the polarization and timing stable during the whole distribution process. Thus, there is a need to integrate all the elements for polarization compensation, adaptive optics, collimation, and tracking into an integrated APT unit and to perform two-stage tracking, viz. coarse and fine [29]. We have presented a high-level architecture of an APT system in Fig. 14.

Figure 14: Schematic of integrated acquisition, pointing, and tracking (APT) unit for aerial quantum communication, where the abbreviations used are as follows- C: collimator, F: optical fiber, QS: quantum signal, DM: dichroic mirror, PSD: position sensitive detector, FSM: fast-steering mirror, OAPM: off-axis parabolic mirror, CMOS: camera/sensor.

An APT unit consists of a motorized three-axis (pitch, yaw, and roll) gimbal mount along with a telescope platform. Coarse pointing alignment of the transmitter/receiver telescope is achieved by moving the telescope platform with the gimbal mount using a proportion-integration-differentiation (PID) error signal, calculated from the target image acquired by a coaxial zoom camera. The target for this imaging identification is an uncollimated laser beam, typically in the NIR or IR wavelength range, on the corresponding receiver or transmitter side. The telescope on each APT unit collimates the light to a beam size that is optimum for reducing the beam divergence loss, as discussed in Section 2.1.3. A carbon-fiber base plate can be used for the telescope platform, where the composite structure design can be optimized for the best thermal stability. Typically, a 90-degree off-axis parabolic mirror (OAPM) with an aperture comparable to the desired beam width is used for collimation, whereas the beacon laser beams for the second-stage fine tracking pass through the central hole of the parabolic mirror. The beacon laser has a small aperture; however, as it propagates through the link it provides a broader FOV, which helps in the coarse tracking. Subsequently, the fine tracking is performed using a fast-steering mirror (FSM) and a position-sensitive detector (PSD). The PSD is mounted at the image position of the transmitter or receiver fiber port behind a dichroic mirror (DM), as shown in Fig. 14. It monitors the focal position of the beacon light, generates an error signal, and feeds it back to the FSM; the FSM aligns itself to reduce this error and achieves tracking accuracy within the 5 \(\mu\)m range. With proper feedback electronics, the transmitter and receiver units can be pointed at each other within the accuracy required for SMF coupling.

APT systems for aerial quantum communication face significant challenges due to aerial-platform-induced jitter and vibrations and the need for precise synchronization. Mechanical vibrations and jitter from aerial platforms can disrupt the optical alignment, requiring real-time feedback-based compensation mechanisms such as fast steering mirrors. Effective vibration isolation is also crucial, as environmental factors such as wind and atmospheric turbulence also impact the stability.
Moreover, there are constraints on the SWaP (size, weight, and power) factors, as the payload needs to be lightweight and power-efficient for aerial deployment. Overcoming these challenges demands advanced technology and robust testing to maintain a stable optical link while minimizing the system's physical footprint.

### 4.2 Link budgeting

A link budget aims to calculate and analyze the overall performance of a communication link or system, mainly to determine what distances one could reach with given equipment and whether additional power is available for FSO links under given atmospheric conditions, especially in wireless communication. QKD systems rely on optical communication link analysis to ensure that enough photons arrive at the receiver. The main factors that must be considered are the distance between the transmitter and the receiver, the operating wavelength, all the losses related to atmospheric conditions, geometric losses, channel turbulence, background noise, and optical losses. The link budget determines the minimum power or signal strength required for a communication link to function under specific conditions. In contrast, the link margin represents the additional power or signal strength added to ensure reliability; the link margin is thus directly related to the link budget.

#### 4.2.1 Link margin

The link margin is the gap between the actual received power and the minimum required received signal level:

\[\text{Link Margin}=P_{\text{t}}-A_{\text{tx}}-20\text{log}\left(\frac{\sqrt{2}L\theta_{div}}{\text{D}}\right)-A_{\text{rx}}-\alpha_{\text{fog}}L-S_{\text{r}} \tag{17}\]

where \(P_{\text{t}}\) is the transmitted power, \(A_{\text{tx}}\) and \(A_{\text{rx}}\) are the coupling losses at the transmitter and receiver, \(L\) is the range of the FSO link, \(\theta_{div}\) is the half-angle divergence, \(D\) is the receiver aperture diameter, \(\alpha_{\text{fog}}\) is the attenuation loss due to moisture, and \(S_{\text{r}}\) is the sensitivity of the receiver. It is imperative that the link margin remain positive, and efforts should be directed toward its maximization; if the link margin becomes negative, the FSO link is no longer operational. In Fig. 15, we have simulated the link margin as a function of link range for various aperture diameters of the receiver lens. It is observed that the link margin decreases with increasing distance; however, as we increase the aperture diameter of the receiving optics, the link margin increases.

Figure 15: (Color online) Link margin versus link range with various aperture diameters of the receiver lens
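A direct transcription of Eq. (17) is given below (our addition; all numerical inputs are hypothetical terminal parameters, chosen only to illustrate the trends of Fig. 15):

```python
import numpy as np

def link_margin_db(P_t_dbm, A_tx_db, A_rx_db, L_km, theta_div_rad,
                   D_rx_m, alpha_fog_db_per_km, S_r_dbm):
    """Link margin of Eq. (17); powers in dBm, losses in dB.
    The geometric term compares the sqrt(2)*L*theta_div beam footprint
    with the receiver aperture diameter D."""
    geometric = 20.0 * np.log10(np.sqrt(2.0) * (L_km * 1e3) * theta_div_rad / D_rx_m)
    return (P_t_dbm - A_tx_db - geometric - A_rx_db
            - alpha_fog_db_per_km * L_km - S_r_dbm)

# Hypothetical FSO terminal: 10 dBm laser, 2 dB coupling loss per side,
# 1 mrad half-angle divergence, 10 cm aperture, 4 dB/km fog, -40 dBm sensitivity
for L in (0.2, 0.5, 1.0, 2.0):
    m = link_margin_db(10.0, 2.0, 2.0, L, 1e-3, 0.10, 4.0, -40.0)
    print(f"L = {L:.1f} km: margin = {m:6.2f} dB")
```

As expected, the margin shrinks with range and grows with the receiver aperture diameter, matching the behavior described above.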
### Time synchronization

Time synchronization is essential to provide a time reference that allows two distant users to generate correlated information simultaneously. Components like lasers, modulators, and detectors can introduce jitter due to their finite response times and inherent noise. This jitter can be mitigated by the use of stable and precise reference clocks, the implementation of delay-compensation techniques, and the use of high-quality optical components with low jitter, all of which are necessary to ensure accurate synchronization.

For aerial quantum communication, the distance between the transmitter and receiver changes continuously; hence, time synchronization must be implemented in a particular manner. A fault-tolerant synchronization scheme based on de Bruijn sequences is suitable for timing and synchronization over high-loss space-to-ground communication channels: it provides an efficient sequence-position encoding that achieves robustness to beacon corruption in the decoding process [61]. Fiber-optic two-way quantum clock synchronization combined with microwave frequency transfer technology gives picosecond-scale synchronization precision, which promises femtosecond precision over intercity optical fiber links in the future [62]. Qubit-based synchronization (Qubit4sync) with a cross-correlation scheme is a synchronization procedure that needs only the same photons that encode the quantum states exchanged in the QKD protocol; this avoids additional hardware, makes the system cheaper, and lowers the probability of hardware-induced failure [63]. Qubit-based clock synchronization using a Bayesian probabilistic algorithm efficiently finds the clock offset without sacrificing secure key; in comparison with other protocols, it is more robust to channel loss, noise, and clock drift [64]. In satellite-to-ground long-distance quantum communication, where independent reference clocks are employed, a GPS pulse-per-second (PPS) signal and an assistant pulsed laser are used for time synchronization [65]. In 2021, the Space Application Centre (SAC) of ISRO used a novel synchronization technique enabled by NavIC over a distance of 300 m to achieve a secure key rate of 300 kbps [66].

## 5 Simulation of quantum teleportation using entanglement swapping through a swarm of drone network

In this section, we present a use case in which we simulate quantum teleportation between two distant nodes using entanglement swapping through a swarm of drones. We have performed the simulation using the Network Simulator for Quantum Information using Discrete events (NetSquid). NetSquid [67] is a software tool for the modeling and simulation of scalable quantum networks developed by QuTech. This QN simulation points toward a software-defined networking (SDN)-based architecture [68] to manage the distribution of end-to-end entangled pairs between two ground stations (GSs). The architecture is adaptable for quantum computing and QKD services.

In the simulation scheme presented in Fig. 16, a swarm of drones comprising \(n\) quantum repeaters (QR), designated \(D_{n}^{QR}\), is distributed between two end stations performing the quantum teleportation. The drones nearest to the end stations Alice and Bob are referred to as \(D_{1}^{QR}\) and \(D_{n}^{QR}\), respectively. We consider that each QR drone has a quantum memory (QM) that can house two quantum particles, each entangled with a particle of an adjacent neighboring QR drone. When a QR drone performs a Bell-state measurement (BSM) on its two quantum particles, the measurement results in entanglement swapping between the particles of the two neighboring QR drones.

Figure 16: Scheme for quantum teleportation using entanglement swapping through a swarm of drones [68].

Figure 17: Entanglement swapping [69].

The entire scheme is discussed in detail below; a minimal sketch of the swapping chain follows the list:

* A swarm of \(n\) QR drones (\(D_{1}^{QR}\) to \(D_{n}^{QR}\)) is distributed between the two end stations performing quantum teleportation, Alice and Bob.
* Each QR drone (\(D_{i}^{QR}\)) possesses two particles, each entangled with a particle of the two neighboring QR drones (\(D_{i-1}^{QR}\) and \(D_{i+1}^{QR}\)). The entangled pairs may be stored on the QR drones using quantum memories before take-off or distributed in real time (refer to Fig. 17).
* The end stations Alice and Bob share an entangled pair with \(D_{1}^{QR}\) and \(D_{n}^{QR}\), respectively.
* Quantum entanglement swapping is executed at \(D_{1}^{QR}\), resulting in entanglement between Alice and \(D_{2}^{QR}\).
* The entanglement swapping [69] is then repeated successively along the rest of the QR drone chain, from \(D_{2}^{QR}\) to \(D_{n}^{QR}\). After \(n\) entanglement swaps, Alice's particle is entangled with Bob's particle.
* After the establishment of an entangled pair between Alice and Bob, Alice performs, for the quantum teleportation, a complete measurement of the _von Neumann_ type on the joint system consisting of her particle from the shared EPR pair and the particle in the _arbitrary unknown state_ (\(|\psi\rangle\)) whose information needs to be shared.
* She then sends the outcome of her measurement to Bob through the classical channel; Bob applies the required unitary (rotation) operations on his EPR particle to recover \(|\psi\rangle\). Hence, the state is teleported from Alice's lab to Bob's lab.
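The logic of the swapping chain can be sketched with simple Pauli-frame bookkeeping: each Bell pair is labelled by two bits, and an ideal BSM combines the labels of the two consumed pairs with the (random) measurement outcome by bitwise XOR, up to a labelling convention. The sketch below assumes this standard convention and the ideal, noiseless case; it reproduces only the logic of the scheme, not the NetSquid noise and timing models behind Table 3.

```python
import random

def bsm_swap(left_label, right_label, rng):
    """Ideal Bell-state measurement on the two memory qubits of one QR drone.

    Bell states are labelled by bit pairs (x, z); the BSM outcome is uniformly
    random, and the outer pair ends up in the XOR of both labels and the
    outcome (standard Pauli-frame tracking, convention-dependent)."""
    outcome = (rng.randint(0, 1), rng.randint(0, 1))
    new_label = (left_label[0] ^ right_label[0] ^ outcome[0],
                 left_label[1] ^ right_label[1] ^ outcome[1])
    return outcome, new_label

def swap_chain(n_drones, rng=None):
    """Swap entanglement along Alice -> D1 -> ... -> Dn -> Bob and return the
    Pauli correction (x, z) Bob must apply to share |Phi+> with Alice."""
    rng = rng or random.Random(1)
    label = (0, 0)                    # Alice-D1 pair starts in |Phi+>
    for _ in range(n_drones):         # each QR drone performs one BSM
        _, label = bsm_swap(label, (0, 0), rng)  # next hop also starts in |Phi+>
    return label

x, z = swap_chain(n_drones=10)
print(f"after 10 swaps Bob applies X^{x} Z^{z}; teleportation then proceeds")
```

In this ideal case the post-correction fidelity is unity; the degraded End-to-End fidelity reported in Table 3 arises from the memory and channel noise accumulated over the ten swaps, which this bookkeeping deliberately ignores.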
The simulation of the above quantum teleportation scheme was carried out for different configurations on NetSquid. We calculated the fidelities of the resulting teleported states and performed a time analysis of the execution of the entire scheme. In the Node-to-Node configuration, quantum teleportation between Alice and Bob separated by a distance of \(5\) km, without any intermediate QR drone, was carried out. In the End-to-End configuration, quantum teleportation over a \(50\) km distance was carried out using entanglement swapping as per the above scheme, through ten QR drones, each separated by a \(5\) km distance between Alice and Bob. The results are shown in Table 3.

\begin{table} \begin{tabular}{c c c} \hline Parameters & Node-to-Node & End-to-End \\ \hline Fidelity & 0.964 & 0.1516 \\ Time (ns) & 5 & 236111 \\ \hline \end{tabular} \end{table} Table 3: Simulation of quantum teleportation using entanglement swapping on NetSquid for different configurations.

## 6 Conclusion

In this work, we have emphasized the necessity and importance of non-terrestrial platforms for future quantum communication, which will exploit free-space mediums in an optimal way to provide end-to-end solutions. We have attempted to adequately address the challenges of aerial quantum communication. We have introduced a hybrid model that elaborates on the characteristics of transmittance with the variation of zenith angle in a densely humid medium and for low-altitude signal transmission. Further, we have analyzed the average visibility of Pune city over the last two years as a feasibility study for implementing aerial quantum communication using drones. Finally, we have simulated quantum teleportation between two distant nodes via a swarm of drones in a quantum network utilizing QSDN. SDN technology will have a significant role in near-future integrated quantum networks and services. Our work aims to stimulate further research and to explore the boundaries of this promising field.
## Acknowledgements The authors acknowledge the support from R&D IT, MeitY, India. We also thank Ms. Akshara Jayanand Kaginalkar, C-DAC, Pune, for making the meteorological data available.
The growing demand for realizing global-scale quantum communication services calls for significant investigation toward the creation of practical quantum communication networks with all-time, all-location coverage. In this direction, non-terrestrial quantum key distribution is expected to play an important role by offering flexibility, maneuverability, relay stations, on-demand networks, and last-mile coverage. In this work, we summarize the research and development to date on non-terrestrial platforms for quantum communication, focusing on the associated challenges and relevant models. Furthermore, aiming at an analysis that goes beyond existing findings, a hybrid model is introduced that combines the features of the Vasylyev et al. model and the Liorni et al. model. In this hybrid model, a spherical beam is adapted to an elliptic beam, and the transmission characteristics under densely humid weather conditions
2309.07462
Are Large Language Model-based Evaluators the Solution to Scaling Up Multilingual Evaluation?
Large Language Models (LLMs) excel in various Natural Language Processing (NLP) tasks, yet their evaluation, particularly in languages beyond the top $20$, remains inadequate due to limitations in existing benchmarks and metrics. Employing LLMs as evaluators to rank or score other models' outputs emerges as a viable solution, addressing the constraints tied to human annotators and established benchmarks. In this study, we explore the potential of LLM-based evaluators, specifically GPT-4, in enhancing multilingual evaluation by calibrating them against $20$K human judgments across three text-generation tasks, five metrics, and eight languages. Our analysis reveals a bias in GPT4-based evaluators towards higher scores, underscoring the necessity of calibration with native speaker judgments, especially in low-resource and non-Latin script languages, to ensure accurate evaluation of LLM performance across diverse languages.
Rishav Hada, Varun Gumma, Adrian de Wynter, Harshita Diddee, Mohamed Ahmed, Monojit Choudhury, Kalika Bali, Sunayana Sitaram
2023-09-14T06:41:58
http://arxiv.org/abs/2309.07462v2
# Are Large Language Model-based Evaluators the Solution to Scaling Up Multilingual Evaluation?

###### Abstract

Large Language Models (LLMs) have demonstrated impressive performance on Natural Language Processing (NLP) tasks, such as Question Answering, Summarization, and Classification. The use of LLMs as evaluators, which can rank or score the output of other models (usually LLMs), has become increasingly popular due to the limitations of current evaluation techniques, including the lack of appropriate benchmarks and metrics, cost, and access to human annotators. While LLMs are capable of handling approximately \(100\) languages, the majority of languages beyond the top \(20\) lack systematic evaluation across various tasks, metrics, and benchmarks. This creates an urgent need to scale up multilingual evaluation to ensure a precise understanding of LLM performance across diverse languages. LLM-based evaluators seem like the perfect solution to this problem, as they do not require human annotators, human-created references, or benchmarks and can theoretically be used to evaluate any language covered by the LLM. In this paper, we investigate whether LLM-based evaluators can help scale up multilingual evaluation. Specifically, we calibrate LLM-based evaluation against 20k human judgments of five metrics across three text-generation tasks in eight languages. Our findings indicate that LLM-based evaluators may exhibit bias towards higher scores and should be used with caution; they should always be calibrated with a dataset of native speaker judgments, particularly in low-resource and non-Latin script languages.

+ Footnote †: Contact: sunayana.sitaram@microsoft.com

## 1 Introduction

Large Language Models (LLMs) perform impressively on many tasks today, surpassing human-level performance on some tasks and domains (OpenAI, 2023; Touvron et al., 2023; Google et al., 2023). LLM performance evaluation on standard NLP benchmarks can help estimate how well an LLM is likely to perform in the real world. However, LLM benchmarking has limitations due to a number of factors, including the lack of evaluation benchmarks that represent real-world tasks, benchmark saturation, data contamination, and the low correlation between automated metrics and human judgment (Jacovi et al., 2023; Chang et al., 2023; Reiter, 2018; Liu and Liu, 2008). As a result, several evaluation approaches have been explored beyond benchmarking to estimate the capabilities of these models (Chang et al., 2023). While LLMs exhibit strong performance in various tasks in English, their capabilities are restricted when it comes to other languages. As a result, the digital divide may worsen, preventing a significant portion of the global population from reaping the benefits of LLMs and potentially causing them to be disproportionately harmed by LLMs. Ahuja et al. (2023) conduct a comprehensive benchmarking of LLMs across \(16\) tasks and \(71\) languages and show that generative LLMs such as GPT3 (Brown et al., 2020; OpenAI, 2022), GPT4 (OpenAI, 2023) and BLOOMZ (Muennighoff et al., 2022) are worse than SOTA fine-tuned models such as TULRv6 (Patra et al., 2023) and XLM-R (Conneau et al., 2020) on many languages and tasks. They find that LLMs perform worse in languages that are transcribed in non-Latin scripts and in under-resourced languages.
In fact, performance on languages beyond the top \(50\) highest-resourced languages is largely unknown, due to the lack of language coverage in multilingual benchmarks (Ahuja et al., 2022) and the lack of other systematic evaluations beyond benchmarking covering a diverse set of languages. Certain language families, such as Indo-European, are over-represented in multilingual benchmarks, with other language families such as Niger-Congo and Sino-Tibetan having very little presence. There is a scarcity of benchmarks designed to assess tasks that simulate actual LLM usage in real-world scenarios. The metrics employed in these benchmarks might not consistently align with human evaluations and could be ill-suited for languages with rich morphology or complex writing systems, as well as for phenomena arising from language contact such as borrowing, code-mixing, and transliteration. Clearly, evaluation by native speakers proficient in a language is the gold standard for getting an accurate picture of the performance of a model, particularly in complex tasks without well-defined automated metrics. However, budget constraints, turnaround time, and the lack of easy access to native speakers in some languages lead to challenges in scaling. This leads to a situation in which the performance of LLMs is unknown for most languages of the world, creating an urgent need to scale up multilingual evaluation Ahuja et al. (2022) to ensure that LLMs perform well on many languages of the world. A surprising property of generative LLMs is that they are not only able to perform tasks that they are trained for, such as text completion and generation, but can also be taught to perform other tasks, such as classification and sequence labeling, via prompting and in-context learning. This has led to the use of LLMs not just for generative tasks, but also for tasks such as sentiment analysis, reasoning Mao et al. (2023), and picking the less harmful alternative from a pair of LLM-bot responses Bai et al. (2022). The success of LLMs in these tasks has led to the question of whether LLMs can replace human annotators, or help augment human evaluation Gilardi et al. (2023). Considering the urgent need to assess LLMs in a broader range of languages to identify performance disparities, and acknowledging that obtaining access to native speakers can be challenging or costly, utilizing LLMs as multilingual evaluators appears to be an ideal solution. However, since LLMs have demonstrated inferior performance even in some high-resource languages and have not been evaluated extensively across languages on dimensions such as toxicity, fairness, and robustness (due to the absence of such benchmarks), it is prudent to proceed with caution. Failing to do so can lead to misleading results, which may further widen the digital divide. In this work, we study whether LLM-based evaluation can be the answer to scaling up multilingual evaluation. In other words, can LLMs serve as substitutes or supplements for human native speakers in delivering useful and accurate insights regarding LLM outputs in non-English languages, while considering diverse aspects of interest like linguistic acceptability, task accomplishment, and safety? Our main contributions are as follows:

* We present the first evaluation of LLMs as multilingual evaluators to examine whether LLMs can be used to scale up multilingual evaluation.
* We calibrate LLM judgments across three tasks, eight languages, and five dimensions by comparing them to over \(20\)K human judgments on the same tasks, languages, and dimensions.
* We evaluate a variety of prompting strategies for LLM-based evaluation in the multilingual setting.
* We provide a framework for evaluating LLM-evaluators in the multilingual setting that can generalize across tasks, metrics, and languages.
* We suggest best practices and provide recommendations for future work.

## 2 Related work

LLMs have recently become popular for evaluation and annotation. Broadly, there are two main uses of LLMs as evaluators: first, LLMs can be used as alternatives to metrics that compare human- and machine-generated text, such as BLEU Papineni et al. (2002) and ROUGE Lin (2004). Word-overlap-based metrics are limited, and LLM-based scorers have been shown to outperform them. GPTScore Fu et al. (2023) is a popular LLM-based framework that can be used to score model outputs based on human-created references along various dimensions. However, these scores still rely on having examples of human-created reference data. The second use case of LLMs as evaluators is when the LLM is presented with the output of a system (usually an LLM, sometimes the same model) and asked to judge its quality or safety without any human output to compare against. The LLM is taught how to perform this evaluation with the help of the task description, a rubric, and sometimes one or more examples in the prompt. This is the use case we focus on in this work. Gilardi et al. (2023) prompt ChatGPT to annotate Tweets across various dimensions such as topic and stance and find that it outperforms crowdworkers. Shen et al. (2023) explore the use of GPT3.5 as an evaluator for abstractive summarization and find that although GPT is a useful evaluator, as the quality of summarization improves, the quality of evaluation degrades. Along similar lines, Wang et al. (2023) evaluate ChatGPT on various NLG tasks and find that it has a high correlation with human judgments. Kocmi and Federmann (2023) evaluate the effectiveness of LLMs on the evaluation of translation quality and find that LLMs from GPT3.5 upward achieve SOTA performance on translation evaluation benchmarks. Fernandes et al. (2023) leverage LLMs for fine-grained annotation of errors in Machine Translation outputs. LLM-based evaluators have also been used to score and refine outputs they produce, as described in Madaan et al. (2023), ultimately producing outputs that are scored higher on human and automated metrics than the original outputs. Naismith et al. (2023) explore the use of LLM-based evaluators for scoring written discourse for coherence and find a strong correlation with human judgments. The success of LLM-based evaluators has led many to question whether LLM-based evaluation can replace or augment human evaluation (Chiang and Lee, 2023). However, there have been studies showing that LLM-based evaluators can have some biases. Pangakis et al. (2023) highlight the need for validating LLM-based evaluators on a task-by-task basis. Liu et al. (2023) perform NLG evaluation using GPT-4 and find that although it correlates well with human judgments, it may potentially be biased towards preferring LLM-generated texts. Wang et al. (2023) point out that GPT4-based evaluators have positional bias and that scores can be easily altered by changing the order of appearance. There are also several ethical issues with the use of LLMs as evaluators, described in Chiang and Lee (2023).
Zhang et al. (2023) suggest that wider and deeper LLMs are fairer evaluators, while Chan et al. (2023) introduce a framework for multiple evaluator agents to reach a consensus, mimicking the situation of having multiple annotators. Although there has been some work measuring the calibration of LLM-based evaluators to human judgments, previous studies have focused on English, and ours is the first work (to the best of our knowledge) that addresses this problem in the multilingual context.

## 3 Experimental Setup

We perform experiments on a text generation application that is powered by GPT-4. We evaluate the following sub-tasks:

* **Open Prompt**: This takes in a short prompt and generates a document according to the instructions in the prompt. The generated document is \(2,048\) tokens long, roughly corresponding to one page in English and Spanish, and slightly less in other languages.
* **Continue Writing**: This takes in two passages ("left" and "right") and generates content that makes a smooth transition between them. One of the two passages may be empty. The passage may be up to \(1,000\) tokens long.
* **Summarize**: This takes in a document of at least \(500\) words and generates a brief summary. It may take an optional user prompt specifying the output format (e.g., keypoints).

We cover the following languages: English, French, German, Spanish, Chinese, Japanese, Italian, Brazilian Portuguese, and Czech. We refer to Brazilian Portuguese (pt-br) as Brazilian in our figures and tables. Of these, the first six are classified as very high resource languages (Class 5, or "the winners"), while the last three are classified as Class 4 ("the underdogs") according to Joshi et al. (2020). We plan to extend our study to lower-resource languages in the future. We study the following dimensions of interest: linguistic acceptability, quality, task completion, and safety. We break these down into five metrics defined as follows:

* **Linguistic Acceptability (LA)**: This measures whether the text sounds right to a native speaker. The values of this metric are [0, 1, 2], with \(0\) corresponding to "not acceptable", \(1\) corresponding to "some errors, but acceptable", and \(2\) to "perfectly acceptable". We chose LA as opposed to grammaticality to ensure a comparable, native-speaker-led evaluation that did not require formal training in the language.
* **Output Content Quality (OCQ)**: Whether the general quality of the content is good or not, with values [0, 1, 2]. A score of \(0\) could indicate that the output is in the wrong language, is repetitive, or sounds like it has been scraped from the web or translated. A score of \(1\) indicates that the output is okay in terms of grammar and word choice but still sounds awkward in the language. A score of \(2\) indicates that the text is of high quality.
* **Task Quality (TQ)**: This measures the ability of the model to follow the given instructions in the prompt. The values of this metric are [0, 1, 2], with \(0\) indicating that the model did not follow the instructions at all. Likewise, a score of \(1\) indicates that the model followed the instructions approximately well, and \(2\) that it followed them perfectly well. The difference between TQ and OCQ is that the latter focuses on whether the content is appealing to a user, while TQ emphasizes the ability of the model to follow the given instructions.
* **Problematic Content (PC)**: Whether there was any offensive or problematic content in the output.
This is a binary metric, with \(0\) indicating that the output contains this type of content.
* **Hallucinations (H)**: This measures how well-grounded the model's output was in the input content, and/or whether the model output counterfactual information that conflicted with the input content. It is a binary metric, with \(0\) indicating the presence of hallucinations.

### Human evaluation setup

We asked human judges to evaluate the output of LLM-based systems configured to perform the three tasks described earlier. Each entry was annotated by three annotators. They were contracted through an external annotator-services company at a starting rate that depended on locale, ranging from $\(14\) USD/hr up to $\(30\) USD/hr. The pay was adjusted based on locale and experience level. Each annotator was given \(250\) texts to judge. We used a subset of the annotated data for our experiments.

#### 3.1.1 Annotation guidelines

We provided annotators with the following information: general instructions about the task (including specific instructions from the prompt) and high-level descriptions of the metrics that we are seeking to evaluate, a description of the file that contained the data to be evaluated, and the expected output format. Then we provided detailed descriptions of each metric, including the range of values for each metric and examples in English. These examples were provided in the context of different tasks, as each metric could have slightly different interpretations for different tasks.

#### 3.1.2 Data statistics

Figure 1(a) contains the statistics of the human evaluation dataset for the three tasks across the languages we consider. We create a subset of this data for experimenting with prompting variations, shown in Figure 1(b). Our full dataset contains over \(7000\) data points, while the smaller subset contains over \(2500\) data points. Each of the data points in our dataset was annotated by three annotators.

Figure 1: Dataset statistics across tasks and languages

### LLM-based evaluators

We use the GPT4-32K model1 as our LLM-based evaluator with a temperature of \(0\), except in our ablation experiments. The model was accessed through Azure. Footnote 1: 2023-03-15-preview

#### 3.2.1 Prompts

Our evaluation prompts are constructed using the guidance toolkit2. guidance is a DSL that uses handlebars templating to enable the specification of prompts that interleave instructions and generation with data and logic. This makes it simpler to construct and validate complex prompts. Footnote 2: [https://github.com/guidance-ai/guidance/tree/main](https://github.com/guidance-ai/guidance/tree/main) Evaluation prompts were written to be clear, simple, and not tuned for the data or task. All prompts for evaluation were specified in English, as past work has shown that instructions in native languages can lead to worse performance (Ahuja et al., 2023). In writing the evaluation prompts, we started with simple unstructured specifications (natural-language sentences with no formatting or styling) and found that these often led to errors in formatting the outputs correctly, or even in returning all the expected outputs. We found that adding styling and formatting (for example, requesting JSON output by providing the prompt with a JSON schema for the expected attributes) improved the reliability of the LLM outputs. We tried to keep the task and metric descriptions as close as possible to the text that was shown to human annotators for evaluations in the default prompting variation.
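To make the schema-guided prompting concrete, the following is a minimal sketch of how such an evaluation prompt can be assembled. It uses plain Python string assembly rather than the actual guidance templates, and the metric text is abbreviated; the function and variable names are ours, for illustration only.

```python
import json

METRIC = {  # abbreviated version of the simple metric description in Figure 3
    "name": "linguistic_acceptability",
    "description": "Does this sound right to a native speaker?",
    "scoring": "0: not acceptable; 1: some errors but ok; 2: acceptable",
}

SCHEMA = {  # JSON schema for the expected attributes of the evaluator's reply
    "type": "object",
    "properties": {"score": {"type": "integer", "enum": [0, 1, 2]},
                   "justification": {"type": "string"}},
    "required": ["score", "justification"],
}

def build_messages(task_description, text_to_evaluate):
    """Assemble system/user messages asking the evaluator for schema-conformant JSON."""
    system = (f"You are an evaluator. {task_description} "
              f"Return JSON matching this schema:\n{json.dumps(SCHEMA)}")
    user = (f"Metric: {json.dumps(METRIC)}\n"
            f"Text to evaluate:\n{text_to_evaluate}")
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]

msgs = build_messages("Given a passage and its summary, evaluate the summary.",
                      "An example summary to be scored ...")
print(msgs[0]["content"].splitlines()[0])
```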
Each prompt consists of system, user, and assistant components, as shown in Figure 2 in a generic prompt schema. The metric and task description components of the prompt are shown in Figures 3 and 5.

### Prompting variations

First, we experiment with multiple variations of prompts based on how many metrics we evaluate in a single prompt and how many examples we provide in the prompt.

* **Zero-Shot:** In this variation, we call GPT-4 once per metric, without any in-context examples.
* **Few-Shot:** In this variation, we call GPT-4 once per metric, with a few in-context examples.
* **Compound Call:** In this variation, we call GPT-4 once for all the metrics in a single prompt.

For few-shot prompting, we provide examples in the prompt of human judgments for the same task and metric from a held-out dev set. We take the majority vote from the three human annotations per sample as the aggregate class for that sample to choose our few-shot examples. For each task, language, and metric we choose up to two samples per possible class for that metric. Therefore, we have a minimum of two and a maximum of six exemplars as few-shot examples.

### Calibration with human judgments

We analyze how well-calibrated the variants of the LLM-evaluator are to native speakers, as well as the inter-annotator agreement between the three annotators who scored each data point.

* **Inter-annotator agreement across the three annotators:** We measure inter-annotator agreement (IAA) between the three annotators, referred to as Annot1, Annot2, and Annot3. We use Percentage Agreement (PA) to measure IAA. Percentage agreement simply computes the fraction of data points on which both parties match. Specifically, we used the irrCAC library3 for this metric. Footnote 3: [https://github.com/afergadis/irrCAC](https://github.com/afergadis/irrCAC)
* **IAA (3 annotators) and GPT:** We measure IAA between the majority score of the three annotators and the LLM-evaluator. We refer to this as AnnotAgg vs. GPT4 and use PA to measure it.
* **Class distribution:** We analyze the class distribution of scores across tasks, metrics, and languages to check for potential biases in the dataset and the LLM-evaluator.

We perform experiments contrasting compound and single-call prompting on the full dataset and zero-shot vs. few-shot prompting on the smaller dataset. We analyze how well-calibrated our LLM-based evaluators are with respect to human judgments by examining PA and the class distribution of scores.
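As a concrete illustration of these agreement measures, the sketch below re-implements plain percentage agreement and the majority-vote aggregation from scratch (the paper uses the irrCAC library for PA; the toy scores here are invented purely for illustration).

```python
from collections import Counter
from itertools import combinations

def percentage_agreement(ratings_a, ratings_b):
    """Fraction of data points on which two raters give the same score."""
    return sum(a == b for a, b in zip(ratings_a, ratings_b)) / len(ratings_a)

def majority_vote(scores):
    """Aggregate the three annotator scores for one sample (ties broken arbitrarily)."""
    return Counter(scores).most_common(1)[0][0]

# toy scores for five samples from three annotators and the LLM-evaluator
annot = {"Annot1": [2, 2, 1, 0, 2],
         "Annot2": [2, 1, 1, 0, 2],
         "Annot3": [2, 2, 2, 0, 1]}
gpt4 = [2, 2, 2, 1, 2]

for a, b in combinations(annot, 2):          # pairwise IAA between annotators
    print(f"PA({a}, {b}) = {percentage_agreement(annot[a], annot[b]):.2f}")

annot_agg = [majority_vote(s) for s in zip(*annot.values())]
print(f"PA(AnnotAgg, GPT4) = {percentage_agreement(annot_agg, gpt4):.2f}")
```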
### Ablation experiments

In addition, we perform some ablation experiments to check for consistency, the effect of hyperparameters, and the effect of detailed metric descriptions. We perform these ablations on the smaller dataset.

* **Consistency check:** We prompt GPT-4 with the same prompt five times to check its consistency.
* **Sensitivity analysis:** We check the sensitivity of the Linguistic Acceptability metric evaluation by randomly shuffling \(10\)% of the words in all sentences and checking if the LA score provided by the model changes.
* **Temperature variation:** We vary the temperature parameter to check its effect on LLM evaluation.
* **More detailed instructions:** In our initial experiments, we provide simple instructions to the LLM-based evaluators, similar to the instructions provided to humans. In this variation, we provide much more detailed descriptions of the metrics, as shown in Figure 4 for linguistic acceptability4. Footnote 4: Other metrics are included in Appendix A.1

Figure 2: General Prompting Schema

"name": "hallucinations", "description": "Hallucination refers to the generation of text that is untrue, fabricated, inconsistent with the given input, deviates from generally accepted knowledge, or makes unverifiable claims.", "scoring": "1: No hallucinations in the text; 0: text has hallucinations"

"name": "linguistic_acceptability", "description": "Linguistic acceptability means 'does this sound right to a native speaker?', not 'does this stick to the rules of the grammar?'.", "scoring": "0: not acceptable; 1: some weird things but ok; 2: no errors found/acceptable."

"name": "problematic_content", "description": "Problematic content refers to the presence of harmful (bias, offensive, toxic, for example) content in the output.", "scoring": "1: no problematic content in the text; 0: if there is problematic content in the text"

"name": "task quality", "description": "The quality of the output is related to the task. We are evaluating whether the model did what the task asked.", "scoring": "0: the model did not do what the task asked; 1: mostly did what the task asked, with some errors; 2: did what the task asked."

"name": "output content quality", "description": "Low-Quality Content means whether the discourse (text) is any good.", "scoring": "0: bad content --- if the text sounds repetitive (or is non-factual/inconsistent, or it's not in the given language, or seems to have been web-scraped); 1: OK content, but some flaws found --- if it's ok (grammatical, lexically, vocab is good) but kind of goes around in circles; 2: good or above content."

Figure 3: Metric description for simple instructions

"name": "linguistic_acceptability", "description": "Linguistic acceptability pertains to the degree to which a given language structure (e.g., phrase, sentence, discourse) aligns with the implicit norms and rules of a native speaker's linguistic intuition. In the study of language, it's distinct from 'grammaticality', which is a stricter and narrower concept based on the prescriptive rules of a language. Linguistic acceptability, on the other hand, captures broader native-speaker intuitions and encompasses factors like fluency, idiomaticity, and appropriateness in context. In the context of language models, evaluating linguistic acceptability involves assessing the output of the model not just for its adherence to grammar rules, but for its overall fit within the natural, expected, and intuitive contours of fluent human language. The scoring rubric is described below, with a few possible reasons (which might not be exhaustive) for a given score.", "scoring": { "0": { "(a)": "Sentences that lack clear syntactic structure.", "(b)": "Usage of non-existent or incorrect words.", "(c)": "Grossly inappropriate word choices for a given context." }, "1": { "(a)": "Overly verbose or stilted phrasing.", "(b)": "Minor grammatical errors that do not impede understanding.", "(c)": "Use of a word that's technically correct but not the most appropriate for context." }, "2": { "(a)": "Seamless integration of contextually relevant vocabulary.", "(b)": "Effective use of idiomatic expressions without sounding forced.", "(c)": "Sentences that reflect natural rhythm, emphasis, and intonation of spoken language." } }

Figure 4: Metric description for complex instructions (Linguistic Acceptability)

"Open Prompt": "Given a short user-provided starting prompt and its concise completion (which is roughly a page long), your task is to evaluate the completion with respect to the starting prompt and the listed set of metrics. For each metric listed, you must always return a score and a justification of the score. Note that both the starting prompt and its completion are given in {{language}}."

"Continue Writing": "Given two passages (passage a and passage b), one of which may be empty, and a third passage (passage c), which aims to provide a seamless transition between passage a and passage b, your task is to evaluate passage c with respect to the listed set of metrics. For each metric listed, you must always return a score and a justification of the score. Note that all three passages are given in {{language}}."

"Summarize": "Given a passage and a brief summary of that passage which attempts to capture its essence, your task is to evaluate the summary with respect to the given passage and the listed set of metrics. For each metric listed, you must always return a score and a justification of the score. Note that both the passage and its summary are given in {{language}}."

Figure 5: Task description

## 4 Results

### Percentage Agreement

In this set of graphs, we look at the percentage agreement between the LLM-evaluator and the annotators; we also look at the agreement between the annotators. We aggregate the results by task, metric, and language. Figures 6(a) and 6(b) show the percentage agreement between the aggregate of the human annotator scores and the LLM-evaluator for the full and small datasets. The figures show both joint (compound) and single prompting techniques for the full dataset and the few-shot prompting technique for the smaller dataset. We see that the PA between the annotators and GPT is lowest, compared to the PA between the human annotators, for Japanese and Czech, with the PA between annotators also being lower for Chinese. Next, we look at PA grouped by metric in Figures 7(a) and 7(b) for the full and smaller datasets with the same prompting variations as before. We find that the PA of the LLM-evaluator with the annotators is lower for the OCQ metric. We also find that the PA between annotators is relatively low for the TQ metric, while all the PA values are very high for the problematic content metric. Finally, we look at PA aggregated by task in Figures 8(a) and 8(b). We find that PA is lower for the "Continue Writing" task, while the PA between GPT and the annotators is lower than the agreement between annotators for the "Open Prompt" and "Continue Writing" tasks. Overall, we find that the LLM-evaluator prompted using the compound prompt has a lower agreement with human annotators than the single prompt variation. We also find that adding few-shot examples does not increase the PA in our experiments. For the remaining ablation experiments, we use the single prompt variation without few-shot examples.

### Class distribution

In this set of graphs, we seek to examine the distributions of the scores from native speakers and the LLM-evaluator.
There are three cases to consider for metrics that have three values: full agreement, in which all three annotators give the same score; partial agreement, where two of the three give the same score; and no agreement, where all three annotators give different scores. In metrics that have binary values, we only have full or partial agreement. We group annotations into these classes and analyze responses across them. We present results for metrics that have three values (LA, OCQ, and TQ), with \(0\) corresponding to the lowest score and \(2\) corresponding to the highest score. In Figures 9(a) and 9(b), we find that the LLM-evaluator provides a score of \(2\) in most cases, particularly in cases where human annotators disagree. This is even more evident in the case of non-English languages, where there is partial agreement or no agreement between the annotators (around \(15\)% of the time on average). Next, we look at the same graphs for languages that are either lower-resourced or not written in the Latin script. In Figures 10(c) and 10(d), we find that the LLM-evaluator almost never provides scores of \(0\) and \(1\) in the \(26\)% of cases in which annotators disagree, and we find similar results for Japanese in Figures 10(e) and 10(f) and for Czech in Figures 10(g) and 10(h). Overall, we find that LLM-based evaluators give a score of \(2\) in most cases. While this is consistent with human evaluations in a large part of the dataset, the LLM-based evaluator continues to assign a score of \(2\) even when humans disagree or provide lower scores.

#### 4.2.1 Consistency check

We use a temperature of \(0\) in the consistency check experiments and find that we receive the same score and justification in each of the five tries. This indicates that the LLM-based evaluator shows high consistency.

### Sensitivity to perturbations

As described earlier, we perturb the word order of sentences and check the sensitivity of the Linguistic Acceptability metric. Figure 11 shows the distribution of cases per language per task where the LLM-based evaluator changed its evaluation from a higher score to a lower score. We can observe that the evaluator shows the most sensitivity to inputs for the Summarization task for all languages except Japanese. For Insert, Chinese and Japanese show very little sensitivity. For Start, Chinese and Japanese show no sensitivity to the perturbations. One possible explanation for this could be that the evaluator is genuinely less sensitive to these languages. Alternatively, it might be attributed to the flexible word-order characteristics of Chinese and Japanese.

Figure 6: Percentage Agreement (PA) by language

Figure 7: Percentage Agreement (PA) by metric

Figure 8: Percentage Agreement (PA) by task

Figure 9: Class distribution per language (En, Es, Fr, De, It). Results are aggregated over all tasks and metrics with \(3\) classes (LA, OCQ, TQ).

Figure 10: Class distribution per language (Pt(Br), Zh, Ja, Cz). Results are aggregated over all tasks and metrics with \(3\) classes (LA, OCQ, TQ).

Figure 11: Percentage of samples where GPT evaluation changed from a higher score to a lower score per language per task. Note: we do not have Chinese and Czech for Summarize in the smaller dataset.

### Temperature variation

Figures 12(a), 12(b) and 12(c) show the PA values for temperatures of \(0\), \(0.3\), \(0.7\) and \(1.0\), aggregated by language, task, and metric, respectively. We observe that PA decreases as we increase temperature, indicating that a temperature of \(0\) should be used for LLM-based evaluators.
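To illustrate the word-order perturbation used in the sensitivity analysis above, here is a minimal sketch; the paper does not specify the exact sampling procedure, so the implementation details below are our assumption.

```python
import random

def perturb_sentence(sentence, frac=0.10, rng=None):
    """Randomly permute the positions of roughly `frac` of the words."""
    rng = rng or random.Random(0)
    orig = sentence.split()
    words = orig[:]
    k = min(len(orig), max(2, round(frac * len(orig))))  # need >= 2 to reorder
    idx = rng.sample(range(len(orig)), k)   # positions to disturb
    targets = idx[:]
    rng.shuffle(targets)
    for src, dst in zip(idx, targets):      # apply the permutation
        words[dst] = orig[src]
    return " ".join(words)

original = "the quick brown fox jumps over the lazy dog"
print(perturb_sentence(original))
# The LA scores of the original and perturbed texts can then be compared to
# check whether the evaluator notices the broken word order.
```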
### More detailed instructions

One of the challenges with LLM evaluation is sensitivity to prompting instructions, which can greatly affect the performance of the LLM on tasks, including evaluation. Since we observe that the LLM-evaluator tends to be biased toward producing higher scores, we experiment with adding more detailed instructions to the prompt. The detailed instructions for all metrics can be found in the Appendix; they were generated by querying GPT-4 with the instructions given to annotators and then manually modifying its output. Figures 13(a), 13(b) and 13(c) compare the PA of the LLM-evaluators given detailed instructions vs. the simpler instructions described earlier. Interestingly, even though PA drops slightly for all metrics with the detailed instructions, we find that the LLM-based evaluator may be slightly less biased towards producing high scores with these instructions, as shown in Figures 14(a) and 14(b). However, more investigation is needed to determine whether detailed instructions or a different prompting strategy can eliminate the bias toward high scores.

## 5 Discussion and Limitations

Overall, our results indicate that GPT-based evaluators have relatively high consistency for non-English languages when set to a temperature of \(0\). They also display a fair sensitivity to input variations, especially in aspects like linguistic acceptability. While LLM-based evaluators show a high Percentage Agreement, there is a noticeable bias towards positive scores, particularly when human opinions differ. It remains uncertain what score an LLM-based evaluator should provide when humans cannot reach a consensus, but consistently high scores in such situations might create a misleading impression of good performance in more challenging evaluations. We find that the lower PA and the bias towards higher scores are particularly evident in non-Latin-script languages such as Chinese and Japanese, and in lower-resource languages such as Czech, which is consistent with prior work on the performance of LLMs on various tasks Ahuja et al. (2023). We experiment with several prompting strategies for LLM-based evaluators and find that evaluating a single metric at a time produces better results than evaluating all metrics in one go, which comes at the cost of having to make multiple calls to the LLM. We also find that providing few-shot examples does not help improve performance. We also provide more detailed instructions to the LLM-evaluator but find that this does not eliminate the problem of bias toward higher scores. Future work in this direction includes exploring better prompting approaches, including automatically tuning prompts on a held-out set. In this work, we only use evaluators based on GPT-4. An interesting future direction is the use of smaller models for evaluation, or of models trained with better coverage of non-English data. In this work, we utilize a dataset comprising human assessments of a text generation system executing various tasks in eight languages. As we do not regulate the quality of the system's output, most of the generated texts receive positive ratings from human evaluators. Consequently, it remains unclear whether the high Percentage Agreement stems from the inclination of the LLM-evaluator to assign high scores or not.
Figure 12: Percentage Agreement (PA) for different cases and temperature variations

Figure 13: Percentage Agreement (PA) for single metric call with simple instructions vs detailed instructions

In future work, we aim to replicate this study using a dataset with a more balanced distribution of human judgments, achieved by controlling the output quality. We also intend to make this dataset available to the research community for calibrating LLM-based evaluators. An important research direction is the creation of datasets with good language coverage, multiple annotators per data point, and clear annotation instructions, covering a variety of dimensions to calibrate LLM-based evaluators. Exploring the development of various evaluator personas to represent diverse perspectives of human evaluators and achieve consensus is another research direction that needs further investigation. Our results in this paper show that LLM-based evaluators should be calibrated with human evaluation in the multilingual setting, particularly for low-resource and non-Latin-script languages. We also show that certain metrics corresponding to output quality and task completion may be challenging for LLM-based evaluators. Hence, we advocate a cautious approach to using LLM-based evaluators for non-English languages and suggest that all LLM-based multilingual evaluations should be calibrated with a set of human-labeled judgments in each language before deployment.

## 6 Conclusion

In this paper, we highlight the urgent problem of scaling up multilingual evaluation and explore whether LLM-based evaluators can be a potential solution. We introduce the first assessment of LLMs as multilingual evaluators and compare their performance against human judgments across eight languages. We experiment with various prompting strategies for LLM-based evaluation, including single and joint calls and providing few-shot examples, and conduct ablation studies to test for sensitivity and consistency. While we find that LLM-based evaluators show high consistency with human evaluation when annotators agree and rate outputs as positive, LLM-based evaluators may be biased towards giving a higher rating in cases on which annotators do not agree. Our work indicates that LLM-based evaluators need to be used cautiously in the multilingual setting, particularly for languages on which LLMs are known to perform poorly. Future work in this direction includes the creation of high-quality datasets for calibrating LLM-based evaluators in multiple languages. The use of LLM-based evaluation raises ethical concerns that warrant consideration before implementing such solutions, particularly in a multilingual context. Languages with insufficient benchmarks and resources may experience a disproportionate impact, as they could come to rely solely on LLMs for evaluation, potentially leading to unintended consequences. A hybrid solution with LLM-based evaluators and native speakers in the loop is a potential way forward to scale up multilingual evaluation and ensure that no language is left unevaluated.
Large Language Models (LLMs) excel in various Natural Language Processing (NLP) tasks, yet their evaluation in languages beyond the top 20 remains insufficient due to the limitations of existing benchmarks and metrics. Using LLMs as evaluators to rank or score the outputs of other models is a viable solution that can remove the constraints associated with human annotators and established benchmarks. In this study, we investigate the potential of LLM-based evaluators, in particular GPT-4, for improving multilingual evaluation by calibrating them against 20,000 human judgments across three text-generation tasks, five metrics, and eight languages. Our analysis reveals a bias toward higher scores in GPT-4-based evaluators, showing that calibration against native-speaker judgments is necessary for accurate evaluation, especially for low-resource and non-Latin-script languages
2305.19589
SLABERT Talk Pretty One Day: Modeling Second Language Acquisition with BERT
Second language acquisition (SLA) research has extensively studied cross-linguistic transfer, the influence of the linguistic structure of a speaker's native language [L1] on the successful acquisition of a foreign language [L2]. Effects of such transfer can be positive (facilitating acquisition) or negative (impeding acquisition). We find that NLP literature has not given enough attention to the phenomenon of negative transfer. To understand patterns of both positive and negative transfer between L1 and L2, we model sequential second language acquisition in LMs. Further, we build a Multilingual Age Ordered CHILDES (MAO-CHILDES) -- a dataset consisting of 5 typologically diverse languages, i.e., German, French, Polish, Indonesian, and Japanese -- to understand the degree to which native Child-Directed Speech (CDS) [L1] can help or conflict with English language acquisition [L2]. To examine the impact of native CDS, we use the TILT-based cross-lingual transfer learning approach established by Papadimitriou and Jurafsky (2020) and find that, as in human SLA, language family distance predicts more negative transfer. Additionally, we find that conversational speech data shows greater facilitation for language acquisition than scripted speech data. Our findings call for further research using our novel Transformer-based SLA models, and we would like to encourage it by releasing our code, data, and models.
Aditya Yadavalli, Alekhya Yadavalli, Vera Tobin
2023-05-31T06:22:07
http://arxiv.org/abs/2305.19589v1
# SLABERT Talk Pretty One Day: Modeling Second Language Acquisition with BERT

###### Abstract

Second language acquisition (SLA) research has extensively studied cross-linguistic transfer, the influence of the linguistic structure of a speaker's native language [L1] on the successful acquisition of a foreign language [L2]. Effects of such transfer can be positive (facilitating acquisition) or negative (impeding acquisition). We find that NLP literature has not given enough attention to the phenomenon of _negative transfer_. To understand patterns of both positive and negative transfer between L1 and L2, we model sequential second language acquisition in LMs. Further, we build a Multilingual Age Ordered CHILDES (MAO-CHILDES)--a dataset consisting of 5 typologically diverse languages, i.e., German, French, Polish, Indonesian, and Japanese--to understand the degree to which native Child-Directed Speech (CDS) [L1] can help or conflict with English language acquisition [L2]. To examine the impact of native CDS, we use the TILT-based cross-lingual transfer learning approach established by Papadimitriou and Jurafsky (2020) and find that, as in human SLA, language family distance predicts more negative transfer. Additionally, we find that conversational speech data shows greater facilitation for language acquisition than scripted speech data. Our findings call for further research using our novel Transformer-based SLA models, and we would like to encourage it by releasing our code, data, and models.

## 1 Introduction

Cross-linguistic transfer can be described as the influence of native language [L1] properties on a speaker's linguistic performance in a new, foreign language [L2]. The interaction of the linguistic structure of a speaker's L1 with the successful acquisition of L2 results in what are termed _transfer effects_. Transfer effects appear in various aspects of linguistic performance, including vocabulary, pronunciation, and grammar (Jarvis and Pavlenko, 2007). Cross-linguistic transfer can be positive or negative in nature: positive transfer refers to the facilitating effects of one language in acquiring another (e.g., of Spanish vocabulary in acquiring French), while _negative transfer_ refers to interference between the learner's native [L1] and target [L2] languages, producing errors. The greater the differences between the two languages, the greater the negative effects. While cross-lingual transfer has received considerable attention in NLP research (Wu and Dredze, 2019; Wu et al., 2019; Conneau et al., 2017, 2018; Artetxe et al., 2018; Ruder et al., 2017), most of this research has concentrated on practical implications, such as the degree to which the right tokenizer can optimize cross-lingual transfer, and has not looked at the kind of sequential transfer relationships that arise in human second language acquisition. Meanwhile, approaches like the Test for Inductive Bias via Language Model Transfer (TILT) (Papadimitriou and Jurafsky, 2020) focus on positive transfer with divergent pairs of training sets, such as MIDI music and Spanish, to shed light on which kinds of data induce generalizable structural features that linguistic and non-linguistic data share. Patterns of both positive and negative transfer between a given L1 and L2, however, can be a valuable source of information about general processes of second language acquisition and typological relationships between the languages in question (Berzak et al., 2014).
Most cross-lingual models do not mimic how humans acquire language, and modeling the differences between first and second language acquisition is a particularly under-explored area. To engage with questions about second language acquisition using LMs, we model sequential second language acquisition in order to look more closely at both positive and negative transfer effects that may occur during the acquisition of L2. Using Child-Directed Speech (CDS) to create L1 training sets that are naturalistic, ecologically valid, and fine-tuned for language acquisition, we model the kind of cross-linguistic transfer effects that cause the linguistic structure of the native L1 to influence L2 language acquisition in our novel Second Language Acquisition BERT (SLABERT) framework. The resulting models, when tested on the BLiMP (Benchmark of Linguistic Minimal Pairs for English) grammar test suite (Warstadt et al., 2020), show that L1 may not only facilitate L2 learning, but can also interfere. To the extent that interference is considered in NLP research, it is often understood simply as a failure of positive transfer in model training. We suggest, instead, that these results should be analyzed in terms of distinctive patterns of both negative and positive transfer, which can reveal not just the existence of generalizable features across datasets, but also finer-grained information about structural features of these languages and their accessibility to second language learners.

## 2 Related Work

Our work is closely related to and in many ways builds on the work done by Huebner et al. (2021). They proposed that Child-Directed Speech has greater potential than other kinds of linguistic data to provide the structure necessary for language acquisition, and released BabyBERTa, a smaller-sized RoBERTa (Liu et al., 2019) model designed to investigate the language acquisition ability of Transformer-based Language Models (TLM) when given the same amount of data as children aged 1-6 get from their surroundings. They also released Zorro, a grammar test suite that is compatible with the small vocabulary of child-directed input. Child-directed speech (CDS) refers to the special register adopted by some adults, especially parents, when talking to young children (Saxton, 2009). CDS typically features higher fundamental pitch, exaggerated intonation, slower speech, and longer pauses than Adult-Directed Speech (ADS) (Clark, 2016). Utterances in CDS are usually well-formed grammatically, but are syntactically simpler than ADS, often comprising single-word utterances or short declaratives. Adults often repeat words, phrases, and whole utterances in CDS (Küntay and Slobin, 2002; Snow, 1972) and make fewer errors (Broen, 1972) than they do in ADS. CDS also tends to use a smaller and simplified vocabulary, especially with very young children (Hayes and Ahrens, 1988). While the universality and necessity of CDS for language acquisition are a matter of debate (Pinker, 1995; Hornstein et al., 2005; Haggan, 2002), it is likely that the features of CDS are universally beneficial in language acquisition (Saxton, 2009). NLP literature suggests that there are certain benefits when models are trained on CDS (Gelderloos et al., 2020).
Studies from other fields suggest that the pitch contours, repetitiveness, fluency, and rhythms of CDS make it easier for children to segment speech, acquire constructions, and understand language (Cristia, 2011; Thiessen et al., 2005; Nelson et al., 1986; Ma et al., 2011; Soderstrom et al., 2008; Kirchhoff and Schimmel, 2003). Many of these distinctive qualities of CDS seem tailor-made for human language acquisition, which is why we use CDS data as L1 in our SLABERT models. Several recent studies confirm that the distinctive distributional features of CDS influence the grammatical and lexical categories that children acquire. For instance, Mintz (2003) found that "frequent frames" in CDS (commonly recurring co-occurrence patterns of words in sentences) yield very accurate grammatical category information for both adults and children. Similarly, Veneziano and Parisse (2010) found that patterns of frequent use and, importantly, reinforcement in CDS-specific conversational exchanges were most predictive of the constructions children learn. Together, these findings suggest that both token distribution and the distinctive conversational structure of CDS provide useful reinforcement for acquisition. Therefore, when training our L1 model, we pay attention to qualities of the training input such as the conversational structure. In second language acquisition (SLA) research, patterns of negative transfer are a topic of much interest and have been considered a source of information both about what happens in second language learning and about what it can reveal about the typological relationships between L1 and L2. For instance, Dulay and Burt (1974) show that closely analyzing data from children learning a second language reveals that some errors are due to L1 interference (_negative transfer_), while others arise from developmental cognitive strategies similar to those made during L1 acquisition (_developmental errors_). Berzak et al. (2014) show a strong correlation between language similarities derived from the structure of English as Second Language (ESL) texts and equivalent similarities obtained directly from the typological features of the native languages. This finding was then leveraged to recover native language typological similarity from ESL texts and perform prediction of typological features in an unsupervised fashion with respect to the target languages, showing that structural transfer in ESL texts can serve as valuable data about typological facts. The phenomenon of cross-linguistic transfer has received considerable attention in NLP research in the context of multilingual Language Models Wu and Dredze (2019); Wu et al. (2019); Conneau et al. (2017, 2018); Artetxe et al. (2018); Ruder et al. (2017). Our investigation is particularly inspired by Papadimitriou and Jurafsky (2020)'s Test for Inductive Bias via Language Model Transfer (TILT). This is a novel transfer mechanism where the model is initially pre-trained on the L1 training data. Next, they freeze a part of the model and fine-tune it on L2. Finally, they test the resulting model on a test set of L2. We follow a similar approach for our model's second language acquisition.

## 3 Data

### Why Child-Directed Speech

We wanted L1 training sets that are both realistic and fine-tuned to teach language to developmental (first language) learners. We also wanted to reproduce the findings of Huebner et al.
(2021), which suggest that Child-Directed Speech as training data has superior structure-teaching abilities for models compared to scripted adult-directed language. The BabyBERTa studies (Huebner et al., 2021) found that their LM required less data than RoBERTa to achieve similar (or greater) linguistic/syntactic expertise (as tested by Zorro), and suggested that CDS is better than Wikipedia text for teaching linguistic structure to models. Given these findings and the widespread support in cognitive science and linguistics for the facilitative nature of CDS in child language learning, we choose to use CDS data from five different languages as our L1s to examine our hypothesis that the preexisting linguistic structure of L1 interacts differentially with the acquisition of L2 (English). Additionally, building on the Huebner et al. (2021) efforts to find superior training data for LMs in general, we explore the possibility that comparing conversational CDS with scripted ADS is a less fair comparison than comparing the quality of conversational CDS with that of conversational ADS as training input for LMs. #### 3.1.1 Why CHILDES Our focus in training the Child-Directed Speech model is on replicating for the LM, as closely as possible, the primary linguistic input of young children. While young children are exposed to passive Adult-Directed Speech, speech that is directed at them and intended to communicate with them plays a more central role in the child's linguistic experience (Soderstrom, 2007). For this reason, we use a language database of naturalistic speech directed at children. The CHILDES database (MacWhinney, 2000), a component of the larger TalkBank corpus, is a vast repository of transcriptions of spontaneous interactions and conversations between children of varying ages and adults.1 The database comprises more than 130 corpora from over 40 different languages and includes speech directed at children from ages of 6 months to 7 years. The large selection of languages permits us the necessary flexibility in choosing different languages for our L1 data (see Section 3.1.2 for more on Language Selection). The range of child ages allows us to train our models with increasingly complex linguistic input, emulating the linguistic experience of a growing child. Footnote 1: [https://talkbank.org](https://talkbank.org) #### 3.1.2 Language Selection Our focus is on cross-linguistic transfer of language structure; therefore, we use a simple selection criterion and choose five languages at varying distances from English according to their language family: German, French, Polish, Indonesian, and Japanese. We hypothesize that languages that are structurally similar to English should perform better (show more positive transfer and less negative transfer). German, French, and Polish, like English, are all Indo-European languages. However, each of these languages belongs to a unique genus: German and English are Germanic languages, French is a Romance language, and Polish is a Slavic language. While English and French do not share the same genus, there is much overlap between the two languages due to the substantial influence of French on English stretching back to the Norman Conquest. Japanese belongs to the Japanese language family and Indonesian to the Austronesian language family. #### 3.1.3 Using the AO-CHILDES corpus The AO-CHILDES (AO: age-ordered) corpus was created by Huebner and Willits (2021) from American English transcripts in the CHILDES database.
To curate the American English collection, we followed the same cleaning criteria as Huebner and Willits (2021): only transcripts involving children 0 to 6 years of age were procured, from which child (non-adult) utterances and empty utterances were omitted. The initial CHILDES transcriptions were converted from the CHAT transcription format to csv files using childes-db (Sanchez et al., 2019) to conduct the data cleaning. The resulting dataset, which contains 2,000,352 sentences, 27,723 unique words, and 4,960,141 total word tokens, forms the American English input. This cleaning process was repeated for the corpora of German, French, Polish, Japanese, and Indonesian to create the dataset for each language (see Table 1 for the language statistics). #### 3.1.4 MAO-CHILDES For the sake of simplicity, we refer to the corpus resulting from the collective datasets of the six languages as MAO-CHILDES (MAO is short for Multilingual Age-Ordered) to show that the transcripts it contains include a selection of different languages and are also ordered by the age of the child (see Table 1). Data in MAO-CHILDES is not uniformly distributed across languages, as seen in Table 1. First, Polish is represented by significantly less data than every other language. Second, Indonesian has a lower number of unique tokens compared to the other languages. The Indonesian data is also collected from conversations with only 9 children, a much smaller sample size compared to the other languages, which have sample sizes in the hundreds if not thousands. Third, the average sentence length of the Asian languages--Indonesian and Japanese--is smaller than that of any of the other languages. We anticipate that these variations in the data, caused both by available resources and by natural linguistic characteristics of the languages, will affect the performance of the cross-lingual models. ### Adult-Directed Speech corpus The Adult-Directed Speech (ADS) corpus comprises conversational speech data and scripted speech data. We build on the BabyBERTa efforts to find superior training data for LMs (in general) by experimenting with conversational ADS and comparing its training utility with that of conversational CDS. This investigation is aimed at narrowing down the true source, child-directed language or conversational language, of the reduced data size requirements of BabyBERTa. To create our conversational ADS corpus, we use the sample COCA SPOKEN corpus.2 COCA (Corpus of Contemporary American English) is one of the most widely used corpora of English for its rich representation of texts from a wide range of genres, dialects, and time periods. The SPOKEN genre comprises transcriptions of spontaneous conversations between adults. To clean this sample corpus, we followed a three-step process (a code sketch of the analogous CHILDES-side filtering follows the list below): Footnote 2: [https://www.corpusdata.org](https://www.corpusdata.org) * All spoken disfluencies such as pauses, laughter, and filler utterances encoded in the spoken transcripts were cleaned. * All meta tags that mention the names of the speakers were removed. * Finally, the data was sampled manually to check that the corpus was clean.
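For concreteness, the CHILDES-side curation described in Section 3.1.3 reduces to a few dataframe operations. The sketch below is ours, not the released SLABERT code; the column names (`target_child_age`, `speaker_role`, `gloss`) and the age unit (days) are assumptions about the childes-db export and should be adjusted to the actual schema.

```python
import pandas as pd

def curate_cds(csv_path: str) -> pd.DataFrame:
    """Filter a childes-db utterance export down to adult-produced CDS.

    Column names and the age unit are assumptions about the childes-db
    schema; adjust them to match the actual export.
    """
    df = pd.read_csv(csv_path)
    # Keep transcripts of children aged 0-6 years (assuming age in days).
    df = df[df["target_child_age"] <= 6 * 365.25]
    # Drop utterances produced by the child; keep adult (CDS) speech only.
    df = df[df["speaker_role"] != "Target_Child"]
    # Drop empty utterances.
    df = df[df["gloss"].notna() & (df["gloss"].str.strip() != "")]
    return df
```

The same filter is applied per language to build each of the six L1 training sets.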
\begin{table} \begin{tabular}{l c c c c c} \hline \hline **Language** & **Vocabulary** & **Total tokens** & **Avg. Sentence Length** & **No. of Children** & **Utterances** \\ \hline American English & 27,723 & 4,960,141 & 5.54832 & 1117 & 893,989 \\ French & 22,809 & 2,473,989 & 5.74531 & 535 & 487,156 \\ German & 59,048 & 4,795,075 & 5.65909 & 134 & 951,559 \\ Indonesian & 21,478 & 2,122,374 & 3.97058 & 9 & 572,581 \\ Polish & 31,462 & 493,298 & 5.84276 & 128 & 84,578 \\ Japanese & 44,789 & 2,397,386 & 4.17552 & 136 & 588,456 \\ Wikipedia-4 & 84,231 & 1,907,706 & 23.8456 & - & 80,000 \\ English ADS & 55,673 & 905,378 & 13.1901 & - & 74,252 \\ \hline \hline \end{tabular} \end{table} Table 1: MAO-CHILDES corpus statistics: the number of unique tokens, the total tokens, the average sentence length, the total number of children, and the number of utterances for each language dataset are presented. After cleaning, we were left with 74,252 utterances. We use this cleaned corpus to train our conversational Adult-Directed Speech (ADS) model. To replicate the findings of the BabyBERTa study, we also train a model on scripted ADS. To create our scripted ADS corpus, we randomly sample 80,000 sentences from Wikipedia-3 (Huebner et al., 2021), which we term Wikipedia-4, so that the data sizes of conversational ADS and scripted ADS are approximately equal, to allow a fair comparison. All the information about the data we used is in Table 1. ## 4 Experimental Setup We use BabyBERTa (Huebner et al., 2021) to run all our experiments. BabyBERTa is a smaller-sized RoBERTa (Liu et al., 2019) tuned to perform well on data of the size of AO-CHILDES. However, we make additional changes to the vocabulary size of the model, as we found this to improve the results. The implementation details of the model can be found in Appendix A.1. We follow the TILT approach, introduced by Papadimitriou and Jurafsky (2020) to test structure acquisition in LSTM-based (Hochreiter and Schmidhuber, 1997) LMs. Their general approach is followed in the current study with a few notable changes (see Figure 1). Our approach comprises two stages: (1) train the model on L1 (a CDS language); (2) at the transfer stage of the experiment, freeze all parameters except the word embeddings and fine-tune the model on L2 (English ADS); a code sketch of this transfer step is given at the end of this section. Finally, the resulting model is tested on a test set of L2, for which we use the Benchmark of Linguistic Minimal Pairs (BLiMP) (Warstadt et al., 2020), a challenge set for evaluating the linguistic knowledge of the model on major grammatical phenomena in English. Our study deviates from the Papadimitriou and Jurafsky (2020) approach in three ways: (1) instead of using LSTM-based LMs, we use Transformer-based LMs (Vaswani et al., 2017); (2) they freeze all layers except the word embeddings and the linear layers between the LSTM layers, whereas for simplicity we freeze all parameters except the word embeddings; (3) while they report their findings based on LM perplexity scores, we use the BLiMP test suite to report how L1 structure (particularly, syntax and semantics) affects L2 acquisition in our Transformer-based LMs. There are two experiments for which we follow a different procedure than the one explained above: * In the case of the random-baseline experiment, we freeze all of the model except the embeddings and let the model train on conversational English ADS. The corresponding tokenizer is also trained on conversational English ADS. This experiment is run in order to have the right benchmark to compare against. This method prevents the model from picking up any grammatical structure from the training data, while allowing it to acquire English vocabulary.
* In the case of the scripted ADS and conversational ADS experiments, we do not employ TILT-based cross-lingual transfer. We train the model from scratch on scripted ADS and conversational ADS, respectively. **Testing:** We use the BLiMP grammar test suite to evaluate the linguistic knowledge of our model. BLiMP consists of 67 paradigms categorized into 12 major grammatical phenomena in English. Each of these 67 datasets comprises 1,000 minimal pairs, i.e., pairs of minimally different sentences, one of which is grammatically acceptable and the other not (refer to Warstadt et al. (2020) for a detailed description of the test suite). Figure 1: Diagram illustrating our experimental process for each L1, as listed in Table 1. Training occurs in two stages and each model is finally tested on the BLiMP test suite.
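As a minimal sketch of the transfer and testing steps just described, assuming a HuggingFace RoBERTa-style masked LM (the checkpoint name below is hypothetical, and pseudo-log-likelihood is one standard way to assign sentence probabilities with a masked LM; the scoring variant is our assumption, not specified by the paper):

```python
import torch
from transformers import RobertaForMaskedLM, RobertaTokenizerFast

# Hypothetical checkpoint: a BabyBERTa-style model already trained on L1 CDS.
model = RobertaForMaskedLM.from_pretrained("l1-cds-babyberta")
tokenizer = RobertaTokenizerFast.from_pretrained("l1-cds-babyberta")

# TILT-style transfer stage: freeze every parameter except the word
# embeddings, then fine-tune on the L2 (English ADS) corpus as usual.
for name, param in model.named_parameters():
    param.requires_grad = "word_embeddings" in name

@torch.no_grad()
def pseudo_log_likelihood(sentence: str) -> float:
    """Sum of each token's log-probability when it alone is masked."""
    model.eval()
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):          # skip <s> and </s>
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

def blimp_accuracy(pairs):
    """Fraction of (acceptable, unacceptable) pairs scored in the right order."""
    hits = sum(pseudo_log_likelihood(good) > pseudo_log_likelihood(bad)
               for good, bad in pairs)
    return hits / len(pairs)
```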
## 5 Results and Discussion ### Results The proportion of the BLiMP minimal pairs in which the model assigns a higher probability to the acceptable sentence gives the accuracy of the model. A total of 9 models are compared in their performance using the accuracy scores obtained on 12 different grammatical tests from the BLiMP test suite. We report the results for all models in Figure 2 (see Appendix A.2 for detailed results). The model trained on conversational English ADS achieves the highest accuracy and the one trained on Indonesian CDS achieves the lowest. Despite the conversational English ADS corpus being at least 10x smaller than the CDS corpora, it performs best in 9 out of 12 grammatical phenomena from the BLiMP test suite. CDS demonstrates higher accuracy only in anaphor agreement, irregular forms, and quantifiers. Overall, English CDS performs 5.13 points behind English ADS. These results show that (conversational) Adult-Directed Speech makes for superior training data for models compared to (conversational) Child-Directed Speech. From Figure 2, we note a few other significant trends: First, the results indicate that conversational speech data forms superior training data for language models in general compared to conventional scripted data. Table 2 compares the performance of models trained on different types of training input of the same language (English): scripted ADS (Wikipedia-4), conversational ADS, and conversational CDS. Among the three, the performance of the model trained on conversational ADS is highest, followed by conversational CDS, and lastly scripted ADS. Important to note here is that, corroborating the findings of the BabyBERTa study, conversational CDS still outperforms scripted ADS (Wikipedia-4) but falls behind conversational ADS. These results suggest that conversational speech data are a more effective training source for models than scripted data (more on this in Section 5.2). Second, the results show a negative correlation between the distance of the CDS language from English and the performance of the model, i.e., as the typological distance between L1 and L2 increases, the performance of the model decreases. We term this the Language Effect. This finding supports our hypothesis that, given the relation between transfer errors and the typological distance between L1 and L2 (Ringbom, 2006), the increasing structural dissimilarities between the L1 (the CDS language) and the L2 (always English ADS) should adversely impact the performance of the model (more on this in Section 5.3). Third, the results show that CDS performs worse than ADS in several grammatical phenomena (9 out of 12). Considering the simple, facilitative structure and, more importantly, the ecologically valid nature of CDS, these results engender some interesting hypotheses, which we discuss briefly in Section 5.4. Figure 2: Performance of the models on various grammatical phenomena from the BLiMP test suite. Fourth, we see several results in which individual models perform poorly on individual tests in ways that are not cleanly predicted by general trends. We believe these results reflect patterns of negative transfer, in which L1-specific structures actively interfere with the acquisition of structures in L2 (more on this in Section 5.5). ### Conversational vs. Scripted Data The conventional training data for LMs is scripted adult-directed speech, perhaps owing to its easily accessible nature compared to other forms of data, such as conversational ADS or any form of CDS. However, our findings demonstrate that conversational data yields better model performance than scripted data (see Table 2). The best accuracy scores are produced by conversational ADS on 67% of the phenomena, by conversational CDS on 25% of the phenomena, and by scripted ADS on 8% of the phenomena. Conversational data may make for better training input for language acquisition given the higher level of interactive components in its composition, an essential feature of language acquisition in children. Much of the previous research has looked at what conversational language does for the people who are directly contributing to the conversation in question. For instance, there is a general tendency for speakers to reproduce grammatical elements of their interlocutor's previous utterances [1, 13]. These behaviors both enhance interactive alignment [1] and ease cognitive load for utterance planning [1, 13]. Studies of children's conversational behavior [12, 14] show, similarly, that children use their interlocutors' immediately preceding utterances as resources for producing and reinforcing construction types they are in the process of acquiring. Our findings suggest that the resulting distributional patterns of "dialogic syntax" [1] in the conversational record leave a trace that can make conversational data especially informative for model training. ### Language Effect We selected five languages at varying distances from English according to their language family and examined how structural dissimilarities with increasing distance from English impact the performance of the model. Figure 3 shows how the gap between the performance of the model trained on English ADS and the models trained on CDS grows across the various languages. Our results show a negative correlation between the distance of the CDS language from English and the performance of the model, i.e., as the typological distance between L1 and L2 increases, the performance of the model decreases. Based on prior work on transfer errors and typological distance [15], this decrease in performance could be the result of negative transfer effects, which tend to increase with the typological distance between L1 and L2. Among all CDS languages, English CDS performs closest to English ADS (5.13 points behind ADS), suggesting that even within the same language the linguistic differences between ADS and CDS affect model performance (see Table 2). This should be kept in mind as comparisons between the other CDS languages and English ADS are made.
German shows the next best performance (6.71 points behind English ADS), followed by French (7.27 points behind ADS), Polish (7.57 points behind ADS), Japanese (8.17 points behind ADS), and lastly Indonesian (8.69 points behind ADS). These results confirm our hypothesis that L1s that are structurally closer to the L2 (English ADS) perform better, owing to a greater degree of positive transfer effects. For human language learners, transfer works both ways: sometimes knowledge of parallel structures in the native language facilitates performance in the new language; other times, there is interference from the native language, resulting in errors. The SLABERT models, similarly, show evidence of both positive and negative transfer. As with human second-language learners, some of the errors we see in SLABERT's performance suggest the effect of negative transfer from the native [L1] language, while others can be characterized as developmental, in that they are similar to the kinds of errors that even native human speakers will make on their way to learning the target constructions. Figure 3: Mean multilingual CDS performance compared to ADS. ### CDS & Sources of Errors in Language Learning Our results show that CDS performs worse than ADS in a majority (9 out of 12) of the grammatical phenomena from the BLiMP test suite (see Figure 2). We discuss some theoretical explanations for these results. **Negation and NPIs:** Child language acquisition research strongly suggests that mastering the full range of negative licensing and anti-licensing contexts takes a long time. Across languages, detailed acquisition studies find that children do use NPIs with licensing expressions consistently by age 3 or 4 (Tieu, 2013; Lin et al., 2015), but only with a limited range of negative licensers. Moreover, Schwab et al. (2021) showed that even 11- and 12-year-olds, whose language input by that age is entirely ADS, are still in the process of learning some polarity-sensitive expressions. Thus, CDS input alone may not be sufficient for learning the licensing conditions for NPIs. Previous NLP literature also suggests that negation is particularly challenging for language models to learn (Kassner and Schütze, 2019; Ettinger, 2019). Given this, and given acquisition studies showing that learning the licensing conditions for NPIs goes hand-in-hand with learning negation (van der Wal, 1996), we expected our model trained on CDS to make _developmental errors_ on tests related to NPIs. As discussed in Section 5.5, as a Slavic language, Polish also has distinctive constraints on the appearance of NPIs that are the result of competition with grammatical constraints not present in English. In this case, NPI performance is likely subject to both _developmental_ errors and _negative transfer_. **Longer Distance Dependencies:** Short and simple sentences are characteristic of CDS. However, it is likely that such utterances do not make ideal training input for LMs to learn long-distance dependencies (LDDs). Consequently, we expect all models trained on CDS data to be negatively impacted on tests that demand long-distance dependency understanding. Island effects, the phenomenon that showed the widest difference in performance compared to the ADS-trained model (-21.3 points), is one such phenomenon in the BLiMP test suite, requiring long-distance dependency understanding to perform well (Sprouse and Hornstein, 2013).
Ellipsis and filler-gap structures also depend on LDDs and likewise suffer significant decreases in scores compared to ADS (-10.8 and -6.5 points, respectively). This also applies to binding and control/raising phenomena (-2.8 and -3.6 points, respectively); however, island effects, ellipsis, and filler-gap tests are particularly affected by the model's lack of LDD understanding. **Phenomena That Confuse Humans:** Warstadt et al. (2020) report human performance scores, which we use to gain an understanding of how our models perform on the tests compared to humans. From the reported human performance scores, we observe that not all of the grammatical phenomena in the BLiMP test suite are equally transparent to humans. Human performance on 8 out of 12 phenomena is below 90 points, and 3 of those are below 85 points. The lowest is a mean score of 81 for tests on argument structure, where the CDS-trained and ADS-trained models also struggle (rather more seriously), with mean scores of 55.1 and 56.1, respectively. \begin{table} \begin{tabular}{l|c c c} \hline \hline **Phenomenon** & **Wikipedia-4** & **Conversational ADS** & **Conversational CDS** \\ \hline Anaphor Agreement & 51.4 & 60.6 & 62.9 \\ Argument Structure & 54.5 & 56.1 & 55.1 \\ Binding & 60.7 & 61.6 & 58.9 \\ Control/Raising & 48.8 & 59.1 & 55.6 \\ Determiner Noun Agreement & 65.2 & 70.9 & 67.8 \\ Ellipsis & 68.6 & 66.2 & 57.5 \\ Filler Gap & 62.4 & 67.3 & 62.6 \\ Irregular Forms & 61.8 & 68.2 & 70.9 \\ Island Effects & 51.8 & 72.7 & 51.3 \\ NPI Licensing & 53.7 & 62.6 & 51.9 \\ Quantifiers & 58.5 & 62.4 & 71.7 \\ Subject Verb Agreement & 54.9 & 57.7 & 53.8 \\ \hline \hline \end{tabular} \end{table} Table 2: Performance of the model on the BLiMP test suite when trained on different types of input data. For control/raising, similarly, human performance has a mean score of 84 points, while the CDS-trained and ADS-trained models have mean scores of 55.6 and 59.1, respectively. We expect CDS to perform poorly on these tests, which are challenging even for people. ### Negative Transfer There are tests where the performance of CDS-trained models would be expected to be better given the nature of the phenomena and the characteristics of CDS utterances. However, CDS underperforms compared to ADS even on tests we might expect to be in its wheelhouse. In particular, determiner-noun agreement and subject-verb agreement are the kinds of phenomena that should be easy for the model to learn even from shorter utterances and with a relatively small vocabulary size, since they are matters of simple, regular morphology. The results, therefore, are interesting. We hypothesize that one reason we do not see good transfer boosts from other-language CDS on these is that patterns of morphology are very language-specific. Looking broadly at the performance of the non-English CDS models, we suggest that these results reflect negative cross-linguistic transfer. For example, the distribution of negative polarity items in Polish and many other Slavic languages displays what has been termed the "Bagel problem" (Pereltsvaig, 2006): because of conflicts with the demands of strict negative concord (in which negation requires that multiple elements of an expression all appear in their negative forms), in Slavic languages there are NPIs that never appear in what would otherwise be the canonical context of negative polarity licensing, i.e., direct negation (Hoeksema, 2012).
In this way, language-specific paradigmatic patterns supersede the general correlational relationship between NPIs and their licensing contexts, producing an opportunity for _negative transfer_ and L1 interference effects. ## 6 Conclusion In this paper, we explore how second language acquisition research and models of second language acquisition can contribute to questions in NLP about the learnability of grammar. Drawing on previous research on the unique role of child-directed speech (CDS) in language acquisition, we investigate the potential of spontaneously generated CDS to form a special source from which LMs can acquire the structure necessary for first language acquisition. To test sequential second language acquisition in LMs, we introduce SLABERT. The results from our experiments suggest that while positive transfer is a lot more common than negative transfer, negative transfer occurs in LMs just as it occurs in English as a Second Language (ESL) learners. We believe these novel findings call for further research on this front, and suggest that models like SLABERT can provide useful data for testing questions about both language acquisition and typological relationships through patterns of cross-linguistic transfer. To support this, we release our code, our novel MAO-CHILDES corpus, and our models. ## 7 Limitations Given that many special properties of Child-Directed Speech are not present in text, we would have liked to work on a multimodal dataset, where both visual and speech information would be present. More specifically, we would have liked to test the effect of the following: * Grounding the language models in vision to test the effect of joint attention (Rowe, 2012; Akhtar and Gernsbacher, 2007). Joint attention refers to the phenomenon in which the caregiver and the child coordinate attention to each other and to a third object or event. * Child-Directed Speech is known to have special prosodic properties such as higher variability in pitch (Fernald et al., 1989; McRoberts and Best, 1997; Papousek et al., 1991), lengthening of vowels and pauses (Albin and Echols, 1996; Ratner, 1986; Fernald et al., 1989), and context-specific intonational contours (Katz et al., 1996; Papousek et al., 1991; Stern et al., 1982). These properties have been suggested by many researchers to serve as a mechanism for getting the infant's attention (Cruttenden, 1994; Ferguson, 1977; Fernald, 1989). This attention-getting role may be considered beneficial for language development in children (Garnica, 1977). As our models only take text as input, we were unable to test the relationship between these properties and language acquisition in neural-network-based models. * Caregivers give a lot of feedback when young children are first producing and acquiring language (Soderstrom, 2007). Our current mainstream language models are not interactive. Therefore, it is difficult to incorporate the feedback loop and test its effect on models' language acquisition. As it is, our findings suggest that many of the most important facilitative features of Child-Directed Speech are relevant to precisely those formal and conceptual aspects of language acquisition that are not captured by text-based language models. In this paper, we have tested the effect of native CDS on L2 acquisition with 5 typologically diverse languages.
However, there is enormous scope to test these effects with many more languages, which may lead to more pointed implications and conclusions than the findings offered here. ## 8 Ethics Statement We use publicly available CHILDES data to build our corpora (MAO-CHILDES). Please read more about their terms before using the data.3 We use the dataset extracted from the CHILDES database only for research purposes and not for commercial reasons. We will release the dataset upon publication under the same license as CHILDES, which is compatible with the license of the CHILDES database (MacWhinney, 2000). The results of this study are reported from a single run as part of measures taken to avoid wasted computation. We do not foresee any harmful uses of this work. Footnote 3: [https://talkbank.org](https://talkbank.org) ## Acknowledgements We would like to acknowledge Philip Huebner for answering our queries regarding the BabyBERTa codebase. We would also like to thank Saujas Vaduguru for helping us improve our initial drafts. We also thank the anonymous reviewers for their feedback on our work. This work made use of the High Performance Computing Resource in the Core Facility for Advanced Research Computing at Case Western Reserve University.
Second language acquisition (SLA) research has extensively studied cross-linguistic influence on linguistic structure and vocabulary: the linguistic structure of a learner's native language (L1) affects how successfully a foreign language is acquired. Transfer effects can be positive (facilitating learning) or negative (impeding learning). We find that this phenomenon has not received sufficient attention in the NLP literature. To understand patterns of both positive and negative transfer between L1 and L2, we model sequential second language acquisition in LMs. Further, we build a Multilingual Age-Ordered CHILDES (MAO-CHILDES) dataset covering five languages (German, French, Polish, Indonesian, and Japanese) to examine the effect of native child-directed speech (CDS) as the L1 on the acquisition of L2 (English).
2309.08751
Diverse Neural Audio Embeddings -- Bringing Features back !
With the advent of modern AI architectures, a shift has happened towards end-to-end architectures. This pivot has led to neural architectures being trained without domain-specific biases/knowledge, optimized according to the task. We in this paper, learn audio embeddings via diverse feature representations, in this case, domain-specific. For the case of audio classification over hundreds of categories of sound, we learn robust separate embeddings for diverse audio properties such as pitch, timbre, and neural representation, along with also learning it via an end-to-end architecture. We observe handcrafted embeddings, e.g., pitch and timbre-based, although on their own, are not able to beat a fully end-to-end representation, yet adding these together with end-to-end embedding helps us, significantly improve performance. This work would pave the way to bring some domain expertise with end-to-end models to learn robust, diverse representations, surpassing the performance of just training end-to-end models.
Prateek Verma
2023-09-15T20:27:47
http://arxiv.org/abs/2309.08751v2
# Diverse Neural Audio Embeddings - Bringing Features Back! ###### Abstract With the advent of modern AI architectures, a shift has happened towards end-to-end architectures. This pivot has led to neural architectures being trained without domain-specific biases/knowledge, optimized according to the task. We, in this paper, learn audio embeddings via diverse feature representations, in this case domain-specific ones. For the case of audio classification over hundreds of categories of sound, we learn robust separate embeddings for diverse audio properties such as pitch, timbre, and neural representation, along with also learning them via an end-to-end architecture. We observe that hand-crafted embeddings, e.g., pitch- and timbre-based ones, although on their own not able to beat a fully end-to-end representation, when added together with the end-to-end embedding help us significantly improve performance. This work would pave the way to bring some domain expertise together with end-to-end models to learn robust, diverse representations, surpassing the performance of just training end-to-end models. Prateek Verma, prateekv@stanford.edu, Stanford University, Stanford, California, 94305. Index terms: Features, Diverse, Robust, Audio Embeddings, Transformers, Neuralogram ## 1 Introduction and Related Work We interact with sounds every day. They occur in various environments and places around us, with their diversity and richness described in [1], which has the largest ontology of everyday sounds. Making computers hear similarly to humans has come realistically close to achieving super-human performance with the advent of transformer architectures [2]. They have not only revolutionized natural language processing [2, 3]; they have also altered the course of research on problems in areas such as computer vision [4] and audio [5, 6, 7]. The present work touches on ways to derive audio embeddings, which have supported a variety of applications such as ASR, audio understanding [8, 9], conditional audio synthesis [10, 11], as well as style and signal transformation [12]. Depending on the task at hand, we can summarize the contents of the audio signal in these small latent representations. Learning a small compressed representation of an input was the beginning of the modern deep learning revolution, with the classic work by Hinton [13]. Once a representation is learned, a classification head similar to [14, 15] is then used to map these vectors to actual labels. There was a shift to end-to-end neural architectures, first in ASR with the CLDNN paper proposed by Google in 2015 [16], and then in tasks like acoustic scene understanding, similar to the ImageNet challenge, by [17]. These architectures quickly surpassed the performance of handcrafted features. [18] combined the front-end of the raw-CLDNN work with mixture-of-experts architectures, drawing from [19]. This performed better than using a simple convolutional front end with the same Transformer module, showing how elements from traditional signal processing can be combined with classic machine learning ideas. Pre-trained architectures have gone mainstream and are increasingly ubiquitous for all applications, essentially becoming universal function approximators [20]. In our work, we provide a direction to improve these architectures by bringing back hand-crafted, domain-specific features. There have been similar research directions in computer vision, where [21] explored diverse sets of feature priors, thus having less overlapping failure modes while dealing with spurious data.
However, the goal in our case is different: we do not use them as an ensemble but rather as feature extractors, and we harness strong inductive domain knowledge to help improve model performance. Before the modern advent of deep learning, several spectro-temporal features were used that could describe characteristics of interest for a particular task. [22] used timbral, energy, rhythm, spectral, and frequency-based handcrafted descriptors to identify the contents of the audio signal, in this case the genre of the music being played. However, end-to-end architectures quickly surpassed them, such as the one described in [23], using convolutional models that could learn features from scratch. One of the motivations of the current paper is also as follows: we typically use data augmentation [24] to help with the robustness and scalability of our neural architectures so that they can generalize better to unseen audio/test samples. However, there exist feature-based representations the model would never encounter in real life. For example, consider a spectrogram that contains only a binary 1/0 mask corresponding to the presence/absence of peaks in the spectral representation. Or, for another case, an MFCC or a Neuralogram representation. We cannot reconstruct and get back the audio signal from these representations. Yet, they convey a specific meaning/representation of the input signals. Additionally, each one of them is orthogonal to the others: a binary mask of the locations of peaks in a spectrogram tells us only about the locations of the frequency content of the audio signal and nothing else. A 13-dim MFCC-based representation gives us only the timbre of the audio signal and nothing more. Thus, we operate a neural architecture on each of these, using the same loss function as for the end-to-end trained architecture, and train each of them from scratch separately. We could have created multiple augmentation schemes and used them in conjunction to train a neural architecture. Another approach, by DeepMind, explored creating multiple representations of the same audio signal and mapping them to the same latent space [15]. However, they do not combine the latent codes but instead have the neural architecture push the latent representations to be identical or close to each other, for unsupervised setups. Moreover, each of the parameters being learned would have to account for all of the augmentations so that the weights can generalize to unseen scenarios. Yet, as described earlier, the model would only encounter these representations at test time if we transformed the audio in that manner. Hence, we explore the approach described in the current paper. The contributions of our paper are as follows: i) We report how to create feature-based robust neural embeddings for audio signals. These feature-based embeddings are interpretable; for example, for a task, we can see how much pitch-based and timbre-based features contribute, in terms of absolute accuracy. ii) Further, these embeddings are robust; that is, a latent code is learned only by looking at a specific category of features (end-to-end or human-defined), and it uses only those embeddings for a particular task. For example, given a pitch-based representation, it will only use the input provided to learn the best representation for a particular task, unlike passing a raw waveform directly, where it can use any attribute of the signal it deems fit.
iii) We showcase how embeddings carrying prior domain-specific knowledge, used in conjunction with end-to-end architectures, can surpass the results obtained by just using a purely learned architecture. This is a very strong finding, as it opens the door for feature engineering to be used with state-of-the-art architectures such as Transformers. ## 2 Dataset For the sake of this work, we work with [25]. This is a classification task, and the dataset contains supervised labels, often with one or more tags assigned to a specific audio clip. The audio files are of variable length, sometimes up to 15 seconds long. The dataset contains about 51k audio files, with the same ontology as AudioSet [1]. The reader is referred to [25] for the rationale behind choosing this dataset over AudioSet [1]. We primarily chose it for the free availability of the balanced reference dataset, and secondly for its uniform way of training/testing and reporting results. We train neural architectures from scratch _only_ on this dataset, rather than pre-training on massive audio/vision datasets, unlike [26]. We resample all audio files to a sampling rate of 16 kHz. To be consistent with other papers reporting results [25, 5, 18], we do not carry out any data augmentation such as additive noise, spectral changes, etc. [24]. All neural architectures are first trained on 1 s of audio (similar to the trend started by [25, 1]), with the architecture predicting one or more labels. The labels of the entire clip are assigned to each of the 1 s audio chunks during training, and a neural architecture learns to predict them from the representation or from embeddings. Figure 1: (L) We learn diverse, robust latent representations by passing on diverse input representations, e.g., timbral MFCCs, pitch maps representing sinusoidal peaks, waveform, and neural embeddings from the last layer of a convnet. The learned embedding learns a representation for that kind of input ONLY, without piggybacking on other features/inputs. (R) Our approach of combining diverse, robust latent codes with a linear head gives a significant boost in large-scale acoustic scene understanding. For testing, the labels are averaged across the entire clip to report mean average precision at the clip level. ## 3 Methodology We describe the methodology we use to showcase the strength of our work. In most of the literature, data augmentation is used in order to build robustness into the system, e.g., learning timbral variations, pitch variations, and additive noise, to name a few [24]. For each input representation, a front-end defines how to go from the feature representation of interest to the input of the Transformer architecture, including the addition of positional encodings [2]. The rest of the block remains the same: the Transformer module consists of 6 layers with an embedding size of 64 and a single layer of 256 dimensions acting as the MLP module. We use a dropout rate of 0.3 in the attention and MLP layers, with 12 heads. Global average pooling is carried out at the last (6th) layer of the Transformer architecture to get a 64-dim representation encapsulating the input, i.e., the embedding for the particular input and task. This is consistent with previous work such as [5]. Each of the outputs after two Transformer modules is followed by a max-pooling block, which reduces the number of tokens by a factor of 2.
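A minimal PyTorch sketch of this shared backbone follows; it is our reading of the description above, not released code. Positional encodings are assumed to be added by the per-representation front end, the ReLU in the classification head is our assumption, and we use 8 attention heads rather than the stated 12, since PyTorch requires the embedding dimension (64) to be divisible by the head count.

```python
import torch
import torch.nn as nn

class AudioTransformer(nn.Module):
    """Sketch of the shared backbone: 6 Transformer layers (d=64, MLP=256,
    dropout 0.3), a max-pool over tokens after every two layers, global
    average pooling, then a 2048 -> 200 classification head."""

    def __init__(self, n_classes=200, d=64):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model=d, nhead=8,
                                       dim_feedforward=256, dropout=0.3,
                                       batch_first=True)
            for _ in range(6)
        ])
        self.pool = nn.MaxPool1d(kernel_size=2)
        self.head = nn.Sequential(nn.Linear(d, 2048), nn.ReLU(),
                                  nn.Linear(2048, n_classes))

    def forward(self, x):                # x: (batch, tokens, 64), e.g. 40 tokens
        for i, blk in enumerate(self.blocks):
            x = blk(x)
            if i % 2 == 1:               # halve the token count every 2 layers
                x = self.pool(x.transpose(1, 2)).transpose(1, 2)
        return self.head(x.mean(dim=1))  # global average pooling over tokens
```

With 40 input tokens, the token count falls to 20, 10, and 5 after the three pooling stages, giving the deeper layers a progressively broader temporal receptive field.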
Such hierarchical pooling has been successful in computer vision, too, as the final output from the last convolutional layers looks at a much broader receptive field within a hierarchical structure. The pooled representation is passed onto a linear layer of 2048 neurons, followed by a 200-neuron final layer, to produce the output vector. The loss criterion used to update the weights is the Huber loss between the predicted vector of the neural architecture and a 200-dim binary vector with 1s at the locations of the category(ies) of audio present in the input audio representation. All architectures are trained for 300 epochs, with the learning rate decayed from 2e-4 to 1e-6. In the next subsections, we focus on how to pass a representation, either end-to-end or pre-defined, onto the Transformer module. Each of these architectures is identical but trained from scratch, except for how the feature-based representation is piped into the Transformer module. ### Frequency Content-Based Representation We, in this representation, only allow pitch/frequency-based information to pass through. Traditionally, pitch detection in a polyphonic setting is a really difficult problem. For understanding the frequency content, we do not want any other information, such as the energy and timbre of the signal, in our representation. To achieve this, we do the following processing. For each 1 s audio chunk, we first compute a log-magnitude constant-Q representation [27] with a hop length of 25 ms, for 80 bins, with 12 bins per octave starting from 40 Hz, and a sparsity factor of 0.01, using the librosa library [28]. We only retain spectral peaks, with peaks picked in individual slices by looking at +/-2 spectral bins on either side. Further, we only retain peaks of absolute strength greater than or equal to the median of the log-magnitude contents of the 1 s constant-Q representation. This retains the spectral/harmonic structure of the contents of the audio signal, yet it only has binary values corresponding to the presence/absence of peaks. This representation has been used in the author's previous work on unsupervised representation learning, as shown in [9]. The front-end encoder, in this case, takes an 80-dim vector corresponding to a single slice of our representation, learns a 64-dim embedding to conform to the embedding dimension of the Transformer, adds sinusoidal positional embeddings, and passes the result onto the Transformer modules. ### Timbre-Based Representation To represent timbre-based information, we compute a 13-dim MFCC [29] representation and throw away the first coefficient to get a 12-dim vector every 25 ms, giving a representation of MFCC coefficients of dimension 12 x 40 time steps. This is piped through a front end similar to that of the frequency-based representation, i.e., projected to 64 dimensions via a linear layer, with sinusoidal positional embeddings added before being passed onto the Transformer.
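The two hand-crafted inputs can be sketched with librosa as below. This is our illustration of the recipe just described; details such as the exact peak-picking tie-breaking are our assumptions.

```python
import librosa
import numpy as np

def pitch_map(y, sr=16000):
    """Binary map of spectral peaks in a log-magnitude constant-Q transform."""
    C = np.abs(librosa.cqt(y, sr=sr, hop_length=400,   # 25 ms hop at 16 kHz
                           fmin=40.0, n_bins=80,
                           bins_per_octave=12, sparsity=0.01))
    logC = librosa.amplitude_to_db(C)
    thresh = np.median(logC)                 # median over the 1 s chunk
    peaks = np.zeros_like(logC)
    for t in range(logC.shape[1]):           # peak-pick each time slice
        col = logC[:, t]
        for f in range(len(col)):
            lo, hi = max(0, f - 2), min(len(col), f + 3)
            # a bin is a peak if it is the maximum of its +/-2-bin window
            # and at least as strong as the median log-magnitude
            if col[f] == col[lo:hi].max() and col[f] >= thresh:
                peaks[f, t] = 1.0
    return peaks                             # (80, n_frames), binary

def timbre_features(y, sr=16000):
    """13 MFCCs per 25 ms frame, with the first coefficient discarded."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, hop_length=400)
    return mfcc[1:]                          # (12, n_frames)
```

Each slice of either map is then linearly projected to the 64-dim token fed to the Transformer.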
### End-To-End Architecture The recipe for an end-to-end Transformer follows the classical work of [30] and, more recently, that of [18]. We divide an input waveform into patches of 25 ms, thus having 40 chunks. Each 25 ms chunk, comprising 400 samples, is passed through a series of 128 convolutional filters of length 200, zero-padded such that the output of each convolutional filter has the same length as the input. We then take the maximum over time of each convolutional filter's output to get a single vector, of length equal to the number of convolutional filters, for each 25 ms. This vector is projected to 64 dimensions via a linear layer, and sinusoidal positional embeddings are added before it is passed onto the Transformer module. ### Neuralogram: Stacked Embeddings In this representation, we project each 100 ms waveform chunk through a trained convolutional architecture, MobileNet [31]. Stacking the outputs of the last convolutional layer (the Neuralogram [32]), where each 100 ms input gives a 1024-dim embedding vector, yields a representation of shape 1024 x 10 for 1 s of audio. This vector is projected to 64 dimensions via a linear layer, and sinusoidal positional embeddings are added before it is passed onto the Transformer module. We think this is different from an end-to-end architecture: first, the architectures themselves are different; second, we use the convolutional module only as a projector of smaller waveform chunks, and a Transformer architecture still learns the actual dependencies across time. Individually, understanding the contents of the signal from just 100 ms is a difficult task, and for humans, too, more context is generally needed. However, the embeddings are projected onto a space that can be fed to the Transformers to understand context. With these representations, in this work, we train each of them separately on the embeddings with the given labels and combine them to get a stacked representation that is interpretable and robust, achieving significant gains when used together. ### Combining Embeddings For the baseline architecture, we report the top-5 accuracy on the test set for an end-to-end architecture. This is the exact model described in the section above. As per our introduction, we use the 64-dim embedding to get the representation for each one of the feature sets, namely i) pitch/frequency content, ii) timbre, iii) end-to-end architecture, and iv) pretrained stacked embedding. Since we train each of these four architectures separately, on the inputs described before, all of which correspond to 1 s of audio, we treat the 64-dim output before the linear classification head (after the global average pooling operation) as the representation/embedding for the contents of that audio signal. We then train the same linear classifier on subsets of the embeddings to see how well we do, stacking a robust, diverse, interpretable feature set together with an end-to-end learned architecture.
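A sketch of this stacking step, under the assumption that the four encoders are kept frozen and only the classification head (the same 2048-unit layer followed by a 200-way output, with an assumed ReLU) is retrained on the concatenated 64-dim embeddings:

```python
import torch
import torch.nn as nn

class StackedHead(nn.Module):
    """Classification head retrained on concatenated, frozen
    per-representation embeddings (pitch, timbre, end-to-end, Neuralogram)."""

    def __init__(self, n_embeddings=4, dim=64, n_classes=200):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(n_embeddings * dim, 2048),
            nn.ReLU(),                      # non-linearity is our assumption
            nn.Linear(2048, n_classes),
        )

    def forward(self, embeddings):
        # embeddings: list of (batch, 64) tensors, one per feature set
        return self.head(torch.cat(embeddings, dim=-1))
```

Training on a subset of the feature sets amounts to concatenating fewer embeddings and adjusting `n_embeddings` accordingly.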
## 4 Results and Discussion We, in the first experiment, report how well we do when we use each of the input feature representations or the end-to-end architecture individually. We trained each model architecture separately and report the top-5 accuracy on FSD-50K in Table 1. The best results are obtained by learning an embedding through projecting the waveform onto MobileNet embeddings and training a 6-layer Transformer module on top of them. This method surpassed an end-to-end learned architecture with the same number of parameters. One hypothesis is that most of the heavy lifting has already been carried out by the convnet in projecting waveform patches to separable neural embeddings. Hence, most of the parameters of the neural Transformer are dedicated to learning interdependencies amongst the latent codes, as opposed to the end-to-end model, which has to learn not only the separable latent codes but also the connections from scratch. An important point is that a pitch-based representation retaining just the binary locations of spectral peaks does not achieve competitive results. Yet, the embedding learned is robust, as the model tries its best to achieve the best results from a sub-optimal representation, without taking help from other features that an end-to-end or waveform-based architecture might use. Similar arguments can be made for the other input representations. This work shows that first learning diverse latent embeddings on (sub-optimal) representations, and then re-training just the linear head, achieves state-of-the-art accuracy, surpassing end-to-end architectures. ## 5 Conclusion and Future Work In this paper, we have shown how prior domain-specific feature embeddings can be extracted and used in conjunction with end-to-end learned embeddings. This is particularly important, as by learning feature-specific embeddings, we learn a robust feature set that focuses on the best representation within that domain-specific representation, rather than taking help from other signals. In this work, we diversify our feature set by first learning diverse features based on pitch, timbre, an end-to-end architecture, and convolutional embeddings. These category-specific features, combined with end-to-end architecture-derived embeddings, not only add to the interpretability and robustness of the learned representation but also help us increase the performance over a baseline end-to-end learned architecture by quite a significant amount. We hope this work will pave the way for bringing domain expertise together with optimized end-to-end architectures. ## 6 Acknowledgement This work was supported by the Stanford Institute for Human-Centered AI (HAI) through a Google Cloud computing grant.
With the advent of modern AI architectures, a shift has happened towards end-to-end architectures. This pivot has led to neural architectures being trained without domain-specific biases/knowledge, optimized according to the task. In this paper, we learn audio embeddings via diverse feature representations, in this case domain-specific ones. For audio classification over hundreds of categories of sound, we learn robust separate embeddings for diverse audio properties such as pitch, timbre, and neural representation, along with learning them via an end-to-end architecture. We observe that hand-crafted embeddings, e.g., pitch- and timbre-based ones, are on their own not able to beat a fully end-to-end representation, yet combining these embeddings with the end-to-end embedding significantly improves performance.
2308.00024
Searching for High-Energy Neutrino Emission from Seyfert Galaxies in the Northern Sky with IceCube
The recent detection of TeV neutrino emission from the nearby active galaxy NGC 1068 by IceCube suggests that AGN could make a sizable contribution to the total high-energy cosmic neutrino flux. The absence of TeV gamma rays from NGC 1068, indicates neutrino production originates in the innermost region of the AGN. Disk-corona models predict a correlation between neutrinos and keV X-rays in Seyfert galaxies, a subclass of AGN to which NGC 1068 belongs. Using 10 years of IceCube through-going track events, we report results from searches for neutrino signals from 27 additional sources in the Northern Sky by studying both the generic single power-law spectral assumption and spectra predicted by the disk-corona model. Our results show excesses of neutrinos associated with two sources, NGC 4151 and CGCG 420-015, at 2.7$\sigma$ significance, and at the same time constrain the collective neutrino emission from our source list.
Theo Glauch, Ali Kheirandish, Tomas Kontrimas, Qinrui Liu, Hans Niederhausen
2023-07-31T18:00:01
http://arxiv.org/abs/2308.00024v1
# Searching for High-Energy Neutrino Emission from Seyfert Galaxies in the Northern Sky with IceCube ###### Abstract The recent detection of TeV neutrino emission from the nearby active galaxy NGC 1068 by IceCube suggests that AGN could make a sizable contribution to the total high-energy cosmic neutrino flux. The absence of TeV gamma rays from NGC 1068 indicates that neutrino production originates in the innermost region of the AGN. Disk-corona models predict a correlation between neutrinos and keV X-rays in Seyfert galaxies, a subclass of AGN to which NGC 1068 belongs. Using 10 years of IceCube through-going track events, we report results from searches for neutrino signals from 27 additional sources in the Northern Sky, studying both the generic single power-law spectral assumption and spectra predicted by the disk-corona model. Our results show excesses of neutrinos associated with two sources, NGC 4151 and CGCG 420-015, at 2.7\(\sigma\) significance, and at the same time constrain the collective neutrino emission from our source list. Theo Glauch\({}^{1}\), Ali Kheirandish\({}^{2}\), Tomas Kontrimas\({}^{1}\), Qinrui Liu\({}^{3,4*}\), Hans Niederhausen\({}^{5}\) \({}^{1}\) Technical University of Munich, TUM School of Natural Sciences, Dept. of Physics \({}^{2}\) Dept. of Physics & Astronomy and Nevada Center for Astrophysics, University of Nevada, Las Vegas \({}^{3}\) Dept. of Physics, Engineering Physics & Astronomy and Arthur B. McDonald Canadian Astroparticle Physics Research Institute, Queen's University \({}^{4}\) Perimeter Institute for Theoretical Physics \({}^{5}\) Dept. of Physics and Astronomy, Michigan State University \({}^{*}\) Presenter The 38th International Cosmic Ray Conference (ICRC2023) 26 July - 3 August, 2023 Nagoya, Japan ## 1 Introduction The continuous observation of the high-energy neutrino sky by IceCube has recently revealed evidence for particle acceleration in a nearby Seyfert galaxy, NGC 1068 [1]. This result reinforces the idea that active galactic nuclei (AGN) are cosmic ray (CR) accelerators and make a sizable contribution to the flux of high-energy cosmic neutrinos. With the origin of the rest of the astrophysical neutrino flux unknown, it is well motivated to search for sources similar to NGC 1068. NGC 1068 was identified as the most significant source so far, with an excess in the energy range of 1.5-15 TeV. The measured neutrino flux from NGC 1068 is much larger than the \(\sim\)GeV gamma-ray flux measured by _Fermi_-LAT [2, 3], as well as the upper limits on \(\sim\)TeV gamma-ray emission placed by MAGIC and HAWC [4, 5]. As the interactions of CRs simultaneously produce high-energy neutrinos and gamma rays at the same flux level, the observations indicate that the environments where the neutrinos are produced must be opaque to the accompanying gamma rays. The primary candidate is the core of the AGN, which can accommodate the efficient production of neutrinos and simultaneously provide an optically thick region where gamma rays are obscured [6]. At the same time, the measurement of the total neutrino flux shows that the flux at medium energies (\(\sim 30\) TeV) is an order of magnitude greater than that at high energies (\(\gtrsim 100\) TeV), which implies that sources dominating the medium energies should be opaque to gamma rays in order not to exceed the isotropic gamma-ray background. ## 2 Seyfert Galaxies as High-energy Neutrino sources In this study, we investigate neutrino emission from the coronae of Seyfert galaxies [6, 7].
In Seyfert galaxies, accretion dynamics and magnetic dissipation lead to the formation of a hot, highly magnetized, and turbulent corona above the disk [8]. The dense environment near the supermassive black hole provides suitable conditions for the interactions of CRs and the simultaneous absorption of the accompanying gamma rays. These models, commonly referred to as disk-corona models, can accommodate the excess of neutrino flux at medium energies and the observed flux from NGC 1068 [6, 9, 10, 11, 7]. Here, we employ the predicted neutrino flux from the disk-corona model presented in [6, 7]. In this model, CRs are accelerated stochastically by plasma turbulence in the coronae and then interact with gas or radiation in the innermost regions of the AGN to produce neutrinos. AGN coronae are primarily characterized by thermal X-ray emission, making the intrinsic X-ray luminosity \(L_{X}\) the principal parameter in disk-corona models for estimating the neutrino emission. Other model parameters include the ratio of CR to thermal pressure, which summarizes the CR budget, and the turbulence strength. While moderate values of the CR-to-thermal pressure ratio can explain the medium-energy neutrino flux, a higher level of CR pressure is needed to explain the neutrino flux measured in the direction of NGC 1068. This assumption is heavily tied to the measured intrinsic X-ray flux. For this study, we solely focus on the high-CR-pressure scenario, given that the identification of sources with moderate CR pressure requires next-generation neutrino telescopes. Based on the reported intrinsic X-ray flux, this model also finds NGC 1068 to be the brightest source in IceCube and suggests that additional sources might be identified if they share similar characteristics with NGC 1068. Here, we conduct analyses focusing on potential neutrino emission from X-ray-bright Seyfert galaxies with IceCube muon track events from the Northern Sky (declination > -5\({}^{\circ}\)), chosen for the good pointing power of track events and the effective suppression of the overwhelming atmospheric muons for up-going events, with the Earth acting as a filter. ## 3 Analyses Our source selection is based on the BAT AGN Spectroscopic Survey (BASS) [12], an all-sky study of X-ray-detected AGN. In the selection, we pick bright Seyfert galaxies in the Northern sky according to their reported intrinsic X-ray fluxes at 2-10 keV, as sources with weak X-ray fluxes are not expected to produce detectable neutrino fluxes. NGC 1068 is one of the brightest in this list. The selection retains 28 sources in the Northern Sky, including NGC 1068. Given the known strong flux from this source, including NGC 1068 in the list would bias the search. Therefore, we discuss the exclusion and inclusion of NGC 1068 separately. To be conservative, and to take into account the fact that the remaining sources can still give neutrino signals significant enough based on the model expectation, we conclude our results without NGC 1068; the results including NGC 1068 are shown for completeness. In this work, we analyze the \(\nu_{\mu}\)-induced muon tracks from the Northern sky. The data sample is processed in the same way as in [1], which includes new data processing, data calibration, and event reconstruction that grant us substantially improved energy reconstructions and point spread functions at low to medium energies. In addition to the data used in the previous work, 1.7 years of experimental data were added to the sample.
This extension of livetime increases the statistics by \(\sim 20\%\) compared to the data used in [1].

Figure 1: The expected flux of each source (thin lines) from the disk-corona model with the top 4 sources, which are likely to be observed by IceCube, highlighted. The total fluxes excluding or including NGC 1068 are shown, to be compared with the 5\(\sigma\) discovery potentials in both cases.

We employ the unbinned maximum likelihood ratio method for this work based on the direction, energy proxy, and angular uncertainty of the events in order to discriminate potential neutrino emission from the background composed of atmospheric and isotropic astrophysical neutrinos. We perform two types of searches. One is the catalog search, looking for neutrino emission from each source separately, using power-law and model fluxes, respectively. In addition, we conduct a binomial test to examine the significance of observing excesses of \(k\) sources under the two flux hypotheses in our catalog search. The other is the stacking search, where the emission from all selected sources is combined in order to obtain an enhanced signal above the background. In the stacking analysis, only the model flux is tested. We apply the improved kernel density estimation (KDE) method presented in [1] to these analyses to generate the probability density functions (PDFs). This method improves the modeling of directional distributions of neutrinos significantly compared to the multivariate Gaussian approximation used in previous IceCube analyses. The application of the KDE method depends on the shape of the energy spectrum. For the analyses assuming the disk-corona model, the flux shape varies with \(L_{X}\) and the flux normalization changes with the CR pressure. Other parameters in the calculation are fixed to values fitting the observed flux from NGC 1068, assuming all sources to be intrinsically similar to NGC 1068. Accordingly, we apply the KDE to generate the grid of PDFs for the model-flux analyses based on \(L_{X}\). As the shape of the flux is determined by the X-ray luminosity, the only free parameter to be fitted in the search is the number of signal events \(n_{s}\), which sets the flux normalization. The expected fluxes of the selected sources, with parameters set to the values fitting NGC 1068, are shown in Fig. 1. The total model fluxes with and without NGC 1068 for the stacking search are also shown in comparison to the \(5\sigma\) discovery potential. Even excluding the contribution from NGC 1068, the expected emission exceeds the discovery criterion under the optimistic model scenario, i.e., high CR pressure. Inspection of the analysis performance shows that, if the disk-corona model predicts the true flux, modeling the flux correctly gives a notable improvement compared to fitting a power-law spectrum; the size of this improvement is source-dependent. As stated above, in addition to the catalog search and stacking search based on the fluxes predicted by the disk-corona model, we also perform a catalog search with the power-law spectrum assumption, where the spectral index \(\gamma\) is fitted as well as \(n_{s}\). This search follows the same procedure as in [1], and we continue to use the PDF generation for each spectral index for the power-law flux.
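To make the likelihood construction concrete, the following is a minimal Python sketch of an unbinned likelihood-ratio fit for a single source. The Gaussian spatial signal PDF, the uniform background PDF, the patch size, and the toy event sample are all illustrative assumptions and not the IceCube implementation, which additionally includes energy PDFs and the KDE-based modeling described above.

```python
# Minimal sketch of an unbinned point-source likelihood-ratio test.
# Toy assumptions: Gaussian spatial signal PDF with per-event angular
# error, uniform background PDF on a small sky patch.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(42)

N = 1000                        # total events in the sky patch
patch_area = 100.0              # deg^2, solid angle of the patch (toy)
src = np.array([0.0, 0.0])      # hypothetical source position (deg)

# Toy data: mostly uniform background plus a few signal-like events.
events = rng.uniform(-5.0, 5.0, size=(N, 2))
events[:20] = src + rng.normal(0.0, 0.5, size=(20, 2))
sigma = np.full(N, 0.5)         # per-event angular uncertainty (deg)

# Signal PDF: 2D Gaussian around the source; background PDF: uniform.
d2 = np.sum((events - src) ** 2, axis=1)
S = np.exp(-d2 / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
B = np.full(N, 1.0 / patch_area)

def neg_log_L(ns):
    # L(n_s) = prod_i [ (n_s/N) S_i + (1 - n_s/N) B_i ]
    return -np.sum(np.log((ns / N) * S + (1.0 - ns / N) * B))

res = minimize_scalar(neg_log_L, bounds=(0.0, 200.0), method="bounded")
ns_hat = res.x
TS = 2.0 * (neg_log_L(0.0) - neg_log_L(ns_hat))  # test statistic
print(f"best-fit n_s = {ns_hat:.1f}, TS = {TS:.1f}")
```

In the power-law variant of the search, an energy term multiplies each PDF and the spectral index \(\gamma\) is fitted alongside \(n_{s}\); in the model-flux variant the spectral shape is fixed by \(L_{X}\), so \(n_{s}\) remains the only free parameter, as described above.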
The power-law search complements the model-based searches discussed above, both to catch possible high-energy events that would be missed due to the high-energy cutoff of the model spectrum and to allow an intuitive comparison with other work through the usual power-law flux assumption. ## 4 Results & Discussion The results for the top sources in the two catalog searches and the stacking search are summarized in Table 1. In addition to NGC 1068, we find that excesses of neutrino emission could be associated with two other sources: CGCG 420-015 and NGC 4151. CGCG 420-015 is the most significant in the search based on the disk-corona model flux assumption with a \(2.5\sigma\) post-trial significance, while NGC 4151 stands out in the search based on the power-law spectrum assumption with a \(2.1\sigma\) post-trial significance. The significance of NGC 1068 increases owing to the increase of the statistics of the data.

\begin{table} \begin{tabular}{l l l l l l l l} \hline & spectrum & \(n_{\rm exp}\) & \(\hat{n}_{\rm s}\) & \(\hat{\gamma}\) & \(p_{\rm local}\) & \(p_{\rm global}\) & \(n_{\rm UL}^{90\%}\) \\ \hline Stacking Searches & & & & & & & \\ \hline Stacking (excl.) & disk-corona & 154.4 & 5 & - & 2.4\(\times 10^{-1}\) (0.7 \(\sigma\)) & 2.4\(\times 10^{-1}\) (0.7 \(\sigma\)) & 51.1 \\ Stacking (incl.) \({}^{(*)}\) & disk-corona & 198.9 & 77 & - & 1.1\(\times 10^{-4}\) (3.7 \(\sigma\)) & – & 128.0 \\ \hline Catalog Search 1 & & & & & & \\ \hline CGCG 420-015 & disk-corona & 3.2 & 31 & - & 2.4\(\times 10^{-4}\) (3.5 \(\sigma\)) & 6.5\(\times 10^{-3}\) (2.5 \(\sigma\)) & 46.4 \\ NGC 4151 & disk-corona & 13.1 & 23 & - & 6.4\(\times 10^{-4}\) (3.2 \(\sigma\)) & – & 39.5 \\ NGC 1068 \({}^{(*)}\) & disk-corona & 44.6 & 48 & - & 3.0\(\times 10^{-7}\) (5.0 \(\sigma\)) & – & 61.4 \\ \hline Catalog Search 2 & & & & & & \\ \hline NGC 4151 & powerlaw & – & 30 & 2.7 & 6.4\(\times 10^{-4}\) (3.2 \(\sigma\)) & 1.7\(\times 10^{-2}\) (2.1 \(\sigma\)) & 61.4 \\ CGCG 420-015 & powerlaw & – & 35 & 2.8 & 3.0\(\times 10^{-3}\) (2.7 \(\sigma\)) & – & 62.1 \\ NGC 1068 \({}^{(*)}\) & powerlaw & – & 94 & 3.3 & 8.0\(\times 10^{-8}\) (5.2 \(\sigma\)) & – & 94.9 \\ \hline \end{tabular} \end{table} Table 1: Results for the stacking search and selected results from two catalog searches. Best-fit signal events \(\hat{n}_{\rm s}\), pre-trial and post-trial \(p\)-values are shown with the post-trial significance. For the model analysis, expected numbers of events (\(n_{\rm exp}\)) are listed, and for the power-law analysis, best-fit spectral indices \(\hat{\gamma}\) are listed. The \(n_{\rm UL}^{90\%}\) column shows the 90% confidence level upper limits on the numbers of signal events. Upper limits for power-law spectra are given assuming \(\gamma=3\). Results marked with \({}^{(*)}\) are provided for completeness but are not used to compute final significances because evidence for neutrino emission from NGC 1068 was known prior to this work [1, 13].

Figure 2: Local pre-trial \(p\)-value maps around the top sources NGC 1068, NGC 4151 and CGCG 420-015 with the model fit (top) and the power-law fit (bottom). Colored points show the locations of sources and crosses show the best-fit locations. Contours correspond to 68% (solid) and 95% (dashed) confidence regions.

Fig. 2 shows the \(p\)-value scans in the regions around the top sources under our two flux assumptions. For all selected sources, Fig. 3 displays the numbers of expected events as well as the measurements with the 90% confidence level upper limits.
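The binomial test quoted next combines the per-source \(p\)-values of the catalog search into a single catalog-level significance. A short sketch of the standard construction is given below; the input \(p\)-values are invented for illustration, and the post-trial calibration by pseudo-experiments is an assumed, generic choice rather than the exact IceCube procedure.

```python
# Sketch of a binomial test over a catalog of per-source p-values.
# Input p-values here are invented for illustration only.
import numpy as np
from scipy.stats import binom

p_values = np.sort(np.array([2.4e-4, 6.4e-4, 0.02, 0.11, 0.35, 0.5,
                             0.62, 0.71, 0.83, 0.9]))  # toy catalog
N = len(p_values)

# For each k: probability that >= k of N sources fluctuate below the
# k-th smallest p-value; the smallest value defines the test statistic.
p_binom = [binom.sf(k - 1, N, p_values[k - 1]) for k in range(1, N + 1)]
k_best = int(np.argmin(p_binom)) + 1
p_pre = p_binom[k_best - 1]
print(f"best k = {k_best}, pre-trial binomial p = {p_pre:.2e}")

# Post-trial: calibrate against pseudo-experiments with uniform
# p-values, since the scan over k is itself a trial factor.
n_trials, count = 10000, 0
rng = np.random.default_rng(1)
for _ in range(n_trials):
    ps = np.sort(rng.uniform(size=N))
    best = min(binom.sf(k - 1, N, ps[k - 1]) for k in range(1, N + 1))
    count += best <= p_pre
print(f"post-trial p = {count / n_trials:.3f}")
```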
The binomial test yields a post-trial 2.7\(\sigma\) excess from CGCG 420-015 and NGC 4151 when we exclude NGC 1068, and the significance grows to 4\(\sigma\) when NGC 1068 is included. No significant excess is found in the stacking search, with a \(p\)-value of 0.24 without the contribution from NGC 1068, and the best-fit event number is much below the expectation. The results, on one hand, demonstrate the feasibility of identifying sources similar to NGC 1068 in the catalog searches and the binomial test. On the other hand, the absence of a strong signal in the stacking search implies that the model parameters suited to explain the observed neutrino flux from NGC 1068 are unlikely to be shared by most sources in the selected list. The first implication of the results is that the CR pressure, which sets the normalization of CRs at the source, is lower for most sources than the value fitted for NGC 1068. As discussed in [7], more moderate neutrino emission scenarios are beyond the detectability of current neutrino telescopes, and the identification of those sources is more feasible with next-generation detectors. Meanwhile, the selection of bright Seyfert galaxies and the calculation of the expected neutrino flux in the disk-corona model depend strongly on the intrinsic X-ray flux reported by BASS, which introduces the primary uncertainty in the analysis, as precise estimation of the intrinsic luminosity is challenging for Compton-thick sources. Although the BASS catalog offers the most comprehensive survey of non-jetted AGN, more accurate measurements are usually accomplished with targeted instruments such as _NuSTAR_. It is worth mentioning that the higher intrinsic flux from NGC 1068 reported in [14] would indicate a lower CR pressure, which would decrease the expected emission from the other sources in the catalog. ## 5 Summary In this study, we searched for high-energy neutrino emission from X-ray bright Seyfert galaxies in the Northern Hemisphere. We incorporate the disk-corona model to perform a catalog search and a stacking search on our selected sources, and a generic power-law spectrum assumption is also applied in a catalog search. As there is no significant excess of neutrino events observed in the stacking search, we can constrain the collective neutrino emission from those X-ray bright Seyfert galaxies in the Northern sky. However, our results hint at neutrino emission from two sources, NGC 4151 and CGCG 420-015, in addition to NGC 1068. Our results may indicate the existence of sources similar to NGC 1068 whose neutrino emission can possibly be explained by the disk-corona model. Nevertheless, the absence of a significant correlation in the stacking search and for most individual sources implies that the features of NGC 1068 leading to its strong neutrino emission are not commonly shared by other X-ray bright Seyfert galaxies. The expectation of neutrino emission relies considerably on the details of the modeling within the disk-corona picture, and more comprehensive multi-wavelength observations will provide further insight into the characteristics of the potential sources, which will benefit the modeling significantly. IceCube-Gen2, the next generation of the IceCube detector [15], will be 8 times larger in volume with an expected \(\sim\)5 times increase of the muon-track effective area. The sensitivity to \(\nu_{\mu}\) fluxes is expected to rise similarly.
This improvement offers promising prospects for strengthening the excesses from the interesting sources and for finding more sources in the future, including ones expected to have moderate neutrino emission. Considering that the majority of the bright Seyfert galaxies reside in the Southern Sky, the improved sensitivity in this region, recently achieved through technical progress in track-event selection by IceCube [16], provides an opportunity to identify more sources. A similar study focusing on the Southern Sky X-ray bright Seyfert galaxies using this selection is presented in [17]. In the upcoming years, detectors instrumented in the Northern Hemisphere will boost the identification of sources in the Southern Sky, complementing the detection prospects in the Northern Sky.
2308.16646
Hydrodynamic limit and Newtonian limit from the relativistic Boltzmann equation to the classical Euler equations
The hydrodynamic limit and Newtonian limit are important in relativistic kinetic theory. We justify rigorously the validity of the two independent limits from the special relativistic Boltzmann equation to the classical Euler equations without assuming any dependence between the Knudsen number $\varepsilon$ and the light speed $\mathfrak{c}$. The convergence rates are also obtained. This is achieved by a Hilbert expansion of the relativistic Boltzmann equation. New difficulties arise when tackling the uniform-in-$\mathfrak{c}$ and $\varepsilon$ estimates for the Hilbert expansion; these have been overcome by establishing uniform-in-$\mathfrak{c}$ estimates for the relativistic Boltzmann operators.
Yong Wang, Changguo Xiao
2023-08-31T11:36:31
http://arxiv.org/abs/2308.16646v1
Hydrodynamic limit and Newtonian limit from the relativistic Boltzmann equation to the classical Euler equations ###### Abstract. The hydrodynamic limit and Newtonian limit are important in relativistic kinetic theory. We justify rigorously the validity of the two independent limits from the special relativistic Boltzmann equation to the classical Euler equations without assuming any dependence between the Knudsen number \(\varepsilon\) and the light speed \(\mathfrak{c}\). The convergence rates are also obtained. This is achieved by a Hilbert expansion of the relativistic Boltzmann equation. New difficulties arise when tackling the uniform-in-\(\mathfrak{c}\) and \(\varepsilon\) estimates for the Hilbert expansion; these have been overcome by establishing uniform-in-\(\mathfrak{c}\) estimates for the relativistic Boltzmann operators. Key words and phrases: relativistic Boltzmann equation; relativistic Euler equations; hydrodynamic limit; Newtonian limit; Hilbert expansion 2010 Mathematics Subject Classification: 82C40; 35Q20; 35Q75; 76P05; 76Y05 * Corresponding author: changguoxiao@mailbox.gxnu.edu.cn ###### Contents * 1 Introduction * 2 Preliminaries * 3 The Newtonian limit of the relativistic Euler equations * 4 Uniform-in-\(\mathfrak{c}\) estimates on the linearized collision operators * 5 Uniform-in-\(\mathfrak{c}\) estimates on the linear part of Hilbert expansion * 6 Uniform in \(\mathfrak{c}\) and \(\varepsilon\) estimates on the remainder \(F_{R}^{\varepsilon,\mathfrak{c}}\) * 7 Appendix: Derivation of the orthonormal basis of \(\mathcal{N}_{\mathfrak{c}}\) ## 1. Introduction ### The relativistic Boltzmann equation We consider the special relativistic Boltzmann equation \[p^{\mu}\partial_{\mu}F=\frac{1}{\varepsilon}\mathcal{C}(F,F), \tag{1.1}\] which describes the dynamics of single-species relativistic particles. The dimensionless parameter \(\varepsilon\) is the Knudsen number, which is proportional to the mean free path. The unknown \(F(t,x,p)\geq 0\) is a distribution function for relativistic particles with position \(x=(x_{1},x_{2},x_{3})\in\Omega\) and particle momentum \(p=(p^{1},p^{2},p^{3})\in\mathbb{R}^{3}\) at time \(t>0\). The collision term \(\mathcal{C}(h_{1},h_{2})\) is defined by \[\mathcal{C}(h_{1},h_{2})=\frac{1}{2}\int_{\mathbb{R}^{3}}\frac{dq}{q^{0}}\int_{\mathbb{R}^{3}}\frac{dp^{\prime}}{p^{\prime 0}}\int_{\mathbb{R}^{3}}\frac{dq^{\prime}}{q^{\prime 0}}W\left(p,q\mid p^{\prime},q^{\prime}\right)\left[h_{1}\left(p^{\prime}\right)h_{2}\left(q^{\prime}\right)-h_{1}(p)h_{2}(q)\right],\] where the transition rate \(W\left(p,q\mid p^{\prime},q^{\prime}\right)\) has the form \[W\left(p,q\mid p^{\prime},q^{\prime}\right)=s\varsigma(g,\vartheta)\delta(p^{0}+q^{0}-p^{\prime 0}-q^{\prime 0})\delta^{(3)}(p+q-p^{\prime}-q^{\prime}). \tag{1.2}\] The streaming term of the relativistic Boltzmann equation (1.1) is given by \[p^{\mu}\partial_{\mu}=\frac{p^{0}}{\mathfrak{c}}\partial_{t}+p\cdot\nabla_{x},\] where \(\mathfrak{c}\) denotes the speed of light and \(p^{0}\) denotes the energy of a relativistic particle with \[p^{0}=\sqrt{m_{0}^{2}\mathfrak{c}^{2}+|p|^{2}}.\] Here \(m_{0}\) denotes the rest mass of a particle.
Now we can rewrite (1.1) as \[\partial_{t}F+\hat{p}\cdot\nabla_{x}F=\frac{1}{\varepsilon}Q(F,F), \tag{1.3}\] where \(\hat{p}\) denotes the normalized particle velocity \[\hat{p}:=\mathfrak{c}\frac{p}{p^{0}}=\frac{\mathfrak{c}p}{\sqrt{m_{0}^{2}\mathfrak{c}^{2}+|p|^{2}}}.\] The collision term \(Q(h_{1},h_{2})\) in (1.3) has the form \[Q(h_{1},h_{2})=\frac{\mathfrak{c}}{2}\frac{1}{p^{0}}\int_{\mathbb{R}^{3}}\frac{dq}{q^{0}}\int_{\mathbb{R}^{3}}\frac{dp^{\prime}}{p^{\prime 0}}\int_{\mathbb{R}^{3}}\frac{dq^{\prime}}{q^{\prime 0}}W\left(p,q\mid p^{\prime},q^{\prime}\right)\left[h_{1}\left(p^{\prime}\right)h_{2}\left(q^{\prime}\right)-h_{1}(p)h_{2}(q)\right].\] We denote the energy-momentum 4-vector as \(p^{\mu}=(p^{0},p^{1},p^{2},p^{3})\). The energy-momentum 4-vector with the lower index is written as a product in the Minkowski metric \(p_{\mu}=g_{\mu\nu}p^{\nu}\), where the Minkowski metric is given by \(g_{\mu\nu}=\text{diag}(-1,1,1,1)\). The inner product of energy-momentum 4-vectors \(p^{\mu}\) and \(q_{\mu}\) is defined via the Minkowski metric \[p^{\mu}q_{\mu}=p^{\mu}g_{\mu\nu}q^{\nu}=-p^{0}q^{0}+\sum_{i=1}^{3}p^{i}q^{i}.\] Then it is clear that \[p^{\mu}p_{\mu}=-m_{0}^{2}\mathfrak{c}^{2}.\] We note that the inner product of energy-momentum 4-vectors is Lorentz invariant. The quantity \(s\) is the square of the energy in the _center of momentum system_, \(p+q=0\), and is given as \[s=s(p,q)=-\left(p^{\mu}+q^{\mu}\right)\left(p_{\mu}+q_{\mu}\right)=2\left(p^{0}q^{0}-p\cdot q+m_{0}^{2}\mathfrak{c}^{2}\right)\geq 4m_{0}^{2}\mathfrak{c}^{2}.\] The relative momentum \(g\) in (1.2) is defined as \[g=g(p,q)=\sqrt{\left(p^{\mu}-q^{\mu}\right)\left(p_{\mu}-q_{\mu}\right)}=\sqrt{2\left(p^{0}q^{0}-p\cdot q-m_{0}^{2}\mathfrak{c}^{2}\right)}\geq 0.\] It follows directly that \[s=g^{2}+4m_{0}^{2}\mathfrak{c}^{2}.\] The post-collision momentum pair \((p^{\prime\mu},q^{\prime\mu})\) and the pre-collision momentum pair \((p^{\mu},q^{\mu})\) satisfy the relation \[p^{\mu}+q^{\mu}=p^{\prime\mu}+q^{\prime\mu}. \tag{1.4}\] One may also write (1.4) as \[p^{0}+q^{0}=p^{\prime 0}+q^{\prime 0}, \tag{1.5}\] \[p+q=p^{\prime}+q^{\prime}, \tag{1.6}\] where (1.5) represents the principle of conservation of energy and (1.6) represents the conservation of momentum after a binary collision. Using Lorentz transformations in [22, 47], in the _center of momentum system_, \(Q(F,F)\) can be written as \[Q(F,F)=\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}v_{\phi}\varsigma(g,\vartheta)\Big{[}F(p^{\prime})F(q^{\prime})-F(p)F(q)\Big{]}d\omega dq:=Q^{+}(F,F)-Q^{-}(F,F), \tag{1.7}\] where \(v_{\phi}=v_{\phi}(p,q)\) is the Møller velocity \[v_{\phi}(p,q):=\frac{\mathfrak{c}}{2}\sqrt{\left|\frac{p}{p^{0}}-\frac{q}{q^{0}}\right|^{2}-\left|\frac{p}{p^{0}}\times\frac{q}{q^{0}}\right|^{2}}=\frac{\mathfrak{c}}{4}\frac{g\sqrt{s}}{p^{0}q^{0}}.\] The pre- and post-collisional momenta in (1.7) satisfy \[\begin{cases}p^{\prime}=\frac{1}{2}(p+q)+\frac{1}{2}g\Big{(}\omega+(\gamma_{0}-1)(p+q)\frac{(p+q)\cdot\omega}{|p+q|^{2}}\Big{)},\\ q^{\prime}=\frac{1}{2}(p+q)-\frac{1}{2}g\Big{(}\omega+(\gamma_{0}-1)(p+q)\frac{(p+q)\cdot\omega}{|p+q|^{2}}\Big{)},\end{cases}\] where \(\gamma_{0}:=(p^{0}+q^{0})/\sqrt{s}\).
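This parametrization can be sanity-checked numerically. The Python sketch below (with \(m_{0}=1\), a toy value of \(\mathfrak{c}\), and generic momenta chosen arbitrarily) verifies the identity \(s=g^{2}+4m_{0}^{2}\mathfrak{c}^{2}\), momentum conservation, and, by putting \(p^{\prime}\), \(q^{\prime}\) back on the mass shell, energy conservation; the closed-form post-collisional energies are stated next in the text.

```python
# Numerical sanity check of the center-of-momentum collision
# parametrization (units with m_0 = 1; momenta chosen generically so
# that p + q != 0 and the (gamma_0 - 1) term is well defined).
import numpy as np

c = 2.0                                   # toy value of the light speed
p = np.array([0.3, -0.2, 0.5])
q = np.array([-0.1, 0.4, 0.2])
omega = np.array([1.0, 2.0, -1.0])
omega = omega / np.linalg.norm(omega)     # unit vector on S^2

p0 = np.sqrt(c**2 + p @ p)                # p^0 = sqrt(m^2 c^2 + |p|^2)
q0 = np.sqrt(c**2 + q @ q)
s = 2.0 * (p0 * q0 - p @ q + c**2)
g = np.sqrt(2.0 * (p0 * q0 - p @ q - c**2))
assert np.isclose(s, g**2 + 4.0 * c**2)   # s = g^2 + 4 m_0^2 c^2

gamma0 = (p0 + q0) / np.sqrt(s)
P = p + q
shift = 0.5 * g * (omega + (gamma0 - 1.0) * P * (P @ omega) / (P @ P))
p_prime = 0.5 * P + shift
q_prime = 0.5 * P - shift

pp0 = np.sqrt(c**2 + p_prime @ p_prime)   # post-collision energies
qq0 = np.sqrt(c**2 + q_prime @ q_prime)
assert np.allclose(p + q, p_prime + q_prime)    # momentum conservation
assert np.isclose(p0 + q0, pp0 + qq0)           # energy conservation
print("collision kinematics consistent")
```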
The pre-post collisional energy is given by \[\begin{cases}p^{\prime 0}=\frac{1}{2}(p^{0}+q^{0})+\frac{1}{2}\frac{g}{ \sqrt{s}}(p+q)\cdot\omega,\\ q^{\prime 0}=\frac{1}{2}(p^{0}+q^{0})-\frac{1}{2}\frac{g}{\sqrt{s}}(p+q) \cdot\omega.\end{cases}\] The scattering angle \(\vartheta\) is defined by \[\cos\vartheta:=\frac{(p^{\mu}-q^{\mu})(p^{\prime}_{\mu}-q^{\prime}_{\mu})}{g^ {2}}.\] The angle is well defined under (1.4) and we refer to [17, Lemma 3.15.3]. The function \(\varsigma(g,\vartheta)\) in (1.2) is called the differential cross-section or scattering kernel. The relativistic differential cross section \(\varsigma(g,\vartheta)\) measures the interactions between relativistic particles. Throughout the present paper, we consider the "hard ball" particles \[\varsigma(g,\vartheta)=\text{constant}.\] Without loss of generality, we take \(\varsigma(g,\vartheta)=1\) for simplicity. The Newtonian limit in this situation, as \(\mathfrak{c}\to\infty\), is the Newtonian hard-sphere Boltzmann collision operator [48]. ### Hilbert expansion In the present paper, we are concerned with both the hydrodynamic limit and Newtonian limit from the relativistic Boltzmann equation to the classical Euler equations. To achieve this, we perform a Hilbert expansion for the relativistic Boltzmann equation (1.3) with small Knudsen number \(\varepsilon\). To emphasize the dependence on \(\varepsilon\) and \(\mathfrak{c}\) for relativistic Boltzmann solutions, we denote the solutions of (1.3) as \(F^{\varepsilon,\mathfrak{c}}\) and decompose \(F^{\varepsilon,\mathfrak{c}}\) as the sum \[F^{\varepsilon,\mathfrak{c}}=\sum_{n=0}^{2k-1}\varepsilon^{n}F^{\mathfrak{c}}_ {n}+\varepsilon^{k}F^{\varepsilon,\mathfrak{c}}_{R},\quad k\geq 3, \tag{1.8}\] where \(F^{\mathfrak{c}}_{0},F^{\mathfrak{c}}_{1},\ldots,F^{\mathfrak{c}}_{2k-1}\) in (1.8) will depend upon \(\mathfrak{c}\) but be independent of \(\varepsilon\). Also, \(F^{\varepsilon,\mathfrak{c}}_{R}\) is called the remainder term which will depend upon \(\varepsilon\) and \(\mathfrak{c}\). For \(\mathfrak{c}=1\), Speck-Strain[46] have already established the Hilbert expansion for the relativistic Boltzmann equation. Since we shall consider both the hydrodynamic limit \(\varepsilon\to 0\) and Newtonian limit \(\mathfrak{c}\to\infty\) of the relativistic Boltzmann equation, it is crucial to derive the uniform-in-\(\mathfrak{c}\) estimates on \(F^{\mathfrak{c}}_{n}\) (\(n=0,1,\cdots,2k-1\)) and uniform in \(\mathfrak{c}\) and \(\varepsilon\) estimates on \(F^{\varepsilon,\mathfrak{c}}_{R}\). To determine the coefficients \(F^{\mathfrak{c}}_{0}(t,x,p),\cdots\), \(F^{\mathfrak{c}}_{2k-1}(t,x,p)\), we begin by plugging the expansion (1.8) into (1.3) to obtain \[\partial_{t}\Big{(}\sum_{n=0}^{2k-1}\varepsilon^{n}F^{\mathfrak{c }}_{n}+\varepsilon^{k}F^{\varepsilon,\mathfrak{c}}_{R}\Big{)}+\hat{p}\cdot \nabla_{x}\Big{(}\sum_{n=0}^{2k-1}\varepsilon^{n}F^{\mathfrak{c}}_{n}+ \varepsilon^{k}F^{\varepsilon,\mathfrak{c}}_{R}\Big{)}\] \[=\frac{1}{\varepsilon}Q_{\mathfrak{c}}\Big{(}\sum_{n=0}^{2k-1} \varepsilon^{n}F^{\mathfrak{c}}_{n}+\varepsilon^{k}F^{\varepsilon,\mathfrak{c} }_{R},\sum_{n=0}^{2k-1}\varepsilon^{n}F^{\mathfrak{c}}_{n}+\varepsilon^{k}F ^{\varepsilon,\mathfrak{c}}_{R}\Big{)}. 
\tag{1.9}\] Comparing the order of \(\varepsilon\) in (1.9), one has \[0=Q_{\mathfrak{c}}\left(F_{0}^{\mathfrak{c}},F_{0}^{\mathfrak{c}}\right),\] \[\partial_{t}F_{0}^{\mathfrak{c}}+\hat{p}\cdot\nabla_{x}F_{0}^{\mathfrak{c}}=Q_{\mathfrak{c}}\left(F_{0}^{\mathfrak{c}},F_{1}^{\mathfrak{c}}\right)+Q_{\mathfrak{c}}\left(F_{1}^{\mathfrak{c}},F_{0}^{\mathfrak{c}}\right),\] \[\partial_{t}F_{1}^{\mathfrak{c}}+\hat{p}\cdot\nabla_{x}F_{1}^{\mathfrak{c}}=Q_{\mathfrak{c}}\left(F_{0}^{\mathfrak{c}},F_{2}^{\mathfrak{c}}\right)+Q_{\mathfrak{c}}\left(F_{2}^{\mathfrak{c}},F_{0}^{\mathfrak{c}}\right)+Q_{\mathfrak{c}}\left(F_{1}^{\mathfrak{c}},F_{1}^{\mathfrak{c}}\right),\] \[\cdots\cdots\] \[\partial_{t}F_{n}^{\mathfrak{c}}+\hat{p}\cdot\nabla_{x}F_{n}^{\mathfrak{c}}=\sum_{\begin{subarray}{c}i+j=n+1\\ i,j\geq 0\end{subarray}}Q_{\mathfrak{c}}\left(F_{i}^{\mathfrak{c}},F_{j}^{\mathfrak{c}}\right), \tag{1.10}\] \[\cdots\cdots\] \[\partial_{t}F_{2k-1}^{\mathfrak{c}}+\hat{p}\cdot\nabla_{x}F_{2k-1}^{\mathfrak{c}}=\sum_{\begin{subarray}{c}i+j=2k\\ i,j\geq 1\end{subarray}}Q_{\mathfrak{c}}\left(F_{i}^{\mathfrak{c}},F_{j}^{\mathfrak{c}}\right).\] The remainder \(F_{R}^{\varepsilon,\mathfrak{c}}\) satisfies the equation \[\partial_{t}F_{R}^{\varepsilon,\mathfrak{c}}+\hat{p}\cdot\nabla_{x}F_{R}^{\varepsilon,\mathfrak{c}}-\frac{1}{\varepsilon}\left\{Q_{\mathfrak{c}}\left(F_{0}^{\mathfrak{c}},F_{R}^{\varepsilon,\mathfrak{c}}\right)+Q_{\mathfrak{c}}\left(F_{R}^{\varepsilon,\mathfrak{c}},F_{0}^{\mathfrak{c}}\right)\right\}=\varepsilon^{k-1}Q_{\mathfrak{c}}\left(F_{R}^{\varepsilon,\mathfrak{c}},F_{R}^{\varepsilon,\mathfrak{c}}\right)+\sum_{i=1}^{2k-1}\varepsilon^{i-1}\left\{Q_{\mathfrak{c}}\left(F_{i}^{\mathfrak{c}},F_{R}^{\varepsilon,\mathfrak{c}}\right)+Q_{\mathfrak{c}}\left(F_{R}^{\varepsilon,\mathfrak{c}},F_{i}^{\mathfrak{c}}\right)\right\}+\varepsilon^{k}A, \tag{1.11}\] where \[A:=\sum_{\begin{subarray}{c}i+j\geq 2k+1\\ 2\leq i,j\leq 2k-1\end{subarray}}\varepsilon^{i+j-1-2k}Q_{\mathfrak{c}}\left(F_{i}^{\mathfrak{c}},F_{j}^{\mathfrak{c}}\right).\] From [22, Chap.2], the first equation of (1.10) implies that \(F_{0}^{\mathfrak{c}}\) is a local Maxwellian of the form \(\mathbf{M}_{\mathfrak{c}}(n_{0},u,T_{0};p)\), i.e., \[F_{0}^{\mathfrak{c}}(t,x,p)=\mathbf{M}_{\mathfrak{c}}(n_{0},u,T_{0};p)=\frac{n_{0}\gamma}{4\pi\mathfrak{c}^{3}K_{2}(\gamma)}\exp\Big{\{}\frac{u^{\mu}p_{\mu}}{T_{0}}\Big{\}}, \tag{1.12}\] where \(\gamma\) is a dimensionless variable defined as \[\gamma=\frac{m_{0}\mathfrak{c}^{2}}{k_{B}T_{0}}\] and \(T_{0}(t,x)>0\) represents the temperature, \(n_{0}(t,x)>0\) is the proper number density, and \((u^{0},u)\) is the four-velocity. \(K_{j}(\gamma)\) \((j=0,1,2,\cdots)\) are the modified Bessel functions of the second kind defined in (2.1).
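As a quick consistency check of (1.12), the following Python sketch integrates the Jüttner distribution numerically for \(u=0\) (so \(u^{\mu}p_{\mu}=-\mathfrak{c}p^{0}\)) and confirms that it is normalized to the proper number density \(n_{0}\). Units \(m_{0}=k_{B}=1\) and the values \(n_{0}=T_{0}=1\) are assumptions for the check; the exponentially scaled Bessel function `kve` is used to avoid underflow at large \(\gamma\).

```python
# Check that the Juttner distribution (1.12) with u = 0 integrates to
# the proper number density n0 (units m_0 = k_B = 1; here n0 = T0 = 1).
import numpy as np
from scipy.integrate import quad
from scipy.special import kve   # scaled K_nu: kve(n, z) = K_n(z) e^z

def juttner_number_density(c, T0=1.0, n0=1.0):
    gamma = c**2 / T0
    # normalization n0*gamma/(4 pi c^3 K_2(gamma)); the factor e^gamma
    # from kve is folded into the integrand exponent for stability
    norm = n0 * gamma / (4.0 * np.pi * c**3 * kve(2, gamma))
    def integrand(r):
        p0 = np.sqrt(c**2 + r**2)
        # exponent gamma - c p0/T0 = -(c/T0) r^2/(p0 + c), stable form
        return 4.0 * np.pi * r**2 * np.exp(-(c / T0) * r**2 / (p0 + c))
    return norm * quad(integrand, 0.0, np.inf)[0]

for c in (1.0, 5.0, 30.0):
    print(f"c = {c:5.1f}:  integral = {juttner_number_density(c):.6f}")
# each printed value should be close to n0 = 1
```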
### The relativistic Euler equations and classical Euler equations Similar to [10, 23], for \(\alpha\), \(\beta\in\{0,1,2,3\}\), we define the first moment as \[I^{\alpha}[\mathbf{M}_{\mathfrak{c}}]:=\int_{\mathbb{R}^{3}}\frac{p^{\alpha}}{p^{0}}\mathbf{M}_{\mathfrak{c}}dp\] and the second moment as \[T^{\alpha\beta}[\mathbf{M}_{\mathfrak{c}}]:=\int_{\mathbb{R}^{3}}\frac{p^{\alpha}p^{\beta}}{p^{0}}\mathbf{M}_{\mathfrak{c}}dp.\] It has been shown in [46, Proposition 3.3] that \[I^{\alpha}[\mathbf{M}_{\mathfrak{c}}]=\frac{n_{0}u^{\alpha}}{\mathfrak{c}}, \tag{1.13}\] \[T^{\alpha\beta}[\mathbf{M}_{\mathfrak{c}}]=\frac{e_{0}+P_{0}}{\mathfrak{c}^{3}}u^{\alpha}u^{\beta}+\frac{P_{0}g^{\alpha\beta}}{\mathfrak{c}}, \tag{1.14}\] where \(e_{0}(t,x)>0\) is the proper energy density and \(P_{0}(t,x)>0\) is the pressure. Projecting the second equation in (1.10) onto \(1\), \(p\), \(p^{0}\), which are five collision invariants for the relativistic Boltzmann collision operator \(Q_{\mathfrak{c}}(\cdot,\cdot)\), and using (1.13)-(1.14), one obtains that \((n_{0},u,T_{0})\) satisfies the relativistic Euler equations \[\begin{cases}\frac{1}{\mathfrak{c}}\partial_{t}\left(n_{0}u^{0}\right)+\nabla_{x}\cdot(n_{0}u)=0,\\ \frac{1}{\mathfrak{c}}\partial_{t}\left[\left(e_{0}+P_{0}\right)u^{0}u\right]+\nabla_{x}\cdot\left[\left(e_{0}+P_{0}\right)u\otimes u\right]+\mathfrak{c}^{2}\nabla_{x}P_{0}=0,\\ \frac{1}{\mathfrak{c}}\partial_{t}\left[\left(e_{0}+P_{0}\right)\left(u^{0}\right)^{2}-\mathfrak{c}^{2}P_{0}\right]+\nabla_{x}\cdot\left[\left(e_{0}+P_{0}\right)u^{0}u\right]=0.\end{cases} \tag{1.15}\] The fluid variables \(n_{0}\), \(T_{0}\), \(S\), \(P_{0}\), \(e_{0}\) in (1.15) satisfy the following relations \[P_{0}=k_{B}n_{0}T_{0}=m_{0}\mathfrak{c}^{2}\frac{n_{0}}{\gamma}, \tag{1.16}\] \[e_{0}=m_{0}\mathfrak{c}^{2}n_{0}\frac{K_{1}(\gamma)}{K_{2}(\gamma)}+3P_{0}=m_{0}\mathfrak{c}^{2}n_{0}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-P_{0}, \tag{1.17}\] \[n_{0}=4\pi e^{4}m_{0}^{3}\mathfrak{c}^{3}\exp\left(\frac{-S}{k_{B}}\right)\frac{K_{2}(\gamma)}{\gamma}\exp\left(\gamma\frac{K_{1}(\gamma)}{K_{2}(\gamma)}\right), \tag{1.18}\] where \(k_{B}>0\) is Boltzmann's constant, \(S(t,x)>0\) is the entropy per particle, which is defined by (1.18), and \(K_{i}(\gamma)\) is the modified Bessel function of the second kind defined later. Denote \[V:=\begin{pmatrix}P_{0}\\ u\\ S\end{pmatrix}.\] We assume that (1.15) is supplemented with initial data \[V|_{t=0}=V_{0}. \tag{1.19}\] The existence of local smooth solutions of the relativistic Euler equations (1.15) with initial condition (1.19) can be established by the standard hyperbolic symmetrization method and it holds that \[\|V-\overline{V}\|_{H^{N_{0}}}\lesssim 1, \tag{1.20}\] where \(\overline{V}:=(\overline{P},0,\overline{S})\) is a constant background with \(\overline{P}>0\) and \(\overline{S}>0\). We point out that the estimate in (1.20) is uniform-in-\(\mathfrak{c}\), which is important for us; see Lemma 3.1 for details. From [8, 46], we have the following two properties: _Property 1_: The map \(\mathbf{\Phi}:(n_{0},T_{0})\mapsto(P_{0},S)\) is an auto-diffeomorphism of the region \((0,\infty)\times(0,\infty)\), where the map is defined by (1.16)-(1.18). _Property 2_: Under the equations of state (1.16)-(1.18), the following hold: 1. There exists a smooth function \(\mathcal{H}\) such that \(P_{0}\) can be expressed in terms of \(e_{0}\) and \(S\) as \(P_{0}=\mathcal{H}(e_{0},S)\). 2. The relativistic Euler equations are hyperbolic.
3. The relativistic Euler equations are causal (the speed of sound \(a:=\mathfrak{c}\sqrt{\frac{\partial P_{0}}{\partial e_{0}}}\Big{|}_{S}\) is real and less than the speed of light). Actually, it holds that \(0<a<\frac{\mathfrak{c}}{\sqrt{3}}\). From [46, Proposition 3.4], the following Gibbs relation (see [11]) holds \[T_{0}dS=d\Big{(}\frac{e_{0}}{n_{0}}\Big{)}+P_{0}d\Big{(}\frac{1}{n_{0}}\Big{)},\] which is equivalent to \[\frac{\partial e_{0}}{\partial n_{0}}\Big{|}_{S}=\frac{e_{0}+P_{0}}{n_{0}},\quad\frac{\partial e_{0}}{\partial S}\Big{|}_{n_{0}}=n_{0}T_{0}.\] For simplicity of presentation, in the rest of this paper, we always assume that \[k_{B}=1,\quad m_{0}=1.\] Formally, when \(\mathfrak{c}\) tends to infinity, the relativistic Euler equations (1.15) reduce to \[\begin{cases}\partial_{t}\rho+\nabla_{x}\cdot(\rho\mathfrak{u})=0,\\ \partial_{t}(\rho\mathfrak{u})+\nabla_{x}\cdot(\rho\mathfrak{u}\otimes\mathfrak{u})+\nabla_{x}\mathcal{P}=0,\\ \partial_{t}\Big{(}\rho\Big{(}\frac{1}{2}|\mathfrak{u}|^{2}+\mathcal{E}\Big{)}\Big{)}+\nabla_{x}\cdot\Big{(}\Big{(}\rho\Big{(}\frac{1}{2}|\mathfrak{u}|^{2}+\mathcal{E}\Big{)}+\mathcal{P}\Big{)}\mathfrak{u}\Big{)}=0,\end{cases} \tag{1.21}\] which are the classical compressible Euler equations. Here \(\rho(t,x)>0\) denotes the density of the fluid, \(\mathfrak{u}(t,x)\) is the velocity, \(\mathcal{P}(t,x)\) is the pressure, and \(\mathcal{E}(t,x)>0\) is the internal energy per unit mass. The fluid variables \(\rho\), \(\theta\), \(\eta\), \(\mathcal{P}\) and \(\mathcal{E}\) satisfy the following relations \[\mathcal{P}=\rho\theta,\quad\eta=-\ln(A_{0}\rho\theta^{-\frac{3}{2}}),\quad\mathcal{E}=\frac{3}{2}\theta, \tag{1.22}\] where \(A_{0}=(2\pi)^{-\frac{3}{2}}e^{-\frac{5}{2}}\), \(\theta(t,x)\) is the temperature of the fluid, and \(\eta(t,x)\) is the physical entropy. It is clear that \[\theta d\eta=d\mathcal{E}-\frac{\mathcal{P}}{\rho^{2}}d\rho.\] Denote \[W:=\begin{pmatrix}\mathcal{P}\\ \mathfrak{u}\\ \eta\end{pmatrix},\] then the classical Euler equations (1.21) can be written as a symmetric hyperbolic system \[\mathbf{D}_{0}\partial_{t}W+\sum_{j=1}^{3}\mathbf{D}_{j}\partial_{j}W=0, \tag{1.23}\] where \[\mathbf{D}_{0}=\begin{pmatrix}1&0&0\\ 0&\sigma^{2}\rho^{2}\mathbf{I}&0\\ 0&0&1\end{pmatrix},\quad\mathbf{D}_{j}=\begin{pmatrix}\mathfrak{u}_{j}&\sigma^{2}\rho\mathbf{e}_{j}^{t}&0\\ \sigma^{2}\rho\mathbf{e}_{j}&\sigma^{2}\rho^{2}\mathfrak{u}_{j}\mathbf{I}&0\\ 0&0&\mathfrak{u}_{j}\end{pmatrix}.\] The quantity \(\sigma=\sqrt{\frac{\partial\mathcal{P}}{\partial\rho}}\Big{|}_{\eta}>0\) is the sound speed of the classical Euler equations. For simplicity, we supplement (1.23) with the same initial data and constant background as in the relativistic Euler case, that is, \[W|_{t=0}=V_{0},\quad\overline{W}=\overline{V},\] where \(\overline{W}:=(\overline{\mathcal{P}},0,\overline{\eta})\). It is a classical result from the theory of symmetric hyperbolic systems that (1.23) admits a local smooth solution with smooth initial data; see Lemma 3.2 for details.
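As a quick symbolic check of the equation of state (1.22), the following Python sketch verifies the Gibbs relation \(\theta d\eta=d\mathcal{E}-\frac{\mathcal{P}}{\rho^{2}}d\rho\) by comparing the coefficients of \(d\rho\) and \(d\theta\) on both sides, treating \(\rho\) and \(\theta\) as independent variables.

```python
# Symbolic check of theta*d(eta) = d(E) - (P/rho^2)*d(rho)
# for the classical equation of state (1.22), using sympy.
import sympy as sp

rho, theta, A0 = sp.symbols("rho theta A0", positive=True)

P = rho * theta                       # P = rho * theta
E = sp.Rational(3, 2) * theta         # internal energy E = (3/2) theta
eta = -sp.log(A0 * rho * theta**sp.Rational(-3, 2))

# Compare coefficients of d(rho) and d(theta) on both sides.
lhs_rho, lhs_theta = theta * sp.diff(eta, rho), theta * sp.diff(eta, theta)
rhs_rho = sp.diff(E, rho) - P / rho**2
rhs_theta = sp.diff(E, theta)

assert sp.simplify(lhs_rho - rhs_rho) == 0
assert sp.simplify(lhs_theta - rhs_theta) == 0
print("Gibbs relation holds for the equation of state (1.22)")
```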
### A brief history of the hydrodynamic and Newtonian limits for (relativistic) Boltzmann equation For the hydrodynamic limit of the non-relativistic Boltzmann equation, there have been extensive studies on this subject and we only mention a few works. In the founding work of Maxwell [41] and Boltzmann [5], it is shown that the Boltzmann equation is closely related to the fluid dynamical systems for both compressible and incompressible flows. Hilbert and Enskog-Chapman independently developed formal small-parameter expansion methods, called the Hilbert expansion and the Enskog-Chapman expansion respectively, and established the connection between the Boltzmann equation and the compressible (incompressible) Euler equations, the compressible (incompressible) Navier-Stokes (Fourier) systems, the acoustic system, etc. It is an important and challenging problem to rigorously justify these formal approximations. In fact, the purpose of Hilbert's sixth problem [31] is to establish the laws of motion of continua from more microscopic physical models, such as Boltzmann theory, from a rigorous mathematical standpoint. For the hydrodynamic limit from the Boltzmann equation to the compressible Euler equations, Caflisch [6] rigorously justified the validity of the limit by employing the truncated Hilbert expansion method; see also [36, 42, 49], and [25, 27] for an application of the \(L^{2}\)-\(L^{\infty}\) approach. For the hydrodynamic limit to the incompressible Navier-Stokes system, see [1, 2, 4, 9, 12, 13, 14, 20, 26, 30, 32, 34, 39, 45, 52] and references cited therein. For the compressible Euler limit and acoustic limit of the Boltzmann equation with specular reflection boundary conditions, we refer the reader to the recent work of Guo-Huang-Wang [28]. For other works connected to the hydrodynamic limit, we refer to [3, 37, 19, 29] and the review articles [21, 40, 50]. Although there have been satisfactory results on the hydrodynamic limit of the non-relativistic Boltzmann equation, much less is known about the hydrodynamic and/or Newtonian limit of the relativistic Boltzmann equation despite its importance. For the Newtonian limit of relativistic particles, Calogero [7] established the existence of local-in-time relativistic Boltzmann solutions in a periodic box, and then proved that such solutions converge, in a suitable norm, to the Newtonian Boltzmann solutions as \(\mathfrak{c}\to\infty\). Later, for the case near vacuum, Strain [48] proved the existence of a unique global-in-time mild solution and justified the Newtonian limit for arbitrary time intervals \([0,T]\). For the hydrodynamic limit of the relativistic Boltzmann equation, Speck-Strain [46] demonstrated the hydrodynamic limit from the relativistic Boltzmann equation to the relativistic Euler equations for local-in-time smooth solutions. It is shown in [23] that solutions of the relativistic Vlasov-Maxwell-Boltzmann system converge to solutions of the relativistic Euler-Maxwell system globally in time, as the Knudsen number \(\varepsilon\to 0\). In the present paper, we are concerned with both the hydrodynamic limit \(\varepsilon\to 0\) and the Newtonian limit \(\mathfrak{c}\to\infty\) from the relativistic Boltzmann equation to the classical Euler equations. This is achieved by employing the Hilbert expansion method and uniform in \(\mathfrak{c}\) and \(\varepsilon\) estimates on the Hilbert expansion.
### Main results We consider the perturbation around the local Maxwellian \(\mathbf{M}_{\mathfrak{c}}\): \[F(t,x,p)=\mathbf{M}_{\mathfrak{c}}(t,x,p)+\sqrt{\mathbf{M}_{\mathfrak{c}}(t,x,p)}f(t,x,p). \tag{1.24}\] We define the linearized collision operator \(\mathbf{L}_{\mathfrak{c}}f\) and the nonlinear collision operator \(\Gamma_{\mathfrak{c}}\left(f_{1},f_{2}\right)\): \[\mathbf{L}_{\mathfrak{c}}f:=-\frac{1}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}[Q_{\mathfrak{c}}(\sqrt{\mathbf{M}_{\mathfrak{c}}}f,\mathbf{M}_{\mathfrak{c}})+Q_{\mathfrak{c}}(\mathbf{M}_{\mathfrak{c}},\sqrt{\mathbf{M}_{\mathfrak{c}}}f)]=\nu_{\mathfrak{c}}f-\mathbf{K}_{\mathfrak{c}}f,\] \[\Gamma_{\mathfrak{c}}\left(f_{1},f_{2}\right):=\frac{1}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}Q_{\mathfrak{c}}\left(\sqrt{\mathbf{M}_{\mathfrak{c}}}f_{1},\sqrt{\mathbf{M}_{\mathfrak{c}}}f_{2}\right),\] where the collision frequency \(\nu_{\mathfrak{c}}=\nu_{\mathfrak{c}}(t,x,p)\) is defined as \[\nu_{\mathfrak{c}}(p):=\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}v_{\phi}(p,q)\mathbf{M}_{\mathfrak{c}}(q)d\omega dq=\frac{\mathfrak{c}}{2}\frac{1}{p^{0}}\int_{\mathbb{R}^{3}}\frac{dq}{q^{0}}\int_{\mathbb{R}^{3}}\frac{dq^{\prime}}{q^{\prime 0}}\int_{\mathbb{R}^{3}}\frac{dp^{\prime}}{p^{\prime 0}}W(p,q\mid p^{\prime},q^{\prime})\mathbf{M}_{\mathfrak{c}}(q) \tag{1.25}\] and \(\mathbf{K}_{\mathfrak{c}}f\) takes the following form: \[\mathbf{K}_{\mathfrak{c}}f:=\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}v_{\phi}(p,q)\sqrt{\mathbf{M}_{\mathfrak{c}}(q)}\left[\sqrt{\mathbf{M}_{\mathfrak{c}}(q^{\prime})}f\left(p^{\prime}\right)+\sqrt{\mathbf{M}_{\mathfrak{c}}(p^{\prime})}f\left(q^{\prime}\right)\right]d\omega dq-\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}v_{\phi}(p,q)\sqrt{\mathbf{M}_{\mathfrak{c}}(p)\mathbf{M}_{\mathfrak{c}}(q)}f(q)d\omega dq=\mathbf{K}_{\mathfrak{c}2}f-\mathbf{K}_{\mathfrak{c}1}f.\] We introduce the global Maxwellian \(J_{\mathfrak{c}}(p)\) as \[J_{\mathfrak{c}}(p):=\frac{n_{M}\gamma_{M}}{4\pi\mathfrak{c}^{3}K_{2}(\gamma_{M})}\exp\Big{(}\frac{-\mathfrak{c}p^{0}}{T_{M}}\Big{)}, \tag{1.26}\] where \(n_{M}\), \(T_{M}\) are positive constants and \(\gamma_{M}=\frac{\mathfrak{c}^{2}}{T_{M}}\). For each \(\ell\geq 0\), we also define the weight function \(w_{\ell}\) as \[w_{\ell}:=w_{\ell}(p)=(1+|p|^{2})^{\frac{\ell}{2}}. \tag{1.27}\] We then define a corresponding weighted \(L^{\infty}\) norm by \[\|h\|_{\infty,\ell}=\|w_{\ell}h\|_{L^{\infty}}.\] Our first result is the Hilbert expansion of the relativistic Boltzmann equation with uniform-in-\(\mathfrak{c}\) estimates. Notice that \(k\) and \(N_{0}\) are defined in (1.8) and Lemma 3.1. **Theorem 1.1**.: _Assume \(k=3\), \(N_{0}\geq 10\). Let \((n_{0}(t,x),u(t,x),T_{0}(t,x))\) be a smooth solution to the relativistic Euler equations (1.15) with initial data (1.19) and constant background \(\overline{V}\) for \((t,x)\in[0,T]\times\mathbb{R}^{3}\). Suppose that \(\mathbf{M}_{\mathfrak{c}}(t,x,p)\) is the local relativistic Maxwellian in (1.12) and there exist constants \(C>0\), \(n_{M}>0\), \(T_{M}>0\), and \(\alpha\in(\frac{1}{2},1)\) such that_ \[\frac{J_{\mathfrak{c}}(p)}{C}\leq\mathbf{M}_{\mathfrak{c}}(t,x,p)\leq CJ_{\mathfrak{c}}^{\alpha}(p). \tag{1.28}\]
_Define initially_ \[F^{\varepsilon,\mathfrak{c}}(0,x,p)=\mathbf{M}_{\mathfrak{c}}(0,x,p)+\sum_{n=1}^{2k-1}\varepsilon^{n}F_{n}^{\mathfrak{c}}(0,x,p)+\varepsilon^{k}F_{R}^{\varepsilon,\mathfrak{c}}(0,x,p)\geq 0\] _with_ \[\varepsilon^{\frac{3}{2}}\Big{\|}\frac{F_{R}^{\varepsilon,\mathfrak{c}}(0)}{\sqrt{J_{\mathfrak{c}}}}\Big{\|}_{\infty,\ell}+\Big{\|}\frac{F_{R}^{\varepsilon,\mathfrak{c}}(0)}{\sqrt{\mathbf{M}_{\mathfrak{c}}(0)}}\Big{\|}_{2}\leq C<\infty.\] _Then there exist two independent positive constants \(\varepsilon_{0}\in(0,1]\) and \(\mathfrak{c}_{0}\gg 1\) such that, for each \(0<\varepsilon\leq\varepsilon_{0}\) and \(\mathfrak{c}\geq\mathfrak{c}_{0}\), there exists a unique classical solution \(F^{\varepsilon,\mathfrak{c}}\) of the relativistic Boltzmann equation (1.3) for \((t,x,p)\in[0,T]\times\mathbb{R}^{3}\times\mathbb{R}^{3}\) in the following form of expansion_ \[F^{\varepsilon,\mathfrak{c}}(t,x,p)=\mathbf{M}_{\mathfrak{c}}(t,x,p)+\sum_{n=1}^{2k-1}\varepsilon^{n}F_{n}^{\mathfrak{c}}(t,x,p)+\varepsilon^{k}F_{R}^{\varepsilon,\mathfrak{c}}(t,x,p)\geq 0,\] _where the functions \(F_{n}^{\mathfrak{c}}\) \((n=1,\cdots,2k-1)\) are constructed in Proposition 5.1._ _Furthermore, there exists a constant \(C_{T}>0\) such that for all \(\varepsilon\in(0,\varepsilon_{0}]\) and for any \(\ell\geq 9\), the following estimate holds:_ \[\varepsilon^{\frac{3}{2}}\sup_{0\leq t\leq T}\left\|\frac{F_{R}^{\varepsilon,\mathfrak{c}}(t)}{\sqrt{J_{\mathfrak{c}}}}\right\|_{\infty,\ell}+\sup_{0\leq t\leq T}\left\|\frac{F_{R}^{\varepsilon,\mathfrak{c}}(t)}{\sqrt{\mathbf{M}_{\mathfrak{c}}(t)}}\right\|_{2}\leq C_{T}\left\{\varepsilon^{\frac{3}{2}}\Big{\|}\frac{F_{R}^{\varepsilon,\mathfrak{c}}(0)}{\sqrt{J_{\mathfrak{c}}}}\Big{\|}_{\infty,\ell}+\Big{\|}\frac{F_{R}^{\varepsilon,\mathfrak{c}}(0)}{\sqrt{\mathbf{M}_{\mathfrak{c}}(0)}}\Big{\|}_{2}+1\right\}.\] _Moreover, we have that_ \[\sup_{0\leq t\leq T}\left\|\frac{F^{\varepsilon,\mathfrak{c}}(t)-\mathbf{M}_{\mathfrak{c}}(t)}{\sqrt{J_{\mathfrak{c}}}}\right\|_{\infty}+\sup_{0\leq t\leq T}\left\|\frac{F^{\varepsilon,\mathfrak{c}}(t)-\mathbf{M}_{\mathfrak{c}}(t)}{\sqrt{\mathbf{M}_{\mathfrak{c}}(t)}}\right\|_{2}\leq C_{T}\varepsilon, \tag{1.29}\] _where the constants \(C\) and \(C_{T}>0\) are independent of \(\varepsilon\) and \(\mathfrak{c}\)._ **Remark 1.2**.: It follows from (1.29) that we have established the uniform-in-\(\mathfrak{c}\) hydrodynamic limit from the relativistic Boltzmann equation to the relativistic Euler equations. **Remark 1.3**.: When \(\frac{|u|}{\mathfrak{c}}\) is suitably small, it has been shown in [46, Lemma 1.1] that there exist positive constants \(C>0\), \(n_{M}>0\), \(T_{M}>0\), and \(\alpha\in(\frac{1}{2},1)\), which are independent of \(\mathfrak{c}\), such that (1.28) holds. **Remark 1.4**.: The uniform-in-\(\mathfrak{c}\) estimates for the relativistic Boltzmann collision operators developed here can also be applied to the Newtonian limit from the relativistic Boltzmann equation to the Newtonian Boltzmann equation. This will be considered in a forthcoming paper. With the uniform-in-\(\mathfrak{c}\) estimates in Theorem 1.1, one can further obtain both the hydrodynamic limit \(\varepsilon\to 0\) and the Newtonian limit \(\mathfrak{c}\to\infty\) at the same time. **Theorem 1.5**.: _Assume that all conditions in Theorem 1.1 are satisfied._
_Suppose that \((\rho(t,x),\mathfrak{u}(t,x),\theta(t,x))\) is a smooth solution to the classical Euler equations (1.21) with the same initial data and constant background as the relativistic Euler case. Let \(\mu\) be the local Maxwellian of the classical Boltzmann equation, i.e.,_ \[\mu(t,x,p)=\frac{\rho}{(2\pi\theta)^{\frac{3}{2}}}e^{-\frac{|p-\mathfrak{u}|^{2}}{2\theta}}. \tag{1.30}\] _Then there exist independent positive constants \(\varepsilon_{0}\in(0,1]\) and \(\mathfrak{c}_{0}\gg 1\) such that for all \(0<\varepsilon\leq\varepsilon_{0}\) and \(\mathfrak{c}\geq\mathfrak{c}_{0}\), the following estimate holds:_ \[\sup_{0\leq t\leq T}\left\|\big{(}F^{\varepsilon,\mathfrak{c}}-\mu\big{)}(t)e^{\delta_{0}|p|}\right\|_{\infty}\leq C_{T}\varepsilon+C_{T}\mathfrak{c}^{-\frac{3}{2}}, \tag{1.31}\] _where all the positive constants \(\varepsilon_{0}\), \(\mathfrak{c}_{0}\), \(C_{T}\) and \(\delta_{0}\) are independent of \(\varepsilon\) and \(\mathfrak{c}\)._ **Remark 1.6**.: (1.31) indicates that we have established both the hydrodynamic limit and the Newtonian limit from the relativistic Boltzmann equation to the classical Euler equations. We point out that the two limits \(\varepsilon\to 0\) and \(\mathfrak{c}\to\infty\) can be taken independently at the same time without assuming any dependence between \(\varepsilon\) and \(\mathfrak{c}\). **Remark 1.7**.: It is worth noting that we make no effort to obtain the best convergence rates, which is not our main focus here. Actually, for the Newtonian limit, one can obtain the convergence rate \(\mathfrak{c}^{-(2-\epsilon)}\) for any given small \(\epsilon>0\). **Remark 1.8**.: Due to the effect of special relativity, we can only obtain the particle velocity weight \(e^{\delta_{0}|p|}\) in (1.31). ### Main difficulties and strategy of the proof We make some comments on the main ideas of the proof and explain the main difficulties and techniques involved in the process. It is noted that, for the relativistic Boltzmann equation (1.3), one cannot transform the solution \(F(t,x,p)\) to \(F(t,x,\mathfrak{p})\) with a change of variables \(p=\mathfrak{c}\mathfrak{p}\). Now we take the global Maxwellian \(\mathbf{M}_{\mathfrak{c}}(1,0,1;p)\) as an example. In fact, \(\mathbf{M}_{\mathfrak{c}}(1,0,1;p)\cong e^{\mathfrak{c}^{2}-\mathfrak{c}p^{0}}\). It is clear that \[e^{\mathfrak{c}^{2}-\mathfrak{c}p^{0}}=e^{-\frac{\mathfrak{c}^{2}|\mathfrak{p}|^{2}}{1+\sqrt{1+|\mathfrak{p}|^{2}}}},\] which is actually still a function of \(\mathfrak{p}\) and \(\mathfrak{c}\). On the other hand, for the normalized particle velocity \(\hat{p}\), it holds that \[\hat{p}=\mathfrak{c}\frac{p}{p^{0}}=\frac{\mathfrak{c}p}{\sqrt{|p|^{2}+\mathfrak{c}^{2}}}=\frac{\mathfrak{c}\mathfrak{p}}{\sqrt{1+|\mathfrak{p}|^{2}}},\] which is also a function of \(\mathfrak{p}\) and \(\mathfrak{c}\). Hence the collision term \(Q_{\mathfrak{c}}(F,F)\) cannot be transformed into a new form depending only on \(\mathfrak{p}\). Thus the roles of the light speed \(\mathfrak{c}\) and the Knudsen number \(\varepsilon\) are totally different. Therefore it is important to establish the uniform-in-\(\mathfrak{c}\) estimates for the relativistic Boltzmann collision in the present paper. To justify both the hydrodynamic limit and Newtonian limit from the relativistic Boltzmann equation to the classical Euler equations, we utilize the Hilbert expansion for the relativistic Boltzmann equation with respect to the small Knudsen number. The key point is to establish the uniform-in-\(\mathfrak{c}\) estimates for the Hilbert expansion.
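The convergence of the Jüttner Maxwellian to its classical counterpart (1.30), which underlies Theorem 1.5, can be illustrated numerically. The Python sketch below evaluates \(\mathbf{M}_{\mathfrak{c}}(1,0,1;p)\) and the classical Maxwellian \(\mu\) with \(\rho=\theta=1\), \(\mathfrak{u}=0\) for a few toy values of \(\mathfrak{c}\) (units \(m_{0}=k_{B}=1\) assumed); the relative difference shrinks as \(\mathfrak{c}\) grows.

```python
# Numerical illustration of the Newtonian limit M_c -> mu as c -> infty,
# for n0 = T0 = 1 and u = 0 (units m_0 = k_B = 1); the scaled Bessel
# function kve avoids underflow at large gamma = c^2.
import numpy as np
from scipy.special import kve

def juttner(p_abs, c):
    gamma = c**2
    p0 = np.sqrt(c**2 + p_abs**2)
    # M_c = gamma/(4 pi c^3 K_2(gamma)) * exp(c^2 - c p^0); the exponent
    # c^2 - c p0 = -c |p|^2 / (p0 + c) is evaluated in this stable form
    expo = -(c * p_abs**2) / (p0 + c)
    return gamma / (4.0 * np.pi * c**3 * kve(2, gamma)) * np.exp(expo)

def maxwellian(p_abs):
    return (2.0 * np.pi) ** (-1.5) * np.exp(-0.5 * p_abs**2)

p_abs = 1.0
for c in (3.0, 10.0, 100.0):
    M, mu = juttner(p_abs, c), maxwellian(p_abs)
    print(f"c = {c:6.1f}: M_c = {M:.6e}, mu = {mu:.6e}, "
          f"rel. diff = {abs(M - mu) / mu:.2e}")
```

Note that the exponent is a genuine function of both \(\mathfrak{p}=p/\mathfrak{c}\) and \(\mathfrak{c}\), exactly as observed above, which is why \(\mathfrak{c}\) cannot simply be scaled away.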
Firstly, we prove the existence of smooth solutions to the relativistic Euler equations with uniform-in-\(\mathfrak{c}\) estimates, see Lemma 3.1. Then, by applying the energy method for symmetric hyperbolic systems, we establish the Newtonian limit from the relativistic Euler equations (1.15) to the classical Euler equations (1.21) with convergence rate \(\mathfrak{c}^{-2}\); see section 3.3 for details of the proof. Secondly, we aim to establish the uniform-in-\(\mathfrak{c}\) bounds for the Hilbert expansion \(F_{n}^{\mathfrak{c}}\) (\(n\geq 1\)) as well as the remainder \(F_{R}^{\varepsilon,\mathfrak{c}}\). As explained above, since the collision operators necessarily depend on the speed of light \(\mathfrak{c}\), the main difficulty lies in the uniform-in-\(\mathfrak{c}\) estimates on the collision operators \(Q_{\mathfrak{c}}(\cdot,\cdot)\), \(\mathbf{L}_{\mathfrak{c}}\) and \(\mathbf{L}_{\mathfrak{c}}^{-1}\). For the relativistic Boltzmann equation, due to the complexity of the local relativistic Maxwellian \(\mathbf{M}_{\mathfrak{c}}\), the expression of the kernel \(k_{\mathfrak{c}}(p,q)\) (see (2.9)) of \(\mathbf{K}_{\mathfrak{c}}\) is very complicated and it is not easy to obtain the uniform-in-\(\mathfrak{c}\) estimate for \(k_{\mathfrak{c}}(p,q)\). By applying the Lorentz transformation and dividing the integration region into three parts: \(\{|\bar{p}-\bar{q}|\geq\mathfrak{c}^{\frac{1}{3}}\}\), \(\{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{3}},\,\&\,|p|\leq\mathfrak{c}\}\) and \(\{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{3}},\,\&\,|p|\geq\mathfrak{c}\}\), one can get \[\int_{\mathbb{R}^{3}}|k_{\mathfrak{c}}(p,q)|dq\lesssim\begin{cases}(1+|p|)^{-1},&|p|\leq\mathfrak{c},\\ \mathfrak{c}^{-1},&|p|\geq\mathfrak{c},\end{cases}\] see Lemmas 4.3-4.5 for details. Similarly, we can also prove \[\nu_{\mathfrak{c}}(p)\cong\begin{cases}1+|p|,&|p|\leq\mathfrak{c},\\ \mathfrak{c},&|p|\geq\mathfrak{c},\end{cases}\] see Lemma 4.6 for details. Let \(k(p,q)\) be the kernel of the classical Boltzmann equation with hard-sphere interaction (see (4.93)). Observe that \(k_{\mathfrak{c}}(p,q)\) and \(k(p,q)\) depend on the relativistic Euler solutions and classical Euler solutions, respectively. By tedious calculations and the Newtonian limit of the relativistic Euler equations (see Proposition 3.8), we can establish the following \[\int_{\mathbb{R}^{3}}|k_{\mathfrak{c}}(p,q)-k(p,q)|dq\lesssim\mathfrak{c}^{-\frac{3}{8}}\to 0\quad\text{as}\quad\mathfrak{c}\to\infty, \tag{1.32}\] see Lemmas 4.8-4.9 for details. Since the orthonormal basis \(\{\chi_{\alpha}^{\mathfrak{c}}\}_{\alpha=0}^{4}\) of the null space \(\mathcal{N}_{\mathfrak{c}}\) also depends on \(\mathfrak{c}\), we also need to prove that \[\lim_{\mathfrak{c}\to\infty}\chi_{\alpha}^{\mathfrak{c}}=\chi_{\alpha}, \tag{1.33}\] where \(\{\chi_{\alpha}\}_{\alpha=0}^{4}\) is the corresponding orthonormal basis of the null space \(\mathcal{N}\) for the classical Boltzmann equation, see Lemma 4.12 for details. With the help of (1.32)-(1.33) and a contradiction argument, one can finally obtain the following uniform-in-\(\mathfrak{c}\) coercivity estimate for \(\mathbf{L}_{\mathfrak{c}}\) \[\langle\mathbf{L}_{\mathfrak{c}}g,g\rangle\geq\zeta_{0}\|(\mathbf{I}-\mathbf{P}_{\mathfrak{c}})g\|_{\nu_{\mathfrak{c}}}^{2},\quad g\in L_{\nu}^{2}(\mathbb{R}^{3}).\] Here we emphasize that \(\zeta_{0}>0\) is a positive constant independent of \(\mathfrak{c}\).
With the uniform-in-\(\mathfrak{c}\) coercivity estimate for \(\mathbf{L}_{\mathfrak{c}}\), one can derive the uniform-in-\(\mathfrak{c}\) exponential decay for \(\mathbf{L}_{\mathfrak{c}}^{-1}\) by similar arguments as in [33]; see section 4.2 for details. Utilizing the above uniform-in-\(\mathfrak{c}\) estimates, we can establish the uniform bounds on the Hilbert expansions \(F_{n}^{\mathfrak{c}}(t,x,p)\) (\(n\geq 1\)); see Proposition 5.1 for details. Based on the estimates on \(F_{n}^{\mathfrak{c}}(t,x,p)\) (\(n\geq 1\)), we use the \(L^{2}-L^{\infty}\) framework in [24, 25, 46] to control the remainder \(F_{R}^{\varepsilon,\mathfrak{c}}\) uniformly in \(\mathfrak{c}\) and \(\varepsilon\); see Lemmas 6.3-6.4 for details. Hence, we establish the Hilbert expansion of the relativistic Boltzmann equation with uniform-in-\(\mathfrak{c}\) estimates; see Theorem 1.1. Finally, by combining the Hilbert expansion in Theorem 1.1 and the Newtonian limit of the relativistic Euler equations in Proposition 3.8, we can justify both the hydrodynamic limit and the Newtonian limit from the relativistic Boltzmann equation to the classical Euler equations; see Theorem 1.5 for details. ### Organization of the paper In section 2, we present some results about Bessel functions and give explicit expressions for the kernel of the linearized relativistic collision operators. Section 3 is dedicated to the existence of local-in-time solutions of the relativistic Euler equations and the Newtonian limit of the relativistic Euler equations. In section 4, we develop a series of uniform-in-\(\mathfrak{c}\) estimates to obtain the key coercivity estimate on the linearized operator \(\mathbf{L}_{\mathfrak{c}}\) as well as \(\mathbf{L}_{\mathfrak{c}}^{-1}\), which allow us to establish the uniform-in-\(\mathfrak{c}\) bounds on the Hilbert expansion \(F_{n}^{\mathfrak{c}}\) in section 5. In section 6, we use the \(L^{2}-L^{\infty}\) method to derive the uniform in \(\mathfrak{c}\) and \(\varepsilon\) estimates on the remainder \(F_{R}^{\varepsilon,\mathfrak{c}}\) and prove the main theorems, Theorems 1.1 and 1.5. The appendix is devoted to presenting the orthonormal basis of the null space \(\mathcal{N}_{\mathfrak{c}}\) of \(\mathbf{L}_{\mathfrak{c}}\). ### Notations Throughout this paper, \(C\) denotes a generic positive constant which is independent of \(\mathfrak{c}\), \(\varepsilon\) and \(C_{a},C_{b},\ldots\) denote generic positive constants depending on \(a,b,\ldots\), respectively, which may vary from line to line. \(A\lesssim B\) means that there exists a constant \(C>0\), which is independent of \(\mathfrak{c},\ \varepsilon\), such that \(A\leq CB\). \(A\cong B\) means that both \(A\lesssim B\) and \(B\lesssim A\) hold. \(\|\cdot\|_{2}\) denotes either the standard \(L^{2}\left(\mathbb{R}_{x}^{3}\right)\)-norm or \(L^{2}\left(\mathbb{R}_{p}^{3}\right)\)-norm or \(L^{2}\left(\mathbb{R}_{x}^{3}\times\mathbb{R}_{p}^{3}\right)\)-norm. Similarly, \(\|\cdot\|_{\infty}\) denotes either the \(L^{\infty}\left(\mathbb{R}_{x}^{3}\right)\)-norm or \(L^{\infty}\left(\mathbb{R}_{p}^{3}\right)\)-norm or \(L^{\infty}\left(\mathbb{R}_{x}^{3}\times\mathbb{R}_{p}^{3}\right)\)-norm. We also introduce the weighted \(L^{\infty}\) norm \(\|\cdot\|_{\infty,\ell}=\|w_{\ell}\cdot\|_{\infty}\).
We denote \(\langle\cdot,\cdot\rangle\) as either the \(L^{2}\left(\mathbb{R}_{x}^{3}\right)\) inner product or \(L^{2}\left(\mathbb{R}_{p}^{3}\right)\) inner product or \(L^{2}\left(\mathbb{R}_{x}^{3}\times\mathbb{R}_{p}^{3}\right)\) inner product. Moreover, we denote \(\|\cdot\|_{\nu_{\mathfrak{c}}}:=\|\sqrt{\nu_{\mathfrak{c}}}\cdot\|_{2}\). ## 2. Preliminaries We define the modified Bessel function of the second kind (see [10, (3.19)]) \[K_{j}(z)=\left(\frac{z}{2}\right)^{j}\frac{\Gamma(\frac{1}{2})}{\Gamma(j+ \frac{1}{2})}\int_{1}^{\infty}e^{-zt}\left(t^{2}-1\right)^{j-\frac{1}{2}}dt, \quad j\geq 0,\ z>0. \tag{2.1}\] We will frequently use the following properties for \(K_{j}(z)\). **Lemma 2.1**.: ([43, 51]) _It holds that_ \[K_{j+1}(z)=\frac{2j}{z}K_{j}(z)+K_{j-1}(z),\quad j\geq 1,\] _and_ \[\frac{d}{dz}\left(\frac{K_{j}(z)}{z^{j}}\right)=-\left(\frac{K_{j+1}(z)}{z^{j }}\right),\quad j\geq 0.\] _The asymptotic expansion for \(K_{j}(z)\) takes the form_ \[K_{j}(z)=\sqrt{\frac{\pi}{2z}}\frac{1}{e^{z}}\left[\sum_{m=0}^{n-1}A_{j,m}z^{ -m}+\gamma_{j,n}(z)z^{-n}\right],\quad j\geq 0,\ n\geq 1,\] _where the following additional identities and inequalities also hold:_ \[A_{j,0} =1,\] \[A_{j,m} =\frac{1}{m!8^{m}}(4j^{2}-1)(4j^{2}-3^{2})\cdots(4j^{2}-(2m-1)^{ 2}),\quad j\geq 0,\ m\geq 1,\] \[|\gamma_{j,n}(z)| \leq 2|A_{j,n}|\exp\left([j^{2}-\frac{1}{4}]z^{-1}\right),\quad j \geq 0,\ n\geq 1,\] \[K_{j}(z) <K_{j+1}(z),\quad j\geq 0.\] _Furthermore, for \(j\leq n+\frac{1}{2}\), one has a more exact estimate_ \[|\gamma_{j,n}(z)|\leq|A_{j,n}|.\] We next deduce the kernel of the linearized relativistic collision operator. Recall that \[\mathbf{K}_{\mathfrak{c}}f=\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}v_{\phi}(p,q)\sqrt{\mathbf{M}_{\mathfrak{c}}(q)}\left[\sqrt{\mathbf{M}_{\mathfrak{c}}(q ^{\prime})}f\left(p^{\prime}\right)+\sqrt{\mathbf{M}_{\mathfrak{c}}(p^{\prime })}f\left(q^{\prime}\right)\right]d\omega dq\] \[-\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}v_{\phi}(p,q)\sqrt{\mathbf{M}_{ \mathfrak{c}}(p)\mathbf{M}_{\mathfrak{c}}(q)}f(q)d\omega dq\] \[=\frac{\mathfrak{c}}{2}\frac{1}{p^{0}}\int_{\mathbb{R}^{3}}\frac{ dq}{q^{0}}\int_{\mathbb{R}^{3}}\frac{dq^{\prime}}{q^{\prime 0}}\int_{\mathbb{R}^{3}}\frac{dp^{\prime}}{p^{\prime 0}}W(p,q \mid p^{\prime},q^{\prime})\sqrt{\mathbf{M}_{\mathfrak{c}}(q)\mathbf{M}_{ \mathfrak{c}}(q^{\prime})}f(p^{\prime})\] \[\qquad+\frac{\mathfrak{c}}{2}\frac{1}{p^{0}}\int_{\mathbb{R}^{3}} \frac{dq}{q^{0}}\int_{\mathbb{R}^{3}}\frac{dq^{\prime}}{q^{\prime 0}}\int_{ \mathbb{R}^{3}}\frac{dp^{\prime}}{p^{\prime 0}}W(p,q\mid p^{\prime},q^{ \prime})\sqrt{\mathbf{M}_{\mathfrak{c}}(q)\mathbf{M}_{\mathfrak{c}}(p^{ \prime})}f(q^{\prime})\] \[\qquad-\frac{\mathfrak{c}}{2}\frac{1}{p^{0}}\int_{\mathbb{R}^{3}} \frac{dq}{q^{0}}\int_{\mathbb{R}^{3}}\frac{dq^{\prime}}{q^{\prime 0}}\int_{ \mathbb{R}^{3}}\frac{dp^{\prime}}{p^{\prime 0}}W(p,q\mid p^{\prime},q^{ \prime})\sqrt{\mathbf{M}_{\mathfrak{c}}(p)\mathbf{M}_{\mathfrak{c}}(q)}f(q)\] \[:=\mathbf{K}_{\mathfrak{c}2}f-\mathbf{K}_{\mathfrak{c}1}f.\] Then it is clear that the kernel of \(\mathbf{K}_{\mathfrak{c}1}\) takes the form \[k_{\mathfrak{c}1}(p,q)=\int_{\mathbb{S}^{2}}v_{\phi}(p,q)\sqrt{\mathbf{M}_{ \mathfrak{c}}(p)\mathbf{M}_{\mathfrak{c}}(q)}d\omega=\frac{\pi\mathfrak{c}g \sqrt{s}}{p^{0}q^{0}}\sqrt{\mathbf{M}_{\mathfrak{c}}(p)\mathbf{M}_{\mathfrak{c }}(q)}. 
\tag{2.2}\] By similar arguments as in [47], we can deduce that each term of \(\mathbf{K}_{\mathfrak{c}2}f\) is equal to \[\frac{\mathfrak{c}}{2}\frac{1}{p^{0}}\int_{\mathbb{R}^{3}}\frac{dq}{q^{0}}f(q)\Big{\{}\int_{\mathbb{R}^{3}}\frac{dq^{\prime}}{q^{\prime 0}}\int_{\mathbb{R}^{3}}\frac{dp^{\prime}}{p^{\prime 0}}\bar{s}\delta^{(4)}(p^{\mu}+p^{\prime\mu}-q^{\mu}-q^{\prime\mu})\sqrt{\mathbf{M}_{\mathfrak{c}}(p^{\prime})\mathbf{M}_{\mathfrak{c}}(q^{\prime})}\Big{\}},\] which yields that the kernel of \(\mathbf{K}_{\mathfrak{c}2}\) is \[k_{\mathfrak{c}2}(p,q)=\frac{\mathfrak{c}}{p^{0}q^{0}}\int_{\mathbb{R}^{3}}\frac{dq^{\prime}}{q^{\prime 0}}\int_{\mathbb{R}^{3}}\frac{dp^{\prime}}{p^{\prime 0}}\bar{s}\delta^{(4)}(p^{\mu}+p^{\prime\mu}-q^{\mu}-q^{\prime\mu})\sqrt{\mathbf{M}_{\mathfrak{c}}(p^{\prime})\mathbf{M}_{\mathfrak{c}}(q^{\prime})}, \tag{2.3}\] where \[\bar{s}=\bar{g}^{2}+4\mathfrak{c}^{2},\quad\bar{g}^{2}=g^{2}-\frac{1}{2}(p^{\mu}+q^{\mu})(p^{\prime}_{\mu}+q^{\prime}_{\mu}-p_{\mu}-q_{\mu}).\] We introduce the Lorentz transformation \(\bar{\Lambda}\) \[\bar{\Lambda}=\left(\bar{\Lambda}^{\mu}_{\nu}\right)=\left(\begin{array}{cccc}\tilde{r}&\frac{\tilde{r}v_{1}}{\mathfrak{c}}&\frac{\tilde{r}v_{2}}{\mathfrak{c}}&\frac{\tilde{r}v_{3}}{\mathfrak{c}}\\ \frac{\tilde{r}v_{1}}{\mathfrak{c}}&1+(\tilde{r}-1)\frac{v_{1}^{2}}{|v|^{2}}&(\tilde{r}-1)\frac{v_{1}v_{2}}{|v|^{2}}&(\tilde{r}-1)\frac{v_{1}v_{3}}{|v|^{2}}\\ \frac{\tilde{r}v_{2}}{\mathfrak{c}}&(\tilde{r}-1)\frac{v_{1}v_{2}}{|v|^{2}}&1+(\tilde{r}-1)\frac{v_{2}^{2}}{|v|^{2}}&(\tilde{r}-1)\frac{v_{2}v_{3}}{|v|^{2}}\\ \frac{\tilde{r}v_{3}}{\mathfrak{c}}&(\tilde{r}-1)\frac{v_{1}v_{3}}{|v|^{2}}&(\tilde{r}-1)\frac{v_{2}v_{3}}{|v|^{2}}&1+(\tilde{r}-1)\frac{v_{3}^{2}}{|v|^{2}}\end{array}\right) \tag{2.4}\] and its inverse transformation \[\bar{\Lambda}^{-1}=\left(\begin{array}{cccc}\tilde{r}&-\frac{\tilde{r}v_{1}}{\mathfrak{c}}&-\frac{\tilde{r}v_{2}}{\mathfrak{c}}&-\frac{\tilde{r}v_{3}}{\mathfrak{c}}\\ -\frac{\tilde{r}v_{1}}{\mathfrak{c}}&1+(\tilde{r}-1)\frac{v_{1}^{2}}{|v|^{2}}&(\tilde{r}-1)\frac{v_{1}v_{2}}{|v|^{2}}&(\tilde{r}-1)\frac{v_{1}v_{3}}{|v|^{2}}\\ -\frac{\tilde{r}v_{2}}{\mathfrak{c}}&(\tilde{r}-1)\frac{v_{1}v_{2}}{|v|^{2}}&1+(\tilde{r}-1)\frac{v_{2}^{2}}{|v|^{2}}&(\tilde{r}-1)\frac{v_{2}v_{3}}{|v|^{2}}\\ -\frac{\tilde{r}v_{3}}{\mathfrak{c}}&(\tilde{r}-1)\frac{v_{1}v_{3}}{|v|^{2}}&(\tilde{r}-1)\frac{v_{2}v_{3}}{|v|^{2}}&1+(\tilde{r}-1)\frac{v_{3}^{2}}{|v|^{2}}\end{array}\right),\] where \(\tilde{r}=\frac{u^{0}}{\mathfrak{c}},v_{i}=\frac{\mathfrak{c}u_{i}}{u^{0}}\). A direct calculation shows that \[\bar{\Lambda}^{-1}(u^{0},u^{1},u^{2},u^{3})^{t}=(\mathfrak{c},0,0,0)^{t}.\] Assume \(\bar{\Lambda}\bar{P}=P\), then one has \[\bar{P}=\bar{\Lambda}^{-1}P=\left(\begin{matrix}\frac{u^{0}p^{0}-u\cdot p}{\mathfrak{c}}\\ -\frac{u_{1}p^{0}}{\mathfrak{c}}+p_{1}+\left(\frac{u^{0}}{\mathfrak{c}}-1\right)\frac{u_{1}}{|u|^{2}}u\cdot p\\ -\frac{u_{2}p^{0}}{\mathfrak{c}}+p_{2}+\left(\frac{u^{0}}{\mathfrak{c}}-1\right)\frac{u_{2}}{|u|^{2}}u\cdot p\\ -\frac{u_{3}p^{0}}{\mathfrak{c}}+p_{3}+\left(\frac{u^{0}}{\mathfrak{c}}-1\right)\frac{u_{3}}{|u|^{2}}u\cdot p\end{matrix}\right). \tag{2.5}\]
\tag{2.5}\]

Using the Lorentz transformation \(\bar{\Lambda}\), we can express \(k_{\mathfrak{c}2}(p,q)\) as

\[k_{\mathfrak{c}2}(p,q)=\frac{\mathfrak{c}c_{0}}{p^{0}q^{0}}\int_{\mathbb{R}^{3}}\frac{dq^{\prime}}{q^{\prime 0}}\int_{\mathbb{R}^{3}}\frac{dp^{\prime}}{p^{\prime 0}}\tilde{s}\delta^{(4)}(\bar{p}^{\mu}+p^{\prime\mu}-\bar{q}^{\mu}-q^{\prime\mu})e^{-\frac{\mathfrak{c}(p^{\prime 0}+q^{\prime 0})}{2\tilde{T}_{0}}},\]

where \(\tilde{s}=-(\bar{p}^{\mu}+p^{\prime\mu})(\bar{p}_{\mu}+p^{\prime}_{\mu})\) and

\[c_{0}:=\frac{n_{0}\gamma}{4\pi\mathfrak{c}^{3}K_{2}(\gamma)}.\]

By similar arguments as in [47], we can write \(k_{\mathfrak{c}2}(p,q)\) as

\[k_{\mathfrak{c}2}(p,q)=\frac{\mathfrak{c}c_{0}\pi s^{\frac{3}{2}}}{4gp^{0}q^{0}}\int_{0}^{\infty}\frac{y(1+\sqrt{y^{2}+1})}{\sqrt{y^{2}+1}}e^{-\bar{\boldsymbol{\ell}}\sqrt{y^{2}+1}}I_{0}(\bar{\boldsymbol{j}}y)dy, \tag{2.6}\]

where

\[I_{0}(r)=\frac{1}{2\pi}\int_{0}^{2\pi}e^{r\cos\Theta}d\Theta\]

and

\[\bar{\boldsymbol{\ell}}=\frac{\boldsymbol{\ell}}{T_{0}},\quad\bar{\boldsymbol{j}}=\frac{\boldsymbol{j}}{T_{0}},\quad\boldsymbol{\ell}=\frac{\mathfrak{c}}{2}(\bar{p}^{0}+\bar{q}^{0}),\quad\boldsymbol{j}=\mathfrak{c}\frac{|\bar{p}\times\bar{q}|}{g}.\]

Using the fact that for any \(R>r\geq 0\),

\[\int_{0}^{\infty}\frac{e^{-R\sqrt{1+y^{2}}}yI_{0}(ry)}{\sqrt{1+y^{2}}}dy=\frac{e^{-\sqrt{R^{2}-r^{2}}}}{\sqrt{R^{2}-r^{2}}},\]
\[\int_{0}^{\infty}e^{-R\sqrt{1+y^{2}}}yI_{0}(ry)dy=\frac{R}{R^{2}-r^{2}}\left\{1+\frac{1}{\sqrt{R^{2}-r^{2}}}\right\}e^{-\sqrt{R^{2}-r^{2}}},\]

one can express \(k_{\mathfrak{c}2}(p,q)\) as

\[k_{\mathfrak{c}2}(p,q)=\frac{\mathfrak{c}c_{0}\pi s^{\frac{3}{2}}}{4gp^{0}q^{0}}\left[J_{1}(\bar{\boldsymbol{\ell}},\bar{\boldsymbol{j}})+J_{2}(\bar{\boldsymbol{\ell}},\bar{\boldsymbol{j}})\right], \tag{2.7}\]

where

\[J_{1}(\bar{\boldsymbol{\ell}},\bar{\boldsymbol{j}})=\frac{\bar{\boldsymbol{\ell}}}{\bar{\boldsymbol{\ell}}^{2}-\bar{\boldsymbol{j}}^{2}}\left[1+\frac{1}{\sqrt{\bar{\boldsymbol{\ell}}^{2}-\bar{\boldsymbol{j}}^{2}}}\right]e^{-\sqrt{\bar{\boldsymbol{\ell}}^{2}-\bar{\boldsymbol{j}}^{2}}},\quad J_{2}(\bar{\boldsymbol{\ell}},\bar{\boldsymbol{j}})=\frac{1}{\sqrt{\bar{\boldsymbol{\ell}}^{2}-\bar{\boldsymbol{j}}^{2}}}e^{-\sqrt{\bar{\boldsymbol{\ell}}^{2}-\bar{\boldsymbol{j}}^{2}}}. \tag{2.8}\]

For later use, we denote the kernel of \(\mathbf{K}_{\mathfrak{c}}\) as

\[k_{\mathfrak{c}}(p,q):=k_{\mathfrak{c}2}(p,q)-k_{\mathfrak{c}1}(p,q). \tag{2.9}\]

It is well known that \(\mathbf{L}_{\mathfrak{c}}\) is a self-adjoint non-negative definite operator on \(L_{p}^{2}\) with the kernel

\[\mathcal{N}_{\mathfrak{c}}=\mathrm{span}\left\{\sqrt{\mathbf{M}_{\mathfrak{c}}},\ p_{i}\sqrt{\mathbf{M}_{\mathfrak{c}}}\ (i=1,2,3),\ p^{0}\sqrt{\mathbf{M}_{\mathfrak{c}}}\right\}.\]

Let \(\mathbf{P}_{\mathfrak{c}}\) be the orthogonal projection from \(L_{p}^{2}\) onto \(\mathcal{N}_{\mathfrak{c}}\). For given \(f\), we denote the macroscopic part \(\mathbf{P}_{\mathfrak{c}}f\) as

\[\mathbf{P}_{\mathfrak{c}}f=\left\{a_{f}+b_{f}\cdot p+c_{f}p^{0}\right\}\sqrt{\mathbf{M}_{\mathfrak{c}}},\]

and further denote \(\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}f\) to be the microscopic part of \(f\). For the orthonormal basis of \(\mathcal{N}_{\mathfrak{c}}\), see the appendix.

## 3. The Newtonian limit of the relativistic Euler equations
### Reformulation of the relativistic Euler equations

By a delicate computation, the relativistic Euler equations (1.21) can be rewritten as the following symmetric hyperbolic system

\[\mathbf{B}_{0}\partial_{t}V+\sum_{j=1}^{3}\mathbf{B}_{j}\partial_{j}V=0, \tag{3.1}\]

where

\[\mathbf{B}_{0}=\begin{pmatrix}1&n_{0}\frac{\partial P_{0}}{\partial n_{0}}\frac{u^{t}}{(u^{0})^{2}}&0\\ n_{0}\frac{\partial P_{0}}{\partial n_{0}}\frac{u}{(u^{0})^{2}}&\frac{1}{\mathfrak{c}^{2}}n_{0}\frac{\partial P_{0}}{\partial n_{0}}(e_{0}+P_{0})(\mathbf{I}-\frac{u\otimes u}{(u^{0})^{2}})&0\\ 0&0&\frac{1}{\mathfrak{c}}u^{0}\end{pmatrix}\]

and

\[\mathbf{B}_{j}=\begin{pmatrix}\frac{\mathfrak{c}}{u^{0}}u_{j}&\frac{\mathfrak{c}}{u^{0}}n_{0}\frac{\partial P_{0}}{\partial n_{0}}\mathbf{e}_{j}^{t}&0\\ \frac{\mathfrak{c}}{u^{0}}n_{0}\frac{\partial P_{0}}{\partial n_{0}}\mathbf{e}_{j}&\frac{1}{\mathfrak{c}u^{0}}n_{0}\frac{\partial P_{0}}{\partial n_{0}}(e_{0}+P_{0})u_{j}\big{(}\mathbf{I}-\frac{u\otimes u}{(u^{0})^{2}}\big{)}&0\\ 0&0&u_{j}\end{pmatrix}.\]

It is clear that \(\mathbf{B}_{0}\) and \(\mathbf{B}_{j}\) (\(j=1,2,3\)) are symmetric. Recall that

\[\frac{\partial e_{0}}{\partial n_{0}}\Big{|}_{S}=\frac{e_{0}+P_{0}}{n_{0}},\quad\frac{\partial e_{0}}{\partial S}\Big{|}_{n_{0}}=n_{0}T_{0},\]

then one has

\[n_{0}\frac{\partial P_{0}}{\partial n_{0}}\Big{|}_{S}=n_{0}\frac{\partial P_{0}}{\partial e_{0}}\Big{|}_{S}\cdot\frac{\partial e_{0}}{\partial n_{0}}\Big{|}_{S}=\frac{a^{2}}{\mathfrak{c}^{2}}(e_{0}+P_{0}),\]

where \(a^{2}=\mathfrak{c}^{2}\frac{\partial P_{0}}{\partial e_{0}}|_{S}\) is the square of the sound speed. Using the fact that \(a\in\big{(}0,\frac{\mathfrak{c}}{\sqrt{3}}\big{)}\) (see [8, 46]), one can show that \(\mathbf{B}_{0}\) is a positive definite matrix. Denoting

\[\zeta_{0}:=\frac{a}{\mathfrak{c}^{2}}(e_{0}+P_{0})=an_{0}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}>0,\]

we can rewrite \(\mathbf{B}_{0}\) as

\[\mathbf{B}_{0}=\begin{pmatrix}1&a\zeta_{0}\frac{u^{t}}{(u^{0})^{2}}&0\\ a\zeta_{0}\frac{u}{(u^{0})^{2}}&\zeta_{0}^{2}(\mathbf{I}-\frac{u\otimes u}{(u^{0})^{2}})&0\\ 0&0&\frac{u^{0}}{\mathfrak{c}}\end{pmatrix}.\]

### Local smooth solutions to the relativistic and classical Euler equations

Assume that

\[\eta_{1}(V)\leq\eta_{2}(V)\leq\eta_{3}(V)\leq\eta_{4}(V)\leq\eta_{5}(V)\]

are the five eigenvalues of \(\mathbf{B}_{0}\). Since \(\mathbf{B}_{0}\) is positive definite, it follows that \(\eta_{i}(V)>0\), \(i=1,2,\cdots,5\). By Vieta's formulas, one has

\[\sum_{i=1}^{5}\eta_{i}(V)=\sum_{i=1}^{5}(\mathbf{B}_{0})_{ii}=1+\frac{u^{0}}{\mathfrak{c}}+\zeta_{0}^{2}\Big{(}2+\frac{\mathfrak{c}^{2}}{(u^{0})^{2}}\Big{)} \tag{3.2}\]

and

\[\Pi_{i=1}^{5}\eta_{i}(V)=\det\mathbf{B}_{0}=\frac{\zeta_{0}^{6}}{\mathfrak{c}(u^{0})^{3}}\Big{[}\mathfrak{c}^{4}+|u|^{2}(\mathfrak{c}^{2}-a^{2})\Big{]}. \tag{3.3}\]

Since all the elements of \(\mathbf{B}_{0}\) are smooth functions of \(V\), it follows that the \(\eta_{i}(V)\) (\(i=1,\cdots,5\)) are continuous functions of \(V\). Therefore, for any compact subset \(\mathcal{V}\subset\mathbb{R}^{+}\times\mathbb{R}^{3}\times\mathbb{R}^{+}\) and suitably large \(\mathfrak{c}\), the right-hand sides of (3.2) and (3.3) are bounded above and below by positive constants which are independent of \(\mathfrak{c}\).
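The identities (3.2) and (3.3) are pure linear algebra for a matrix of the above block form and are easy to spot-check numerically. The following minimal sketch (Python with numpy is assumed; the values of \(a\), \(\zeta_{0}\), \(u\) and \(\mathfrak{c}\) are arbitrary illustrative samples, not solutions of the system) assembles \(\mathbf{B}_{0}\) and compares its trace and determinant with the closed-form right-hand sides:

```python
import numpy as np

# Illustrative sample parameters (not from an actual flow): sound speed a,
# zeta_0 > 0, bulk velocity u, light speed c; u^0 = sqrt(c^2 + |u|^2).
a, zeta0, c = 0.4, 1.7, 10.0
u = np.array([0.3, -0.5, 0.2])
u0 = np.sqrt(c**2 + u @ u)

B0 = np.zeros((5, 5))
B0[0, 0] = 1.0
B0[0, 1:4] = B0[1:4, 0] = a * zeta0 * u / u0**2
B0[1:4, 1:4] = zeta0**2 * (np.eye(3) - np.outer(u, u) / u0**2)
B0[4, 4] = u0 / c

trace_rhs = 1 + u0 / c + zeta0**2 * (2 + c**2 / u0**2)                 # (3.2)
det_rhs = zeta0**6 / (c * u0**3) * (c**4 + (u @ u) * (c**2 - a**2))    # (3.3)

print(np.isclose(np.trace(B0), trace_rhs))     # True
print(np.isclose(np.linalg.det(B0), det_rhs))  # True
print(np.all(np.linalg.eigvalsh(B0) > 0))      # True: B_0 is positive definite
```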
Thus there exists a positive constant \(\beta>0\), independent of \(\mathfrak{c}\), such that

\[\beta\mathbf{I}_{5}\leq\mathbf{B}_{0}(V)\leq\beta^{-1}\mathbf{I}_{5},\quad V\in\mathcal{V} \tag{3.4}\]

holds in the sense of quadratic forms.

**Lemma 3.1** (Local existence for the relativistic Euler equations).: _Consider the relativistic Euler equations (3.1) with a complete equation of state (1.16)-(1.18) in some open domain \(\mathcal{V}\subset\left\{(P_{0},u,S)\in\mathbb{R}^{+}\times\mathbb{R}^{3}\times\mathbb{R}^{+}\right\}\), and assume that \(\overline{V}=(\overline{P},0,\overline{S})\in\mathcal{V}\) with \(\overline{P}>0\), \(\overline{S}>0\) being given constants which are independent of the light speed \(\mathfrak{c}\). Suppose that_

\[V_{0}\in\overline{V}+H^{N_{0}}\left(\mathbb{R}^{3}\right),\]

_with \(N_{0}\geq 3\) and \(V_{0}\in\mathcal{V}_{1}\subset\subset\mathcal{V}\). Then there exist a local existence time \(T_{1}>0\) which is independent of \(\mathfrak{c}\), and a unique classical solution \(V\in C^{1}\left([0,T_{1}]\times\mathbb{R}^{3}\right)\) of the Cauchy problem associated with (3.1) and the initial data \(V(0)=V_{0}\) such that \(V-\overline{V}\) belongs to \(C\left([0,T_{1}];H^{N_{0}}\right)\cap C^{1}\left([0,T_{1}];H^{N_{0}-1}\right)\) and the following estimate holds_

\[\|V-\overline{V}\|_{C\left([0,T_{1}];H^{N_{0}}\right)\cap C^{1}\left([0,T_{1}];H^{N_{0}-1}\right)}\leq C_{1},\]

_where \(C_{1}\) depends on \(\|V_{0}-\overline{V}\|_{H^{N_{0}}}\) and is independent of \(\mathfrak{c}\)._

Proof.: The proof is very similar to the one in [16, Theorem 10.1]. The only difference lies in showing that \(T_{1}\) and the upper bound for the solution are independent of \(\mathfrak{c}\). The independence of \(T_{1}\) from \(\mathfrak{c}\) follows from the fact that \(\beta\) in (3.4) is independent of \(\mathfrak{c}\). In addition, from the specific expressions for the elements of \(\mathbf{B}_{\alpha}\) (\(\alpha=0,1,2,3\)), we can easily derive that

\[\|\nabla_{x}\mathbf{B}_{0}(V)\|_{H^{N_{0}-1}}+\sum_{j=1}^{3}\|\mathbf{B}_{j}(V)\|_{H^{N_{0}}}\leq C\|V-\overline{V}\|_{H^{N_{0}}},\]

where \(C\) depends on \(\|V-\overline{V}\|_{L^{\infty}}\) and is independent of \(\mathfrak{c}\). The remaining arguments are very similar to the ones in [16, Theorem 10.1] and we omit the details here for brevity. Therefore the proof is completed.

For later use, we present the local result for the classical Euler equations (1.23), see [15, 16, 35, 38] for instance.

**Lemma 3.2**.: _[_16_]_ _Consider the classical Euler equations (1.23) with equation of state (1.22) in some open domain \(\mathcal{W}\subset\left\{(\mathcal{P},\mathfrak{u},\eta)\in\mathbb{R}^{+}\times\mathbb{R}^{3}\times\mathbb{R}^{+}\right\}\), and assume that \(\overline{W}=(\overline{\mathcal{P}},0,\overline{\eta})\in\mathcal{W}\) with \(\overline{\mathcal{P}}>0\), \(\overline{\eta}>0\) being given constants. Suppose that_

\[W_{0}\in\overline{W}+H^{N_{0}}\left(\mathbb{R}^{3}\right),\quad\overline{W}\in\mathcal{W}\]

_with \(N_{0}\geq 3\) and \(W_{0}\in\mathcal{W}_{1}\subset\subset\mathcal{W}\)._
_Then there exist a local existence time \(T_{2}>0\) and a unique classical solution \(W\in C^{1}\left([0,T_{2}]\times\mathbb{R}^{3}\right)\) of the Cauchy problem associated with (1.23) and the initial data \(W(0)=W_{0}\) such that \(W-\overline{W}\) belongs to \(C\left([0,T_{2}];H^{N_{0}}\right)\cap C^{1}\left([0,T_{2}];H^{N_{0}-1}\right)\) and the following estimate holds_

\[\|W-\overline{W}\|_{C\left([0,T_{2}];H^{N_{0}}\right)\cap C^{1}\left([0,T_{2}];H^{N_{0}-1}\right)}\leq C_{2},\]

_where \(C_{2}\) depends on \(\|W_{0}-\overline{W}\|_{H^{N_{0}}}\). Furthermore, the lifespan \(T_{2}\) has the following lower bound_

\[T_{2}\geq C_{3}\Big{(}\|W_{0}-\overline{W}\|_{H^{N_{0}}}\Big{)}^{-1},\]

_where \(C_{3}\) is independent of \(\mathfrak{c}\) and \(\|W_{0}-\overline{W}\|_{H^{N_{0}}}\)._

### Newtonian limit from the relativistic Euler to the classical Euler

In this subsection, we focus on the Newtonian limit of the relativistic Euler equations. It follows from (1.23) and (3.1) that

\[\mathbf{D}_{0}\partial_{t}(W-V)+\sum_{j=1}^{3}\mathbf{D}_{j}\partial_{j}(W-V)=\Upsilon,\quad(W-V)\Big{|}_{t=0}=0, \tag{3.5}\]

where

\[\Upsilon=(\mathbf{B}_{0}-\mathbf{D}_{0})\partial_{t}V+\sum_{j=1}^{3}(\mathbf{B}_{j}-\mathbf{D}_{j})\partial_{j}V.\]

**Lemma 3.3**.: _There hold_

\[\sigma^{2}=\frac{\partial\mathcal{P}}{\partial\rho}\Big{|}_{\eta}=\frac{5}{3}\theta \tag{3.6}\]

_and_

\[a^{2}=\mathfrak{c}^{2}\frac{\partial P_{0}}{\partial e_{0}}\Big{|}_{S}=\frac{5}{3}T_{0}+O(\mathfrak{c}^{-2}). \tag{3.7}\]

Proof.: For (3.6), it follows from (1.22) that

\[\mathcal{P}=\rho\theta=A_{0}^{\frac{2}{3}}\rho^{\frac{5}{3}}e^{\frac{2}{3}\eta},\quad A_{0}=(2\pi)^{-\frac{3}{2}}e^{-\frac{5}{2}}, \tag{3.8}\]

which implies that

\[\frac{\partial\mathcal{P}}{\partial\rho}\Big{|}_{\eta}=\frac{5}{3}A_{0}^{\frac{2}{3}}\rho^{\frac{2}{3}}e^{\frac{2}{3}\eta}=\frac{5\mathcal{P}}{3\rho}=\frac{5}{3}\theta.\]

For (3.7), it follows from (1.16)-(1.18) that

\[P_{0} =4\pi e^{4}e^{-S}\mathfrak{c}^{5}\frac{K_{2}(\gamma)}{\gamma^{2}}\exp\left(\gamma\frac{K_{1}(\gamma)}{K_{2}(\gamma)}\right),\]
\[e_{0} =P_{0}\Big{(}\gamma\frac{K_{1}(\gamma)}{K_{2}(\gamma)}+3\Big{)}.\]

It is clear that

\[\frac{\partial P_{0}}{\partial\gamma}\Big{|}_{S}=\frac{\partial P_{0}}{\partial e_{0}}\Big{|}_{S}\cdot\frac{\partial e_{0}}{\partial\gamma}\Big{|}_{S}.\]

It follows from [46, (3.32)] that

\[\Big{(}\frac{\partial P_{0}}{\partial e_{0}}\Big{|}_{S}\Big{)}^{-1}=\frac{\left.\frac{\partial e_{0}}{\partial\gamma}\right|_{S}}{\frac{\partial P_{0}}{\partial\gamma}\Big{|}_{S}}=\gamma\frac{K_{1}(\gamma)}{K_{2}(\gamma)}+3+\frac{\gamma\left(\frac{K_{1}(\gamma)}{K_{2}(\gamma)}\right)^{2}+4\frac{K_{1}(\gamma)}{K_{2}(\gamma)}-\gamma}{\gamma\left(\frac{K_{1}(\gamma)}{K_{2}(\gamma)}\right)^{2}+3\frac{K_{1}(\gamma)}{K_{2}(\gamma)}-\gamma-\frac{4}{\gamma}}.\]

Using the asymptotic expansions of \(K_{2}(\gamma)\) and \(K_{3}(\gamma)\) in Lemma 2.1, one has

\[\frac{K_{3}^{2}(\gamma)}{K_{2}^{2}(\gamma)}-1 =\frac{\frac{5}{\gamma}+\frac{115}{4\gamma^{2}}+\frac{2205}{32\gamma^{3}}+\frac{10395}{128\gamma^{4}}+O(\gamma^{-5})}{1+\frac{15}{4\gamma}+\frac{165}{32\gamma^{2}}+\frac{315}{128\gamma^{3}}+O(\gamma^{-4})}\]
\[=\frac{5}{\gamma}+\frac{10}{\gamma^{2}}+\frac{45}{8\gamma^{3}}-\frac{15}{4\gamma^{4}}+O(\gamma^{-5}) \tag{3.9}\]

and

\[\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1=\frac{\frac{5}{2\gamma}+\frac{105}{16\gamma^{2}}+\frac{945}{256\gamma^{3}}+O(\gamma^{-4})}{1+\frac{15}{8\gamma}+\frac{105}{128\gamma^{2}}+O(\gamma^{-3})}=\frac{5}{2\gamma}+\frac{15}{8\gamma^{2}}-\frac{15}{8\gamma^{3}}+O(\gamma^{-4}). \tag{3.10}\]

Then one has

\[\left(\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\right)^{2}-\frac{5}{\gamma}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1\]
\[=\Big{[}\left(\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\right)^{2}-1-\frac{5}{\gamma}\Big{]}-\frac{5}{\gamma}\Big{(}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1\Big{)}\]
\[=\frac{10}{\gamma^{2}}+\frac{45}{8\gamma^{3}}-\frac{15}{4\gamma^{4}}+O(\gamma^{-5})-\frac{5}{\gamma}\Big{(}\frac{5}{2\gamma}+\frac{15}{8\gamma^{2}}-\frac{15}{8\gamma^{3}}+O(\gamma^{-4})\Big{)}\]
\[=-\frac{5}{2\gamma^{2}}-\frac{15}{4\gamma^{3}}+\frac{45}{8\gamma^{4}}+O(\gamma^{-5}). \tag{3.11}\]

Applying \(\frac{K_{1}(\gamma)}{K_{2}(\gamma)}=\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{4}{\gamma}\), we have

\[\left(\gamma\frac{\partial P_{0}}{\partial e_{0}}\Big{|}_{S}\right)^{-1} =\frac{K_{1}(\gamma)}{K_{2}(\gamma)}+\frac{3}{\gamma}+\frac{\left(\frac{K_{1}(\gamma)}{K_{2}(\gamma)}\right)^{2}+\frac{4}{\gamma}\frac{K_{1}(\gamma)}{K_{2}(\gamma)}-1}{\left(\frac{K_{1}(\gamma)}{K_{2}(\gamma)}\right)^{2}+\frac{3}{\gamma}\frac{K_{1}(\gamma)}{K_{2}(\gamma)}-1-\frac{4}{\gamma^{2}}}\cdot\frac{1}{\gamma}\]
\[=\frac{K_{3}(\gamma)}{K_{2}(\gamma)}+\frac{\frac{K_{3}(\gamma)}{K_{2}(\gamma)}}{\left(\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\right)^{2}-\frac{5}{\gamma}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1}\cdot\frac{1}{\gamma^{2}}\]
\[=1+O(\gamma^{-1})+\frac{1+O(\gamma^{-1})}{-\frac{5}{2}+O(\gamma^{-1})}=\frac{3}{5}+O(\gamma^{-1}),\]

which implies that

\[a^{2}=T_{0}\gamma\frac{\partial P_{0}}{\partial e_{0}}\Big{|}_{S}=T_{0}\Big{(}\frac{5}{3}+O(\gamma^{-1})\Big{)}=\frac{5}{3}T_{0}+O(\mathfrak{c}^{-2}).\]

Therefore the proof is completed.

**Lemma 3.4**.: _It holds that_

\[|n_{0}-\rho|+|T_{0}-\theta|\leq C|W-V|+\frac{C}{\mathfrak{c}^{2}}, \tag{3.12}\]

_where the constant \(C\) depends on \(\sup_{0\leq t\leq T}\|V(t)-\overline{V}\|_{H^{N_{0}}}\) and \(\sup_{0\leq t\leq T}\|W(t)-\overline{W}\|_{H^{N_{0}}}\) and is independent of \(\mathfrak{c}\)._

Proof.: It follows from (3.8) that

\[\rho=(2\pi\mathcal{P})^{\frac{3}{5}}e^{1-\frac{2}{5}\eta}. \tag{3.13}\]

Since \(K_{2}(\gamma)=\sqrt{\frac{\pi}{2\gamma}}e^{-\gamma}(1+O(\gamma^{-1}))\), it follows from (1.18) that

\[n_{0} =4\pi e^{4-S}\mathfrak{c}^{3}\frac{K_{2}(\gamma)}{\gamma}\exp\left(\gamma\frac{K_{1}(\gamma)}{K_{2}(\gamma)}\right)\]
\[=(2\pi)^{\frac{3}{2}}\Big{(}\frac{P_{0}}{n_{0}}\Big{)}^{\frac{3}{2}}e^{-S}\exp\left(\gamma\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\gamma\right)(1+O(\gamma^{-1})),\]

which yields immediately that

\[n_{0} =(2\pi)^{\frac{3}{5}}P_{0}^{\frac{3}{5}}e^{-\frac{2}{5}S}\exp\left(\frac{2}{5}\gamma\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{2}{5}\gamma\right)(1+O(\gamma^{-1}))\]
\[=(2\pi)^{\frac{3}{5}}P_{0}^{\frac{3}{5}}e^{1-\frac{2}{5}S}\exp\left(O(\gamma^{-1})\right)(1+O(\gamma^{-1}))\]
\[=(2\pi P_{0})^{\frac{3}{5}}e^{1-\frac{2}{5}S}+O(\gamma^{-1}). \tag{3.14}\]

Using (3.13)-(3.14), one has

\[|n_{0}-\rho|\leq C|W-V|+\frac{C}{\mathfrak{c}^{2}}. \tag{3.15}\]

For the estimate of \(|T_{0}-\theta|\) in (3.12), a direct calculation shows that

\[T_{0}-\theta=\frac{P_{0}}{n_{0}}-\frac{\mathcal{P}}{\rho}=\frac{1}{\rho}(P_{0}-\mathcal{P})+\frac{P_{0}}{n_{0}\rho}(\rho-n_{0}),\]

which, together with (3.15), yields that

\[|T_{0}-\theta|\leq C|W-V|+\frac{C}{\mathfrak{c}^{2}}.\]

Therefore the proof is completed.
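Both the Bessel-quotient expansion (3.10) and the limit relation (3.14) can be checked numerically. A minimal sketch follows (Python with scipy assumed; the sample values of \(S\) and \(T_{0}\) are illustrative). It uses the exponentially scaled function `kve` to avoid under/overflow at large \(\gamma\), and it tests (3.14) in the equivalent form \(n_{0}\to(2\pi T_{0})^{\frac{3}{2}}e^{\frac{5}{2}-S}\), obtained by eliminating \(P_{0}=n_{0}T_{0}\):

```python
import numpy as np
from scipy.special import kve   # kve(j, z) = K_j(z) * e^z (scaled Bessel K)

S, T0 = 1.3, 1.0                # illustrative sample entropy and temperature
for gamma in [1e2, 1e3, 1e4]:
    c = np.sqrt(gamma * T0)     # gamma = c^2 / T_0
    r32 = kve(3, gamma) / kve(2, gamma)        # K_3/K_2 (the scaling cancels)
    # (3.10): K_3/K_2 - 1 = 5/(2g) + 15/(8g^2) - 15/(8g^3) + O(g^-4)
    err = (r32 - 1) - 5 / (2 * gamma) - 15 / (8 * gamma**2)
    # n_0 = 4 pi e^{4-S} c^3 (K_2(gamma)/gamma) exp(gamma K_1/K_2), evaluated
    # stably: K_2 * exp(gamma K_1/K_2) = kve(2, gamma) * exp(gamma (K_1/K_2 - 1)).
    expo = gamma * (kve(1, gamma) / kve(2, gamma) - 1)    # tends to -3/2
    n0 = 4 * np.pi * np.exp(4 - S) * c**3 * kve(2, gamma) / gamma * np.exp(expo)
    n0_lim = (2 * np.pi * T0)**1.5 * np.exp(2.5 - S)      # limit value of (3.14)
    print(gamma, err * gamma**3, (n0 / n0_lim - 1) * gamma)   # both remain O(1)
```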
Since \(n_{0}=n_{0}(P_{0},S)\) and \(\rho=\rho(\mathcal{P},\eta)\), to consider the Newtonian limit of the relativistic Euler equations, we still need to control the following functions \[\frac{\partial n_{0}}{\partial P_{0}}\Big{|}_{S}-\frac{\partial\rho}{\partial \mathcal{P}}\Big{|}_{\eta},\quad\frac{\partial n_{0}}{\partial S}\Big{|}_{P_ {0}}-\frac{\partial\rho}{\partial\eta}\Big{|}_{\mathcal{P}}.\] For simplicity of notations, we replace \(\frac{\partial n_{0}}{\partial P_{0}}\Big{|}_{S}\) with \(\frac{\partial n_{0}}{\partial P_{0}}\) and the remaining notations can be understood in the same way. **Lemma 3.5**.: _It holds that_ \[\Big{|}\frac{\partial n_{0}}{\partial P_{0}}-\frac{\partial\rho}{\partial \mathcal{P}}\Big{|}+\Big{|}\frac{\partial n_{0}}{\partial S}-\frac{\partial \rho}{\partial\eta}\Big{|}+\Big{|}\frac{\partial T_{0}}{\partial P_{0}}- \frac{\partial\theta}{\partial\mathcal{P}}\Big{|}+\Big{|}\frac{\partial T_{0 }}{\partial S}-\frac{\partial\theta}{\partial\eta}\Big{|}\leq C|W-V|+\frac{C} {\mathfrak{c}^{2}}, \tag{3.16}\] _where the constant \(C\) depends on \(\sup_{0\leq t\leq T}\|V(t)-\overline{V}\|_{H^{N_{0}}}\) and \(\sup_{0\leq t\leq T}\|W(t)-\overline{W}\|_{H^{N_{0}}}\) and is independent of \(\mathfrak{c}\)._ Proof.: Using (3.13), one has \[\frac{\partial\rho}{\partial\mathcal{P}}=\frac{3\rho}{5\mathcal{P}}=\frac{3} {5\theta},\quad\frac{\partial\rho}{\partial\eta}=-\frac{2\mathcal{P}}{5\theta }=-\frac{2}{5}\rho. \tag{3.17}\] Since \(\gamma=\frac{\mathfrak{c}^{2}}{T_{0}}=\mathfrak{c}^{2}\frac{n_{0}}{P_{0}}\) and \[n_{0}=4\pi e^{-S}\mathfrak{c}^{3}\frac{K_{2}(\gamma)}{\gamma}\exp\left( \gamma\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\right),\] it holds that \[\frac{\partial n_{0}}{\partial P_{0}} =4\pi e^{-S}\mathfrak{c}^{3}\frac{d}{d\gamma}\Big{[}\frac{K_{2}( \gamma)}{\gamma}\exp\left(\gamma\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\right) \Big{]}\cdot\frac{\partial\gamma}{\partial P_{0}}\] \[=4\pi e^{-S}\mathfrak{c}^{3}\Big{[}\frac{K_{2}^{\prime}(\gamma) \gamma-K_{2}(\gamma)}{\gamma^{2}}+\frac{K_{2}(\gamma)}{\gamma}\Big{(}\frac{K_ {3}(\gamma)}{K_{2}(\gamma)}+\gamma\frac{K_{3}^{\prime}(\gamma)K_{2}(\gamma)-K _{3}(\gamma)K_{2}^{\prime}(\gamma)}{K_{2}^{2}(\gamma)}\Big{)}\Big{]}\] \[\quad\times\exp\Big{(}\gamma\frac{K_{3}(\gamma)}{K_{2}(\gamma)} \Big{)}\cdot\frac{\mathfrak{c}^{2}}{P_{0}}\Big{(}\frac{\partial n_{0}}{ \partial P_{0}}-\frac{n_{0}}{P_{0}}\Big{)}\] \[=\gamma^{2}\Big{(}\frac{K_{3}^{2}(\gamma)}{K_{2}^{2}(\gamma)}- \frac{5}{\gamma}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1+\frac{1}{\gamma^{2}} \Big{)}\Big{(}\frac{\partial n_{0}}{\partial P_{0}}-\frac{n_{0}}{P_{0}}\Big{)}\] \[=\varphi(\gamma)\Big{(}\frac{\partial n_{0}}{\partial P_{0}}- \frac{n_{0}}{P_{0}}\Big{)}, \tag{3.18}\] where we have denoted \[\varphi(\gamma):=\gamma^{2}\Big{(}\frac{K_{3}^{2}(\gamma)}{K_{2}^{2}(\gamma)}- \frac{5}{\gamma}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1+\frac{1}{\gamma^{2}} \Big{)}.\] It follows from (3.11) that \[\varphi(\gamma)=-\frac{3}{2}+O(\gamma^{-1}). \tag{3.19}\] Substituting (3.19) into (3.18), one gets \[\frac{\partial n_{0}}{\partial P_{0}}=\frac{3}{5T_{0}}+O(\gamma^{-1}). 
\tag{3.20}\] Similarly, one has \[\frac{\partial n_{0}}{\partial S} =-n_{0}+4\pi e^{-S}\mathfrak{c}^{3}\frac{d}{d\gamma}\Big{[}\frac {K_{2}(\gamma)}{\gamma}\exp\left(\gamma\frac{K_{3}(\gamma)}{K_{2}(\gamma)} \right)\Big{]}\cdot\frac{\partial\gamma}{\partial S}\] \[=-n_{0}+\varphi(\gamma)\frac{\partial n_{0}}{\partial S}=-n_{0}+ \Big{(}-\frac{3}{2}+O(\gamma^{-1})\Big{)}\frac{\partial n_{0}}{\partial S},\] which yields that \[\frac{\partial n_{0}}{\partial S}=-\frac{2}{5}n_{0}+O(\gamma^{-1}). \tag{3.21}\] It follows from (3.17), (3.20) and (3.21) that \[\Big{|}\frac{\partial n_{0}}{\partial P_{0}}-\frac{\partial\rho}{\partial \mathcal{P}}\Big{|}+\Big{|}\frac{\partial n_{0}}{\partial s}-\frac{\partial \rho}{\partial\eta}\Big{|}\leq C\Big{|}T_{0}-\theta|+C\Big{|}n_{0}-\rho|+ \frac{C}{\mathfrak{c}^{2}}\leq C|W-V|+\frac{C}{\mathfrak{c}^{2}}. \tag{3.22}\] Next, we consider \(\big{|}\frac{\partial T_{0}}{\partial P_{0}}-\frac{\partial\theta}{\partial \mathcal{P}}\big{|}\) and \(\big{|}\frac{\partial T_{0}}{\partial S}-\frac{\partial\theta}{\partial\eta} \big{|}\) in (3.16). Noting \(T_{0}=\frac{P_{0}}{n_{0}}\) and \(\theta=\frac{\mathcal{P}}{\rho}\), we have \[\frac{\partial T_{0}}{\partial P_{0}}=\frac{1}{n_{0}}-\frac{T_{0}}{n_{0}} \frac{\partial n_{0}}{\partial P_{0}},\quad\frac{\partial\theta}{\partial \mathcal{P}}=\frac{1}{\rho}-\frac{\theta}{\rho}\frac{\partial\rho}{\partial \mathcal{P}} \tag{3.23}\] and \[\frac{\partial T_{0}}{\partial S}=-\frac{T_{0}}{n_{0}}\frac{\partial n_{0}}{ \partial S},\quad\frac{\partial\theta}{\partial\eta}=-\frac{\theta}{\rho} \frac{\partial\rho}{\partial\eta}. \tag{3.24}\] Hence it is clear that \[\Big{|}\frac{\partial T_{0}}{\partial P_{0}}-\frac{\partial\theta}{\partial \mathcal{P}}\Big{|}\leq C\Big{|}n_{0}-\rho\Big{|}+C\Big{|}T_{0}-\theta\Big{|} +C\Big{|}\frac{\partial n_{0}}{\partial P_{0}}-\frac{\partial\rho}{\partial \mathcal{P}}\Big{|}\] and \[\Big{|}\frac{\partial T_{0}}{\partial S}-\frac{\partial\theta}{\partial\eta} \Big{|}\leq C\Big{|}n_{0}-\rho\Big{|}+C\Big{|}T_{0}-\theta\Big{|}+C\Big{|} \frac{\partial n_{0}}{\partial S}-\frac{\partial\rho}{\partial\eta}\Big{|},\] which, together with (3.22), yield (3.16). Therefore the proof is completed. **Lemma 3.6**.: _There hold_ \[\Big{|}\frac{\partial^{2}n_{0}}{\partial P_{0}^{2}}-\frac{\partial^{2}\rho}{ \partial\mathcal{P}^{2}}\Big{|}+\Big{|}\frac{\partial^{2}n_{0}}{\partial S^{2} }-\frac{\partial^{2}\rho}{\partial\eta^{2}}\Big{|}+\Big{|}\frac{\partial^{2}n _{0}}{\partial P_{0}\partial S}-\frac{\partial^{2}\rho}{\partial\mathcal{P} \partial\eta}\Big{|}\leq C|W-V|+\frac{C}{\mathfrak{c}^{2}} \tag{3.25}\] _and_ \[\Big{|}\frac{\partial^{2}T_{0}}{\partial P_{0}^{2}}-\frac{\partial^{2}\theta}{ \partial\mathcal{P}^{2}}\Big{|}+\Big{|}\frac{\partial^{2}T_{0}}{\partial S^{2} }-\frac{\partial^{2}\theta}{\partial\eta^{2}}\Big{|}+\Big{|}\frac{\partial^{2} T_{0}}{\partial P_{0}\partial S}-\frac{\partial^{2}\theta}{\partial\mathcal{P} \partial\eta}\Big{|}\leq C|W-V|+\frac{C}{\mathfrak{c}^{2}}, \tag{3.26}\] _where the constant \(C\) depends on \(\sup_{0\leq t\leq T}\|V(t)-\overline{V}\|_{H^{N_{0}}}\) and \(\sup_{0\leq t\leq T}\|W(t)-\overline{W}\|_{H^{N_{0}}}\) and is independent of \(\mathfrak{c}\)._ Proof.: It follows from (3.17) that \[\frac{\partial^{2}\rho}{\partial\mathcal{P}^{2}}=-\frac{6}{25}\frac{1}{ \mathcal{P}\theta},\quad\frac{\partial^{2}\rho}{\partial\mathcal{P}\partial \eta}=-\frac{6}{25}\frac{1}{\theta},\quad\frac{\partial^{2}\rho}{\partial \eta^{2}}=\frac{4}{25}\rho. 
\tag{3.27}\] Using (3.18), one has \[\frac{\partial^{2}n_{0}}{\partial P_{0}^{2}}=\varphi^{\prime}(\gamma)\frac{ \mathfrak{c}^{2}}{P_{0}}\Big{(}\frac{\partial n_{0}}{\partial P_{0}}-\frac{n_{0} }{P_{0}}\Big{)}\Big{(}\frac{\partial n_{0}}{\partial P_{0}}-\frac{n_{0}}{P_{0}} \Big{)}+\varphi(\gamma)\Big{(}\frac{\partial^{2}n_{0}}{\partial P_{0}^{2}}- \frac{1}{P_{0}}\frac{\partial n_{0}}{\partial P_{0}}+\frac{n_{0}}{P_{0}^{2}} \Big{)}\] \[=\gamma\varphi^{\prime}(\gamma)\frac{1}{n_{0}\varphi^{2}(\gamma)}\Big{(}\frac{ \partial n_{0}}{\partial P_{0}}\Big{)}^{2}+\varphi(\gamma)\Big{(}\frac{\partial^ {2}n_{0}}{\partial P_{0}^{2}}-\frac{1}{P_{0}}\frac{\partial n_{0}}{\partial P_ {0}}+\frac{1}{P_{0}T_{0}}\Big{)}. \tag{3.28}\] Noting \[\Big{(}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\Big{)}^{\prime}=\frac{K_{3}^{2}( \gamma)}{K_{2}^{2}(\gamma)}-\frac{5}{\gamma}\frac{K_{3}(\gamma)}{K_{2}(\gamma) }-1, \tag{3.29}\] one has \[\varphi^{\prime}(\gamma) =\frac{d}{d\gamma}\Big{\{}\gamma^{2}\Big{(}\frac{K_{3}^{2}(\gamma )}{K_{2}^{2}(\gamma)}-\frac{5}{\gamma}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1+ \frac{1}{\gamma^{2}}\Big{)}\Big{\}}\] \[=2\gamma\Big{[}\frac{K_{3}^{2}(\gamma)}{K_{2}^{2}(\gamma)}-1 \Big{]}+2\gamma^{2}\Big{[}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1\Big{]}\Big{(} \frac{K_{3}(\gamma)}{K_{2}(\gamma)}\Big{)}^{\prime}\] \[\qquad+(2\gamma^{2}-5\gamma)\Big{(}\frac{K_{3}(\gamma)}{K_{2}( \gamma)}\Big{)}^{\prime}-5\frac{K_{3}(\gamma)}{K_{2}(\gamma)}:=\sum_{j=1}^{4} \mathcal{R}_{j}. \tag{3.30}\] Applying (3.9), (3.10), (3.11) and (3.29), one can obtain \[\mathcal{R}_{1} =2\gamma\Big{[}\frac{K_{3}^{2}(\gamma)}{K_{2}^{2}(\gamma)}-1 \Big{]}=2\gamma\Big{(}\frac{5}{\gamma}+\frac{10}{\gamma^{2}}+\frac{45}{8 \gamma^{3}}+O(\gamma^{-4})\Big{)}=10+\frac{20}{\gamma}+\frac{45}{4\gamma^{2}} +O(\gamma^{-3}), \tag{3.31}\] \[\mathcal{R}_{2} =2\gamma^{2}\Big{[}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1\Big{]} \Big{(}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\Big{)}^{\prime}\] \[=2\gamma^{2}\Big{(}\frac{5}{2\gamma}+\frac{15}{8\gamma^{2}}+O( \gamma^{-3})\Big{)}\Big{(}-\frac{5}{2\gamma^{2}}-\frac{15}{4\gamma^{3}}+O( \gamma^{-4})\Big{)}=-\frac{25}{2\gamma}-\frac{225}{8\gamma^{2}}+O(\gamma^{-3}),\] (3.32) \[\mathcal{R}_{3} =(2\gamma^{2}-5\gamma)\Big{(}\frac{K_{3}(\gamma)}{K_{2}(\gamma)} \Big{)}^{\prime}=(2\gamma^{2}-5\gamma)\Big{(}-\frac{5}{2\gamma^{2}}-\frac{15} {4\gamma^{3}}+\frac{45}{8\gamma^{4}}+O(\gamma^{-5})\Big{)}\] \[=-5+\frac{5}{\gamma}+\frac{30}{\gamma^{2}}+O(\gamma^{-3}),\] (3.33) \[\mathcal{R}_{4} =-5\Big{[}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1\Big{]}-5=-5\Big{(} \frac{5}{2\gamma}+\frac{15}{8\gamma^{2}}+O(\gamma^{-3})\Big{)}-5\] \[=-5-\frac{25}{2\gamma}-\frac{75}{8\gamma^{2}}+O(\gamma^{-3}). \tag{3.34}\] Hence it follows from (3.30)-(3.34) that \[\varphi^{\prime}(\gamma)=\frac{15}{4\gamma^{2}}+O(\gamma^{-3}),\] which implies that \[\gamma\varphi^{\prime}(\gamma)\frac{1}{\varphi^{2}(\gamma)}=O(\gamma^{-1}). \tag{3.35}\] Since \(\frac{\partial n_{0}}{\partial P_{0}}=\frac{3}{5T_{0}}+O(\gamma^{-1})\) and \(\varphi(\gamma)=-\frac{3}{2}+O(\gamma^{-1})\), it follows from (3.28) and (3.35) that \[(1-\varphi(\gamma))\frac{\partial^{2}n_{0}}{\partial P_{0}^{2}}=(-\frac{3}{2}+ O(\gamma^{-1}))\Big{\{}-\frac{1}{P_{0}}\Big{(}\frac{3}{5T_{0}}+O(\gamma^{-1}) \Big{)}+\frac{1}{P_{0}T_{0}}\Big{\}}+O(\gamma^{-1}),\] which implies that \[\frac{\partial^{2}n_{0}}{\partial P_{0}^{2}}=-\frac{6}{25}\frac{1}{P_{0}T_{0}} +O(\gamma^{-1}). 
\tag{3.36}\] Similarly, we can obtain that \[\frac{\partial^{2}n_{0}}{\partial P_{0}\partial S}=-\frac{6}{25}\frac{1}{T_{0}} +O(\gamma^{-1}),\quad\frac{\partial^{2}n_{0}}{\partial S^{2}}=\frac{4}{25}n_{0} +O(\gamma^{-1}). \tag{3.37}\] Hence we conclude (3.25) from (3.27), (3.36)-(3.37) and Lemma 3.4. Using (3.23) and (3.24), one has \[\frac{\partial^{2}T_{0}}{\partial S^{2}} =\Big{(}-\frac{1}{n_{0}}\frac{\partial T_{0}}{\partial S}+\frac{T_ {0}}{n_{0}^{2}}\frac{\partial n_{0}}{\partial S}\Big{)}\frac{\partial n_{0}}{ \partial S}-\frac{T_{0}}{n_{0}}\frac{\partial^{2}n_{0}}{\partial S^{2}}, \tag{3.38}\] \[\frac{\partial^{2}T_{0}}{\partial S\partial P_{0}} =\Big{(}-\frac{1}{n_{0}}\frac{\partial T_{0}}{\partial P_{0}}+ \frac{T_{0}}{n_{0}^{2}}\frac{\partial n_{0}}{\partial P_{0}}\Big{)}\frac{ \partial n_{0}}{\partial S}-\frac{T_{0}}{n_{0}}\frac{\partial^{2}n_{0}}{ \partial S\partial P_{0}},\] (3.39) \[\frac{\partial^{2}T_{0}}{\partial P_{0}^{2}} =\Big{(}-\frac{1}{n_{0}^{2}}\frac{\partial n_{0}}{\partial P_{0}} -\frac{1}{n_{0}}\frac{\partial T_{0}}{\partial P_{0}}+\frac{T_{0}}{n_{0}^{2}} \frac{\partial n_{0}}{\partial P_{0}}\Big{)}\frac{\partial n_{0}}{\partial P_ {0}}-\frac{T_{0}}{n_{0}}\frac{\partial^{2}n_{0}}{\partial P_{0}^{2}}, \tag{3.40}\] and \[\frac{\partial^{2}\theta_{0}}{\partial\eta^{2}} =\Big{(}-\frac{1}{\rho}\frac{\partial\theta}{\partial\eta}+\frac {\theta}{\rho^{2}}\frac{\partial\rho}{\partial\eta}\Big{)}\frac{\partial\rho} {\partial\eta}-\frac{\theta}{\rho}\frac{\partial^{2}\rho}{\partial\eta^{2}}, \tag{3.41}\] \[\frac{\partial^{2}\theta}{\partial\eta\partial\mathcal{P}} =\Big{(}-\frac{1}{\rho}\frac{\partial\theta}{\partial\mathcal{P}} +\frac{\theta}{\rho^{2}}\frac{\partial\rho}{\partial\mathcal{P}}\Big{)}\frac{ \partial\rho}{\partial\eta}-\frac{\theta}{\rho}\frac{\partial^{2}\rho}{ \partial\eta\partial\mathcal{P}},\] (3.42) \[\frac{\partial^{2}\theta}{\partial\mathcal{P}^{2}} =\Big{(}-\frac{1}{\rho^{2}}\frac{\partial\rho}{\partial\mathcal{ P}}-\frac{1}{\rho}\frac{\partial\theta}{\partial\mathcal{P}}+\frac{\theta}{ \rho^{2}}\frac{\partial\rho}{\partial\mathcal{P}}\Big{)}\frac{\partial\rho}{ \partial\mathcal{P}}-\frac{\theta}{\rho}\frac{\partial^{2}\rho}{\partial \mathcal{P}^{2}}. \tag{3.43}\] Thus (3.26) follows from (3.38)-(3.43), Lemmas 3.4-3.5 and (3.25). Therefore the proof is completed. By similar arguments as in Lemmas 3.5-3.6, we can obtain the following lemma whose proof is omitted for brevity of presentation. **Lemma 3.7**.: _There hold_ \[|\partial_{i}(a^{2}-\sigma^{2})|\leq C|W-V|+C|\partial_{i}(W-V)|+\frac{C}{ \mathfrak{c}^{2}},\quad i=1,2,3,\] _and_ \[|\partial_{ij}(a^{2}-\sigma^{2})|\leq C|W-V|+C|\nabla_{x}(W-V)|+C|\partial_{ ij}(W-V)|+\frac{C}{\mathfrak{c}^{2}},\quad i,j=1,2,3,\] _where the constant \(C\) depends on \(\sup_{0\leq t\leq T}\|V(t)-\overline{V}\|_{H^{N_{0}}}\) and \(\sup_{0\leq t\leq T}\|W(t)-\overline{W}\|_{H^{N_{0}}}\) and is independent of \(\mathfrak{c}\)._ We are now in a position to show the Newtonian limit from the relativistic Euler equations to the classical Euler equations. **Proposition 3.8**.: Assume \(\overline{V}=\overline{W}\). Suppose that \(V=(P_{0},u,S)\) is the unique smooth solution in Lemma 3.1 and \(W=(\mathcal{P},\mathfrak{u},\eta)\) is the unique smooth solution in Lemma 3.2 with the same initial data. 
Let \(T=\min\{T_{1},T_{2}\}\), then it holds that \[\sup_{0\leq t\leq T}\|(W-V)(t)\|_{L^{\infty}}\leq\frac{C}{\mathfrak{c}^{2}},\] where the constant \(C\) depends on \(\sup_{0\leq t\leq T}\|V(t)-\overline{V}\|_{H^{N_{0}}}\) and \(\sup_{0\leq t\leq T}\|W(t)-\overline{W}\|_{H^{N_{0}}}\) and is independent of \(\mathfrak{c}\). Proof.: Using Lemmas 3.3-3.4, we have \[|\mathbf{B}_{\alpha}-\mathbf{D}_{\alpha}|\leq C|W-V|+\frac{C}{\mathfrak{c}^{2}},\quad\alpha=0,1,2,3.\] Denote \(\mathcal{U}(t):=\langle\mathbf{D}_{0}(W-V),W-V\rangle(t)\). It follows from (3.5) that \[\frac{d}{dt}\mathcal{U}(t) =\frac{d}{dt}\langle\mathbf{D}_{0}(W-V),W-V\rangle\] \[=\langle(\partial_{i}\mathbf{D}_{0}+\sum_{j=1}^{3}\partial_{j} \mathbf{D}_{j})(W-V),W-V\rangle+2\langle\Upsilon,W-V\rangle\] \[\leq C\|W-V\|_{2}^{2}+\frac{C}{\mathfrak{c}^{2}}\|W-V\|_{2}\] \[\leq C\mathcal{U}(t)+\frac{C}{\mathfrak{c}^{2}}\sqrt{\mathcal{U}(t )}.\] Applying Gronwall's inequality, one obtains \[\sup_{0\leq t\leq T}\mathcal{U}(t)\leq\frac{C}{\mathfrak{c}^{4}},\] where the constant \(C\) depends on \(T\), \(\sup_{0\leq t\leq T}\|V(t)-\overline{V}\|_{H^{N_{0}}}\) and \(\sup_{0\leq t\leq T}\|W(t)-\overline{W}\|_{H^{N_{0}}}\) and is independent of \(\mathfrak{c}\). Hence we get \[\sup_{0\leq t\leq T}\|(W-V)(t)\|_{2}\leq\frac{C}{\mathfrak{c}^{2}}. \tag{3.44}\] Similarly, by using Lemmas 3.5-3.7 and the energy method, we can obtain \[\sup_{0\leq t\leq T}\|\nabla_{x}(W-V)(t)\|_{2}+\sup_{0\leq t\leq T}\|\nabla_{ x}^{2}(W-V)(t)\|_{2}\leq\frac{C}{\mathfrak{c}^{2}},\] which, together with (3.44), yields that \[\sup_{0\leq t\leq T}\|(W-V)(t)\|_{\infty}\leq C\sup_{0\leq t\leq T}\|(W-V)(t) \|_{H^{2}}\leq\frac{C}{\mathfrak{c}^{2}}.\] Therefore the proof is completed. Based on Lemmas 3.1-3.2 and the diffeomorphisms \((n_{0},T_{0})\leftrightarrow(P_{0},S)\), \((\rho,\theta)\leftrightarrow(\mathcal{P},\eta)\) of \((0,\infty)\times(0,\infty)\), there exist positive constants \(\bar{C}_{0}\), \(\bar{c}_{j}\) (\(j=1,2,3,4\)) which are independent of \(\mathfrak{c}\), such that for any \((t,x)\in[0,T]\times\mathbb{R}^{3}\), there holds \[|u(t,x)|\leq\bar{C}_{0},\quad 0<4\bar{c}_{1}\leq T_{0}(t,x)\leq\frac{1}{4 \bar{c}_{1}},\quad 0<4\bar{c}_{2}\leq\theta(t,x)\leq\frac{1}{4\bar{c}_{2}} \tag{3.45}\] and \[0<\bar{c}_{3}\leq n_{0}(t,x)\leq\frac{1}{\bar{c}_{3}},\quad 0<\bar{c}_{4} \leq\rho(t,x)\leq\frac{1}{\bar{c}_{4}}. \tag{3.46}\] ## 4. Uniform-in-\(\mathfrak{c}\) estimates on the linearized collision operators We first present a useful lemma which is very similar to [18, Lemma 3.1]. Since the proof is similar, we omit the details here for brevity. 
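The elementary inequalities in the lemma below can be spot-checked numerically before they are put to use. A minimal sketch (Python with numpy assumed; the light speed \(\mathfrak{c}\) and the random sample momenta are illustrative), built from \(g^{2}=2p^{0}q^{0}-2p\cdot q-2\mathfrak{c}^{2}\) (see (4.34) below), \(s=g^{2}+4\mathfrak{c}^{2}\) and \(v_{\phi}=\frac{\mathfrak{c}}{4}\frac{g\sqrt{s}}{p^{0}q^{0}}\):

```python
import numpy as np

rng = np.random.default_rng(0)
c = 5.0                                  # illustrative sample light speed
for _ in range(1000):
    p, q = 3 * rng.normal(size=3), 3 * rng.normal(size=3)
    p0, q0 = np.sqrt(c**2 + p @ p), np.sqrt(c**2 + q @ q)
    g2 = 2 * p0 * q0 - 2 * p @ q - 2 * c**2
    g, s = np.sqrt(g2), g2 + 4 * c**2
    d = np.linalg.norm(p - q)
    lo = np.sqrt(np.linalg.norm(np.cross(p, q))**2 + c**2 * d**2) / np.sqrt(p0 * q0)
    assert lo <= g + 1e-10 and g <= d + 1e-10        # part (i), first chain
    assert g2 < s <= 4 * p0 * q0                     # part (i), second chain
    vphi = 0.25 * c * g * np.sqrt(s) / (p0 * q0)
    assert vphi <= min(c, d / 2) + 1e-10             # part (ii)
    l1, j1 = c * (p0 + q0) / 2, c * np.linalg.norm(np.cross(p, q)) / g
    assert np.isclose(l1**2 - j1**2, s * c**2 * d**2 / (4 * g2))   # part (iii)
print("kinematic inequalities (i)-(iii) verified on random samples")
```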
**Lemma 4.1**.: ([18]) _Denote_

\[\boldsymbol{\ell}_{1}:=\mathfrak{c}\frac{p^{0}+q^{0}}{2},\quad\boldsymbol{j}_{1}:=\mathfrak{c}\frac{|p\times q|}{g},\]

_then there hold_

\[(i)\quad\frac{\sqrt{|p\times q|^{2}+\mathfrak{c}^{2}|p-q|^{2}}}{\sqrt{p^{0}q^{0}}}\leq g\leq|p-q|\ \ \text{and}\ \ g^{2}<s\leq 4p^{0}q^{0}.\]
\[(ii)\quad v_{\phi}=\frac{\mathfrak{c}}{4}\frac{g\sqrt{s}}{p^{0}q^{0}}\leq\min\Big{\{}\mathfrak{c},\frac{|p-q|}{2}\Big{\}}.\]
\[(iii)\quad\boldsymbol{\ell}_{1}^{2}-\boldsymbol{j}_{1}^{2}=\frac{s\mathfrak{c}^{2}}{4g^{2}}|p-q|^{2}=\frac{\mathfrak{c}^{2}g^{2}+4\mathfrak{c}^{4}}{4g^{2}}|p-q|^{2}\geq\mathfrak{c}^{4}+\frac{\mathfrak{c}^{2}}{4}|p-q|^{2},\quad p\neq q.\]
\[(iv)\quad\lim_{\mathfrak{c}\to\infty}\frac{g}{|p-q|}=\lim_{\mathfrak{c}\to\infty}\frac{s}{4\mathfrak{c}^{2}}=\lim_{\mathfrak{c}\to\infty}\frac{\boldsymbol{\ell}_{1}}{\mathfrak{c}^{2}}=\lim_{\mathfrak{c}\to\infty}\frac{\boldsymbol{\ell}_{1}^{2}-\boldsymbol{j}_{1}^{2}}{\mathfrak{c}^{4}}=1,\quad p\neq q.\]

**Lemma 4.2**.: _Recall \(\bar{C}_{0}\) in (3.45) and \(\bar{p}\) in (2.5). For \(\mathfrak{c}\) suitably large, there hold_

\[\frac{1}{2}|p-q|\leq|\bar{p}-\bar{q}|\leq\frac{3}{2}|p-q|,\quad p\in\mathbb{R}^{3}, \tag{4.1}\]
\[\frac{1}{2}|p^{0}|\leq|\bar{p}^{0}|\leq\frac{3}{2}|p^{0}|,\quad p\in\mathbb{R}^{3}, \tag{4.2}\]
\[\frac{|p|}{2}-\bar{C}_{0}\leq|\bar{p}|\leq\frac{3|p|}{2}+\bar{C}_{0},\quad p\in\mathbb{R}^{3}, \tag{4.3}\]
\[\frac{1}{2}\leq\det\Big{(}\frac{\partial\bar{p}}{\partial p}\Big{)}\leq\frac{3}{2},\quad p\in\mathbb{R}^{3}. \tag{4.4}\]

Proof.: It follows from (2.5) that

\[\bar{p}-\bar{q}=p-q+\big{(}\frac{u^{0}}{\mathfrak{c}}-1\big{)}\frac{u\cdot(p-q)}{|u|^{2}}u-\frac{p^{0}-q^{0}}{\mathfrak{c}}u \tag{4.5}\]

and

\[\bar{p}^{0}=\frac{u^{0}}{\mathfrak{c}}p^{0}-\frac{u\cdot p}{\mathfrak{c}}=p^{0}+\Big{(}\frac{u^{0}}{\mathfrak{c}}-1\Big{)}p^{0}-\frac{u\cdot p}{\mathfrak{c}}. \tag{4.6}\]

For \(\mathfrak{c}\) so large that \(\frac{\bar{C}_{0}}{\mathfrak{c}}\leq\frac{1}{4}\), it holds that

\[\Big{|}\big{(}\frac{u^{0}}{\mathfrak{c}}-1\big{)}\frac{u\cdot(p-q)}{|u|^{2}}u\Big{|}+\Big{|}\frac{p^{0}-q^{0}}{\mathfrak{c}}u\Big{|}\leq\frac{|u|^{2}}{\mathfrak{c}(u^{0}+\mathfrak{c})}|p-q|+\frac{|u|}{\mathfrak{c}}|p-q|\leq\frac{1}{2}|p-q|\]

and

\[\Big{|}\Big{(}\frac{u^{0}}{\mathfrak{c}}-1\Big{)}p^{0}-\frac{u\cdot p}{\mathfrak{c}}\Big{|}\leq\frac{|u|^{2}}{\mathfrak{c}(u^{0}+\mathfrak{c})}p^{0}+\frac{|u|}{\mathfrak{c}}p^{0}\leq\frac{1}{2}p^{0},\]

which, together with (4.5) and (4.6), yield (4.1) and (4.2). Observing

\[\bar{p}_{i}=p_{i}+\Big{(}\frac{u^{0}}{\mathfrak{c}}-1\Big{)}\frac{u_{i}}{|u|^{2}}\sum_{j=1}^{3}u_{j}p_{j}-\frac{u_{i}}{\mathfrak{c}}p^{0}, \tag{4.7}\]

we have

\[\Big{|}\big{(}\frac{u^{0}}{\mathfrak{c}}-1\big{)}\frac{u\cdot p}{|u|^{2}}u\Big{|}+\Big{|}\frac{p^{0}}{\mathfrak{c}}u\Big{|}\leq\frac{|u|^{2}}{\mathfrak{c}(u^{0}+\mathfrak{c})}|p|+\frac{|u|}{\mathfrak{c}}|p|+|u|\leq\frac{|p|}{2}+\bar{C}_{0},\]

which, together with (4.7), implies (4.3). It follows from (4.7) that

\[\frac{\partial\bar{p}_{i}}{\partial p_{j}}=\delta_{ij}+\Big{(}\frac{u^{0}}{\mathfrak{c}}-1\Big{)}\frac{u_{i}}{|u|^{2}}u_{j}-\frac{u_{i}}{\mathfrak{c}}\frac{p_{j}}{p^{0}}. \tag{4.8}\]

For \(\mathfrak{c}\) suitably large, it is clear that

\[\Big{|}\Big{(}\frac{u^{0}}{\mathfrak{c}}-1\Big{)}\frac{u_{i}}{|u|^{2}}u_{j}-\frac{u_{i}}{\mathfrak{c}}\frac{p_{j}}{p^{0}}\Big{|}\leq\frac{|u|^{2}}{\mathfrak{c}(u^{0}+\mathfrak{c})}+\frac{|u|}{\mathfrak{c}}\leq\frac{1}{16},\]

which, together with (4.8), implies (4.4). Therefore the proof is completed.
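The boost facts just established can also be confirmed numerically. A minimal sketch (Python with numpy assumed; \(u\) and \(\mathfrak{c}\) are illustrative samples) implements \(\bar{p}\) from (2.5) and the Jacobian (4.8), and checks that the mass shell and \(g\) are preserved and that \(\det(\partial\bar{p}/\partial p)\) coincides with \(\bar{p}^{0}/p^{0}\) (the classical invariance of \(dp/p^{0}\)), consistent with (4.2) and (4.4):

```python
import numpy as np

rng = np.random.default_rng(1)
c = 50.0                                  # illustrative sample light speed
u = np.array([1.0, -2.0, 0.5])            # sample bulk velocity, |u| small vs c
u0 = np.sqrt(c**2 + u @ u)

def bar(p):                               # the boosted momentum from (2.5)
    p0 = np.sqrt(c**2 + p @ p)
    pbar0 = (u0 * p0 - u @ p) / c
    pbar = p - (p0 / c) * u + (u0 / c - 1) * (u @ p) / (u @ u) * u
    return pbar0, pbar

for _ in range(200):
    p, q = 5 * rng.normal(size=3), 5 * rng.normal(size=3)
    p0, q0 = np.sqrt(c**2 + p @ p), np.sqrt(c**2 + q @ q)
    pbar0, pbar = bar(p)
    qbar0, qbar = bar(q)
    # mass shell preserved: (pbar^0)^2 - |pbar|^2 = c^2
    assert np.isclose(pbar0**2 - pbar @ pbar, c**2)
    # g is Lorentz invariant: g(p, q) = g(pbar, qbar)
    assert np.isclose(2 * p0 * q0 - 2 * p @ q, 2 * pbar0 * qbar0 - 2 * pbar @ qbar)
    # Jacobian (4.8) and the bound (4.4)
    J = (np.eye(3) + (u0 / c - 1) * np.outer(u, u) / (u @ u)
         - np.outer(u, p) / (c * p0))
    det = np.linalg.det(J)
    assert np.isclose(det, pbar0 / p0) and 0.5 <= det <= 1.5
print("boost identities and (4.4) verified on random samples")
```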
**Lemma 4.3**.: _Recall \(\bar{c}_{1}\) in (3.45). Then there hold_ \[k_{\mathfrak{c}1}(p,q)\lesssim|\bar{p}-\bar{q}|e^{-2\bar{c}_{1}|\bar{p}|-2\bar {c}_{1}|\bar{q}|}\lesssim|p-q|e^{-\bar{c}_{1}|p|-\bar{c}_{1}|q|} \tag{4.9}\] _and_ \[k_{\mathfrak{c}2}(p,q)\lesssim\Big{[}\frac{1}{\mathfrak{c}}+\frac{1}{|\bar{p}- \bar{q}|}\Big{]}e^{-2\bar{c}_{1}|\bar{p}-\bar{q}|}\lesssim\Big{[}\frac{1}{ \mathfrak{c}}+\frac{1}{|p-q|}\Big{]}e^{-\bar{c}_{1}|p-q|}. \tag{4.10}\] _Moreover, it holds that_ \[k_{\epsilon 2}(p,q)\lesssim\frac{1}{|\bar{p}-\bar{q}|}e^{-\bar{c}_{1}|\bar{p}- \bar{q}|}\lesssim\frac{1}{|p-q|}e^{-\frac{c_{1}}{2}|p-q|}. \tag{4.11}\] Proof.: For any \(p\in\mathbb{R}^{3}\), it is clear that \[\mathfrak{c}^{2}-\mathfrak{c}p^{0}=\mathfrak{c}^{2}(1-\sqrt{1+\frac{|p|^{2}}{ \mathfrak{c}^{2}}})=-\frac{|p|^{2}}{1+\sqrt{1+\frac{|p|^{2}}{\mathfrak{c}^{2}} }},\] which yields \[-\frac{|p|^{2}}{2}\leq\mathfrak{c}^{2}-\mathfrak{c}p^{0}\leq-\frac{|p|^{2}}{1+ \sqrt{1+|p|^{2}}}=-\sqrt{1+|p|^{2}}+1\leq-|p|+1. \tag{4.12}\] It follows from (2.2), Lemmas 4.1-4.2 and (4.12) that \[k_{\epsilon 1}(p,q) \lesssim|p-q|\exp\Big{(}\frac{\mathfrak{c}^{2}+u^{\mu}p_{\mu}}{2 T_{0}}\Big{)}\exp\Big{(}\frac{\mathfrak{c}^{2}+u^{\mu}q_{\mu}}{2T_{0}}\Big{)}\] \[=|p-q|\exp\Big{(}\frac{\mathfrak{c}^{2}-\mathfrak{c}\bar{p}^{0} }{2T_{0}}\Big{)}\exp\Big{(}\frac{\mathfrak{c}^{2}-\mathfrak{c}\bar{q}^{0}}{2 T_{0}}\Big{)}\] \[\lesssim|p-q|\exp\Big{(}-\frac{|\bar{p}|+|\bar{q}|}{2T_{0}}\Big{)} \lesssim|p-q|\exp\Big{(}-\frac{|p|+|q|}{4T_{0}}\Big{)}\] and (4.9) follows. For (4.10), observing \(J_{2}(\bar{\boldsymbol{\ell}},\bar{\boldsymbol{j}})\leq J_{1}(\bar{\boldsymbol {\ell}},\bar{\boldsymbol{j}})\), we have from (2.7)-(2.8) that \[k_{\epsilon 2}(p,q)\lesssim\mathfrak{c}\frac{s^{3/2}}{gp^{0}q^{0}}e ^{\frac{\bar{\boldsymbol{\ell}}^{2}}{T_{0}}}J_{1}(\bar{\boldsymbol{\ell}}, \bar{\boldsymbol{j}}) \lesssim\mathfrak{c}\frac{s^{3/2}}{gp^{0}q^{0}}\frac{\bar{ \boldsymbol{\ell}}}{\bar{\boldsymbol{\ell}}^{2}-\bar{\boldsymbol{j}}^{2}} \left[1+\frac{1}{\sqrt{\bar{\boldsymbol{\ell}}^{2}-\bar{\boldsymbol{j}}^{2}} }\right]e^{\frac{\mathfrak{c}^{2}-\sqrt{\boldsymbol{\ell}^{2}-\bar{\boldsymbol{j} }^{2}}}{T_{0}}}\] \[\lesssim\mathfrak{c}\frac{s^{3/2}}{gp^{0}q^{0}}\frac{\bar{ \boldsymbol{\ell}}}{\bar{\boldsymbol{\ell}}^{2}-\bar{\boldsymbol{j}}^{2}}e^{ \frac{\mathfrak{c}^{2}-\sqrt{\boldsymbol{\ell}^{2}-\bar{\boldsymbol{j}}^{2}}}{ T_{0}}}.\] It follows from Lemma 4.1 that \[\mathfrak{c}^{2}-\sqrt{\boldsymbol{\ell}^{2}-\boldsymbol{j}^{2}} \leq\mathfrak{c}^{2}-\sqrt{\mathfrak{c}^{4}+\frac{\mathfrak{c}^{ 2}}{4}|\bar{p}-\bar{q}|^{2}}\leq-\frac{\mathfrak{c}^{2}}{4}\frac{|\bar{p}-\bar {q}|^{2}}{\mathfrak{c}^{2}+\sqrt{\mathfrak{c}^{4}+\frac{\mathfrak{c}^{2}}{4} |\bar{p}-\bar{q}|^{2}}}\] \[=-\frac{1}{4}\frac{|\bar{p}-\bar{q}|^{2}}{1+\sqrt{1+\frac{1}{4 \mathfrak{c}^{2}}|\bar{p}-\bar{q}|^{2}}}\leq-\frac{1}{4}\frac{|\bar{p}-\bar{ q}|^{2}}{1+\sqrt{1+\frac{1}{4}|\bar{p}-\bar{q}|^{2}}}\] \[=-\sqrt{1+\frac{1}{4}|\bar{p}-\bar{q}|^{2}}+1\leq-\frac{|\bar{p}- \bar{q}|}{2}+1,\] then we have \[k_{\epsilon 2}(p,q)\lesssim \mathfrak{c}\frac{s^{3/2}}{gp^{0}q^{0}}\frac{\mathfrak{c}(\bar{p }^{0}+\bar{q}^{0})}{2T_{0}}\frac{1}{\frac{s\mathfrak{c}^{2}|\bar{p}-\bar{q}|^ {2}}{4g^{2}T_{0}^{2}}}e^{-\frac{|\bar{p}-\bar{q}|}{2T_{0}}}\lesssim\frac{s^{1 /2}(\bar{p}^{0}+\bar{q}^{0})}{p^{0}q^{0}}\frac{g}{|\bar{p}-\bar{q}|^{2}}e^{- \frac{|\bar{p}-\bar{q}|}{2T_{0}}}\] \[\lesssim \frac{\sqrt{g^{2}+4\mathfrak{c}^{2}}(\bar{p}^{0}+\bar{q}^{0})}{ 
p^{0}q^{0}|\bar{p}-\bar{q}|}e^{-\frac{|\bar{p}-\bar{q}|}{2T_{0}}}\lesssim\frac{(|\bar{p}-\bar{q}|+\mathfrak{c})(\bar{p}^{0}+\bar{q}^{0})}{p^{0}q^{0}}\frac{1}{|\bar{p}-\bar{q}|}e^{-\frac{|\bar{p}-\bar{q}|}{2T_{0}}}\]
\[\lesssim\Big{[}\frac{1}{p^{0}}+\frac{1}{q^{0}}+\frac{1}{|p-q|}\Big{(}\frac{\mathfrak{c}}{p^{0}}+\frac{\mathfrak{c}}{q^{0}}\Big{)}\Big{]}e^{-\frac{|\bar{p}-\bar{q}|}{4T_{0}}}\lesssim\Big{[}\frac{1}{\mathfrak{c}}+\frac{1}{|p-q|}\Big{]}e^{-\bar{c}_{1}|p-q|}, \tag{4.13}\]

where we used the fact that both \(s\) and \(g\) are Lorentz invariant, i.e.,

\[s(p,q)=s(\bar{p},\bar{q}),\quad g(p,q)=g(\bar{p},\bar{q}).\]

Moreover, it follows from the fourth inequality of (4.13) that

\[k_{\mathfrak{c}2}(p,q)\lesssim\frac{(|\bar{p}-\bar{q}|+\mathfrak{c})(\bar{p}^{0}+\bar{q}^{0})}{p^{0}q^{0}}\frac{1}{|\bar{p}-\bar{q}|}e^{-\frac{|\bar{p}-\bar{q}|}{2T_{0}}}\lesssim(|\bar{p}-\bar{q}|+1)\frac{1}{|\bar{p}-\bar{q}|}e^{-\frac{|\bar{p}-\bar{q}|}{2T_{0}}}\]
\[\lesssim\frac{1}{|\bar{p}-\bar{q}|}e^{-\frac{|\bar{p}-\bar{q}|}{4T_{0}}}\lesssim\frac{1}{|\bar{p}-\bar{q}|}e^{-\bar{c}_{1}|\bar{p}-\bar{q}|}\lesssim\frac{1}{|p-q|}e^{-\frac{\bar{c}_{1}}{2}|p-q|},\]

where the last two steps used \(T_{0}\leq\frac{1}{4\bar{c}_{1}}\) from (3.45) and Lemma 4.2. This proves (4.11). Therefore the proof is completed.

It follows from (3.45), (4.12) and Lemma 4.2 that

\[\mathbf{M}_{\mathfrak{c}}(q)\lesssim e^{-2\bar{c}_{1}|q|},\quad q\in\mathbb{R}^{3}. \tag{4.14}\]

**Lemma 4.4**.: _There hold_

\[\int_{\mathbb{R}^{3}}k_{\mathfrak{c}1}(p,q)dq\lesssim\frac{1}{1+|p|} \tag{4.15}\]

_and_

\[\int_{\mathbb{R}^{3}}k_{\mathfrak{c}2}(p,q)dq\lesssim\begin{cases}\frac{1}{1+|p|},\quad|p|\leq\mathfrak{c},\\ \frac{1}{\mathfrak{c}},\quad|p|\geq\mathfrak{c}.\end{cases} \tag{4.16}\]

Proof.: It follows from (4.9) that

\[\int_{\mathbb{R}^{3}}k_{\mathfrak{c}1}(p,q)dq\lesssim\int_{\mathbb{R}^{3}}|p-q|e^{-\bar{c}_{1}|p|-\bar{c}_{1}|q|}dq\lesssim(1+|p|)e^{-\bar{c}_{1}|p|}\lesssim\frac{1}{1+|p|},\]

which gives (4.15). For (4.16), since the proof is complicated, we split it into three cases.

_Case 1: \(|\bar{p}-\bar{q}|\geq\mathfrak{c}^{\frac{1}{8}}\)._ It follows from (4.11) and the change of variables \(q\to\bar{q}\) (see (4.4)) that

\[\int_{|\bar{p}-\bar{q}|\geq\mathfrak{c}^{\frac{1}{8}}}k_{\mathfrak{c}2}(p,q)dq\lesssim\int_{|\bar{p}-\bar{q}|\geq\mathfrak{c}^{\frac{1}{8}}}\frac{1}{|\bar{p}-\bar{q}|}e^{-\bar{c}_{1}|\bar{p}-\bar{q}|}d\bar{q}\lesssim e^{-\frac{\bar{c}_{1}}{2}\mathfrak{c}^{\frac{1}{8}}}. \tag{4.17}\]

_Case 2: \(|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}\) and \(|p|\leq\mathfrak{c}\)._ A direct calculation shows that

\[g^{2}-|p-q|^{2}=2p^{0}q^{0}-2\mathfrak{c}^{2}-|p|^{2}-|q|^{2}\]
\[=\frac{1}{p^{0}q^{0}+\mathfrak{c}^{2}}\Big{\{}-\frac{\mathfrak{c}^{2}(|p|^{2}+|q|^{2})+|p|^{2}|q|^{2}}{\mathfrak{c}^{2}+p^{0}q^{0}}(|p|^{2}+|q|^{2})+2|p|^{2}|q|^{2}\Big{\}}\]
\[=-\frac{(q^{0}|p|^{2}-p^{0}|q|^{2})^{2}}{(p^{0}q^{0}+\mathfrak{c}^{2})^{2}}, \tag{4.18}\]

which, together with Lemma 4.1, yields that

\[\mathfrak{c}^{2}-\sqrt{\boldsymbol{\ell}^{2}-\boldsymbol{j}^{2}}=\frac{\mathfrak{c}^{4}-(\boldsymbol{\ell}^{2}-\boldsymbol{j}^{2})}{\mathfrak{c}^{2}+\sqrt{\boldsymbol{\ell}^{2}-\boldsymbol{j}^{2}}}=\frac{\mathfrak{c}^{4}-\frac{s\mathfrak{c}^{2}}{4g^{2}}|\bar{p}-\bar{q}|^{2}}{\mathfrak{c}^{2}+\frac{\sqrt{s}\mathfrak{c}}{2g}|\bar{p}-\bar{q}|}=\frac{\mathfrak{c}^{2}-\frac{s}{4g^{2}}|\bar{p}-\bar{q}|^{2}}{1+\sqrt{\frac{s}{4\mathfrak{c}^{2}}}\frac{|\bar{p}-\bar{q}|}{g}}\]
\[=\frac{\mathfrak{c}^{2}-\frac{g^{2}+4\mathfrak{c}^{2}}{4g^{2}}|\bar{p}-\bar{q}|^{2}}{1+\sqrt{\frac{s}{4\mathfrak{c}^{2}}}\frac{|\bar{p}-\bar{q}|}{g}}=\frac{1}{1+\sqrt{\frac{s}{4\mathfrak{c}^{2}}}\frac{|\bar{p}-\bar{q}|}{g}}\Big{\{}-\frac{1}{4}|\bar{p}-\bar{q}|^{2}+\frac{\mathfrak{c}^{2}}{g^{2}}(g^{2}-|\bar{p}-\bar{q}|^{2})\Big{\}}\]
\[=-\frac{1}{4}\frac{|\bar{p}-\bar{q}|^{2}}{1+\sqrt{\frac{s}{4\mathfrak{c}^{2}}}\frac{|\bar{p}-\bar{q}|}{g}}-\frac{1}{1+\sqrt{\frac{s}{4\mathfrak{c}^{2}}}\frac{|\bar{p}-\bar{q}|}{g}}\frac{\mathfrak{c}^{2}}{g^{2}}\frac{(\bar{q}^{0}|\bar{p}|^{2}-\bar{p}^{0}|\bar{q}|^{2})^{2}}{(\bar{p}^{0}\bar{q}^{0}+\mathfrak{c}^{2})^{2}}.
\tag{4.19}\] Hence, it follows from (4.13) and (4.19) that \[\int_{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}}k_{\mathfrak{ c}2}(p,q)dq\] \[\lesssim\int_{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}} \frac{|\bar{p}-\bar{q}|+1}{|\bar{p}-\bar{q}|}e^{-\bar{c}_{1}|\bar{p}-\bar{q}|}e ^{\frac{1}{2\bar{r}_{0}}(\mathfrak{c}^{2}-\sqrt{\boldsymbol{\ell}^{2}- \boldsymbol{j}^{2}})}dq\] \[\lesssim\int_{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}} \frac{1}{|\bar{p}-\bar{q}|}e^{-\frac{\mathfrak{c}_{1}}{2}|\bar{p}-\bar{q}|}e ^{M(\bar{p},\bar{q})}\exp\Big{(}-\frac{1}{8\bar{r}_{0}}\frac{|\bar{p}-\bar{q}| ^{2}}{1+\sqrt{\frac{\mathfrak{s}}{4\mathfrak{c}^{2}}}\frac{|\bar{p}-\bar{q}|}{ g}}\Big{)}d\bar{q}\] \[\lesssim\int_{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}} \frac{1}{|\bar{p}-\bar{q}|}e^{-\frac{\mathfrak{c}_{1}}{2}|\bar{p}-\bar{q}|}e ^{M(\bar{p},\bar{q})}d\bar{q}, \tag{4.20}\] where we made a change of variables \(q\to\bar{q}\) and \[M(\bar{p},\bar{q}):=-\frac{1}{1+\sqrt{\frac{\mathfrak{s}}{4 \mathfrak{c}^{2}}}\frac{|\bar{p}-\bar{q}|}{g}}\frac{\mathfrak{c}^{2}}{g^{2}} \frac{(\bar{q}^{0}|\bar{p}|^{2}-\bar{p}^{0}|\bar{q}|^{2})^{2}}{(\bar{p}^{0} \bar{q}^{0}+\mathfrak{c}^{2})^{2}}\frac{1}{2\bar{T}_{0}}. \tag{4.21}\] Noting \(|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}\) and \(|p|\leq\mathfrak{c}\), one has \(|\bar{p}|\lesssim\mathfrak{c}\) and so \[\bar{p}^{0}=\sqrt{\mathfrak{c}^{2}+|\bar{p}|^{2}}\lesssim\mathfrak{c},\quad \bar{q}^{0}=\sqrt{\mathfrak{c}^{2}+|\bar{q}|^{2}}\leq\sqrt{\mathfrak{c}^{2}+2| \bar{p}-\bar{q}|^{2}+2|\bar{p}|^{2}}\lesssim\mathfrak{c}, \tag{4.22}\] which yields that \[(1+\sqrt{\frac{s}{4\mathfrak{c}^{2}}}\frac{|\bar{p}-\bar{q}|}{g}) g^{2}(\bar{p}^{0}\bar{q}^{0}+\mathfrak{c}^{2})^{2} \leq(1+\sqrt{1+\frac{g^{2}}{4\mathfrak{c}^{2}}})|\bar{p}-\bar{q}|^{2}(\bar {p}^{0}\bar{q}^{0}+\mathfrak{c}^{2})^{2}\] \[\lesssim\mathfrak{c}^{4}|\bar{p}-\bar{q}|^{2}.\] A direct calculation shows that \[\bar{p}^{0}|\bar{q}|^{2}-\bar{q}^{0}|\bar{p}|^{2} =\mathfrak{c}(|\bar{q}|^{2}-|\bar{p}|^{2})+(\bar{p}^{0}-\mathfrak{ c})|\bar{q}|^{2}-(\bar{q}^{0}-\mathfrak{c})|\bar{p}|^{2}\] \[=\mathfrak{c}(|\bar{q}|^{2}-|\bar{p}|^{2})+\frac{|\bar{p}|^{2}| \bar{q}|^{2}}{\bar{p}^{0}+\mathfrak{c}}-\frac{|\bar{p}|^{2}|\bar{q}|^{2}}{\bar{q} ^{0}+\mathfrak{c}}\] \[=\mathfrak{c}(|\bar{q}|^{2}-|\bar{p}|^{2})+\frac{|\bar{p}|^{2}| \bar{q}|^{2}}{(\bar{p}^{0}+\mathfrak{c})(\bar{q}^{0}+\mathfrak{c})}(\bar{q}^{0}- \bar{p}^{0})\] \[=\mathfrak{c}(|\bar{q}|^{2}-|\bar{p}|^{2})+\frac{|\bar{p}|^{2}| \bar{q}|^{2}(|\bar{q}|^{2}-|\bar{p}|^{2})}{(\bar{p}^{0}+\mathfrak{c})(\bar{q}^{0}+ \mathfrak{c})(\bar{p}^{0}+\mathfrak{c}^{2})^{2}}. \tag{4.23}\] Thus, in view of (3.45) and (4.22)-(4.23), there exists a positive constant \(\alpha_{0}\) which is independent of \(\mathfrak{c}\) such that \[M(\bar{p},\bar{q}) \leq-\alpha_{0}\frac{1}{\mathfrak{c}^{2}}\frac{1}{|\bar{p}-\bar{q} |^{2}}\left(\mathfrak{c}(|\bar{q}|^{2}-|\bar{p}|^{2})+\frac{|\bar{p}|^{2}|\bar{ q}|^{2}(|\bar{q}|^{2}-|\bar{p}|^{2})}{(\bar{p}^{0}+\mathfrak{c})(\bar{q}^{0}+ \mathfrak{c})(\bar{p}^{0}+\bar{q}^{0})}\right)^{2}\] \[\leq-\alpha_{0}\frac{(|\bar{q}|^{2}-|\bar{p}|^{2})^{2}}{|\bar{p}- \bar{q}|^{2}}. 
\tag{4.24}\] Combining (4.20) and (4.24), one has that \[\int_{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}}k_{\mathfrak{c}2}(p,q) dq\lesssim\int_{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}}\frac{1}{|\bar{p}- \bar{q}|}e^{-\frac{\mathfrak{c}_{1}}{2}|\bar{p}-\bar{q}|}e^{-\alpha_{0}\frac {(|\bar{q}|^{2}-|\bar{p}|^{2})^{2}}{|\bar{p}-\bar{q}|^{2}}}d\bar{q}.\] By taking similar arguments as in [17, Lemma 3.3.1] (see also Case 3 below), we obtain \[\int_{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}}k_{\mathfrak{c}2}(p,q) dq\lesssim\frac{1}{1+|\bar{p}|}\lesssim\frac{1}{1+|p|},\quad\text{for}\ |p|\leq\mathfrak{c}. \tag{4.25}\] _Case 3: \(|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}\) and \(|p|\geq\mathfrak{c}\)._ It is clear that \(|\bar{p}|\gtrsim\mathfrak{c}\). Noting \[|\bar{q}|\leq|\bar{q}-\bar{p}|+|\bar{p}|\lesssim|\bar{p}|,\quad|\bar{q}|\geq| \bar{p}|-|\bar{p}-\bar{q}|\gtrsim|\bar{p}|,\] then we have \[|\bar{p}|\cong|\bar{q}|,\quad\bar{p}^{0}\cong\bar{q}^{0}.\] Hence it is clear that \[(1+\sqrt{\frac{s}{4\mathfrak{c}^{2}}}\frac{|\bar{p}-\bar{q}|}{g })g^{2}(\bar{p}^{0}\bar{q}^{0}+\mathfrak{c}^{2})^{2} \leq(1+\sqrt{1+\frac{g^{2}}{4\mathfrak{c}^{2}}})|\bar{p}-\bar{q} |^{2}(\bar{p}^{0}\bar{q}^{0}+\mathfrak{c}^{2})^{2}\] \[\lesssim|\bar{p}-\bar{q}|^{2}(\mathfrak{c}^{2}+|\bar{p}|^{2})^{2}. \tag{4.26}\] For \(|\bar{p}|\gtrsim\mathfrak{c}\), it holds that \[\mathfrak{c}+\frac{|\bar{p}|^{2}|\bar{q}|^{2}}{(\bar{p}^{0}+ \mathfrak{c})(\bar{q}^{0}+\mathfrak{c})(\bar{p}^{0}+\bar{q}^{0})}\cong \mathfrak{c}+\frac{|\bar{p}|^{4}}{(\mathfrak{c}^{0})^{3}}\cong\mathfrak{c}+ \frac{|\bar{p}|^{4}}{(\mathfrak{c}^{2}+|\bar{p}|^{2})^{\frac{3}{2}}}\cong \mathfrak{c}+|\bar{p}|,\] which, together with (4.23), yields that \[(\bar{p}^{0}|\bar{q}|^{2}-\bar{q}^{0}|\bar{p}|^{2})^{2}\cong(|\bar{q}|^{2}-| \bar{p}|^{2})^{2}(\mathfrak{c}^{2}+|\bar{p}|^{2}). \tag{4.27}\] Combining (4.21), (4.26) and (4.27), for some positive constant \(\alpha_{1}\) which is independent of \(\mathfrak{c}\), we have \[M(\bar{p},\bar{q})\leq-\alpha_{1}\frac{\mathfrak{c}^{2}}{\mathfrak{c}^{2}+| \bar{p}|^{2}}\frac{(|\bar{q}|^{2}-|\bar{p}|^{2})^{2}}{|\bar{p}-\bar{q}|^{2}}. 
\tag{4.28}\] Hence, for \(|p|\geq\mathfrak{c}\), it follows from (4.20) and (4.28) that \[\int_{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}}k_{\mathfrak{ c}2}(p,q)dq \lesssim\int_{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}}\frac{1}{|\bar{p}- \bar{q}|}e^{-\frac{\mathfrak{c}_{1}}{2}|\bar{p}-\bar{q}|}e^{M(\bar{p},\bar{q})} d\bar{q}\] \[\lesssim\int_{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}} \frac{1}{|\bar{p}-\bar{q}|}e^{-\frac{\mathfrak{c}_{1}}{2}|\bar{p}-\bar{q}|}e^{- \alpha_{1}\frac{\mathfrak{c}^{2}}{\mathfrak{c}^{2}+|\bar{p}|^{2}}\frac{(|\bar{ q}|^{2}-|\bar{p}|^{2})^{2}}{|\bar{p}-\bar{q}|^{2}}}d\bar{q}.\] Following the arguments as in [17, Lemma 3.3.1], we can make a change of variables \[|\bar{p}-\bar{q}|=r,\quad(\bar{q}-\bar{p})\cdot\bar{p}=|\bar{p}|r\cos\theta, \quad 0\leq r<\infty,\ 0\leq\theta\leq\pi,\] which yields that \[|\bar{q}|^{2}=|\bar{q}-\bar{p}|^{2}+|\bar{p}|^{2}+2(\bar{q}-\bar{p})\cdot\bar{ p}=r^{2}+|\bar{p}|^{2}+2r|\bar{p}|\cos\theta.\] Denoting \(\alpha_{2}^{2}:=\alpha_{1}\frac{\mathfrak{c}^{2}}{\mathfrak{c}^{2}+|\bar{p}|^{2}}\) and \(u=\alpha_{2}(r+2|\bar{p}|\cos\theta)\), one has \[\int_{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}}k_{\mathfrak{ c}2}(p,q)dq \lesssim\int_{0}^{\infty}re^{-\frac{\mathfrak{c}_{1}}{2}r}dr\int_{ 0}^{\pi}e^{-\alpha_{2}^{2}(r+2|\bar{p}|\cos\theta)^{2}}\sin\theta d\theta\] \[\lesssim\frac{1}{\alpha_{2}|\bar{p}|}\int_{-\infty}^{\infty}e^{-u ^{2}}du\lesssim\frac{\sqrt{\mathfrak{c}^{2}+|\bar{p}|^{2}}}{\mathfrak{c}|\bar{ p}|}\] \[\lesssim\frac{1}{\mathfrak{c}},\] which, together with (4.17), (4.18), (4.25), yields (4.16). Therefore the proof is completed. By similar arguments as in Lemma 4.4, one can also obtain **Lemma 4.5**.: _There hold_ \[\int_{\mathbb{R}^{3}}k_{\mathfrak{c}1}^{2}(p,q)\Big{(}\frac{w_{\ell}(p)}{w_{ \ell}(q)}\Big{)}^{2}dq\lesssim\frac{1}{1+|p|}\] _and_ \[\int_{\mathbb{R}^{3}}k_{\mathfrak{c}2}^{2}(p,q)\Big{(}\frac{w_{\ell}(p)}{w_{ \ell}(q)}\Big{)}^{2}dq\lesssim\begin{cases}\frac{1}{1+|p|},\quad|p| \leq\mathfrak{c},\\ \frac{1}{\mathfrak{c}},\quad|p|\geq\mathfrak{c}.\end{cases}\] Recall \(k_{\mathfrak{c}}(p,q)=k_{\mathfrak{c}2}(p,q)-k_{\mathfrak{c}1}(p,q)\) in (2.9) and denote \[k_{ew}(p,q):=k_{\mathfrak{c}}(p,q)\frac{w_{\ell}(p)}{w_{\ell}(q)}.\] By similar arguments as in Lemma 4.4, one can also obtain \[\int_{\mathbb{R}^{3}}k_{ew}(p,q)e^{\frac{\mathfrak{c}_{1}}{4}|p-q|}dq \lesssim\begin{cases}\frac{1}{1+|p|},\quad\text{for }|p|\leq\mathfrak{c},\\ \frac{1}{\mathfrak{c}},\quad\text{for }|p|\geq\mathfrak{c}.\end{cases}\] Next we estimate the collision frequency \(\nu_{\mathfrak{c}}(p)\). **Lemma 4.6**.: _It holds that_ \[\nu_{\mathfrak{c}}(p)\cong\begin{cases}1+|p|,\quad|p|\leq\mathfrak{c},\\ \mathfrak{c},\quad|p|\geq\mathfrak{c}.\end{cases} \tag{4.29}\] Proof.: Recall \[\nu_{\mathfrak{c}}(p)=\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}\frac{ \mathfrak{c}}{4}\frac{g\sqrt{s}}{p^{0}q^{0}}\mathbf{M}_{\mathfrak{c}}(q)d \omega dq.\] Since the proof is complicated, we split it into four cases. _Case 1:_\(|q|\geq\mathfrak{c}^{\frac{1}{8}}\). Using Lemma 4.1 and (4.14), one has \[\int_{|q|\geq\mathfrak{c}^{\frac{1}{8}}}\int_{\mathbb{S}^{2}}\frac{\mathfrak{ c}}{4}\frac{g\sqrt{s}}{p^{0}q^{0}}\mathbf{M}_{\mathfrak{c}}(q)d\omega dq\lesssim \int_{|q|\geq\mathfrak{c}^{\frac{1}{8}}}\mathfrak{c}e^{-2\bar{c}_{1}|q|}dq \lesssim e^{-\bar{c}_{1}\mathfrak{c}^{\frac{1}{8}}}. \tag{4.30}\] _Case 2:_\(|q|\leq\mathfrak{c}^{\frac{1}{8}}\) and \(|p|\leq\mathfrak{c}^{\frac{2}{8}}\). 
It holds that \[\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}\int_{\mathbb{S}^{2}}\frac {\mathfrak{c}g\sqrt{s}}{4p^{0}q^{0}}\frac{n_{0}\gamma}{4\pi\mathfrak{c}^{3}K_ {2}(\gamma)}\exp\Big{(}\frac{u^{\mu}q_{\mu}}{T_{0}}\Big{)}d\omega dq\] \[=\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}\int_{\mathbb{S}^{2}} \frac{\mathfrak{c}g\sqrt{s}}{4p^{0}q^{0}}\frac{n_{0}}{(2\pi T_{0})^{\frac{1}{ 8}}}(1+O(\gamma^{-1}))\exp\Big{(}\frac{\mathfrak{c}^{2}-\mathfrak{c}\mathfrak{ q}^{0}}{T_{0}}\Big{)}d\omega dq\] \[=\frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2}}}\int_{|q|\leq\epsilon^{\frac{1}{8}}} \int_{\mathbb{S}^{2}}\frac{\varsigma g\sqrt{s}}{4p^{0}q^{0}}\exp\Big{(}\frac{ \mathfrak{c}^{2}-\varsigma\bar{q}^{0}}{T_{0}}\Big{)}d\omega dq\cdot O(\gamma^{-1})\] \[\qquad+\frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2}}}\int_{|q|\leq \epsilon^{\frac{1}{8}}}\int_{\mathbb{S}^{2}}\Big{(}\frac{\varsigma g\sqrt{s}}{4 p^{0}q^{0}}-\frac{|p-q|}{2}\Big{)}\exp\Big{(}\frac{\mathfrak{c}^{2}-\varsigma\bar{q}^{0}}{T_ {0}}\Big{)}d\omega dq\] \[\qquad+\frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2}}}\int_{|q|\leq \epsilon^{\frac{1}{8}}}\int_{\mathbb{S}^{2}}\frac{|p-q|}{2}\exp\Big{(}\frac{ \mathfrak{c}^{2}-\varsigma\bar{q}^{0}}{T_{0}}\Big{)}d\omega dq\] \[:=\mathcal{H}_{1}+\mathcal{H}_{2}+\mathcal{H}_{3}. \tag{4.31}\] It is clear that \[|\mathcal{H}_{1}|\lesssim\frac{1}{\mathfrak{c}^{2}}\int_{|q|\leq \epsilon^{\frac{1}{8}}}|p-q|e^{-2\bar{c}_{1}|q|}dq\cong\frac{1+|p|}{\mathfrak{ c}^{2}}\lesssim\mathfrak{c}^{-\frac{13}{8}}. \tag{4.32}\] Using Lemma 4.2, (3.45) and (4.12), we have \[\mathcal{H}_{3}\gtrsim\int_{|\bar{q}|\leq\frac{1}{2}\mathfrak{c}^{ \frac{1}{8}}}|\bar{p}-\bar{q}|\exp\Big{(}-\frac{|\bar{q}|^{2}}{8\bar{c}_{1}} \Big{)}d\bar{q}\cong 1+|\bar{p}|\gtrsim 1+|p|. \tag{4.33}\] For \(\mathcal{H}_{2}\), notice that \[g^{2} =2p^{0}q^{0}-2p\cdot q-2\mathfrak{c}^{2}=|p-q|^{2}+2p^{0}q^{0}-2 \mathfrak{c}^{2}-|p|^{2}-|q|^{2}\] \[=|p-q|^{2}+\frac{4(|p|^{2}+\mathfrak{c}^{2})(|q|^{2}+\mathfrak{c }^{2})-(2\mathfrak{c}^{2}+|p|^{2}+|q|^{2})^{2}}{2p^{0}q^{0}+(2\mathfrak{c}^{2} +|p|^{2}+|q|^{2})}\] \[=|p-q|^{2}-\frac{(|p|^{2}-|q|^{2})^{2}}{2p^{0}q^{0}+(2\mathfrak{ c}^{2}+|p|^{2}+|q|^{2})}, \tag{4.34}\] then one has \[\frac{\mathfrak{c}g\sqrt{s}}{4p^{0}q^{0}}-\frac{|p-q|}{2} =\frac{1}{4p^{0}q^{0}}\{\varsigma g\sqrt{s}-2p^{0}q^{0}|p-q|\}\] \[=\frac{\mathfrak{c}^{2}g^{2}(g^{2}+4\mathfrak{c}^{2})-4|p-q|^{2}( |p|^{2}+\mathfrak{c}^{2})(|q|^{2}+\mathfrak{c}^{2})}{4p^{0}q^{0}(\varsigma g \sqrt{s}+2p^{0}q^{0}|p-q|)}\] \[=\frac{4\mathfrak{c}^{4}(g^{2}-|p-q|^{2})+\mathfrak{c}^{2}g^{4}- 4|p-q|^{2}\{|p|^{2}|q|^{2}+\mathfrak{c}^{2}(|p|^{2}+|q|^{2})\}}{4p^{0}q^{0}( \varsigma g\sqrt{s}+2p^{0}q^{0}|p-q|)}\] \[\lesssim O(\mathfrak{c}^{-\frac{\tau}{8}}),\] which implies that \[|\mathcal{H}_{2}|\lesssim\int_{|q|\leq\epsilon^{\frac{1}{8}}} \mathfrak{c}^{-\frac{\tau}{8}}e^{-2\bar{c}_{1}|q|}dq\lesssim\mathfrak{c}^{- \frac{\tau}{8}}. \tag{4.35}\] It follows from (4.31)-(4.33) and (4.35) that \[\int_{|q|\leq\epsilon^{\frac{1}{8}}}\int_{\mathbb{S}^{2}}\frac{ \varsigma}{4}\frac{g\sqrt{s}}{p^{0}q^{0}}\mathbf{M}_{\mathfrak{c}}(q)d\omega dq \cong 1+|p|. \tag{4.36}\] _Case 3:_\(|q|\leq\mathfrak{c}^{\frac{1}{8}}\) and \(\mathfrak{c}\geq|p|\geq\mathfrak{c}^{\frac{3}{8}}\). It follows from Lemma 4.1 that \[g\geq\frac{\mathfrak{c}|p-q|}{\sqrt{p^{0}q^{0}}}\gtrsim\frac{ \mathfrak{c}|p|}{\mathfrak{c}}=|p|\] and \[g\leq|p-q|\lesssim|p|,\] which yields that \(g\cong|p|\). 
Thus we have \[\int_{|q|\leq\epsilon^{\frac{1}{8}}}\int_{\mathbb{S}^{2}}\frac{ \varsigma}{4}\frac{g\sqrt{s}}{p^{0}q^{0}}\mathbf{M}_{\mathfrak{c}}(q)d\omega dq \cong\int_{|q|\leq\epsilon^{\frac{1}{8}}}\int_{\mathbb{S}^{2}}|p|\exp\Big{(} \frac{\mathfrak{c}^{2}-\varsigma\bar{q}^{0}}{T_{0}}\Big{)}d\omega dq\cong 1+|p|. \tag{4.37}\] _Case 4:_\(|q|\leq\mathfrak{c}^{\frac{1}{8}}\) and \(|p|\geq\mathfrak{c}\). It is obvious that \[\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}\int_{\mathbb{S}^{2}}\frac{\mathfrak{c}} {4}\frac{g\sqrt{s}}{p^{0}q^{0}}\mathbf{M}_{\mathfrak{c}}(q)d\omega dq\lesssim \int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}\mathfrak{c}e^{-2\varepsilon_{1}|q|}dq \lesssim\mathfrak{c}. \tag{4.38}\] On the other hand, since \(|p|\geq\mathfrak{c}\), one has \[g\geq\frac{\mathfrak{c}|p-q|}{\sqrt{p^{0}q^{0}}}\gtrsim\frac{\mathfrak{c}|p|}{(| p|^{2}+\mathfrak{c}^{2})^{\frac{1}{4}}\sqrt{\mathfrak{c}}}\gtrsim\sqrt{ \mathfrak{c}|p|}.\] Thus we have \[\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}\int_{\mathbb{S}^{2}}\frac{\mathfrak{c }}{4}\frac{g\sqrt{s}}{p^{0}q^{0}}\mathbf{M}_{\mathfrak{c}}(q)d\omega dq\gtrsim \int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}\frac{\sqrt{\mathfrak{c}|p|}\sqrt{ \mathfrak{c}^{2}+\mathfrak{c}|p|}}{p^{0}}\exp\Big{(}\frac{\mathfrak{c}^{2}- \mathfrak{c}\mathfrak{q}^{0}}{T_{0}}\Big{)}dq\gtrsim\mathfrak{c}. \tag{4.39}\] It follows from (4.38) and (4.39) that \[\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}\int_{\mathbb{S}^{2}}\frac{\mathfrak{c }}{4}\frac{g\sqrt{s}}{p^{0}q^{0}}\mathbf{M}_{\mathfrak{c}}(q)d\omega dq\cong \mathfrak{c}. \tag{4.40}\] Combining (4.30), (4.36), (4.37) and (4.40), we conclude (4.29). Therefore the proof is completed. **Remark 4.7**.: By similar arguments as in Lemma 4.6, we can obtain \[\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}v_{\phi}\mathbf{M}_{\mathfrak{c}}^{ \alpha}(q)d\omega dq\cong\nu_{\mathfrak{c}}(p),\quad\text{for }\alpha>0. \tag{4.41}\] ### Uniform-in-\(\mathfrak{c}\) coercivity estimate on \(\mathbf{L}_{\mathfrak{c}}\) In this subsection, we shall derive a uniform-in-\(\mathfrak{c}\) coercivity estimate for the linearized relativistic collision operator \(\mathbf{L}_{\mathfrak{c}}\). For later use, we denote \[k_{1}(p,q) :=2\pi|p-q|\frac{\rho}{(2\pi\theta)^{\frac{3}{2}}}e^{-\frac{|p-u |^{2}}{4\theta}-\frac{|q-u|^{2}}{4\theta}}, \tag{4.42}\] \[k_{2}(p,q) :=\frac{2}{|p-q|}\frac{\rho}{\sqrt{2\pi\theta}}e^{-\frac{|p-q|^{ 2}}{8\theta}-\frac{(|p-u|^{2}-|q-u|^{2})^{2}}{8\theta|p-q|^{2}}}, \tag{4.43}\] which are indeed the corresponding kernels of Newtonian Boltzmann equation. **Lemma 4.8**.: _It holds that_ \[\int_{\mathbb{R}^{3}}|k_{\mathfrak{c}1}(p,q)-k_{1}(p,q)|dq\lesssim\mathfrak{c }^{-\frac{3}{2}},\quad p\in\mathbb{R}^{3}. \tag{4.44}\] Proof.: We remark that throughout the proof, we make no attempt to be optimal in our estimates. We split the proof into three cases. _Case 1._\(|p|\geq\mathfrak{c}^{\frac{1}{8}}\). It follows from (3.45), (4.42) and Lemma 4.2 that \[\int_{\mathbb{R}^{3}}|k_{\mathfrak{c}1}(p,q)-k_{1}(p,q)|dq\] \[\lesssim\int_{\mathbb{R}^{3}}|p-q|e^{-\bar{c}_{1}|p|-\bar{c}_{1}|q |}dq+\int_{\mathbb{R}^{3}}|p-q|e^{-\frac{|p|^{2}}{8\pi}-\frac{|q|^{2}}{8 \theta}}dq\] \[\lesssim e^{-\frac{c_{1}}{2}\mathfrak{c}^{\frac{1}{8}}}+e^{- \frac{c_{2}}{4}\mathfrak{c}^{\frac{1}{4}}}\lesssim\mathfrak{c}^{-\frac{3}{2}}. \tag{4.45}\] _Case 2._\(|p|\leq\mathfrak{c}^{\frac{1}{8}}\) and \(|q|\geq\mathfrak{c}^{\frac{1}{8}}\). 
Similar to (4.45), one has \[\int_{|q|\geq\mathfrak{c}^{\frac{1}{8}}}|k_{\mathfrak{c}1}(p,q)-k_ {1}(p,q)|dq \lesssim\int_{|q|\geq\mathfrak{c}^{\frac{1}{8}}}|p-q|e^{-\bar{c}_{1 }|p|-\bar{c}_{1}|q|}dq+\int_{|q|\geq\mathfrak{c}^{\frac{1}{8}}}|p-q|e^{-\frac {|p|^{2}}{8\theta}-\frac{|q|^{2}}{8\theta}}dq\] \[\lesssim e^{-\frac{c_{1}}{2}\mathfrak{c}^{\frac{1}{8}}}+e^{- \frac{c_{2}}{4}\mathfrak{c}^{\frac{1}{4}}}\lesssim\mathfrak{c}^{-\frac{3}{2}}. \tag{4.46}\] _Case 3._\(|p|\leq\mathfrak{c}^{\frac{1}{8}}\) and \(|q|\leq\mathfrak{c}^{\frac{1}{8}}\). Recall that \[k_{\mathfrak{c}1}(p,q) =\frac{\pi\mathfrak{c}g\sqrt{s}}{p^{0}q^{0}}\frac{n_{0}}{4\pi \mathfrak{c}T_{0}K_{2}(\gamma)}\exp\Big{(}\frac{u^{\mu}p_{\mu}}{2T_{0}}\Big{)} \exp\Big{(}\frac{u^{\mu}q_{\mu}}{2T_{0}}\Big{)}\] \[=\frac{\pi\mathfrak{c}g\sqrt{s}}{p^{0}q^{0}}\frac{n_{0}}{(2\pi T_ {0})^{\frac{3}{2}}}(1+O(\gamma^{-1}))\exp\Big{(}\frac{\mathfrak{c}^{2}- \mathfrak{c}\bar{p}^{0}}{2T_{0}}\Big{)}\exp\Big{(}\frac{\mathfrak{c}^{2}- \mathfrak{c}\bar{q}^{0}}{2T_{0}}\Big{)}\] \[=\frac{\pi\mathfrak{c}g\sqrt{s}}{p^{0}q^{0}}\frac{n_{0}}{(2\pi T_ {0})^{\frac{3}{2}}}(1+O(\gamma^{-1}))\exp\Big{(}\frac{\mathfrak{c}^{2}- \mathfrak{c}\bar{p}^{0}}{2T_{0}}\Big{)}\exp\Big{(}\frac{\mathfrak{c}^{2}- \mathfrak{c}\bar{q}^{0}}{2T_{0}}\Big{)}.\] Then we have \[|k_{\mathfrak{c}1}(p,q)-k_{1}(p,q)|\] \[\leq\frac{\pi\mathfrak{c}g\sqrt{s}}{p^{0}q^{0}}\frac{n_{0}}{(2\pi T _{0})^{\frac{3}{2}}}\exp\Big{(}\frac{\mathfrak{c}^{2}-\mathfrak{c}\bar{p}^{0} }{2T_{0}}\Big{)}\exp\Big{(}\frac{\mathfrak{c}^{2}-\mathfrak{c}\bar{q}^{0}}{2T_ {0}}\Big{)}\cdot O(\gamma^{-1})\] \[\quad\quad+\Big{|}\frac{\pi\mathfrak{c}g\sqrt{s}}{p^{0}q^{0}}-2 \pi|p-q|\Big{|}\frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2}}}\exp\Big{(}\frac{ \mathfrak{c}^{2}-\mathfrak{c}\bar{p}^{0}}{2T_{0}}\Big{)}\exp\Big{(}\frac{ \mathfrak{c}^{2}-\mathfrak{c}\bar{q}^{0}}{2T_{0}}\Big{)}\] \[\quad\quad+2\pi|p-q|\Big{|}\frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2 }}}-\frac{\rho}{(2\pi\theta)^{\frac{3}{2}}}\Big{|}\exp\Big{(}\frac{\mathfrak{ c}^{2}-\mathfrak{c}\bar{p}^{0}}{2T_{0}}\Big{)}\exp\Big{(}\frac{\mathfrak{c}^{2}- \mathfrak{c}\bar{q}^{0}}{2T_{0}}\Big{)}\] \[\quad\quad+2\pi|p-q|\frac{\rho}{(2\pi\theta)^{\frac{3}{2}}}e^{- \frac{|p-u|^{2}}{4\theta}-\frac{|q-u|^{2}}{4\theta}}\Big{|}\exp\Big{(}\frac{|p- u|^{2}}{4\theta}+\frac{|q-u|^{2}}{4\theta}+\frac{\mathfrak{c}^{2}- \mathfrak{c}\bar{p}^{0}}{2T_{0}}+\frac{\mathfrak{c}^{2}-\mathfrak{c}\bar{q}^{0 }}{2T_{0}}\Big{)}-1\Big{|}\] \[:=\mathcal{D}_{1}+\mathcal{D}_{2}+\mathcal{D}_{3}+\mathcal{D}_{4}. \tag{4.47}\] It is clear that \[|\mathcal{D}_{1}|\lesssim\frac{|p-q|}{\mathfrak{c}^{2}}e^{-\bar{c}_{1}|p|- \bar{c}_{1}|q|},\] which implies that \[\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}|\mathcal{D}_{1}(q,p)|dq \lesssim\mathfrak{c}^{-2}. \tag{4.48}\] For \(\mathcal{D}_{2}\), we notice that \[\frac{\mathfrak{c}\sqrt{s}}{2p^{0}q^{0}}-1=\frac{\mathfrak{c}\sqrt{s}-2p^{0}q ^{0}}{2p^{0}q^{0}}=\frac{\mathfrak{c}^{2}g^{2}-4\mathfrak{c}^{2}(|p|^{2}+|q|^ {2})-4|p|^{2}|q|^{2}}{2p^{0}q^{0}(\mathfrak{c}\sqrt{s}+2p^{0}q^{0})}\lesssim O (\mathfrak{c}^{-\frac{3}{2}}). \tag{4.49}\] It follows from (4.34) that \[g^{2}-|p-q|^{2}=-\frac{(|p|^{2}-|q|^{2})^{2}}{2p^{0}q^{0}+(2\mathfrak{c}^{2}+| p|^{2}+|q|^{2})}\lesssim O(\mathfrak{c}^{-\frac{3}{2}}), \tag{4.50}\] which yields that \[|g-|p-q||=\frac{|g^{2}-|p-q|^{2}|}{g+|p-q|}\lesssim\frac{O( \mathfrak{c}^{-\frac{3}{2}})}{g+|p-q|}\lesssim\frac{O(\mathfrak{c}^{-\frac{3}{ 2}})}{|p-q|}. 
\tag{4.51}\] Using (4.49) and (4.51), one has \[\Big{|}\frac{\mathfrak{c}}{2}\frac{g\sqrt{s}}{p^{0}q^{0}}-|p-q| \Big{|} \leq|g(\frac{\mathfrak{c}\sqrt{s}}{2p^{0}q^{0}}-1)|+|g-|p-q||\] \[\lesssim(g+\frac{1}{|p-q|})\mathfrak{c}^{-\frac{3}{2}}\lesssim(|p- q|+\frac{1}{|p-q|})\mathfrak{c}^{-\frac{3}{2}},\] which implies that \[\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}|\mathcal{D}_{2}(q,p)|dq \lesssim\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}\Big{|}\frac{\mathfrak{c}}{2} \frac{g\sqrt{s}}{2p^{0}q^{0}}-|p-q||e^{-\bar{c}_{1}|p|-\bar{c}_{1}|q|}dq\] \[\lesssim\mathfrak{c}^{-\frac{3}{2}}\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}} \Big{(}|p-q|+\frac{1}{|p-q|}\Big{)}e^{-\bar{c}_{1}|p|-\bar{c}_{1}|q|}\lesssim \mathfrak{c}^{-\frac{3}{2}}. \tag{4.52}\] For \(\mathcal{D}_{3}\), it follows from Proposition 3.8 that \[\Big{|}\frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2}}}-\frac{\rho}{(2\pi \theta)^{\frac{3}{2}}}\Big{|}\lesssim|T_{0}-\theta|+|n_{0}-\rho|\lesssim \mathfrak{c}^{-2}, \tag{4.53}\] which yields that \[\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}|\mathcal{D}_{3}(q,p)|dq \lesssim\mathfrak{c}^{-2}\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}|p-q|e^{-\bar{ c}_{1}|p|-\bar{c}_{1}|q|}\lesssim\mathfrak{c}^{-2}. \tag{4.54}\] For \(\mathcal{D}_{4}\), a direct calculation shows that \[\frac{|p-\mathfrak{u}|^{2}}{4\theta}+\frac{\mathfrak{c}^{2}- \mathfrak{c}\bar{p}^{0}}{2T_{0}} =\frac{|p-\mathfrak{u}|^{2}}{4\theta T_{0}}(T_{0}-\theta)+\frac{1 }{4T_{0}}(|p-\mathfrak{u}|^{2}+2\mathfrak{c}^{2}-2\mathfrak{c}\bar{p}^{0})\] \[=\frac{|p-\mathfrak{u}|^{2}}{4\theta T_{0}}(T_{0}-\theta)+\frac{1 }{4T_{0}}\Big{[}\frac{|p|^{4}}{(p^{0}+\mathfrak{c})^{2}}+2p\cdot(u-\mathfrak{ u})+(|\mathfrak{u}|^{2}-|u|^{2})\Big{]}\] \[\qquad+\frac{1}{4T_{0}}\Big{[}\frac{|u|^{4}}{(u^{0}+\mathfrak{c} )^{2}}-2\frac{|p|^{2}|u|^{2}}{(u^{0}+\mathfrak{c})(p^{0}+\mathfrak{c})}\Big{]}, \tag{4.55}\] which implies that \[\Big{|}\frac{|p-\mathfrak{u}|^{2}}{4\theta}+\frac{\mathfrak{c}^{2}- \mathfrak{c}\bar{p}^{0}}{2T_{0}}\Big{|}\lesssim\mathfrak{c}^{-\frac{3}{2}}. \tag{4.56}\] Similarly, one has \[\Big{|}\frac{|q-\mathfrak{u}|^{2}}{4\theta}+\frac{\mathfrak{c}^{2}- \mathfrak{c}\bar{q}^{0}}{2T_{0}}\Big{|}\lesssim\mathfrak{c}^{-\frac{3}{2}}.\] Thus we have \[\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}|\mathcal{D}_{4}(q,p)|dq \lesssim\mathfrak{c}^{-\frac{3}{2}}\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}|p- q|e^{-\frac{|p|^{2}}{8\theta}-\frac{|q|^{2}}{8\theta}}dq\lesssim\mathfrak{c}^{- \frac{3}{2}}. \tag{4.57}\] Combining (4.47), (4.48), (4.52), (4.54) and (4.57), we have that \[\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}|k_{\mathfrak{c}1}(p,q)-k_{1}(p,q)|dq \lesssim\mathfrak{c}^{-\frac{3}{2}},\quad|p|\leq\mathfrak{c}^{\frac{1}{8}}. \tag{4.58}\] Hence, we conclude (4.44) from (4.45), (4.46) and (4.58). Therefore the proof is completed. **Lemma 4.9**.: _It holds that_ \[\int_{\mathbb{R}^{3}}|k_{\mathfrak{c}2}(p,q)-k_{2}(p,q)|dq\lesssim\mathfrak{ c}^{-\frac{3}{8}},\quad p\in\mathbb{R}^{3}.\] Proof.: Since the proof is complicated, we split the proof into three cases. _Case 1._\(|p-q|\geq\mathfrak{c}^{\frac{1}{8}}\). It follows from (3.45), (4.11) and (4.43) that \[\int_{|p-q|\geq\mathfrak{c}^{\frac{1}{8}}}|k_{\mathfrak{c}2}(p,q)- k_{2}(p,q)|dq \lesssim\int_{|p-q|\geq\mathfrak{c}^{\frac{1}{8}}}\frac{1}{|p-q|}e^{- \frac{c_{1}}{2}|p-q|}dq+\int_{|p-q|\geq\mathfrak{c}^{\frac{1}{8}}}\frac{1}{|p- q|}e^{-\frac{|p-q|^{2}}{8\theta}}dq\] \[\lesssim e^{-\frac{c_{1}}{4}\mathfrak{c}^{\frac{1}{8}}}+e^{- \frac{c_{2}}{4}\mathfrak{c}^{\frac{1}{4}}}\lesssim\mathfrak{c}^{-\frac{3}{8}}. 
\tag{4.59}\] _Case 2._\(|p-q|\leq\mathfrak{c}^{\frac{1}{8}}\) and \(|p|\geq\mathfrak{c}^{\frac{3}{8}}\). By Lemma 4.4 and similar arguments for \(k_{2}(p,q)\) as in [17, Lemma 3.3.1], one has \[\int_{|p-q|\leq\mathfrak{c}^{\frac{1}{8}}}|k_{\mathfrak{c}2}(p,q)-k_{2}(p,q)|dq \lesssim\frac{1}{\mathfrak{c}}+\frac{1}{1+|p|}+\frac{1}{1+|p-\mathfrak{u}|} \lesssim\mathfrak{c}^{-\frac{3}{8}}. \tag{4.60}\] _Case 3._\(|p-q|\leq\mathfrak{c}^{\frac{1}{8}}\) and \(|p|\leq\mathfrak{c}^{\frac{3}{8}}\). In this case, we have \(|q|\lesssim\mathfrak{c}^{\frac{3}{8}}\). Recall that \[k_{\mathfrak{c}2}(p,q)=\frac{\mathfrak{c}\pi s^{\frac{3}{2}}}{4gp^{0}q^{0}} \frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2}}}(1+O(\gamma^{-1}))\frac{\overline{ \boldsymbol{\ell}}\sqrt{\overline{\boldsymbol{\ell}}^{2}-\overline{\boldsymbol {j}}^{2}}+\overline{\boldsymbol{\ell}}+(\overline{\boldsymbol{\ell}}^{2}- \overline{\boldsymbol{j}}^{2})}{(\overline{\boldsymbol{\ell}}^{2}-\overline{ \boldsymbol{j}}^{2})^{\frac{3}{2}}}e^{\frac{\mathfrak{c}^{2}-\sqrt{ \boldsymbol{\ell}^{2}-\overline{\boldsymbol{j}}^{2}}}{T_{0}}}.\] Then one has \[|k_{\mathfrak{c}2}(p,q)-k_{2}(p,q)|\] \[\leq\frac{\mathfrak{c}\pi s^{\frac{3}{2}}}{4gp^{0}q^{0}}\frac{n_ {0}}{(2\pi T_{0})^{\frac{3}{2}}}\frac{\overline{\boldsymbol{\ell}}\sqrt{ \overline{\boldsymbol{\ell}}^{2}-\overline{\boldsymbol{j}}^{2}}+\overline{ \boldsymbol{\ell}}+(\overline{\boldsymbol{\ell}}^{2}-\overline{\boldsymbol{ j}}^{2})}{(\overline{\boldsymbol{\ell}}^{2}-\overline{\boldsymbol{j}}^{2})^{ \frac{3}{2}}}e^{\frac{\mathfrak{c}^{2}-\sqrt{\boldsymbol{\ell}^{2}- \overline{\boldsymbol{j}}^{2}}}{T_{0}}}\cdot O(\gamma^{-1})\] \[\qquad+\frac{\mathfrak{c}\pi s^{\frac{3}{2}}}{4gp^{0}q^{0}} \Big{|}\frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2}}}-\frac{\rho}{(2\pi\theta)^{ \frac{3}{2}}}\Big{|}\frac{\overline{\boldsymbol{\ell}}\sqrt{\overline{ \boldsymbol{\ell}}^{2}-\overline{\boldsymbol{j}}^{2}}+\overline{\boldsymbol{ \ell}}+(\overline{\boldsymbol{\ell}}^{2}-\overline{\boldsymbol{j}}^{2})}{( \overline{\boldsymbol{\ell}}^{2}-\overline{\boldsymbol{j}}^{2})^{\frac{3}{2} }}e^{\frac{\mathfrak{c}^{2}-\sqrt{\boldsymbol{\ell}^{2}-\overline{ \boldsymbol{j}}^{2}}}{T_{0}}}\] \[\qquad+\frac{4\pi\rho}{(2\pi\theta)^{\frac{3}{2}}}\Big{|}\frac{ \mathfrak{c}s^{\frac{3}{2}}}{16gp^{0}q^{0}}\frac{\overline{\boldsymbol{\ell}} \sqrt{\overline{\boldsymbol{\ell}}^{2}-\overline{\boldsymbol{j}}^{2}}+ \overline{\boldsymbol{\ell}}+(\overline{\boldsymbol{\ell}}^{2}-\overline{ \boldsymbol{j}}^{2})}{(\overline{\boldsymbol{\ell}}^{2}-\overline{\boldsymbol{ j}}^{2})^{\frac{3}{2}}}-\frac{\theta}{|p-q|}\Big{|}e^{\frac{\mathfrak{c}^{2}- \sqrt{\boldsymbol{\ell}^{2}-\overline{\boldsymbol{\ell}}^{2}}}{T_{0}}}\] \[\qquad+\frac{2}{|p-q|}\frac{\rho}{\sqrt{2\pi\theta}}e^{-\frac{| p-q|^{2}}{8\theta}-\frac{(|p-q|^{2}-|q-|q-|q|^{2})}{8\theta|p-q|^{2}}}\Big{|}e^{ \frac{\mathfrak{c}^{2}-\sqrt{\boldsymbol{\ell}^{2}-\overline{\boldsymbol{j}}^{ 2}}}{T_{0}}+\frac{|p-q|^{2}-|q-|q|^{2})^{2}}{8\theta|p-q|^{2}}}-1\Big{|}\] \[:=\mathcal{E}_{1}+\mathcal{E}_{2}+\mathcal{E}_{3}+\mathcal{E}_{4}. \tag{4.61}\] It follows from (4.11) that \[\int_{|p-q|\leq\mathfrak{c}^{\frac{1}{8}}}|\mathcal{E}_{1}|dq\lesssim\frac{1} {\mathfrak{c}^{2}}\int_{|p-q|\leq\mathfrak{c}^{\frac{1}{8}}}\frac{1}{|p-q|}e^{- \frac{\mathfrak{c}_{1}}{2}|p-q|}dq\lesssim\frac{1}{\mathfrak{c}^{2}}. 
\tag{4.62}\] By (4.53), one has \[\int_{|p-q|\leq\mathfrak{c}^{\frac{1}{8}}}|\mathcal{E}_{2}|dq\lesssim\frac{1} {\mathfrak{c}^{2}}\int_{|p-q|\leq\mathfrak{c}^{\frac{1}{8}}}\frac{1}{|p-q|}e^{ -\frac{\mathfrak{c}_{1}}{2}|p-q|}dq\lesssim\frac{1}{\mathfrak{c}^{2}}. \tag{4.63}\] We next focus on \(\mathcal{E}_{3}\). It holds that \[\frac{\mathfrak{c}s^{\frac{3}{2}}}{16gp^{0}q^{0}}\frac{\overline{ \boldsymbol{\ell}}\sqrt{\overline{\boldsymbol{\ell}}^{2}-\overline{\boldsymbol {j}}^{2}}+\overline{\boldsymbol{\ell}}+(\overline{\boldsymbol{\ell}}^{2}- \overline{\boldsymbol{j}}^{2})}{(\overline{\boldsymbol{\ell}}^{2}-\overline{ \boldsymbol{j}}^{2})^{\frac{3}{2}}}-\frac{\theta}{|p-q|}\] \[=\frac{1}{2\mathfrak{c}^{2}}\frac{g^{2}T_{0}^{3}}{p^{0}q^{0}| \bar{p}-\bar{q}|^{3}}\Big{(}\overline{\boldsymbol{\ell}}\sqrt{\overline{ \boldsymbol{\ell}}^{2}-\overline{\boldsymbol{j}}^{2}}+\overline{\boldsymbol{ \ell}}+(\overline{\boldsymbol{\ell}}^{2}-\overline{\boldsymbol{j}}^{2})\Big{)}- \frac{\theta}{|p-q|}\] \[=\frac{1}{2\mathfrak{c}^{2}}\frac{g^{2}}{p^{0}q^{0}|\bar{p}-\bar{q} |^{3}}\Big{(}\frac{\mathfrak{c}^{2}\sqrt{s}(\bar{p}^{0}+\bar{q}^{0})}{4g}| \bar{p}-\bar{q}|T_{0}+\frac{\mathfrak{c}^{2}s}{4g^{2}}|\bar{p}-\bar{q}|^{2}T_{ 0}+\frac{\mathfrak{c}}{2}(\bar{p}^{0}+\bar{q}^{0})T_{0}^{2}\Big{)}-\frac{ \theta}{|p-q|}\] \[=\Big{\{}\frac{1}{2}\frac{g^{2}}{\bar{p}^{0}\bar{q}^{0}|\bar{p}- \bar{q}|^{3}}\Big{(}\frac{\sqrt{s}(\bar{p}^{0}+\bar{q}^{0})}{4g}|\bar{p}- \bar{q}|+\frac{s}{4g^{2}}|\bar{p}-\bar{q}|^{2}\Big{)}\theta-\frac{\theta}{| \bar{p}-\bar{q}|}\Big{\}}+\frac{g^{2}(\bar{p}^{0}+\bar{q}^{0})}{4\mathfrak{c} p^{0}q^{0}|\bar{p}-\bar{q}|^{3}}T_{0}^{2}\] \[\qquad+\frac{1}{2}\frac{g^{2}}{p^{0}q^{0}|\bar{p}-\bar{q}|^{3}} \Big{(}\frac{\sqrt{s}(\bar{p}^{0}+\bar{q}^{0})}{4g}|\bar{p}-\bar{q}|+\frac{s}{4g ^{2}}|\bar{p}-\bar{q}|^{2}\Big{)}(T_{0}-\theta)+\Big{(}\frac{\theta}{|\bar{p}- \bar{q}|}-\frac{\theta}{|p-q|}\Big{)}\] \[:=\mathcal{E}_{31}+\mathcal{E}_{32}+\mathcal{E}_{33}+\mathcal{E}_{ 34}+\mathcal{E}_{35}. \tag{4.64}\] A direct calculation shows that \[|\mathcal{E}_{32}|+|\mathcal{E}_{33}|+|\mathcal{E}_{34}|+|\mathcal{E}_{35}| \lesssim\frac{1}{|p-q|}\mathfrak{c}^{-\frac{13}{8}}. 
\tag{4.65}\] For \(\mathcal{E}_{31}\), one has \[\frac{\mathcal{E}_{31}}{\theta}|\bar{p}-\bar{q}| =\frac{1}{2}\frac{g^{2}}{\bar{p}^{0}\bar{q}^{0}|\bar{p}-\bar{q}|^{ 2}}\Big{(}\frac{\sqrt{s}(\bar{p}^{0}+\bar{q}^{0})}{4g}|\bar{p}-\bar{q}|+\frac{s }{4g^{2}}|\bar{p}-\bar{q}|^{2}\Big{)}-1\] \[=\frac{1}{2}\Big{(}\frac{\sqrt{s}g(\bar{p}^{0}+\bar{q}^{0})}{4 \bar{p}^{0}\bar{q}^{0}|\bar{p}-\bar{q}|}-1\Big{)}+\frac{1}{2}\Big{(}\frac{s}{4 \bar{p}^{0}\bar{q}^{0}}-1\Big{)}\] \[:=\mathcal{E}_{311}+\mathcal{E}_{312}.\] For \(\mathcal{E}_{312}\), we notice that \[\frac{s}{4\bar{p}^{0}\bar{q}^{0}}-1 =\frac{s-4\bar{p}^{0}\bar{q}^{0}}{4\bar{p}^{0}\bar{q}^{0}}=\frac {(g^{2}+4\mathfrak{c}^{2})^{2}-16(\mathfrak{c}^{2}+|\bar{p}|^{2})(\mathfrak{c }^{2}+|\bar{q}|^{2})}{4\bar{p}^{0}\bar{q}^{0}(s+4\bar{p}^{0}\bar{q}^{0})}\] \[=\frac{g^{4}+8g^{2}\mathfrak{c}^{2}-16\mathfrak{c}^{2}(|\bar{p}|^ {2}+|\bar{q}|^{2})-16|\bar{p}|^{2}|\bar{q}|^{2}}{4\bar{p}^{0}\bar{q}^{0}(s+4 \bar{p}^{0}\bar{q}^{0})}\] \[\lesssim O(\mathfrak{c}^{-\frac{5}{4}}).\] For \(\mathcal{E}_{311}\), it is clear that \[\frac{\sqrt{s}g(\bar{p}^{0}+\bar{q}^{0})}{4\bar{p}^{0}\bar{q}^{0} |\bar{p}-\bar{q}|}-1 =\frac{\sqrt{s}g(\bar{p}^{0}+\bar{q}^{0})-4\bar{p}^{0}\bar{q}^{0} |\bar{p}-\bar{q}|}{4\bar{p}^{0}\bar{q}^{0}|\bar{p}-\bar{q}|}\] \[=\frac{\sqrt{s}g-2\bar{q}^{0}|\bar{p}-\bar{q}|}{4\bar{q}^{0}|\bar {p}-\bar{q}|}+\frac{\sqrt{s}g-2\bar{p}^{0}|\bar{p}-\bar{q}|}{4p^{0}|\bar{p}- \bar{q}|}.\] Due to (4.50), one has \[\frac{\sqrt{s}g-2\bar{q}^{0}|\bar{p}-\bar{q}|}{4\bar{q}^{0}|\bar {p}-\bar{q}|}= \frac{(g^{2}+4\mathfrak{c}^{2})g^{2}-4(|\bar{q}|^{2}+\mathfrak{ c}^{2})|\bar{p}-\bar{q}|^{2}}{4\bar{q}^{0}|\bar{p}-\bar{q}|(\sqrt{s}g+2\bar{q}^{0} |\bar{p}-\bar{q}|)}\] \[= \frac{4\mathfrak{c}^{2}(g^{2}-|\bar{p}-\bar{q}|^{2})+g^{4}-4|\bar {q}|^{2}|\bar{p}-\bar{q}|^{2}}{4\bar{q}^{0}|\bar{p}-\bar{q}|(\sqrt{s}g+2\bar{ q}^{0}|\bar{p}-\bar{q}|)}\] \[= -\frac{4\mathfrak{c}^{2}(|\bar{p}|^{2}-|\bar{q}|^{2})^{2}}{4\bar{ q}^{0}|\bar{p}-\bar{q}|(\sqrt{s}g+2\bar{q}^{0}|\bar{p}-\bar{q}|)(2\bar{p}^{0} \bar{q}^{0}+(2\mathfrak{c}^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))}\] \[+\frac{g^{4}-4|\bar{q}|^{2}|\bar{p}-\bar{q}|^{2}}{4\bar{q}^{0}| \bar{p}-\bar{q}|(\sqrt{s}g+2\bar{q}^{0}|\bar{p}-\bar{q}|)}\] \[\lesssim O(\mathfrak{c}^{-\frac{5}{4}})\] and \[\frac{\sqrt{s}g-2\bar{p}^{0}|\bar{p}-\bar{q}|}{4\bar{p}^{0}|\bar{ p}-\bar{q}|}\lesssim O(\mathfrak{c}^{-\frac{5}{4}}).\] Thus we can obtain \[|\mathcal{E}_{31}|\lesssim\frac{1}{|p-q|}\mathfrak{c}^{-\frac{5}{4 }}. \tag{4.66}\] Combining (4.65) and (4.66), one obtains, for \(|p|\leq\mathfrak{c}^{\frac{3}{8}}\), that \[\int_{|p-q|\leq\mathfrak{c}^{\frac{1}{8}}}|\mathcal{E}_{3}|dq\lesssim\mathfrak{ c}^{-\frac{5}{4}}\int_{|p-q|\leq\mathfrak{c}^{\frac{1}{8}}}\frac{1}{|p-q|}e^{-\frac{c_{ 1}}{2}|p-q|}dq\lesssim\mathfrak{c}^{-\frac{5}{4}}. \tag{4.67}\] Next, we consider \(\mathcal{E}_{4}\). 
It follows from (4.19) and (4.34) that \[\frac{\mathfrak{c}^{2}-\sqrt{\bar{\mathcal{L}}^{2}-\bar{\mathcal{J }}^{2}}}{T_{0}}+\frac{|p-q|^{2}}{8\theta}+\frac{(|p-\mathfrak{u}|^{2}-|q- \mathfrak{u}|^{2})^{2}}{8\theta|p-q|^{2}}\] \[=\frac{1}{T_{0}}\Big{[}\mathfrak{c}^{2}-\sqrt{\bar{\mathcal{L}}^ {2}-\bar{\mathcal{J}}^{2}}+\frac{|\bar{p}-\bar{q}|^{2}}{8}+\frac{(|\bar{p}|^{2}- |\bar{q}|^{2})^{2}}{8|\bar{p}-\bar{q}|^{2}}\Big{]}+\Big{[}\frac{|p-q|^{2}}{8 \theta}-\frac{|\bar{p}-\bar{q}|^{2}}{8T_{0}}\Big{]}\] \[=\frac{|\bar{p}-\bar{q}|^{2}}{8}\frac{g^{2}|\bar{p}-\bar{q}|^{2}+4 \mathfrak{c}^{2}(|\bar{p}-\bar{q}|^{2}-g^{2})}{(\sqrt{s}|\bar{p}-\bar{q}|+2 \mathfrak{c}g)^{2}}\] \[=\frac{|\bar{p}-\bar{q}|^{2}}{8}\Big{\{}\frac{g^{2}|\bar{p}-\bar{ q}|^{2}}{(\sqrt{s}|\bar{p}-\bar{q}|^{2}+2\mathfrak{c}g)^{2}}+\frac{4\mathfrak{c}^{2}}{( \sqrt{s}|\bar{p}-\bar{q}|+2\mathfrak{c}g)^{2}}\frac{(|\bar{p}|^{2}-|\bar{q}|^{2 })^{2}}{2\bar{p}^{0}\bar{q}^{0}+(2\mathfrak{c}^{2}+|\bar{p}|^{2}+|\bar{q}|^{2 })}\Big{\}}\] \[\lesssim O(\mathfrak{c}^{-1}). \tag{4.69}\] For \(\mathcal{G}_{11}\), it can be written as \[\frac{(|\bar{p}|^{2}-|\bar{q}|^{2})^{2}}{8|\bar{p}-\bar{q}|^{2}} \Big{\{}1-\frac{1}{1+\sqrt{\frac{\pi}{4\mathfrak{c}^{2}}}\frac{|\bar{p}-\bar{ q}|^{2}}{g}}\frac{8|\bar{p}-\bar{q}|^{2}}{g^{2}}\frac{\mathfrak{c}^{2}}{2\bar{p}^{0 }\bar{q}^{0}+(2\mathfrak{c}^{2}+|\bar{p}|^{2}+|\bar{q}|^{2})}\Big{\}}\] \[=\frac{(|\bar{p}|^{2}-|\bar{q}|^{2})^{2}}{8|\bar{p}-\bar{q}|^{2}} \frac{(2\varsigma g^{2}+\sqrt{s}|\bar{p}-\bar{q}|g)(2\bar{p}^{0}\bar{q}^{0}+(2 \varsigma^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))-16\varsigma^{3}|\bar{p}-\bar{q}|^{2} }{(2\varsigma g^{2}+\sqrt{s}|\bar{p}-\bar{q}|g)(2\bar{p}^{0}\bar{q}^{0}+(2 \varsigma^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))}\] \[=\frac{(|\bar{p}|^{2}-|\bar{q}|^{2})^{2}}{8|\bar{p}-\bar{q}|^{2}} \frac{(4\varsigma^{3}g^{2}-4\varsigma^{3}|\bar{p}-\bar{q}|^{2})+(4\varsigma g^{2 }\bar{p}^{0}\bar{q}^{0}-4\varsigma^{3}|\bar{p}-\bar{q}|^{2})+(2\sqrt{s}|\bar{p }-\bar{q}|g\varsigma^{2}-4\varsigma^{3}|\bar{p}-\bar{q}|^{2})}{(2\varsigma g^{2 }+\sqrt{s}|\bar{p}-\bar{q}|g)(2\bar{p}^{0}\bar{q}^{0}+(2\varsigma^{2}+|\bar{p} |^{2}+|\bar{q}|^{2}))}\] \[\quad+\frac{(|\bar{p}|^{2}-|\bar{q}|^{2})^{2}}{8|\bar{p}-\bar{q}|^ {2}}\frac{(2\sqrt{s}\bar{p}^{0}\bar{q}^{0}|\bar{p}-\bar{q}|g-4\varsigma^{3}|\bar {p}-\bar{q}|^{2})+(2\varsigma g^{2}+\sqrt{s}|\bar{p}-\bar{q}|g)(|\bar{p}|^{2}+| \bar{q}|^{2})}{(2\varsigma g^{2}+\sqrt{s}|\bar{p}-\bar{q}|g)(2\bar{p}^{0}\bar{ q}^{0}+(2\varsigma^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))}\] \[:=\frac{(|\bar{p}|^{2}-|\bar{q}|^{2})^{2}}{8|\bar{p}-\bar{q}|^{2}} (\mathcal{G}_{111}+\mathcal{G}_{112}+\mathcal{G}_{113}+\mathcal{G}_{114}+ \mathcal{G}_{115}). \tag{4.70}\] We have from (4.34) that \[\mathcal{G}_{111} =\frac{4\varsigma^{3}g^{2}-4\varsigma^{3}|\bar{p}-\bar{q}|^{2}}{(2 \varsigma g^{2}+\sqrt{s}|\bar{p}-\bar{q}|g)(2\bar{p}^{0}\bar{q}^{0}+(2 \varsigma^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))}\] \[=\frac{-4\varsigma^{3}}{(2\varsigma g^{2}+\sqrt{s}|\bar{p}-\bar{q }|g)(2\bar{p}^{0}\bar{q}^{0}+(2\varsigma^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))} \frac{(|\bar{p}|^{2}-|\bar{q}|^{2})^{2}}{2\bar{p}^{0}\bar{q}^{0}+(2\varsigma^{ 2}+|\bar{p}|^{2}+|\bar{q}|^{2})}\] \[\lesssim O(\varsigma^{-\frac{5}{4}}), \tag{4.71}\] where we have used \(g^{2}\bar{p}^{0}\bar{q}^{0}\geq\varsigma^{2}|\bar{p}-\bar{q}|^{2}\). 
Similarly, one has \[\mathcal{G}_{112} =\frac{4\varsigma g^{2}\bar{p}^{0}\bar{q}^{0}-4\varsigma^{3}|\bar {p}-\bar{q}|^{2}}{(2\varsigma g^{2}+\sqrt{s}|\bar{p}-\bar{q}|g)(2\bar{p}^{0} \bar{q}^{0}+(2\varsigma^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))}\] \[=\frac{-4\varsigma\bar{p}^{0}\bar{q}^{0}(|\bar{p}-\bar{q}|^{2}-g^ {2})+4\varsigma|\bar{p}-\bar{q}|^{2}(\bar{p}^{0}\bar{q}^{0}-\varsigma^{2})}{(2 \varsigma g^{2}+\sqrt{s}|\bar{p}-\bar{q}|g)(2\bar{p}^{0}\bar{q}^{0}+(2\varsigma ^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))}\] \[\lesssim O(\varsigma^{-\frac{5}{4}}), \tag{4.72}\] \[\mathcal{G}_{113} =\frac{2\sqrt{s}|\bar{p}-\bar{q}|g\varsigma^{2}-4\varsigma^{3}|\bar {p}-\bar{q}|^{2}}{(2\varsigma g^{2}+\sqrt{s}|\bar{p}-\bar{q}|g)(2\bar{p}^{0} \bar{q}^{0}+(2\varsigma^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))}\] \[=\frac{-2\varsigma^{2}|\bar{p}-\bar{q}|}{(2\varsigma g^{2}+\sqrt{s }|\bar{p}-\bar{q}|g)(2\bar{p}^{0}\bar{q}^{0}+(2\varsigma^{2}+|\bar{p}|^{2}+| \bar{q}|^{2}))}(2\varsigma|\bar{p}-\bar{q}|-\sqrt{s}g)\] \[=\frac{-2\varsigma^{2}|\bar{p}-\bar{q}|}{(2\varsigma g^{2}+\sqrt{s }|\bar{p}-\bar{q}|g)(2\bar{p}^{0}\bar{q}^{0}+(2\varsigma^{2}+|\bar{p}|^{2}+| \bar{q}|^{2}))}\frac{4\varsigma^{2}(|\bar{p}-\bar{q}|^{2}-g^{2})-g^{4}}{2 \varsigma|\bar{p}-\bar{q}|+\sqrt{s}g}\] \[\lesssim O(\varsigma^{-\frac{5}{4}}),\] (4.73) \[\mathcal{G}_{114} =\frac{2\sqrt{s}\bar{p}^{0}\bar{q}^{0}|\bar{p}-\bar{q}|g-4 \varsigma^{3}|\bar{p}-\bar{q}|^{2}}{(2\varsigma g^{2}+\sqrt{s}|\bar{p}-\bar{q}| g)(2\bar{p}^{0}\bar{q}^{0}+(2\varsigma^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))}\] \[=\frac{-2|\bar{p}-\bar{q}|}{(2\varsigma g^{2}+\sqrt{s}|\bar{p}- \bar{q}|g)(2\bar{p}^{0}\bar{q}^{0}+(2\varsigma^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))} (2\varsigma^{3}|\bar{p}-\bar{q}|-\sqrt{s}\bar{p}^{0}\bar{q}^{0}g)\] \[=\frac{-2|\bar{p}-\bar{q}|}{(2\varsigma g^{2}+\sqrt{s}|\bar{p}- \bar{q}|g)(2\bar{p}^{0}\bar{q}^{0}+(2\varsigma^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))} (2\varsigma^{3}|\bar{p}-\bar{q}|-\sqrt{s}\bar{p}^{0}\bar{q}^{0}g)\] \[=\frac{-2|\bar{p}-\bar{q}|}{(2\varsigma g^{2}+\sqrt{s}|\bar{p}- \bar{q}|g)(2\bar{p}^{0}\bar{q}^{0}+(2\varsigma^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))} \frac{4\varsigma^{6}|\bar{p}-\bar{q}|^{2}-s(\bar{p}^{0})^{2}(\bar{q}^{0})^{2}g^{ 2}}{2\varsigma^{3}|\bar{p}-\bar{q}|+\sqrt{s}\bar{p}^{0}\bar{q}^{0}g}\] \[=\frac{-2|\bar{p}-\bar{q}|}{(2\varsigma g^{2}+\sqrt{s}|\bar{p}- \bar{q}|g)(2\bar{p}^{0}\bar{q}^{0}+(2\varsigma^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))}\] \[\qquad\times\Big{\{}\frac{4\varsigma^{6}(|\bar{p}-\bar{q}|^{2}-g^ {2})-g^{4}(|\bar{p}|^{2}+\varsigma^{2})(|\bar{q}|^{2}+\varsigma^{2})}{2 \varsigma^{3}|\bar{p}-\bar{q}|+\sqrt{s}\bar{p}^{0}\bar{q}^{0}g}-\frac{4 \varsigma^{2}g^{2}(|\bar{p}|^{2}\varsigma^{2}+|\bar{q}|^{2}\varsigma^{2}+|\bar{p}|^ {2}|\bar{q}|^{2})}{2\varsigma^{3}|\bar{p}-\bar{q}|+\sqrt{s}\bar{p}^{0}\bar{q}^{0}g} \Big{\}}\] \[\lesssim O(\varsigma^{-\frac{5}{4}}) \tag{4.74}\] and \[\mathcal{G}_{115} =\frac{(2\varsigma g^{2}+\sqrt{s}|\bar{p}-\bar{q}|g)(|\bar{p}|^{2}+| \bar{q}|^{2})}{(2\varsigma g^{2}+\sqrt Combining (4.70)-(4.75), we have \[|\mathcal{G}_{11}|\lesssim\mathfrak{c}^{-\frac{1}{2}},\] which, together with (4.68)-(4.69), yields that \[\int_{|p-q|\leq\mathfrak{c}^{\frac{1}{2}}}|\mathcal{E}_{4}|dq\lesssim\mathfrak{ c}^{-\frac{1}{2}}\int_{|p-q|\leq\mathfrak{c}^{\frac{1}{8}}}\frac{1}{|p-q|}e^{- \frac{|p-q|^{2}-|q|^{2})^{2}}{8\sigma\beta}-\frac{(|p|^{2}-|q|^{2})^{2}}{8\sigma \beta|p-q|^{2}}}dq\lesssim\mathfrak{c}^{-\frac{1}{2}}. 
\tag{4.76}\] Combining (4.59)-(4.63), (4.67) and (4.76), one has \[\int_{\mathbb{R}^{3}}|k_{\mathfrak{c}2}(p,q)-k_{2}(p,q)|dq\lesssim\mathfrak{c} ^{-\frac{3}{8}},\quad p\in\mathbb{R}.\] Therefore the proof is completed. Denote \(\overline{\mathbf{M}}_{\mathfrak{c}}\) as the local Maxwellian in the rest frame where \((u^{0},u^{1},u^{2},u^{3})^{t}=(\mathfrak{c},0,0,0)^{t}\): \[\overline{\mathbf{M}}_{\mathfrak{c}}(t,x,p):=\frac{n_{0}\gamma}{4\pi\mathfrak{ c}^{3}K_{2}(\gamma)}\exp\Big{\{}\frac{-\mathfrak{c}p^{0}}{T_{0}}\Big{\}}.\] Define the third momentum \[T^{\alpha\beta\gamma}[\mathbf{M}_{\mathfrak{c}}]:=\int_{\mathbb{R}^{3}}\frac {p^{\alpha}p^{\beta}p^{\gamma}}{p^{0}}\mathbf{M}_{\mathfrak{c}}dp,\quad \overline{T}^{\alpha\beta\gamma}:=\int_{\mathbb{R}^{3}}\frac{p^{\alpha}p^{ \beta}p^{\gamma}}{p^{0}}\overline{\mathbf{M}}_{\mathfrak{c}}dp.\] We first give the expression of \(\overline{T}^{\alpha\beta\gamma}\) which can be proved directly and we omit the details here for brevity. **Lemma 4.10**.: _Let \(i,j,k\in\{1,2,3\}\). For the third momentum \(\overline{T}^{\alpha\beta\gamma}\) which corresponds to \(T^{\alpha\beta\gamma}[\mathbf{M}_{\mathfrak{c}}]\) in the rest frame, there hold_ \[\overline{T}^{000} =\frac{n_{0}\mathfrak{c}^{2}\left[3K_{3}(\gamma)+\gamma K_{2}( \gamma)\right]}{\gamma K_{2}(\gamma)},\] \[\overline{T}^{0ii} =\overline{T}^{ii0}=\overline{T}^{i0i}=\frac{n_{0}\mathfrak{c}^{ 2}K_{3}(\gamma)}{\gamma K_{2}(\gamma)},\] \[\overline{T}^{\alpha\beta\gamma} =0,\quad\text{ if }(\alpha,\beta,\gamma)\neq(0,0,0),(0,i,i),(i,i,0),(i,0,i).\] Recalling the Lorentz transformation in (2.4) and observing \[T^{\alpha\beta\gamma}[\mathbf{M}_{\mathfrak{c}}]=\Lambda_{\alpha^{\prime}}^{ \alpha}\bar{\Lambda}_{\beta^{\prime}}^{\beta}\bar{\Lambda}_{\gamma^{\prime}}^ {\gamma}\overline{T}^{\alpha^{\prime}\beta^{\prime}\gamma^{\prime}},\] we can obtain the expression of \(T^{\alpha\beta\gamma}[\mathbf{M}_{\mathfrak{c}}]\) from Lemma 4.10. **Lemma 4.11**.: _For \(i,j,k\in\{1,2,3\}\), there hold_ \[T^{000}[\mathbf{M}_{\mathfrak{c}}]= \frac{n_{0}}{\varsigma\gamma K_{2}(\gamma)}\left[\left(3K_{3}( \gamma)+\gamma K_{2}(\gamma)\right)\left(u^{0}\right)^{3}+3K_{3}(\gamma)u^{0} |u|^{2}\right],\] \[T^{00i}[\mathbf{M}_{\mathfrak{c}}]= \frac{n_{0}}{\varsigma\gamma K_{2}(\gamma)}\left[\left(5K_{3}( \gamma)+\gamma K_{2}(\gamma)\right)\left(u^{0}\right)^{2}u_{i}+K_{3}(\gamma)|u |^{2}u_{i}\right],\] \[T^{0ij}[\mathbf{M}_{\mathfrak{c}}]= \frac{n_{0}}{\varsigma\gamma K_{2}(\gamma)}\left[\left(6K_{3}( \gamma)+\gamma K_{2}(\gamma)\right)u^{0}u_{i}u_{j}+\mathfrak{c}^{2}K_{3}(\gamma )u^{0}\delta_{ij}\right],\] \[T^{ijk}[\mathbf{M}_{\mathfrak{c}}]= \frac{n_{0}}{\varsigma\gamma K_{2}(\gamma)}\Big{[}\left(6K_{3}( \gamma)+\gamma K_{2}(\gamma)\right)u_{i}u_{j}u_{k}+\mathfrak{c}^{2}K_{3}( \gamma)\left(u_{i}\delta_{jk}+u_{j}\delta_{ik}+u_{k}\delta_{ij}\right)\Big{]}.\] Since we have not found a direct reference which gives the orthonormal basis of \(\mathcal{N}_{\mathfrak{c}}\), so we present details of calculation in the appendix for completeness though it is somehow routine. 
Indeed, the orthonormal basis of \(\mathcal{N}_{\mathfrak{c}}\) for the relativistic Boltzmann equation has the form \[\chi_{0}^{\mathfrak{c}}=\mathfrak{a}_{0}\sqrt{\mathbf{M}_{\mathfrak{c}}},\quad \chi_{j}^{\mathfrak{c}}=\frac{p_{j}-\mathfrak{a}_{j}}{\mathfrak{b}_{j}}\sqrt{ \mathbf{M}_{\mathfrak{c}}}\ (j=1,2,3),\quad\chi_{4}^{\mathfrak{c}}=\frac{p^{0}/\mathfrak{c}+\sum_{i=1}^{3} \lambda_{i}p_{i}+\mathfrak{c}}{\zeta}\sqrt{\mathbf{M}_{\mathfrak{c}}}, \tag{4.77}\] where \(\mathfrak{a}_{\alpha}\) (\(\alpha=0,1,2,3\)), \(\mathfrak{b}_{j}\) (\(j=1,2,3\)), \(\lambda_{i}\) (\(i=1,2,3\)) and \(\mathfrak{e}\) are all given in the appendix. In the following lemma, we shall show that, as \(\mathfrak{c}\to\infty\), the relativistic orthonormal basis in (4.77) converges to the following Newtonian orthonormal basis \[\chi_{0}=\frac{1}{\sqrt{\rho}}\sqrt{\mu},\quad\chi_{j}=\frac{p_{j}-\mathfrak{ u}_{j}}{\sqrt{\rho\theta}}\sqrt{\mu}\ (j=1,2,3),\quad\chi_{4}=\frac{1}{\sqrt{6\rho}}\Big{(}\frac{|p-\mathfrak{u}|^{2} }{\theta}-3\Big{)}\sqrt{\mu}, \tag{4.78}\] where \(\mu(t,x,p)\) is defined by (1.30). **Lemma 4.12**.: _For any fixed \(p\in\mathbb{R}^{3}\), it holds that_ \[\lim_{\mathfrak{c}\to\infty}\chi_{\alpha}^{\mathfrak{c}}=\chi_{\alpha},\quad \alpha=0,1,\cdots,4.\] Proof.: In view of Proposition 3.8, one has \[\lim_{\mathfrak{c}\to\infty}\mathbf{M}_{\mathfrak{c}}(p)=\mu(p),\quad\lim_{ \mathfrak{c}\to\infty}I^{0}=\lim_{\mathfrak{c}\to\infty}\frac{n_{0}u^{0}}{ \mathfrak{c}}=\rho.\] Then we have \[\lim_{\mathfrak{c}\to\infty}\mathfrak{a}_{0}=\lim_{\mathfrak{c}\to\infty} \frac{1}{\sqrt{I^{0}}}=\frac{1}{\sqrt{\rho}},\] which implies that \(\lim_{\mathfrak{c}\to\infty}\chi_{0}^{\mathfrak{c}}=\chi_{0}\). For \(j=1,2,3\), a direct calculation shows that \[\lim_{\mathfrak{c}\to\infty}T^{0j}=\lim_{\mathfrak{c}\to\infty}\frac{n_{0}}{ \mathfrak{c}}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}u^{0}u_{j}=\rho\mathfrak{u}_ {j} \tag{4.79}\] and \[\lim_{\mathfrak{c}\to\infty}T^{0jj}=\lim_{\mathfrak{c}\to\infty}\frac{n_{0}}{ \mathfrak{c}\gamma K_{2}(\gamma)}\left[\left(6K_{3}(\gamma)+\gamma K_{2}( \gamma)\right)u^{0}u_{j}^{2}+\mathfrak{c}^{2}K_{3}(\gamma)u^{0}\right]=\rho \mathfrak{u}_{j}^{2}+\rho\theta.\] Thus one has \[\lim_{\mathfrak{c}\to\infty}\mathfrak{a}_{j}=\lim_{\mathfrak{c}\to\infty} \frac{T^{0j}}{I_{0}}=\mathfrak{u}_{j}\] and \[\lim_{\mathfrak{c}\to\infty}\mathfrak{b}_{j}=\lim_{\mathfrak{c}\to\infty} \sqrt{T^{0jj}-\frac{(T^{0j})^{2}}{I^{0}}}=\sqrt{\rho\theta},\] which implies that \(\lim_{\mathfrak{c}\to\infty}\chi_{j}^{\mathfrak{c}}=\chi_{j}\), \(j=1,2,3\). The proof for \(\lim_{\mathfrak{c}\to\infty}\chi_{4}^{\mathfrak{c}}=\chi_{4}\) is much more complicated. It is clear that \[\chi_{4}^{\mathfrak{c}} =\frac{p^{0}+\mathfrak{c}\mathfrak{e}+\mathfrak{c}\sum_{i=1}^{3} \lambda_{i}p_{i}}{\mathfrak{c}\zeta}\sqrt{\mathbf{M}_{\mathfrak{c}}}\] \[=\frac{(p^{0}+\mathfrak{c}\mathfrak{e})(p^{0}-\mathfrak{c} \mathfrak{e})+\mathfrak{c}(p^{0}-\mathfrak{c}\mathfrak{e})\sum_{i=1}^{3} \lambda_{i}p_{i}}{\mathfrak{c}\zeta(p^{0}-\mathfrak{c}\mathfrak{e})}\sqrt{ \mathbf{M}_{\mathfrak{c}}}\] \[=\frac{\mathrm{Num}}{\mathrm{Den}}\sqrt{\mathbf{M}_{\mathfrak{c}}}.\] We first calculate the numerator. Denote \(\hat{A}(\gamma):=\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{6}{\gamma}-\frac{K_ {2}(\gamma)}{K_{3}(\gamma)}\). 
It follows from Lemma 2.1 that \[\hat{A}(\gamma)=-\frac{1}{\gamma}+O(\gamma^{-2}).\] Now we have \[1+\mathfrak{e}=1+\frac{\frac{1}{\gamma}-\frac{(u^{0})^{2}}{\gamma^{T}Q_{0}} \frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\hat{A}(\gamma)\frac{|u|^{2}}{\gamma^{T}Q_{ 0}}}{\mathfrak{e}^{0}-\hat{A}(\gamma)\frac{w^{0}|u|^{2}}{\epsilon T_{0}}}\] \[=\frac{\frac{u^{0}}{\mathfrak{c}}-\hat{A}(\gamma)\frac{u^{0}|u|^{2}}{ \mathfrak{c}T_{0}}+\frac{1}{\gamma}-\frac{(u^{0})^{2}}{\mathfrak{c}^{2}}\frac{K _{3}(\gamma)}{K_{2}(\gamma)}-\hat{A}(\gamma)\frac{|u|^{2}}{\gamma T_{0}}}{ \frac{u^{0}}{\mathfrak{c}}-\hat{A}(\gamma)\frac{u^{0}|u|^{2}}{\mathfrak{c}T_{ 0}}}\] \[=\frac{1-\frac{u^{0}}{\mathfrak{c}}\frac{K_{3}(\gamma)}{K_{2}( \gamma)}-\hat{A}(\gamma)\frac{|u|^{2}}{\mathfrak{c}T_{0}}+\frac{\mathfrak{c}}{ \gamma u^{0}}-\hat{A}(\gamma)\frac{c|u|^{2}}{\gamma u^{0}T_{0}}}{1-\hat{A}( \gamma)\frac{|u|^{2}}{\mathfrak{c}T_{0}}}\] and \[\lambda_{i}=\frac{\hat{A}(\gamma)\frac{(u^{0})^{2}}{\mathfrak{c}^{2}T_{0}}u_{i }}{\frac{u^{0}}{\mathfrak{c}}-\hat{A}(\gamma)\frac{u^{0}|u|^{2}}{\mathfrak{c }T_{0}}}=\frac{\big{(}-\frac{1}{\gamma}+O(\gamma^{-2})\big{)}\frac{(u^{0})^{2 }}{\mathfrak{c}^{2}T_{0}}u_{i}}{\frac{u^{0}}{\mathfrak{c}}-\hat{A}(\gamma) \frac{u^{0}|u|^{2}}{\mathfrak{c}T_{0}}},\quad i=1,2,3,\] thus we obtain \[\lim_{\mathfrak{c}\to\infty}\mathfrak{c}=\lim_{\mathfrak{c}\to \infty}\frac{\frac{1}{\gamma}-\frac{(u^{0})^{2}}{\mathfrak{c}^{2}}\frac{K_{3} (\gamma)}{K_{2}(\gamma)}-\hat{A}(\gamma)\frac{|u|^{2}}{\gamma T_{0}}}{\frac{ u^{0}}{\mathfrak{c}}-\hat{A}(\gamma)\frac{u^{0}|u|^{2}}{\mathfrak{c}T_{0}}}=-1,\] \[\lim_{\mathfrak{c}\to\infty}\gamma(1+\mathfrak{c})=-\frac{3}{2}+ \frac{|\mathfrak{u}|^{2}}{2\theta},\] \[\lim_{\mathfrak{c}\to\infty}\gamma\lambda_{i}=-\frac{\mathfrak{ u}_{i}}{\theta},\quad i=1,2,3,\] where we used the fact that \[\lim_{\mathfrak{c}\to\infty}\gamma\Big{(}1-\frac{u^{0}}{\mathfrak{c}}\frac{K_{3}( \gamma)}{K_{2}(\gamma)}\Big{)}=\lim_{\mathfrak{c}\to\infty}\gamma\Big{[}- \frac{u^{0}}{\mathfrak{c}}\Big{(}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1\Big{)} -\Big{(}\frac{u^{0}}{\mathfrak{c}}-1\Big{)}\Big{]}=-\frac{5}{2}-\frac{| \mathfrak{u}|^{2}}{2\theta}.\] Hence we get \[\lim_{\mathfrak{c}\to\infty}\text{Num} =\lim_{\mathfrak{c}\to\infty}\Big{(}|p|^{2}+\gamma T_{0}(1- \mathfrak{c})(1+\mathfrak{c})+\gamma T_{0}\Big{(}\frac{p^{0}}{\mathfrak{c}}- \mathfrak{c}\Big{)}\sum_{i=1}^{3}\lambda_{i}p_{i}\Big{)}\] \[=|p|^{2}-3\theta+|\mathfrak{u}|^{2}-2\sum_{i=1}^{3}\mathfrak{u}_ {i}p_{i}=|p-\mathfrak{u}|^{2}-3\theta. \tag{4.80}\] We next consider the denominator. Notice that \[\text{Den}=\mathfrak{c}\zeta(p^{0}-\mathfrak{c}\mathfrak{c})=\mathfrak{c}^{2} \zeta\Big{(}\frac{p^{0}}{\mathfrak{c}}-\mathfrak{c}\Big{)}\] and \[\lim_{\mathfrak{c}\to\infty}\Big{(}\frac{p^{0}}{\mathfrak{c}}- \mathfrak{c}\Big{)}=2,\] then we focus on the quantity \(\mathfrak{c}^{2}\zeta=T_{0}\sqrt{\gamma^{2}\zeta^{2}}\). By the expression of \(\zeta\) in the appendix, one has \[\zeta^{2} =\Big{(}\sum_{i,j=1}^{3}\lambda_{i}\lambda_{j}T^{0ij}\Big{)}+ \Big{(}2\sum_{i=1}^{3}\lambda_{i}\mathfrak{c}T^{0i}+2\sum_{i=1}^{3}\frac{ \lambda_{i}}{\mathfrak{c}}T^{00i}\Big{)}+\Big{(}\frac{T^{000}}{\mathfrak{c}^{2} }+\mathfrak{c}^{2}I^{0}+2\frac{\mathfrak{c}}{\mathfrak{c}}T^{00}\Big{)}\] \[:=\mathcal{I}_{1}+\mathcal{I}_{2}+\mathcal{I}_{3}. 
\tag{4.81}\] It is easy to see that \[\lim_{\mathfrak{c}\to\infty}T^{0ij}=\lim_{\mathfrak{c}\to\infty}\frac{n_{0}}{ \mathfrak{c}\gamma K_{2}(\gamma)}\left[\left(6K_{3}(\gamma)+\gamma K_{2}(\gamma )\right)u^{0}u_{i}u_{j}+\mathfrak{c}^{2}K_{3}(\gamma)u^{0}\delta_{ij}\right] =\rho\mathfrak{u}_{i}\mathfrak{u}_{j}+\rho\theta\delta_{ij},\] which yields that \[\lim_{\mathfrak{c}\to\infty}\gamma^{2}\mathcal{I}_{1}=\lim_{\mathfrak{c}\to \infty}\sum_{i,j=1}^{3}(\gamma\lambda_{i})\cdot(\gamma\lambda_{j})T^{0ij}\] \[=\sum_{i,j=1}^{3}\Big{(}-\frac{\mathfrak{u}_{i}}{\theta}\Big{)} \Big{(}-\frac{\mathfrak{u}_{j}}{\theta}\Big{)}(\rho\mathfrak{u}_{i}\mathfrak{u}_ {j}+\rho\theta\delta_{ij})\] \[=\frac{\rho|\mathfrak{u}|^{4}}{\theta^{2}}+\frac{\rho|\mathfrak{u }|^{2}}{\theta}. \tag{4.82}\] We notice that \[\mathfrak{e}T^{0i}+\frac{T^{00i}}{\mathfrak{c}}=(\mathfrak{c}+1)T^{0i}+\Big{(} \frac{T^{00i}}{\mathfrak{c}}-T^{0i}\Big{)}\] and \[\frac{T^{00i}}{\mathfrak{c}}-T^{0i} =\frac{n_{0}}{\mathfrak{c}^{2}\gamma K_{2}(\gamma)}\left[\left(5 K_{3}(\gamma)+\gamma K_{2}(\gamma)\right)\left(u^{0}\right)^{2}u_{i}+K_{3}( \gamma)|u|^{2}u_{i}\right]-\frac{n_{0}}{\mathfrak{c}}\frac{K_{3}(\gamma)}{K_{ 2}(\gamma)}u^{0}u_{i}\] \[=n_{0}u_{i}\Big{\{}\frac{5}{\gamma}\frac{K_{3}(\gamma)}{K_{2}( \gamma)}\Big{(}\frac{u^{0}}{\mathfrak{c}}\Big{)}^{2}+\frac{|u|^{2}}{\mathfrak{ c}\gamma}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}+\frac{u^{0}}{\mathfrak{c}} \Big{(}\frac{u^{0}}{\mathfrak{c}}-1\Big{)}+\frac{u^{0}}{\mathfrak{c}}\Big{(} 1-\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\Big{)}\Big{\}}. \tag{4.83}\] Then it follows from (4.79) and (4.83) that \[\lim_{\mathfrak{c}\to\infty}\gamma(\mathfrak{c}+1)T^{0i}=\rho \mathfrak{u}_{i}\Big{(}-\frac{3}{2}+\frac{|\mathfrak{u}|^{2}}{2\theta}\Big{)}\] and \[\lim_{\mathfrak{c}\to\infty}\gamma\Big{(}\frac{T^{00i}}{ \mathfrak{c}}-T^{0i}\Big{)}=\rho\mathfrak{u}_{i}\Big{(}5+0+\frac{|\mathfrak{u }|^{2}}{2\theta}-\frac{5}{2}\Big{)}=\rho\mathfrak{u}_{i}\Big{(}\frac{5}{2}+ \frac{|\mathfrak{u}|^{2}}{2\theta}\Big{)}.\] Hence one obtains \[\lim_{\mathfrak{c}\to\infty}\gamma^{2}\mathcal{I}_{2} =2\lim_{\mathfrak{c}\to\infty}\sum_{i=1}^{3}(\gamma\lambda_{i}) \cdot\gamma\Big{(}\frac{T^{00i}}{\mathfrak{c}}-T^{0i}\Big{)}+2\lim_{ \mathfrak{c}\to\infty}\sum_{i=1}^{3}(\gamma\lambda_{i})\cdot\gamma(\mathfrak{ c}+1)T^{0i}\] \[=2\sum_{i=1}^{3}\Big{(}-\frac{\mathfrak{u}_{i}}{\theta}\Big{)} \cdot\Big{[}\rho\mathfrak{u}_{i}\Big{(}\frac{5}{2}+\frac{|\mathfrak{u}|^{2}}{ 2\theta}\Big{)}+\rho\mathfrak{u}_{i}\Big{(}-\frac{3}{2}+\frac{|\mathfrak{u}|^ {2}}{2\theta}\Big{)}\Big{]}\] \[=-2\frac{\rho|\mathfrak{u}|^{4}}{\theta^{2}}-2\frac{\rho| \mathfrak{u}|^{2}}{\theta}. \tag{4.84}\] We finally consider \(\gamma^{2}\mathcal{I}_{3}\). 
It holds that \[\frac{\mathcal{I}_{3}}{n_{0}} =\frac{1}{\mathfrak{c}^{2}}\frac{T^{000}}{n_{0}}+\mathfrak{c}^{2 }\frac{I^{0}}{n_{0}}+2\frac{\mathfrak{c}}{\mathfrak{c}}\frac{T^{00}}{n_{0}}\] \[=\frac{1}{\mathfrak{c}^{3}\gamma K_{2}(\gamma)}\left[\left(3K_{3} (\gamma)+\gamma K_{2}(\gamma)\right)\left(u^{0}\right)^{3}+3K_{3}(\gamma)u^{0 }|u|^{2}\right]+\mathfrak{c}^{2}\frac{u^{0}}{\mathfrak{c}}\] \[\qquad+2\mathfrak{c}\Big{(}\frac{1}{\mathfrak{c}^{2}}\frac{K_{3} (\gamma)}{K_{2}(\gamma)}(u^{0})^{2}-\frac{1}{\gamma}\Big{)}\] \[=\frac{3}{\gamma}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\Big{(}\frac {u^{0}}{\mathfrak{c}}\Big{)}^{3}+\Big{(}\frac{u^{0}}{\mathfrak{c}}\Big{)}^{3}+ \frac{3}{\gamma}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\frac{u^{0}|u|^{2}}{ \mathfrak{c}^{3}}+\mathfrak{c}^{2}\frac{u^{0}}{\mathfrak{c}}+2\mathfrak{c} \frac{K_{3}(\gamma)}{K_{2}(\gamma)}\Big{(}\frac{u^{0}}{\mathfrak{c}}\Big{)}^{2 }-\mathfrak{c}\frac{2}{\gamma}\] \[=\frac{3}{\gamma}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\frac{u^{0}|u|^ {2}}{\mathfrak{c}^{3}}+\frac{3}{\gamma}\frac{K_{3}(\gamma)}{K_{2}(\gamma)} \Big{(}\frac{u^{0}}{\mathfrak{c}}\Big{)}^{2}\Big{(}\frac{u^{0}}{\mathfrak{c}}-1 \Big{)}+\mathfrak{c}\Big{(}2\frac{u^{0}}{\mathfrak{c}}+2\Big{)}\Big{(}\frac{K_ {3}(\gamma)}{K_{2}(\gamma)}-1\Big{)}\Big{(}\frac{u^{0}}{\mathfrak{c}}-1\Big{)}\] \[\qquad+2\mathfrak{c}\Big{(}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1- \frac{5}{2\gamma}\Big{)}+\frac{u^{0}}{\mathfrak{c}}\Big{(}\frac{u^{0}}{ \mathfrak{c}}-1\Big{)}^{2}+2\frac{u^{0}}{\mathfrak{c}}(1+\mathfrak{c})\Big{(} \frac{u^{0}}{\mathfrak{c}}-1\Big{)}\] \[\qquad+\frac{u^{0}}{\mathfrak{c}}(1+\mathfrak{c})^{2}+\frac{3}{ \gamma}\Big{[}\Big{(}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1\Big{)}\Big{(}\frac {u^{0}}{\mathfrak{c}}\Big{)}^{2}+\Big{(}\frac{u^{0}}{\mathfrak{c}}+1\Big{)} \Big{(}\frac{u^{0}}{\mathfrak{c}}-1\Big{)}+(1+\mathfrak{c})\Big{]}.\] Thus we have \[\lim_{\mathfrak{c}\to\infty}\gamma^{2}\mathcal{I}_{3}=\frac{3}{2}\rho+\frac{ \rho|\mathfrak{u}|^{4}}{\theta^{2}}+\frac{\rho|\mathfrak{u}|^{2}}{\theta}. \tag{4.85}\] Combining (4.81), (4.82), (4.84) and (4.85), we finally obtain \[\lim_{\mathfrak{c}\to\infty}\gamma^{2}\zeta^{2}=\frac{3}{2}\rho.\] Hence one obtains \[\lim_{\mathfrak{c}\to\infty}\operatorname{Den}=\theta\sqrt{6\rho},\] which, together with (4.80), yields that \[\lim_{\mathfrak{c}\to\infty}\chi_{4}^{\mathfrak{c}}=\frac{|p-\mathfrak{u}|^{2 }-3\theta}{\theta\sqrt{6\rho}}=\chi_{4}.\] Therefore the proof is completed. With above preparations, we shall prove the coercivity estimate for the linear operator \(\mathbf{L}_{\mathfrak{c}}\). **Proposition 4.13** (Uniform coercivity estimate on \(\mathbf{L}_{\mathfrak{c}}\)).: There exists a positive constant \(\zeta_{0}>0\), which is independent of \(\mathfrak{c}\), such that \[\langle\mathbf{L}_{\mathfrak{c}}g,g\rangle\geq\zeta_{0}\|\{\mathbf{I}- \mathbf{P}_{\mathfrak{c}}\}g\|_{\nu_{\mathfrak{c}}}^{2}\] for any \(g\in L_{\nu}^{2}(\mathbb{R}^{3})\). Proof.: It is clear that one only needs to show that there is a positive constant \(\zeta_{0}>0\), which is independent of \(\mathfrak{c}\), such that \[\langle\mathbf{L}_{\mathfrak{c}}g,g\rangle\geq\zeta_{0}\|g\|_{\nu_{\mathfrak{ c}}}^{2}=\zeta_{0} \tag{4.86}\] holds for any \(\mathfrak{c}\) and any \(g\in\mathcal{N}_{\mathfrak{c}}^{\perp}\) with \(\|g\|_{\nu_{\mathfrak{c}}}=1\). 
For any given \(\mathfrak{c}\), the linearized Boltzmann collision operator \(\mathbf{L}_{\mathfrak{c}}\) satisfies the well-known hypoco-ercivity (see [19] for instance), i.e., there exists a positive constant \(\alpha_{\mathfrak{c}}>0\), such that \[\langle\mathbf{L}_{\mathfrak{c}}g,g\rangle\geq\alpha_{\mathfrak{c}}\|g\|_{\nu _{\mathfrak{c}}}^{2}=\alpha_{\mathfrak{c}} \tag{4.87}\] for any \(g\in\mathcal{N}_{\mathfrak{c}}^{\perp}\) with \(\|g\|_{\nu_{\mathfrak{c}}}=1\). Denote \[\zeta_{\mathfrak{c}}:=\inf_{\begin{subarray}{c}g\in\mathcal{N}_{\mathfrak{c} }^{\perp}\\ \|g\|_{\nu_{\mathfrak{c}}}=1\end{subarray}}\langle\mathbf{L}_{\mathfrak{c}}g,g\rangle. \tag{4.88}\] It follows from (4.87) that \(\zeta_{\mathfrak{c}}\geq\alpha_{\mathfrak{c}}>0\) for any \(\mathfrak{c}\). To prove (4.86), it suffices to show that \[\inf_{\mathfrak{c}\geq 1}\zeta_{\mathfrak{c}}>0. \tag{4.89}\] We prove (4.89) by contradiction. Assume that (4.89) is not true, then there exists a sequence \(\{\zeta_{\mathfrak{c}_{n}}\}\) such that \[\lim_{n\to\infty}\mathfrak{c}_{n}=\infty\quad\text{and}\quad\lim_{n\to\infty} \zeta_{\mathfrak{c}_{n}}=0. \tag{4.90}\] For each \(n\), owing to (4.88), there exists \(g_{n}\in\mathcal{N}_{\mathfrak{c}_{n}}^{\perp}\) with \(\|g_{n}\|_{\nu_{\mathfrak{c}_{n}}}=1\), so that \[\zeta_{\mathfrak{c}_{n}}\leq\langle\mathbf{L}_{\mathfrak{c}_{n}}g_{n},g_{n} \rangle<\zeta_{\mathfrak{c}_{n}}+\frac{1}{n},\] which, together with (4.90), yields that \[\lim_{n\to\infty}\langle\mathbf{L}_{\mathfrak{c}_{n}}g_{n},g_{n}\rangle=0. \tag{4.91}\] It is clear that \(\{g_{n}\}_{n=1}^{\infty}\) is a bounded sequence in \(L^{2}(\mathbb{R}^{3})\). Since \(L^{2}\) is a Hilbert space, based on the Eberlein-Smulian theorem, we have the weakly convergent sequence (up to extracting a subsequence with an abuse of notation) \(g_{n}\rightharpoonup g\) in \(L^{2}\). Moreover, for any fixed \(N\geq 1\), one has \[\chi_{\{|p|\leq N\}}\sqrt{\nu_{\mathfrak{c}_{n}}}g_{n}\rightharpoonup\chi_{\{|p| \leq N\}}\sqrt{\nu}g\quad\text{in }L^{2},\] where \(\nu(p)=\lim_{\mathfrak{c}\to\infty}\nu_{\mathfrak{c}}(p)\). Hence, by the weak semi-continuity, for any fixed \(N\), we have \[\|\chi_{\{|p|\leq N\}}\sqrt{\nu}g\|_{2}\leq\liminf_{n\to\infty}\| \chi_{\{|p|\leq N\}}\sqrt{\nu_{\mathfrak{c}_{n}}}g_{n}\|_{2}\leq 1,\] which implies that \[\|\sqrt{\nu}g\|_{2}\leq 1. \tag{4.92}\] For later use, we denote \[\mathbf{L}f:=\nu f-\mathbf{K}f,\] where \[\mathbf{K}f:=\int_{\mathbb{R}^{3}}k(p,q)f(q)dq=\int_{\mathbb{R}^{ 3}}[k_{2}(p,q)-k_{1}(p,q)]f(q)dq \tag{4.93}\] with \(k_{1}(p,q)\) and \(k_{2}(p,q)\) defined in (4.42)-(4.43). We also denote \(\mathcal{N}\) as the null space of \(\mathbf{L}\), that is, \(\mathcal{N}:=\mathrm{span}\{\chi_{0},\chi_{1},\chi_{2},\chi_{3},\chi_{4}\}\). Clearly, we have \[0\leq\left\langle\mathbf{L}_{\mathfrak{c}_{n}}g_{n},g_{n}\right\rangle =\left\|g_{n}\right\|_{\nu_{\mathfrak{c}_{n}}}^{2}-\left\langle( \mathbf{K}_{\mathfrak{c}_{n}}-\mathbf{K})g_{n},g_{n}\right\rangle-\left\langle \mathbf{K}g_{n},g_{n}\right\rangle\] \[=1-\left\langle(\mathbf{K}_{\mathfrak{c}_{n}}-\mathbf{K})g_{n},g _{n}\right\rangle-\left\langle\mathbf{K}g_{n},g_{n}\right\rangle. 
\tag{4.94}\] Since \(\mathbf{K}\) is a compact operator on \(L^{2}\), it holds that \[\lim_{n\to\infty}\|\mathbf{K}g_{n}-\mathbf{K}g\|_{2}=0.\] Hence we have \[\left\langle\mathbf{K}g_{n},g_{n}\right\rangle-\left\langle \mathbf{K}g,g\right\rangle=\left\langle\mathbf{K}g_{n}-\mathbf{K}g,g_{n} \right\rangle+\left\langle\mathbf{K}g,g_{n}-g\right\rangle\to 0,\quad n\to\infty.\] It follows from Lemmas 4.8-4.9 that \[\left\langle(\mathbf{K}_{\mathfrak{c}_{n}}-\mathbf{K})g_{n},g_{n}\right\rangle \to 0,\quad n\to\infty. \tag{4.95}\] Combining (4.91), (4.94)-(4.95), we have \[\left\langle\mathbf{K}g,g\right\rangle=1, \tag{4.96}\] which, together with (4.92), yields that \[0\leq\left\langle\mathbf{L}g,g\right\rangle=\|g\|_{\nu}^{2}- \left\langle\mathbf{K}g,g\right\rangle\leq 0.\] Thus we have \(g\in\mathcal{N}\). Next, we shall show that \(g\in\mathcal{N}^{\perp}\). Recall \(\chi_{\alpha}^{\mathfrak{c}_{n}}\), \(\chi_{\alpha}\) defined in (4.77) (with \(\mathfrak{c}\) replaced by \(\mathfrak{c}_{n}\)) and (4.78). Notice that \[0=\left\langle g_{n},\chi_{\alpha}^{\mathfrak{c}_{n}}\right\rangle=\left\langle g _{n}-g,\chi_{\alpha}^{\mathfrak{c}_{n}}-\chi_{\alpha}\right\rangle+\left\langle g _{n}-g,\chi_{\alpha}\right\rangle+\left\langle g,\chi_{\alpha}^{\mathfrak{c}_ {n}}-\chi_{\alpha}\right\rangle+\left\langle g,\chi_{\alpha}\right\rangle,\quad \alpha=0,1,\cdots,4. \tag{4.97}\] Using Lemma 4.12 and \(g_{n}\rightharpoonup g\) in \(L^{2}\), we take the limit \(n\to\infty\) in (4.97) to obtain \[\left\langle g,\chi_{\alpha}\right\rangle=0,\quad\alpha=0,1,\cdots,4,\] which implies that \(g\in\mathcal{N}^{\perp}\). Since we also have \(g\in\mathcal{N}\), one concludes that \(g=0\), which contradicts with (4.96). Therefore the proof of Proposition 4.13 is completed. ### Uniform estimate on \(\mathbf{L}_{\epsilon}^{-1}\) To apply the Hilbert expansion procedure, we need uniform-in-\(\mathfrak{c}\) estimate on \(\mathbf{L}_{\epsilon}^{-1}\). The proof is inspired by [33]. 
**Lemma 4.14**.: _For any fixed \(0\leq\lambda<1\), it holds that_ \[\mathbf{M}_{\epsilon}^{-\frac{\lambda}{2}}(p)|\mathbf{K}_{\epsilon}f(p)|\lesssim \|\mathbf{M}_{\epsilon}^{-\frac{\lambda}{2}}f\|_{2},\quad p\in\mathbb{R}^{3}.\] Proof.: It follows from (2.2) and Lemma 4.3 that \[\mathbf{M}_{\epsilon}^{-\frac{\lambda}{2}}(p)|\mathbf{K}_{\epsilon 1}f(p)| \lesssim\int_{\mathbb{R}^{3}}|p-q|\mathbf{M}_{\epsilon}^{\frac{1- \lambda}{2}}(p)\mathbf{M}_{\epsilon}^{\frac{1}{2}}(q)|f(q)|dq\] \[\lesssim\int_{\mathbb{R}^{3}}|p-q|e^{-(1-\lambda)\bar{c}_{1}|p|}e ^{-\bar{c}_{1}|q|}|f(q)|dq\] \[\lesssim\int_{\mathbb{R}^{3}}e^{-\frac{c_{1}}{2}|q|}|f(q)|dq \lesssim\|f\|_{2}.\] Using (2.3), one has \[\mathbf{M}_{\epsilon}^{-\frac{\lambda}{2}}(p)|\mathbf{K}_{ \epsilon 2}f(p)|\] \[\leq\frac{\mathfrak{c}}{p^{0}}\int_{\mathbb{R}^{3}}\frac{dq}{q^{ 0}}\int_{\mathbb{R}^{3}}\frac{dq^{\prime}}{q^{\prime 0}}\int_{\mathbb{R}^{3}} \frac{dp^{\prime}}{p^{\prime 0}}W\left(p,q\mid p^{\prime},q^{\prime}\right) \mathbf{M}_{\epsilon}^{\frac{1+\lambda}{2}}(q)\mathbf{M}_{\epsilon}^{\frac{1- \lambda}{2}}(q^{\prime})|\mathbf{M}_{\epsilon}^{-\frac{\lambda}{2}}(p^{\prime })f(p^{\prime})|\] \[=\frac{\mathfrak{c}}{p^{0}}\int_{\mathbb{R}^{3}}\frac{dq}{q^{0}} \int_{\mathbb{R}^{3}}\frac{dq^{\prime}}{q^{\prime 0}}\int_{\mathbb{R}^{3}} \frac{dp^{\prime}}{p^{\prime 0}}\bar{s}\delta^{(4)}\left(p^{\mu}+p^{\prime\mu}-q^{ \mu}-q^{\prime\mu}\right)\mathbf{M}_{\epsilon}^{\frac{1+\lambda}{2}}(p^{ \prime})\mathbf{M}_{\epsilon}^{\frac{1-\lambda}{2}}(q^{\prime})|\mathbf{M}_{ \epsilon}^{-\frac{\lambda}{2}}(q)f(q)|\] \[\lesssim\int_{\mathbb{R}^{3}}\xi(p,q)|\mathbf{M}_{\epsilon}^{- \frac{\lambda}{2}}(q)f(q)|dq, \tag{4.98}\] where we exchanged \(p^{\prime}\) and \(q\) in the last second step with \[\xi(p,q):=\frac{\mathfrak{c}}{p^{0}q^{0}}\int_{\mathbb{R}^{3}}\frac{dq^{\prime }}{q^{\prime 0}}\int_{\mathbb{R}^{3}}\frac{dp^{\prime}}{p^{\prime 0}}\bar{s} \delta^{(4)}\left(p^{\mu}+p^{\prime\mu}-q^{\mu}-q^{\prime\mu}\right)\mathbf{M} _{\epsilon}^{\frac{1-\lambda}{2}}(p^{\prime})\mathbf{M}_{\epsilon}^{\frac{1- \lambda}{2}}(q^{\prime})\] and \[\bar{g}^{2}=g^{2}+\frac{1}{2}(p^{\mu}+q^{\mu})\cdot(p^{\prime\mu}+q^{\prime \mu}-p^{\mu}-q^{\mu})\,,\quad\bar{s}=\bar{g}^{2}+4\mathfrak{c}^{2}.\] Applying Lorentz transformation for \(\xi(p,q)\), one has \[\xi(p,q)=\frac{\mathfrak{c}c_{0}^{1-\lambda}}{p^{0}q^{0}}\int_{\mathbb{R}^{3}} \frac{dq^{\prime}}{q^{\prime 0}}\int_{\mathbb{R}^{3}}\frac{dp^{\prime}}{p^{\prime 0}}s( \bar{p},p^{\prime})\delta^{(4)}\left(\bar{p}^{\mu}+p^{\prime\mu}-\bar{q}^{\mu} -q^{\prime\mu}\right)e^{-(1-\lambda)\frac{\mathfrak{c}(p^{\prime 0}+q^{0})}{2T_{0}}},\] where \(s(\bar{p},p^{\prime})=-(\bar{p}^{\mu}+p^{\prime\mu})(\bar{p}_{\mu}+p^{\prime}_ {\mu})\). 
By similar arguments as in [47], one can show that \[\xi(p,q) =\frac{\mathfrak{c}c_{0}^{1-\lambda}\pi s^{3/2}}{4gp^{0}q^{0}} \int_{0}^{\infty}\frac{y\left(1+\sqrt{y^{2}+1}\right)}{\sqrt{y^{2}+1}}e^{- \frac{1-\lambda}{2T_{0}}\mathfrak{c}(\bar{p}^{0}+\bar{q}^{0})\sqrt{\bar{y}^{2} +1}}I_{0}\left(\frac{(1-\lambda)\mathfrak{c}|\bar{p}\times\bar{q}|}{gT_{0}}y \right)dy\] \[=\frac{\mathfrak{c}c_{0}^{1-\lambda}\pi s^{3/2}}{4gp^{0}q^{0}} \int_{0}^{\infty}\frac{y\left(1+\sqrt{y^{2}+1}\right)}{\sqrt{y^{2}+1}}e^{- \tilde{\boldsymbol{\ell}}\sqrt{y^{2}+1}}I_{0}\left(\tilde{\mathfrak{J}}y \right)dy, \tag{4.99}\] where \[c_{0}=\frac{n_{0}}{4\pi\mathfrak{c}T_{0}K_{2}(\gamma)},\quad\tilde{\boldsymbol {\ell}}=(1-\lambda)\bar{\boldsymbol{\ell}},\quad\tilde{\boldsymbol{j}}=(1- \lambda)\bar{\boldsymbol{j}},\quad\tilde{\boldsymbol{\ell}}=\mathfrak{c}\frac{ \bar{p}^{0}+\bar{q}^{0}}{2T_{0}},\quad\tilde{\boldsymbol{j}}=\mathfrak{c} \frac{|\bar{p}\times\bar{q}|}{gT_{0}}.\] In view of (2.6)-(2.8), we can rewrite (4.99) as \[\xi(p,q)=\frac{\mathfrak{c}c_{0}^{1-\lambda}\pi s^{3/2}}{4gp^{0}q^{0}}[J_{1}( \tilde{\boldsymbol{\ell}},\tilde{\boldsymbol{j}})+J_{2}(\tilde{\boldsymbol {\ell}},\tilde{\boldsymbol{j}})].\] By similar arguments as in Lemma 4.3, one can prove \[\xi(p,q)\lesssim\Big{[}\frac{1}{\mathfrak{c}}+\frac{1}{|p-q|}\Big{]}e^{-(1- \lambda)\bar{c}_{1}|p-q|}, \tag{4.100}\] which yields that \[\int_{\mathbb{R}^{3}}\xi^{2}(p,q)dq\lesssim\int_{\mathbb{R}^{3}}\Big{(}\frac{1}{ \mathfrak{c}^{2}}+\frac{1}{|p-q|^{2}}\Big{)}e^{-2(1-\lambda)\bar{c}_{1}|p-q|}dq< C<\infty, \tag{4.101}\] where \(C\) is a positive constant independent of \(\mathfrak{c}\). Hence it follows from (4.98) and (4.101) that \[\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(p)|\mathbf{K}_{ \mathfrak{c}2}f(p)| \lesssim\Big{(}\int_{\mathbb{R}^{3}}\xi^{2}(p,q)dq\Big{)}^{\frac{ 1}{2}}\cdot\Big{(}\int_{\mathbb{R}^{3}}|\mathbf{M}_{\mathfrak{c}}^{-\frac{ \lambda}{2}}(q)f(q)|^{2}dq\Big{)}^{\frac{1}{2}}\] \[\lesssim\|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\|_{2}.\] Therefore the proof of Lemma 4.14 is completed. 
**Lemma 4.15**.: _For any fixed \(0\leq\lambda<1\), it holds that_ \[\Big{|}\langle\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}\mathbf{K}_{ \mathfrak{c}1}f,\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\rangle\Big{|} +\Big{|}\langle\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}\mathbf{K}_{ \mathfrak{c}2}f,\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\rangle\Big{|} \lesssim\|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\|_{2}^{2}.\] Proof.: It follows from (2.2) and Lemma 4.3 that \[\Big{|}\langle\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}} \mathbf{K}_{\mathfrak{c}1}f,\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f \rangle\Big{|}\] \[\lesssim\iint_{\mathbb{R}^{3}\times\mathbb{R}^{3}}|p-q|\mathbf{M }_{\mathfrak{c}}^{\frac{1-\lambda}{2}}(p)\mathbf{M}_{\mathfrak{c}}^{\frac{1+ \lambda}{2}}(q)\cdot|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(p)f(p)| \cdot|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(q)f(q)|dpdq\] \[\lesssim\iint_{\mathbb{R}^{3}\times\mathbb{R}^{3}}|p-q|e^{-(1- \lambda)\bar{c}_{1}|p|}e^{-(1+\lambda)\bar{c}_{1}|q|}\cdot|\mathbf{M}_{ \mathfrak{c}}^{-\frac{\lambda}{2}}(p)f(p)|\cdot|\mathbf{M}_{\mathfrak{c}}^{- \frac{\lambda}{2}}(q)f(q)|dpdq\] \[\lesssim\Big{(}\iint_{\mathbb{R}^{3}\times\mathbb{R}^{3}}|p-q|e^ {-(1-\lambda)\bar{c}_{1}|p|}e^{-(1+\lambda)\bar{c}_{1}|q|}\cdot|\mathbf{M}_{ \mathfrak{c}}^{-\frac{\lambda}{2}}(p)f(p)|^{2}dpdq\Big{)}^{\frac{1}{2}}\] \[\qquad\times\Big{(}\iint_{\mathbb{R}^{3}\times\mathbb{R}^{3}}|p-q |e^{-(1-\lambda)\bar{c}_{1}|p|}e^{-(1+\lambda)\bar{c}_{1}|q|}\cdot|\mathbf{M}_{ \mathfrak{c}}^{-\frac{\lambda}{2}}(q)f(q)|^{2}dpdq\Big{)}^{\frac{1}{2}}\] \[\lesssim\|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\|_{2}^ {2}.\] Using (4.98) and (4.100), one has \[\Big{|}\langle\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}} \mathbf{K}_{\mathfrak{c}2}f,\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f \rangle\Big{|} \lesssim\iint_{\mathbb{R}^{3}\times\mathbb{R}^{3}}\xi(p,q)| \mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(q)f(q)|\cdot|\mathbf{M}_{ \mathfrak{c}}^{-\frac{\lambda}{2}}(p)f(p)|dpdq\] \[\lesssim\Big{(}\iint_{\mathbb{R}^{3}\times\mathbb{R}^{3}}\xi(p,q )\cdot|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(p)f(p)|^{2}dpdq\Big{)}^{ \frac{1}{2}}\] \[\lesssim\|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\|_{2}^ {2}.\] Therefore the proof of Lemma 4.15 is completed. 
**Lemma 4.16**.: _For any fixed \(0\leq\lambda<1\), there exists a positive constant \(C\) which is independent of \(\mathfrak{c}\), such that_ \[\langle\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}\mathbf{L}_{\mathfrak{c}}f,\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\rangle\geq\frac{1}{2}\| \mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\|_{\nu_{\mathfrak{c}}}^{2}-C\| f\|_{\nu_{\mathfrak{c}}}^{2}.\] Proof.: For any \(r>0\), it follows from Lemmas 4.15 and 4.6 that \[\Big{|}\langle\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}} \mathbf{K}_{\mathfrak{c}}f,\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f \rangle\Big{|} \lesssim\Big{\{}\int_{|p|\leq r}+\int_{|p|\geq r}\Big{\}}| \mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(p)f(p)|^{2}dp\] \[\lesssim\max\Big{\{}\frac{1}{1+r},\frac{1}{\mathfrak{c}}\Big{\}} \|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\|_{\nu_{\mathfrak{c}}}^{2}+C_{r }\|f\|_{\nu_{\mathfrak{c}}}^{2}.\] Noting \(\mathfrak{c}\gg 1\), taking \(r\) suitably large, we have \[\Big{|}\langle\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}\mathbf{K}f,\mathbf{ M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\rangle\Big{|}\leq\frac{1}{2}\|\mathbf{M}_{ \mathfrak{c}}^{-\frac{\lambda}{2}}f\|_{\nu_{\mathfrak{c}}}^{2}+C\|f\|_{\nu_{ \mathfrak{c}}}^{2},\] which, together with \(\mathbf{L}_{\mathfrak{c}}f=\nu_{\mathfrak{c}}f-\mathbf{K}_{\mathfrak{c}}f\), yields that \[\langle\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}\mathbf{L}_ {\mathfrak{c}}f,\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\rangle =\langle\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}\nu_{ \mathfrak{c}}f,\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\rangle- \langle\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}\mathbf{K}_{\mathfrak{c} }f,\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\rangle\] \[\geq\frac{1}{2}\|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f \|_{\nu_{\mathfrak{c}}}^{2}-C\|f\|_{\nu_{\mathfrak{c}}}^{2}.\] Therefore the proof of Lemma 4.16 is completed. **Proposition 4.17**.: For any fixed \(0\leq\lambda<1\), \(m>\frac{3}{2}\), suppose \(g\in\mathcal{N}_{\mathfrak{c}}^{\perp}\) and \[\|(1+|p|)^{m}\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}g\|_{\infty}<\infty,\] then it holds \[|\mathbf{L}_{\mathfrak{c}}^{-1}g(p)|\lesssim\|(1+|p|)^{m}\mathbf{M}_{ \mathfrak{c}}^{-\frac{\lambda}{2}}g\|_{\infty}\cdot\mathbf{M}_{\mathfrak{c}}^{ \frac{\lambda}{2}}(p),\quad p\in\mathbb{R}^{3}, \tag{4.102}\] where the constant is independent of \(\mathfrak{c}\). Proof.: Let \(f=\mathbf{L}_{\mathfrak{c}}^{-1}g\in\mathcal{N}_{\mathfrak{c}}^{\perp}\), then we have \(g=\mathbf{L}_{\mathfrak{c}}f=\nu_{\mathfrak{c}}f-\mathbf{K}_{\mathfrak{c}}f\). Using Lemma 4.14, we get \[\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(p)|f(p)| \lesssim\nu_{\mathfrak{c}}^{-1}(p)\mathbf{M}_{\mathfrak{c}}^{- \frac{\lambda}{2}}(p)|g(p)|+\nu_{\mathfrak{c}}^{-1}(p)\mathbf{M}_{\mathfrak{c }}^{-\frac{\lambda}{2}}(p)|\mathbf{K}_{\mathfrak{c}}f(p)|\] \[\lesssim\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(p)|g(p)| +\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(p)|\mathbf{K}_{\mathfrak{c}} f(p)|\] \[\lesssim\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(p)|g(p)| +\|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\|_{\nu_{\mathfrak{c}}}. 
\tag{4.103}\] By Proposition 4.13 and Lemma 4.16, we have \[\|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\|_{\nu_{\mathfrak{c}}}^{2} \lesssim\langle\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}} \mathbf{L}_{\mathfrak{c}}f,\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f \rangle+\|f\|_{\nu_{\mathfrak{c}}}^{2}\] \[\lesssim\langle\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}} \mathbf{L}_{\mathfrak{c}}f,\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f \rangle+\langle\mathbf{L}_{\mathfrak{c}}f,f\rangle\] \[\lesssim\|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\|_{\nu _{\mathfrak{c}}}\cdot\|(1+|p|)^{m}\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2 }}g\|_{\infty}\cdot\Big{(}\int_{\mathbb{R}^{3}}\frac{1}{(1+|p|)^{2m}}dp \Big{)}^{\frac{1}{2}},\] which, together with \(m>\frac{3}{2}\), yields that \[\|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\|_{\nu_{\mathfrak{c}}} \lesssim\|(1+|p|)^{m}\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}g\|_{ \infty}<\infty. \tag{4.104}\] Combining (4.104) and (4.103), one has \[\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(p)|f(p)|\lesssim\|(1+|p|)^{m} \mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}g\|_{\infty},\] which concludes (4.102). Therefore the proof of Proposition 4.17 is completed. ## 5. Uniform-in-\(\mathfrak{c}\) estimates on the linear part of Hilbert expansion ### Reformulation of \(F_{n+1}^{\mathfrak{c}}\) For \(n=0,1,\cdots,2k-2\), we decompose \(F_{n+1}^{\mathfrak{c}}\) as \[\frac{F_{n+1}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}=\mathbf{P}_{ \mathfrak{c}}\Big{(}\frac{F_{n+1}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_{ \mathfrak{c}}}}\Big{)}+\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}\Big{(}\frac{F_ {n+1}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}\Big{)},\] where \[\mathbf{P}_{\mathfrak{c}}\Big{(}\frac{F_{n+1}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_ {\mathfrak{c}}}}\Big{)}=\Big{[}a_{n+1}+b_{n+1}\cdot p+c_{n+1}\frac{p^{0}}{ \mathfrak{c}}\Big{]}\sqrt{\mathbf{M}_{\mathfrak{c}}}. 
\tag{5.1}\] Using (1.13)-(1.14) and Lemma 4.11, by tedious calculations, one has \[\int_{\mathbb{R}^{3}}F_{n+1}^{\mathfrak{c}}dp =\int_{\mathbb{R}^{3}}\Big{[}a_{n+1}+b_{n+1}\cdot p+c_{n+1}\frac{p^ {0}}{\mathfrak{c}}\Big{]}\mathbf{M}_{\mathfrak{c}}dp\] \[=\frac{n_{0}u^{0}}{\mathfrak{c}}a_{n+1}+\frac{e_{0}+P_{0}}{ \mathfrak{c}^{3}}u^{0}(u\cdot b_{n+1})+\frac{e_{0}(u^{0})^{2}+P_{0}|u|^{2}}{ \mathfrak{c}^{4}}c_{n+1},\] \[\int_{\mathbb{R}^{3}}\frac{p_{j}p}{p^{0}}F_{n+1}^{\mathfrak{c}}dp =\int_{\mathbb{R}^{3}}\frac{p_{j}p}{p^{0}}\left[a_{n+1}+b_{n+1} \cdot p+c_{n+1}\frac{p^{0}}{\mathfrak{c}}\right]\mathbf{M}_{\mathfrak{c}}dp+ \int_{\mathbb{R}^{3}}\frac{p_{j}p^{0}}{p^{0}}\sqrt{\mathbf{M}_{\mathfrak{c}}} \{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}\left(\frac{F_{n+1}^{\mathfrak{c}}}{ \sqrt{\mathbf{M}_{\mathfrak{c}}}}\right)dp\] \[=\frac{e_{0}+P_{0}}{\mathfrak{c}^{3}}u_{j}ua_{n+1}+\frac{n_{0}}{ \mathfrak{c}\gamma K_{2}(\gamma)}\left(6K_{3}(\gamma)+\gamma K_{2}(\gamma) \right)u_{j}u\left[\left(u\cdot b_{n+1}\right)+\frac{u^{0}}{\mathfrak{c}}c_{n+ 1}\right]\] \[\quad\quad+\mathbf{e}_{j}a_{n+1}\frac{P_{0}}{\mathfrak{c}}+\frac{ \mathfrak{c}n_{0}K_{3}(\gamma)}{\gamma K_{2}(\gamma)}\left(ub_{n+1,j}+u_{j}b_{ n+1}\right)\] \[\quad\quad+\mathbf{e}_{j}\frac{\mathfrak{c}n_{0}K_{3}(\gamma)}{ \gamma K_{2}(\gamma)}\left[\left(u\cdot b_{n+1}\right)+\frac{u^{0}}{\mathfrak{ c}}c_{n+1}\right]+\int_{\mathbb{R}^{3}}\frac{p_{j}p}{p^{0}}\sqrt{\mathbf{M}_{ \mathfrak{c}}}\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}\left(\frac{F_{n+1}^{ \mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}\right)dp,\] \[\int_{\mathbb{R}^{3}}\hat{p}_{j}F_{n+1}^{\mathfrak{c}}dp =\int_{\mathbb{R}^{3}}\hat{p}_{j}\Big{[}a_{n+1}+b_{n+1}\cdot p+ c_{n+1}\frac{p^{0}}{\mathfrak{c}}\Big{]}\mathbf{M}_{\mathfrak{c}}dp+\int_{ \mathbb{R}^{3}}\hat{p}_{j}\sqrt{\mathbf{M}_{\mathfrak{c}}}\{\mathbf{I}- \mathbf{P}_{\mathfrak{c}}\}\left(\frac{F_{n+1}^{\mathfrak{c}}}{\sqrt{\mathbf{ M}_{\mathfrak{c}}}}\right)dp\] \[=n_{0}u_{j}a_{n+1}+\frac{e_{0}+P_{0}}{\mathfrak{c}^{2}}u_{j} \left(u\cdot b_{n+1}\right)+P_{0}b_{n+1,j}\] \[\quad\quad+\frac{e_{0}+P_{0}}{\mathfrak{c}^{3}}u^{0}u_{j}c_{n+1}+ \int_{\mathbb{R}^{3}}\hat{p}_{j}\sqrt{\mathbf{M}_{\mathfrak{c}}}\{\mathbf{I}- \mathbf{P}_{\mathfrak{c}}\}\left(\frac{F_{n+1}^{\mathfrak{c}}}{\sqrt{\mathbf{ M}_{\mathfrak{c}}}}\right)dp,\] \[\int_{\mathbb{R}^{3}}p_{j}F_{n+1}^{\mathfrak{c}}dp =\int_{\mathbb{R}^{3}}p_{j}\Big{[}a_{n+1}+b_{n+1}\cdot p+c_{n+1} \frac{p^{0}}{\mathfrak{c}}\Big{]}\mathbf{M}_{\mathfrak{c}}dp\] \[=\frac{n_{0}}{\mathfrak{c}\gamma K_{2}(\gamma)}\left[\left(6K_{3} (\gamma)+\gamma K_{2}(\gamma)\right)u^{0}u_{j}\left(u\cdot b_{n+1}\right)+ \mathfrak{c}^{2}K_{3}(\gamma)u^{0}b_{n+1,j}\right]\] \[\quad\quad+\frac{e_{0}+P_{0}}{\mathfrak{c}^{3}}u^{0}u_{j}a_{n+1},\] \[\int_{\mathbb{R}^{3}}p^{0}F_{n+1}^{\mathfrak{c}}dp =\int_{\mathbb{R}^{3}}p^{0}\left[a_{n+1}+b_{n+1}\cdot p+c_{n+1} \frac{p^{0}}{\mathfrak{c}}\right]\mathbf{M}_{\mathfrak{c}}dp\] \[=\frac{n_{0}}{\mathfrak{c}\gamma K_{2}(\gamma)}\left[\left(5K_{3} (\gamma)+\gamma K_{2}(\gamma)\right)\left(u^{0}\right)^{2}+K_{3}(\gamma)|u|^{2 }\right](u\cdot b_{n+1})\] \[\quad\quad+\frac{n_{0}}{\mathfrak{c}^{2}\gamma K_{2}(\gamma)} \left[\left(3K_{3}(\gamma)+\gamma K_{2}(\gamma)\right)\left(u^{0}\right)^{2}+3K _{3}(\gamma)|u|^{2}\right]u^{0}c_{n+1}\] \[\quad\quad+\frac{e_{0}\left(u^{0}\right)^{2}+P_{0}|u|^{2}}{ \mathfrak{c}^{3}}a_{n+1},\] where \(\mathbf{e}_{j}\)\((j=1,2,3)\) are the unit base vectors in \(\mathbb{R}^{3}\). 
Next, we shall derive the equations for \((a_{n+1},b_{n+1},c_{n+1})\). Notice that

\[\partial_{t}F_{n+1}^{\mathfrak{c}}+\hat{p}\cdot\nabla_{x}F_{n+1}^{\mathfrak{c}}=\sum_{\begin{subarray}{c}i+j=n+2\\ i,j\geq 0\end{subarray}}Q_{\mathfrak{c}}(F_{i}^{\mathfrak{c}},F_{j}^{\mathfrak{c}})\ \Big{(}\text{or}\ \sum_{\begin{subarray}{c}i+j=n+2\\ i,j\geq 1\end{subarray}}Q_{\mathfrak{c}}(F_{i}^{\mathfrak{c}},F_{j}^{\mathfrak{c}})\ \text{when}\ n=2k-2\Big{)}. \tag{5.2}\]

Integrating (5.2) with respect to \(p\), we have

\[\partial_{t}\left(\frac{n_{0}u^{0}}{\mathfrak{c}}a_{n+1}+\frac{e_{0}+P_{0}}{\mathfrak{c}^{3}}u^{0}\left(u\cdot b_{n+1}\right)+\frac{e_{0}\left(u^{0}\right)^{2}+P_{0}|u|^{2}}{\mathfrak{c}^{4}}c_{n+1}\right)\]
\[\quad+\nabla_{x}\cdot\left(n_{0}ua_{n+1}+\frac{e_{0}+P_{0}}{\mathfrak{c}^{2}}u\left(u\cdot b_{n+1}\right)+P_{0}b_{n+1}+\frac{e_{0}+P_{0}}{\mathfrak{c}^{3}}u^{0}uc_{n+1}\right)\]
\[\quad+\nabla_{x}\cdot\int_{\mathbb{R}^{3}}\hat{p}\sqrt{\mathbf{M}_{\mathfrak{c}}}\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}\left(\frac{F_{n+1}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}\right)dp=0. \tag{5.3}\]

Multiplying (5.2) by \(p_{j}\) and integrating over \(\mathbb{R}^{3}\), one gets

\[\partial_{t}\left(\frac{e_{0}+P_{0}}{\mathfrak{c}^{3}}u^{0}u_{j}a_{n+1}+\frac{n_{0}}{\mathfrak{c}\gamma K_{2}(\gamma)}\left[\left(6K_{3}(\gamma)+\gamma K_{2}(\gamma)\right)u^{0}u_{j}\left(u\cdot b_{n+1}\right)+\mathfrak{c}^{2}K_{3}(\gamma)u^{0}b_{n+1,j}\right]\right.\]
\[\quad\left.+\frac{n_{0}}{\mathfrak{c}^{2}\gamma K_{2}(\gamma)}\left[\left(5K_{3}(\gamma)+\gamma K_{2}(\gamma)\right)\left(u^{0}\right)^{2}+K_{3}(\gamma)|u|^{2}\right]u_{j}c_{n+1}\right)\]
\[\quad+\nabla_{x}\cdot\left(\frac{e_{0}+P_{0}}{\mathfrak{c}^{2}}u_{j}ua_{n+1}+\frac{n_{0}}{\gamma K_{2}(\gamma)}\left(6K_{3}(\gamma)+\gamma K_{2}(\gamma)\right)u_{j}u\left[\left(u\cdot b_{n+1}\right)+\frac{u^{0}}{\mathfrak{c}}c_{n+1}\right]\right)\]
\[\quad+\partial_{x_{j}}\left(P_{0}a_{n+1}\right)+\nabla_{x}\cdot\left[\frac{\mathfrak{c}^{2}n_{0}K_{3}(\gamma)}{\gamma K_{2}(\gamma)}\left(ub_{n+1,j}+u_{j}b_{n+1}\right)\right]\]
\[\quad+\partial_{x_{j}}\left(\frac{\mathfrak{c}^{2}n_{0}K_{3}(\gamma)}{\gamma K_{2}(\gamma)}\left[\left(u\cdot b_{n+1}\right)+\frac{u^{0}}{\mathfrak{c}}c_{n+1}\right]\right)+\nabla_{x}\cdot\int_{\mathbb{R}^{3}}p_{j}\hat{p}\sqrt{\mathbf{M}_{\mathfrak{c}}}\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}\left(\frac{F_{n+1}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}\right)dp=0 \tag{5.4}\]

for \(j=1,2,3\) with \(b_{n+1}=\left(b_{n+1,1},b_{n+1,2},b_{n+1,3}\right)^{t}\).
Multiplying (5.2) by \(\frac{p^{0}}{\mathfrak{c}}\) and integrating over \(\mathbb{R}^{3}\), one obtains that \[\partial_{t}\left(\frac{e_{0}\left(u^{0}\right)^{2}+P_{0}|u|^{2}} {\mathfrak{c}^{4}}a_{n+1}+\frac{n_{0}}{\mathfrak{c}^{2}\gamma K_{2}(\gamma)} \left[\left(5K_{3}(\gamma)+\gamma K_{2}(\gamma)\right)\left(u^{0}\right)^{2}+K _{3}(\gamma)|u|^{2}\right]\left(u\cdot b_{n+1}\right)\right.\] \[\qquad+\frac{n_{0}}{\mathfrak{c}^{3}\gamma K_{2}(\gamma)}\left[ \left(3K_{3}(\gamma)+\gamma K_{2}(\gamma)\right)\left(u^{0}\right)^{2}+3K_{3}( \gamma)|u|^{2}\right]u^{0}c_{n+1}\right)\] \[\qquad+\nabla_{x}\cdot\left(\frac{e_{0}+P_{0}}{\mathfrak{c}^{3}} u^{0}ua_{n+1}+\frac{n_{0}}{\mathfrak{c}\gamma K_{2}(\gamma)}\left[\left(6K_{3}( \gamma)+\gamma K_{2}(\gamma)\right)u^{0}u\left(u\cdot b_{n+1}\right)+ \mathfrak{c}^{2}K_{3}(\gamma)u^{0}b_{n+1}\right]\right.\] \[\qquad+\frac{n_{0}}{\mathfrak{c}^{2}\gamma K_{2}(\gamma)}\left[ \left(5K_{3}(\gamma)+\gamma K_{2}(\gamma)\right)\left(u^{0}\right)^{2}+K_{3}( \gamma)|u|^{2}\right]uc_{n+1}\right)=0. \tag{5.5}\] After a tedious computation, we can rewrite (5.3)-(5.5) into the following linear symmetric hyperbolic system: \[\mathbf{A}_{0}\partial_{t}U_{n+1}+\sum_{i=1}^{3}\mathbf{A}_{i} \partial_{i}U_{n+1}+\mathbf{B}U_{n+1}=\mathbf{S}_{n+1}, \tag{5.6}\] where \[U_{n+1}=\begin{pmatrix}a_{n+1}\\ b_{n+1}\\ c_{n+1}\end{pmatrix},\quad\mathbf{S}_{n+1}=\begin{pmatrix}-\nabla_{x}\cdot\int_{ \mathbb{R}^{3}}\hat{p}\sqrt{\mathbf{M}_{\epsilon}}\{\mathbf{I}-\mathbf{P}_{ \epsilon}\}\Big{(}\frac{F_{n+1}^{\epsilon}}{\sqrt{\mathbf{M}_{\epsilon}}}\Big{)} dp\\ -\nabla_{x}\cdot\int_{\mathbb{R}^{3}}p\otimes\hat{p}\sqrt{\mathbf{M}_{\epsilon}}\{ \mathbf{I}-\mathbf{P}_{\epsilon}\}\Big{(}\frac{F_{n+1}^{\epsilon}}{\sqrt{ \mathbf{M}_{\epsilon}}}\Big{)}dp\end{pmatrix}.\] The matrices \(\mathbf{A}_{0},\mathbf{A}_{i}\) (\(i=1,2,3\)) and \(\mathbf{B}\) depend only on the smooth relativistic Euler solution \((n_{0},u,T_{0})\). 
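Since, as the explicit formulas below show, the matrices \(\mathbf{A}_{0}\) and \(\mathbf{A}_{i}\) are symmetric, we record here, for later use in the proof of Proposition 5.1, the standard energy identity behind such systems: testing (5.6) with \(U_{n+1}\) and integrating by parts, one obtains, at least formally,

\[\frac{d}{dt}\int_{\mathbb{R}^{3}}U_{n+1}^{t}\mathbf{A}_{0}U_{n+1}\,dx=\int_{\mathbb{R}^{3}}U_{n+1}^{t}\Big{(}\partial_{t}\mathbf{A}_{0}+\sum_{i=1}^{3}\partial_{x_{i}}\mathbf{A}_{i}\Big{)}U_{n+1}\,dx-2\int_{\mathbb{R}^{3}}U_{n+1}^{t}\mathbf{B}U_{n+1}\,dx+2\int_{\mathbb{R}^{3}}U_{n+1}^{t}\mathbf{S}_{n+1}\,dx,\]

so that, once \(\mathbf{A}_{0}\) is shown to be positive definite and the coefficients and the source \(\mathbf{S}_{n+1}\) are bounded, Gronwall's inequality controls \(U_{n+1}\) in \(L^{2}\)-based norms; this is exactly how the bound (5.19) below is obtained.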
To express these matrices, we denote \[h(t,x):=\frac{e_{0}+P_{0}}{n_{0}},\quad h_{1}(t,x):=\frac{n_{0}} {\gamma K_{2}(\gamma)}\left(6K_{3}(\gamma)+\gamma K_{2}(\gamma)\right),\quad h _{2}(t,x):=\frac{n_{0}K_{3}(\gamma)}{\gamma K_{2}(\gamma)}.\] Then the matrices \(\mathbf{A}_{0},\mathbf{A}_{i},(i=1,2,3)\) in (5.6) are \[\mathbf{A}_{0}=\left(\begin{array}{cc}\frac{n_{0}u^{0}}{\mathfrak{c}}&\frac{n_ {0}u^{0}h_{u}^{t}}{\mathfrak{c}^{3}}&\frac{e_{0}\left(u^{0}\right)^{2}+P_{0}|u|^ {2}}{\mathfrak{c}^{4}}\\ \frac{n_{0}u^{0}h_{u}^{t}}{\mathfrak{c}^{3}}&\left(\frac{h_{1}}{\mathfrak{c}} \mathfrak{u}\otimes u+\mathfrak{c}h_{2}\mathbf{I}\right)u^{0}&\left(\frac{h_{1} }{\mathfrak{c}^{2}}\left(u^{0}\right)^{2}-h_{2}\right)u\\ \frac{e_{0}\left(u^{0}\right)^{2}+P_{0}|u|^{2}}{\mathfrak{c}^{4}}&\left(\frac{h_ {1}}{\mathfrak{c}^{2}}\left(u^{0}\right)^{2}-h_{2}\right)u^{t}&\left(\frac{h_ {1}}{\mathfrak{c}^{3}}\left(u^{0}\right)^{2}-\frac{3h_{2}}{\mathfrak{c}}\right) u^{0}\end{array}\right)\] and \[\mathbf{A}_{i}=\left(\begin{array}{cc}n_{0}u_{i}&\frac{1}{\mathfrak{c}^{2}}n_{0}hu _{i}u^{t}+P_{0}\mathbf{e}_{i}^{t}&\frac{1}{\mathfrak{c}^{8}}n_{0}hu^{0}u_{i}\\ \frac{1}{\mathfrak{c}^{2}}n_{0}hu_{i}u+P_{0}\mathbf{e}_{i}&h_{1}u_{i}u\otimes u +\mathfrak{c}^{2}h_{2}\left(u_{i}\mathbf{I}+\tilde{\mathbf{A}}_{i}\right)& \left(\frac{h_{1}}{\mathfrak{c}}u_{i}u+\mathfrak{c}h_{2}\mathbf{e}_{i}\right) u^{0}\\ \frac{1}{\mathfrak{c}^{3}}n_{0}hu^{0}u_{i}&\left(\frac{h_{1}}{\mathfrak{c}}u_{i }u^{t}+\mathfrak{c}h_{2}\mathbf{e}_{i}^{t}\right)u^{0}&\left(\frac{h_{1}}{ \mathfrak{c}^{2}}\left(u^{0}\right)^{2}-h_{2}\right)u_{i}\end{array}\right),\] where \[\left(\tilde{\mathbf{A}}_{i}\right)_{jk}=\delta_{ij}u_{k}+\delta_{ik}u_{j}, \quad 1\leq j,k\leq 3.\] The matrix \(\mathbf{B}=(b_{ij})\) has the form \[b_{11}=0,\quad(b_{12},b_{13},b_{14})=\frac{n_{0}u^{0}}{\mathfrak{c}}\partial _{t}\Big{(}\frac{hu^{t}}{\mathfrak{c}^{2}}\Big{)}+n_{0}u^{t}\Big{[}\nabla_{x} \Big{(}\frac{hu}{\mathfrak{c}^{2}}\Big{)}\Big{]}^{t}+(\nabla_{x}P_{0})^{t},\] \[b_{15}=\frac{n_{0}u^{0}}{\mathfrak{c}^{2}}\partial_{t}\Big{(} \frac{hu^{0}}{\mathfrak{c}^{2}}\Big{)}+\frac{n_{0}u}{\mathfrak{c}}\cdot\nabla _{x}\Big{(}\frac{hu^{0}}{\mathfrak{c}^{2}}\Big{)}-\partial_{t}\Big{(}\frac{P_ {0}}{\mathfrak{c}^{2}}\Big{)},\] \[(b_{21},b_{31},b_{41})=\frac{n_{0}u^{0}}{\mathfrak{c}}\partial_{t }\Big{(}\frac{hu}{\mathfrak{c}^{2}}\Big{)}+\nabla_{x}P_{0}+\nabla_{x}\Big{(} \frac{hu}{\mathfrak{c}^{2}}\Big{)}n_{0}u,\] \[(b_{j2},b_{j3},b_{j4})=\frac{n_{0}u^{0}}{\mathfrak{c}}\partial_{t }\Big{[}\frac{h_{1}}{n_{0}}u_{j}u^{t}+\frac{\mathfrak{c}^{2}h_{2}}{n_{0}} \mathbf{e}_{j}^{t}\Big{]}+n_{0}(u\cdot\nabla_{x})\Big{(}\frac{h_{1}}{n_{0}}u_{ j}u^{t}\Big{)}+n_{0}u^{t}\nabla_{x}\Big{(}\frac{\mathfrak{c}^{2}h_{2}}{n_{0}} \Big{)}\mathbf{e}_{j}^{t}\] \[\qquad\qquad+\Big{[}\nabla_{x}(\mathfrak{c}^{2}h_{2}u_{j})\Big{]} ^{t}+\partial_{x_{j}}(\mathfrak{c}^{2}h_{2}u^{t}),\] \[b_{j5}=-\partial_{t}(h_{2}u_{j})+\frac{n_{0}u^{0}}{\mathfrak{c}^ {2}}\partial_{t}\Big{(}\frac{h_{1}}{n_{0}}u_{j}u^{0}\Big{)}+\frac{n_{0}}{ \mathfrak{c}}u^{t}\nabla_{x}\Big{(}\frac{h_{1}}{n_{0}}u_{j}u^{0}\Big{)}+ \partial_{x_{j}}(\mathfrak{c}h_{2}u^{0}),\] \[b_{51}=\frac{n_{0}u^{0}}{\mathfrak{c}^{2}}\partial_{t}\Big{(} \frac{hu^{0}}{\mathfrak{c}^{2}}\Big{)}+\frac{n_{0}u}{\mathfrak{c}}\cdot \nabla_{x}\Big{(}\frac{hu^{0}}{\mathfrak{c}^{2}}\Big{)}-\partial_{t}\Big{(} \frac{P_{0}}{\mathfrak{c}^{2}}\Big{)},\] \[(b_{52},b_{53},b_{54})=\frac{n_{0}u^{0}}{\mathfrak{c}^{2}} 
\partial_{t}\Big{(}\frac{h_{1}}{n_{0}}u^{0}u^{t}\Big{)}-\partial_{t}(h_{2}u^{t})+\frac{n_{0}}{\mathfrak{c}}u^{t}\Big{[}\nabla_{x}\Big{(}\frac{h_{1}}{n_{0}}u^{0}u\Big{)}\Big{]}^{t}+\Big{(}\nabla_{x}(\mathfrak{c}h_{2}u^{0})\Big{)}^{t},\]
\[b_{55}=\frac{n_{0}u^{0}}{\mathfrak{c}^{3}}\partial_{t}\Big{(}\frac{h_{1}}{n_{0}}(u^{0})^{2}-3\mathfrak{c}^{2}\frac{h_{2}}{n_{0}}|u|^{2}\Big{)}+\frac{n_{0}}{\mathfrak{c}^{2}}u\cdot\nabla_{x}\Big{(}\frac{h_{1}}{n_{0}}(u^{0})^{2}-3\mathfrak{c}^{2}\frac{h_{2}}{n_{0}}|u|^{2}\Big{)}+\nabla_{x}\cdot(2h_{2}u).\]

Next, we prove the positivity of \(\mathbf{A}_{0}\). Set \(\phi(\gamma):=\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\). A direct calculation shows that

\[\det(\mathbf{A}_{0})_{1\times 1}\geq\frac{n_{0}u^{0}}{\mathfrak{c}}>0,\quad\det(\mathbf{A}_{0})_{2\times 2}\geq\Big{(}\frac{n_{0}u^{0}}{\mathfrak{c}}\Big{)}^{2}\frac{\mathfrak{c}^{2}\phi}{\gamma}>0,\]
\[\det(\mathbf{A}_{0})_{3\times 3}\geq\Big{(}\frac{n_{0}u^{0}}{\mathfrak{c}}\Big{)}^{3}\Big{(}\frac{\mathfrak{c}^{2}\phi}{\gamma}\Big{)}^{2}>0,\quad\det(\mathbf{A}_{0})_{4\times 4}\geq\Big{(}\frac{n_{0}u^{0}}{\mathfrak{c}}\Big{)}^{4}\Big{(}\frac{\mathfrak{c}^{2}\phi}{\gamma}\Big{)}^{3}>0,\]

and

\[\det\mathbf{A}_{0}=\Big{(}\frac{n_{0}u^{0}}{\mathfrak{c}}\Big{)}^{5}\Big{(}\frac{\mathfrak{c}^{2}\phi}{\gamma}\Big{)}^{3}(u^{0})^{-2}\Big{\{}|u|^{2}\mathfrak{c}^{2}\Big{(}\Psi-\frac{\Psi}{\gamma\phi}-\frac{\phi}{\gamma}\Big{)}+\mathfrak{c}^{4}\Big{(}\Psi-\frac{1}{\gamma^{2}}-\frac{\phi}{\gamma}\Big{)}\Big{\}}, \tag{5.7}\]

where \(\Psi:=1+\frac{6}{\gamma}\phi-\phi^{2}\). To prove the positivity of (5.7), we use [44, Proposition 10] to get

\[\phi^{2}-\frac{5}{\gamma}\phi+\frac{1}{\gamma^{2}}-1<0, \tag{5.8}\]

which yields that

\[\Psi-\frac{1}{\gamma^{2}}-\frac{\phi}{\gamma}=1+\frac{6}{\gamma}\phi-\phi^{2}-\frac{1}{\gamma^{2}}-\frac{\phi}{\gamma}=-\Big{(}\phi^{2}-\frac{5}{\gamma}\phi+\frac{1}{\gamma^{2}}-1\Big{)}>0. \tag{5.9}\]

A direct calculation also shows that

\[\mathfrak{h}:=\Psi-\frac{\Psi}{\gamma\phi}-\frac{\phi}{\gamma}=\frac{1}{\phi}\Big{(}\phi\Psi-\frac{\Psi}{\gamma}-\frac{\phi^{2}}{\gamma}\Big{)}>0,\]

which, together with (5.7) and (5.9), yields that

\[\det\mathbf{A}_{0}\geq\Big{(}\frac{n_{0}u^{0}}{\mathfrak{c}}\Big{)}^{5}\Big{(}\frac{\mathfrak{c}^{2}\phi}{\gamma}\Big{)}^{3}\frac{\mathfrak{c}^{4}}{(u^{0})^{2}}\Big{\{}-\Big{(}\phi^{2}-\frac{5}{\gamma}\phi+\frac{1}{\gamma^{2}}-1\Big{)}\Big{\}}>0.\]

Therefore \(\mathbf{A}_{0}\) is actually a positive definite matrix.

### Uniform-in-\(\mathfrak{c}\) estimates on \(F_{n}^{\mathfrak{c}}\)

**Proposition 5.1**.: Let the local relativistic Maxwellian \(F_{0}^{\mathfrak{c}}=\mathbf{M}_{\mathfrak{c}}(n_{0},u,T_{0};p)\) be as in (1.12), formed by \((n_{0}(t,x),u(t,x),T_{0}(t,x))\), which is a smooth solution to the relativistic Euler equations (3.1) on a time interval \([0,T]\times\mathbb{R}^{3}\). Then we can construct the smooth terms \(F_{1}^{\mathfrak{c}},\dots,F_{2k-1}^{\mathfrak{c}}\) of the Hilbert expansion in \((t,x)\in[0,T]\times\mathbb{R}^{3}\) such that, for any \(0<\lambda<1\), the following estimates hold:

\[|F_{n}^{\mathfrak{c}}(t,x,p)|\leq C(\lambda)\mathbf{M}_{\mathfrak{c}}^{\lambda}(n_{0}(t,x),u(t,x),T_{0}(t,x);p),\quad n=1,2,\dots,2k-1 \tag{5.11}\]

and

\[|\partial^{m}F_{n}^{\mathfrak{c}}(t,x,p)|\leq C(\lambda)\mathbf{M}_{\mathfrak{c}}^{\lambda}(n_{0}(t,x),u(t,x),T_{0}(t,x);p),\quad n=1,2,\dots,2k-1,\quad m\geq 1, \tag{5.12}\]

where \(\partial^{m}:=\partial_{t,x}^{m}\). We emphasize that the constants in (5.11) and (5.12) are independent of \(\mathfrak{c}\).
Proof.: It is noted that \(\mathbf{A}_{0}\), \(\mathbf{A}_{i}\) and \(\mathbf{B}\) in (5.6) depend only on the smooth functions \(n_{0}(t,x)\), \(u(t,x)\) and \(T_{0}(t,x)\). Denote \(\psi_{1}:=\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}\Big{(}\frac{F_{1}^{ \mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}\Big{)}\), then one has \[F_{1}^{\mathfrak{c}}=\Big{(}a_{1}+b_{1}\cdot p+c_{1}\frac{p^{0}}{\mathfrak{c }}\Big{)}\mathbf{M}_{\mathfrak{c}}+\sqrt{\mathbf{M}_{\mathfrak{c}}}\psi_{1},\] which yields that \[\partial F_{1}^{\mathfrak{c}}=\Big{(}\partial a_{1}+\partial b_{1}\cdot p+ \partial c_{1}\frac{p^{0}}{\mathfrak{c}}\Big{)}\mathbf{M}_{\mathfrak{c}}+ \Big{(}a_{1}+b_{1}\cdot p+c_{1}\frac{p^{0}}{\mathfrak{c}}\Big{)}\partial \mathbf{M}_{\mathfrak{c}}+\partial\sqrt{\mathbf{M}_{\mathfrak{c}}}\psi_{1}+ \sqrt{\mathbf{M}_{\mathfrak{c}}}\partial\psi_{1}, \tag{5.13}\] where \(\partial=\partial_{t}\) or \(\partial=\partial_{x_{j}}\) for \(j=1,2,3\). A direct calculation shows that \[|\partial\mathbf{M}_{\mathfrak{c}}|\leq C\mathbf{M}_{\mathfrak{c}}^{1-},\] where \(C\) depends on \(\|\nabla_{t,x}(n_{0},u,T_{0})\|_{\infty}\). We denote \[g_{1}:=Q_{\mathfrak{c}}(\mathbf{M}_{\mathfrak{c}},\sqrt{\mathbf{M}_{ \mathfrak{c}}}\psi_{1})+Q_{\mathfrak{c}}(\sqrt{\mathbf{M}_{\mathfrak{c}}}\psi_ {1},\mathbf{M}_{\mathfrak{c}}),\] then it follows from (1.10) that \[g_{1}=\partial_{t}\mathbf{M}_{\mathfrak{c}}+\hat{p}\cdot\nabla_{x}\mathbf{M}_ {\mathfrak{c}}, \tag{5.14}\] which implies that \(|\partial^{m}g_{1}|\lesssim\mathbf{M}_{\mathfrak{c}}^{1-}\) for any \(m\geq 0\). To estimate \(\partial\psi_{1}\), we apply \(\partial\) to (5.14) to obtain \[\partial g_{1}=Q_{\mathfrak{c}}(\partial\mathbf{M}_{\mathfrak{c} },\sqrt{\mathbf{M}_{\mathfrak{c}}}\psi_{1})+Q_{\mathfrak{c}}(\sqrt{\mathbf{M}_ {\mathfrak{c}}}\psi_{1},\partial\mathbf{M}_{\mathfrak{c}})+Q_{\mathfrak{c}}( \mathbf{M}_{\mathfrak{c}},\partial\sqrt{\mathbf{M}_{\mathfrak{c}}}\psi_{1})+Q _{\mathfrak{c}}(\partial\sqrt{\mathbf{M}_{\mathfrak{c}}}\psi_{1},\mathbf{M}_{ \mathfrak{c}})\\ +Q_{\mathfrak{c}}(\mathbf{M}_{\mathfrak{c}},\sqrt{\mathbf{M}_{ \mathfrak{c}}}\partial\psi_{1})+Q_{\mathfrak{c}}(\sqrt{\mathbf{M}_{\mathfrak{ c}}}\partial\psi_{1},\mathbf{M}_{\mathfrak{c}}),\] which yields that \[\mathbf{L}_{\mathfrak{c}}(\{\mathbf{I}-\mathbf{P}_{\mathfrak{c} }\}\partial\psi_{1})=\mathbf{L}_{\mathfrak{c}}\partial\psi_{1}=-\frac{1}{ \sqrt{\mathbf{M}_{\mathfrak{c}}}}\partial g_{1}+\frac{1}{\sqrt{\mathbf{M}_{ \mathfrak{c}}}}\Big{\{}Q_{\mathfrak{c}}(\partial\mathbf{M}_{\mathfrak{c}}, \sqrt{\mathbf{M}_{\mathfrak{c}}}\psi_{1})+Q_{\mathfrak{c}}(\sqrt{\mathbf{M}_ {\mathfrak{c}}}\psi_{1},\partial\mathbf{M}_{\mathfrak{c}})\Big{\}}\\ +\frac{1}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}\Big{\{}Q_{\mathfrak{ c}}(\mathbf{M}_{\mathfrak{c}},\partial\sqrt{\mathbf{M}_{\mathfrak{c}}}\psi_{1})+Q _{\mathfrak{c}}(\partial\sqrt{\mathbf{M}_{\mathfrak{c}}}\psi_{1},\mathbf{M}_{ \mathfrak{c}})\Big{\}}. 
\tag{5.15}\] Using the exponential decay of \(\mathbf{L}_{\mathfrak{c}}^{-1}\) in Proposition 4.17, we have

\[|\psi_{1}|=\Big{|}\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}\Big{(}\frac{F_{1}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}\Big{)}\Big{|}=\Big{|}\mathbf{L}_{\mathfrak{c}}^{-1}\Big{(}-\frac{1}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}(\partial_{t}\mathbf{M}_{\mathfrak{c}}+\hat{p}\cdot\nabla_{x}\mathbf{M}_{\mathfrak{c}})\Big{)}\Big{|}\lesssim\mathbf{M}_{\mathfrak{c}}^{\frac{1}{2}-}, \tag{5.16}\]

which, together with \(|\partial g_{1}|\lesssim\mathbf{M}_{\mathfrak{c}}^{1-}\), yields that the RHS of (5.15) can be bounded by \(\mathbf{M}_{\mathfrak{c}}^{\frac{1}{2}-}\). Using Proposition 4.17 again, we obtain

\[|\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}\partial\psi_{1}|\lesssim\mathbf{M}_{\mathfrak{c}}^{\frac{1}{2}-}. \tag{5.17}\]

On the other hand, it is clear that

\[|\mathbf{P}_{\mathfrak{c}}\partial\psi_{1}|\lesssim\mathbf{M}_{\mathfrak{c}}^{\frac{1}{2}-},\]

which, together with (5.17), implies that

\[|\partial\psi_{1}|\lesssim\mathbf{M}_{\mathfrak{c}}^{\frac{1}{2}-}.\]

Similarly, one can deduce that

\[|\partial^{m}\psi_{1}|\lesssim\mathbf{M}_{\mathfrak{c}}^{\frac{1}{2}-},\quad m\geq 1. \tag{5.18}\]

Next we consider the estimate on the macroscopic parts \((a_{1},b_{1},c_{1})\). Using (5.18), we get

\[\|\mathbf{S}_{1}\|_{H^{N_{0}-1}}\lesssim 1.\]

One obtains from Lemma 3.1 that

\[\left\|\partial_{t}\mathbf{A}_{0}\right\|_{\infty}+\sum_{\alpha=0}^{3}\left\|\nabla_{x}\mathbf{A}_{\alpha}\right\|_{H^{N_{0}-1}}+\left\|\mathbf{B}\right\|_{H^{N_{0}-1}}\lesssim 1.\]

Applying the standard energy estimate, one gets

\[\frac{d}{dt}\left\|\left(a_{1},b_{1},c_{1}\right)(t)\right\|_{H^{N_{0}-3}}^{2}\lesssim\left\|\left(a_{1},b_{1},c_{1}\right)(t)\right\|_{H^{N_{0}-3}}^{2}+\left\|\left(a_{1},b_{1},c_{1}\right)(t)\right\|_{H^{N_{0}-3}},\]

which, together with Gronwall's inequality, yields that

\[\left\|\left(a_{1},b_{1},c_{1}\right)(t)\right\|_{H^{N_{0}-3}}\lesssim 1. \tag{5.19}\]

Hence it follows from (5.1) that

\[\left|\mathbf{P}_{\mathfrak{c}}\Big{(}\frac{F_{1}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}\Big{)}\right|\lesssim\mathbf{M}_{\mathfrak{c}}^{\frac{1}{2}-},\]

which, together with (5.16), yields that

\[|F_{1}^{\mathfrak{c}}|\lesssim\mathbf{M}_{\mathfrak{c}}^{1-}.\]

For \(\partial F_{1}^{\mathfrak{c}}\), on account of (5.13), (5.18) and (5.19), one obtains

\[|\partial F_{1}^{\mathfrak{c}}|\lesssim\mathbf{M}_{\mathfrak{c}}^{1-}.\]

Similar arguments lead to

\[|\partial^{m}F_{1}^{\mathfrak{c}}|\lesssim\mathbf{M}_{\mathfrak{c}}^{1-},\quad m\geq 1.\]

By induction, we can prove that

\[|F_{n+1}^{\mathfrak{c}}|\lesssim\mathbf{M}_{\mathfrak{c}}^{1-},\quad|\partial^{m}F_{n+1}^{\mathfrak{c}}|\lesssim\mathbf{M}_{\mathfrak{c}}^{1-},\quad n=0,1,\cdots,2k-2,\quad m\geq 1.\]

Therefore the proof is completed.

## 6. Uniform in \(\mathfrak{c}\) and \(\varepsilon\) estimates on the remainder \(F_{R}^{\varepsilon,\mathfrak{c}}\)

In this section, we shall prove our main results, Theorem 1.1 and Theorem 1.5. As in [25, 46], we define

\[f_{R}^{\varepsilon,\mathfrak{c}}(t,x,p)=\frac{F_{R}^{\varepsilon,\mathfrak{c}}(t,x,p)}{\sqrt{\mathbf{M}_{\mathfrak{c}}(t,x,p)}} \tag{6.1}\]

and

\[h_{R}^{\varepsilon,\mathfrak{c}}(t,x,p)=\frac{F_{R}^{\varepsilon,\mathfrak{c}}(t,x,p)}{\sqrt{J_{\mathfrak{c}}(p)}}. \tag{6.2}\]

We first present two uniform-in-\(\mathfrak{c}\) estimates on the nonlinear operators.
**Lemma 6.1**.: _It holds that_

\[\Big{|}\frac{w_{\ell}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}Q_{\mathfrak{c}}(h_{1}\sqrt{\mathbf{M}_{\mathfrak{c}}},h_{2}\sqrt{\mathbf{M}_{\mathfrak{c}}})\Big{|}\lesssim\nu_{\mathfrak{c}}(p)\|h_{1}\|_{\infty,\ell}\|h_{2}\|_{\infty,\ell},\]

_where the constant is independent of \(\mathfrak{c}\)._

Proof.: Noting

\[p^{0}+q^{0}=p^{\prime 0}+q^{\prime 0},\quad p+q=p^{\prime}+q^{\prime},\]

we claim that

\[|p|\lesssim|p^{\prime}|+|q^{\prime}|,\quad|q|\lesssim|p^{\prime}|+|q^{\prime}|. \tag{6.3}\]

Actually, without loss of generality, we may assume that \(|p|\leq|q|\). Denote \(r:=\max\{|p^{\prime}|,|q^{\prime}|\}\); then one has

\[2\sqrt{\mathfrak{c}^{2}+|p|^{2}}\leq\sqrt{\mathfrak{c}^{2}+|p|^{2}}+\sqrt{\mathfrak{c}^{2}+|q|^{2}}=\sqrt{\mathfrak{c}^{2}+|p^{\prime}|^{2}}+\sqrt{\mathfrak{c}^{2}+|q^{\prime}|^{2}}\leq 2\sqrt{\mathfrak{c}^{2}+r^{2}},\]

which yields that

\[|p|^{2}\leq r^{2}\leq|p^{\prime}|^{2}+|q^{\prime}|^{2}.\]

Thus it holds that

\[|p|\leq|p^{\prime}|+|q^{\prime}|. \tag{6.4}\]

If \(|p|\leq\frac{|q|}{2}\), one has \(|p+q|\geq|q|-|p|\geq\frac{|q|}{2}\), which yields that

\[\frac{|q|}{2}\leq|p+q|=|p^{\prime}+q^{\prime}|\leq|p^{\prime}|+|q^{\prime}|.\]

If \(\frac{|q|}{2}\leq|p|\leq|q|\), it follows from (6.4) that

\[|q|\leq 2|p|\leq 2(|p^{\prime}|+|q^{\prime}|).\]

Hence the claim (6.3) holds. Now it follows from (6.3) that

\[w_{\ell}(p)\lesssim w_{\ell}(p^{\prime})w_{\ell}(q^{\prime}),\]

which, together with (4.41), yields that

\[\Big{|}\frac{w_{\ell}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}Q_{\mathfrak{c}}(h_{1}\sqrt{\mathbf{M}_{\mathfrak{c}}},h_{2}\sqrt{\mathbf{M}_{\mathfrak{c}}})\Big{|}\]
\[\leq\frac{w_{\ell}(p)}{\sqrt{\mathbf{M}_{\mathfrak{c}}(p)}}\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}v_{\phi}\Big{|}h_{1}(p^{\prime})h_{2}(q^{\prime})\sqrt{\mathbf{M}_{\mathfrak{c}}(p^{\prime})\mathbf{M}_{\mathfrak{c}}(q^{\prime})}-h_{1}(p)h_{2}(q)\sqrt{\mathbf{M}_{\mathfrak{c}}(p)\mathbf{M}_{\mathfrak{c}}(q)}\Big{|}d\omega dq\]
\[\leq\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}v_{\phi}\Big{[}|w_{\ell}(p^{\prime})h_{1}(p^{\prime})|\cdot|w_{\ell}(q^{\prime})h_{2}(q^{\prime})|+|w_{\ell}(p)h_{1}(p)|\cdot|h_{2}(q)|\Big{]}\sqrt{\mathbf{M}_{\mathfrak{c}}(q)}d\omega dq\]
\[\lesssim\nu_{\mathfrak{c}}(p)\|h_{1}\|_{\infty,\ell}\|h_{2}\|_{\infty,\ell}.\]

Therefore the proof is completed.

**Lemma 6.2**.: _For any \(\ell\geq 9\), it holds that_

\[|\left\langle\Gamma_{\mathfrak{c}}\left(h_{1},h_{2}\right),h_{3}\right\rangle|\lesssim\|h_{3}\|_{\infty,\ell}\left\|h_{2}\right\|_{2}\left\|h_{1}\right\|_{2}.\]

_Furthermore, if \(\chi(p)\) satisfies \(|\chi(p)|\lesssim e^{-\delta_{1}|p|}\) for some positive constant \(\delta_{1}>0\), then we have_

\[|\left\langle\Gamma_{\mathfrak{c}}\left(h_{1},\chi\right),h_{3}\right\rangle|+|\left\langle\Gamma_{\mathfrak{c}}\left(\chi,h_{1}\right),h_{3}\right\rangle|\lesssim\left\|h_{3}\right\|_{\nu_{\mathfrak{c}}}\left\|h_{1}\right\|_{\nu_{\mathfrak{c}}},\]

_where the constants are independent of \(\mathfrak{c}\)._

We point out that Lemma 6.2 has been proved in [46] when \(\mathfrak{c}=1\). For the general case, the proof is very similar to the one in [46] and we omit the details here for brevity. To establish the uniform-in-\(\mathfrak{c}\) and \(\varepsilon\) estimates for the remainder \(F_{R}^{\varepsilon,\mathfrak{c}}\), we shall use the \(L^{2}\)-\(L^{\infty}\) framework from [24]. We first consider the \(L^{2}\) estimate.
**Lemma 6.3** (\(L^{2}\) Estimate).: _Let \((n_{0}(t,x),u(t,x),T_{0}(t,x))\) be the smooth solution to the relativistic Euler equations (3.1) generated by Lemma 3.1. Let \(\mathbf{M}_{\mathsf{c}}(n_{0},u,T_{0};p)\), \(f_{R}^{\varepsilon,\mathsf{c}}\), \(h_{R}^{\varepsilon,\mathsf{c}}\) be defined in (1.12), (6.1) and (6.2), respectively, and let \(\zeta_{0}>0\) be the positive constant in Proposition 4.13. Then there exist constants \(\varepsilon_{0}>0\) and \(C>0\), such that for all \(\varepsilon\in(0,\varepsilon_{0}]\), it holds_ \[\frac{d}{dt}\left\|f_{R}^{\varepsilon,\mathsf{c}}\right\|_{2}^{2}(t)+\frac{ \zeta_{0}}{2\varepsilon}\left\|\left\{\mathbf{I}-\mathbf{P}_{\mathsf{c}} \right\}f_{R}^{\varepsilon,\mathsf{c}}\right\|_{\nu_{\mathsf{c}}}^{2}(t)\leq C \Big{\{}\sqrt{\varepsilon}\|\varepsilon^{\frac{1}{2}}h_{R}^{\varepsilon, \mathsf{c}}\|_{\infty,\ell}(t)+1\Big{\}}\left\{\left\|f_{R}^{\varepsilon, \mathsf{c}}\right\|_{2}^{2}+\left\|f_{R}^{\varepsilon,\mathsf{c}}\right\|_{2} \right\}, \tag{6.5}\] _where the constant \(C\) depends upon the \(L^{2}\) norms and the \(L^{\infty}\) norms of the terms \(\mathbf{M}_{\mathsf{c}},F_{1}^{\mathsf{c}},\ldots,F_{2k-1}^{\mathsf{c}}\) as well as their first derivatives, and \(C\) is independent of \(\mathsf{c}\)._ Proof.: Plugging \(F_{R}^{\varepsilon,\mathsf{c}}=f_{R}^{\varepsilon,\mathsf{c}}\sqrt{\mathbf{M}_ {\mathsf{c}}}\) into (1.11), one has \[\partial_{t}f_{R}^{\varepsilon,\mathsf{c}}+\hat{p}\cdot\nabla_{ x}f_{R}^{\varepsilon,\mathsf{c}}+\frac{1}{\varepsilon}\mathbf{L}_{\mathsf{c}}f_{R}^{ \varepsilon,\mathsf{c}}=-\frac{\{\partial_{t}+\hat{p}\cdot\nabla_{x}\}\sqrt{ \mathbf{M}_{\mathsf{c}}}}{\sqrt{\mathbf{M}_{\mathsf{c}}}}f_{R}^{\varepsilon, \mathsf{c}}+\varepsilon^{k-1}\Gamma_{\mathsf{c}}(f_{R}^{\varepsilon,\mathsf{c}},f _{R}^{\varepsilon,\mathsf{c}})\] \[\quad+\sum_{i=1}^{2k-1}\varepsilon^{i-1}\Big{\{}\Gamma_{\mathsf{ c}}\Big{(}\frac{F_{i}^{\mathsf{c}}}{\sqrt{\mathbf{M}_{\mathsf{c}}}},f_{R}^{ \varepsilon,\mathsf{c}}\Big{)}+\Gamma_{\mathsf{c}}\Big{(}f_{R}^{\varepsilon, \mathsf{c}},\frac{F_{i}^{\mathsf{c}}}{\sqrt{\mathbf{M}_{\mathsf{c}}}}\Big{)} \Big{\}}+\varepsilon^{k}\bar{A}, \tag{6.6}\] where \[\bar{A}:=\sum_{\begin{subarray}{c}i+j\geq 2k+1\\ 2\leq i,j\leq 2k-1\end{subarray}}\varepsilon^{i+j-1-2k}\Gamma_{\mathsf{c}}\Big{(} \frac{F_{i}^{\mathsf{c}}}{\sqrt{\mathbf{M}_{\mathsf{c}}}},\frac{F_{i}^{ \mathsf{c}}}{\sqrt{\mathbf{M}_{\mathsf{c}}}}\Big{)}.\] Multiplying (6.6) by \(f_{R}^{\varepsilon,\mathsf{c}}\) and integrating over \(\mathbb{R}^{3}\times\mathbb{R}^{3}\), one has \[\big{\langle}\partial_{t}f_{R}^{\varepsilon,\mathsf{c}} +\hat{p}\cdot\nabla_{x}f_{R}^{\varepsilon,\mathsf{c}}+\frac{1}{ \varepsilon}\mathbf{L}_{\mathsf{c}}f_{R}^{\varepsilon,\mathsf{c}},f_{R}^{ \varepsilon,\mathsf{c}}\big{\rangle}=-\Big{\langle}\Big{(}\frac{\{\partial_{t}+\hat{p }\cdot\nabla_{x}\}\sqrt{\mathbf{M}_{\mathsf{c}}}}{\sqrt{\mathbf{M}_{\mathsf{c}}}} \Big{)}f_{R}^{\varepsilon,\mathsf{c}},f_{R}^{\varepsilon,\mathsf{c}}\Big{\rangle}+ \langle\varepsilon^{k-1}\Gamma_{\mathsf{c}}(f_{R}^{\varepsilon,\mathsf{c}},f_{R}^{ \varepsilon,\mathsf{c}}),f_{R}^{\varepsilon,\mathsf{c}}\rangle\] \[\quad+\Big{\langle}\sum_{i=1}^{2k-1}\varepsilon^{i-1}\Big{\{} \Gamma_{\mathsf{c}}\Big{(}\frac{F_{i}^{\mathsf{c}}}{\sqrt{\mathbf{M}_{\mathsf{c}}}},f_{R} ^{\varepsilon,\mathsf{c}}\Big{)}+\Gamma_{\mathsf{c}}\Big{(}f_{R}^{\varepsilon, \mathsf{c}},\frac{F_{i}^{\mathsf{c}}}{\sqrt{\mathbf{M}_{\mathsf{c}}}}\Big{)} \Big{\}},f_{R}^{\varepsilon,\mathsf{c}}\Big{\rangle}+\langle\varepsilon^{k}\bar{A},f 
_{R}^{\varepsilon,\mathsf{c}}\rangle.\] It follows from Proposition 4.13 that \[\big{\langle}\partial_{t}f_{R}^{\varepsilon,\mathsf{c}}+\hat{p}\cdot\nabla_{x}f_{R}^{ \varepsilon,\mathsf{c}}+\frac{1}{\varepsilon}\mathbf{L}_{\mathsf{c}}f_{R}^{ \varepsilon,\mathsf{c}},f_{R}^{\varepsilon,\mathsf{c}}\big{\rangle}\geq\frac{1}{2} \frac{d}{dt}\left\|f_{R}^{\varepsilon,\mathsf{c}}\right\|_{2}^{2}+\frac{ \zeta_{0}}{\varepsilon}\left\|\left\{\mathbf{I}-\mathbf{P}_{\mathsf{c}}\right\}f_{R}^{ \varepsilon,\mathsf{c}}\right\|_{\nu_{\mathsf{c}}}^{2}.\] For \(\partial=\partial_{t}\) or \(\partial=\partial_{x_{i}}\), it holds that \[\frac{\partial\mathbf{M}_{\mathfrak{c}}}{\mathbf{M}_{\mathfrak{c}}}=\frac{ \partial n_{0}}{n_{0}}-3\frac{\partial T_{0}}{T_{0}}+\frac{\partial T_{0}}{T_{0 }^{2}}\Big{(}u^{0}p^{0}-\mathfrak{c}^{2}\frac{K_{1}(\gamma)}{K_{2}(\gamma)} \Big{)}-\frac{\partial T_{0}}{T_{0}^{2}}\sum_{i=1}^{3}u_{i}p_{i}+\frac{1}{T_{0 }}\Big{(}\sum_{i=1}^{3}p_{i}\partial u_{i}-\frac{\partial u\cdot u}{u^{0}}p^{0 }\Big{)}. \tag{6.7}\] A direct calculation shows that \[\Big{|}u^{0}p^{0}-\mathfrak{c}^{2}\frac{K_{1}(\gamma)}{K_{2}(\gamma)}\Big{|} \lesssim(1+|p|)^{2}C(n_{0},u,T_{0}),\] which, together with (6.7), yields that \[\Big{|}\frac{\{\partial_{t}+\hat{p}\cdot\nabla_{x}\}\sqrt{\mathbf{M}_{ \mathfrak{c}}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}\Big{|}\lesssim(1+|p|)^{3}C (n_{0},u,T_{0}).\] For any \(0<\sqrt{\varepsilon}\leq\kappa\), we obtain \[\Big{|}\Big{\langle}\Big{(}\frac{\{\partial_{t}+\hat{p}\cdot \nabla_{x}\}\sqrt{\mathbf{M}_{\mathfrak{c}}}}{\sqrt{\mathbf{M}_{\mathfrak{c}} }}\Big{)}f_{R}^{\varepsilon,\mathfrak{c}},f_{R}^{\varepsilon,\mathfrak{c}} \Big{\rangle}\Big{|}\] \[\leq\Big{|}\int_{\{1+|p|\geq\frac{\kappa}{\sqrt{\varepsilon}}\}} dxdp\Big{|}+\Big{|}\int_{\{1+|p|\leq\frac{\kappa}{\sqrt{\varepsilon}}\}}dxdp\Big{|}\] \[\leq C_{\kappa}\varepsilon^{2}\|\nabla_{x}(n_{0},u,T_{0})\|_{2} \cdot\|h_{R}^{\varepsilon,\mathfrak{c}}\|_{\infty,\ell}\cdot\|f_{R}^{ \varepsilon,\mathfrak{c}}\|_{2}\] \[\qquad+C\|\nabla_{x}(n_{0},u,T_{0})\|_{L^{\infty}}\cdot\|(1+|p|) ^{\frac{3}{2}}f_{R}^{\varepsilon,\mathfrak{c}}\mathbf{1}_{\{1+|p|\leq\frac{ \kappa}{\sqrt{\varepsilon}}\}}\|_{2}^{2}\] \[\leq C_{\kappa}\varepsilon^{2}\|h_{R}^{\varepsilon,\mathfrak{c}} \|_{\infty,\ell}\cdot\|f_{R}^{\varepsilon,\mathfrak{c}}\|_{2}+C\|(1+|p|)^{ \frac{3}{2}}\mathbf{P}_{\mathfrak{c}}f_{R}^{\varepsilon,\mathfrak{c}}\mathbf{ 1}_{\{1+|p|\leq\frac{\kappa}{\sqrt{\varepsilon}}\}}\|_{2}^{2}\] \[\qquad+C\|(1+|p|)^{\frac{3}{2}}\{\mathbf{I}-\mathbf{P}_{\mathfrak{ c}}\}f_{R}^{\varepsilon,\mathfrak{c}}\mathbf{1}_{\{1+|p|\leq\frac{\kappa}{\sqrt{ \varepsilon}}\}}\|_{2}^{2}\] \[\leq C_{\kappa}\varepsilon^{2}\|h_{R}^{\varepsilon,\mathfrak{c}} \|_{\infty,\ell}\cdot\|f_{R}^{\varepsilon,\mathfrak{c}}\|_{2}+C\|f_{R}^{ \varepsilon,\mathfrak{c}}\|_{2}^{2}+\frac{C\kappa^{2}}{\varepsilon}\|\{ \mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}f_{R}^{\varepsilon,\mathfrak{c}}\|_{ \nu_{\mathfrak{c}}}^{2}.\] It follows from Lemma 6.2 that \[|\langle\varepsilon^{k-1}\Gamma_{\mathfrak{c}}(f_{R}^{\varepsilon,\mathfrak{c}},f_{R}^{\varepsilon,\mathfrak{c}}),f_{R}^{\varepsilon,\mathfrak{c}}\rangle| \lesssim\varepsilon^{k-1}\|f_{R}^{\varepsilon,\mathfrak{c}}\|_{\infty,\ell} \cdot\|f_{R}^{\varepsilon,\mathfrak{c}}\|_{2}^{2}\lesssim\varepsilon^{k-1}\|h_ {R}^{\varepsilon,\mathfrak{c}}\|_{\infty,\ell}\cdot\|f_{R}^{\varepsilon, \mathfrak{c}}\|_{2}^{2}\] and \[\Big{|}\Big{\langle}\sum_{i=1}^{2k-1}\varepsilon^{i-1}\Big{\{} 
\Gamma_{\mathfrak{c}}\Big{(}\frac{F_{i}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}},f_{R}^{\varepsilon,\mathfrak{c}}\Big{)}+\Gamma_{\mathfrak{c}}\Big{(}f_{R}^{\varepsilon,\mathfrak{c}},\frac{F_{i}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}\Big{)}\Big{\}},f_{R}^{\varepsilon,\mathfrak{c}}\Big{\rangle}\Big{|}\]
\[\lesssim\sum_{i=1}^{2k-1}\varepsilon^{i-1}\|f_{R}^{\varepsilon,\mathfrak{c}}\|_{\nu_{\mathfrak{c}}}^{2}\lesssim\|\mathbf{P}_{\mathfrak{c}}f_{R}^{\varepsilon,\mathfrak{c}}\|_{\nu_{\mathfrak{c}}}^{2}+\|\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}f_{R}^{\varepsilon,\mathfrak{c}}\|_{\nu_{\mathfrak{c}}}^{2}\]
\[\lesssim\|f_{R}^{\varepsilon,\mathfrak{c}}\|_{2}^{2}+\|\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}f_{R}^{\varepsilon,\mathfrak{c}}\|_{\nu_{\mathfrak{c}}}^{2}.\]

Similarly, for the last term, one has

\[\Big{|}\Big{\langle}\varepsilon^{k}\bar{A},f_{R}^{\varepsilon,\mathfrak{c}}\Big{\rangle}\Big{|}\lesssim\varepsilon^{k}\sum_{\begin{subarray}{c}i+j\geq 2k+1\\ 2\leq i,j\leq 2k-1\end{subarray}}\varepsilon^{i+j-1-2k}\Big{|}\Big{\langle}\Gamma_{\mathfrak{c}}\Big{(}\frac{F_{i}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}},\frac{F_{j}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}\Big{)},f_{R}^{\varepsilon,\mathfrak{c}}\Big{\rangle}\Big{|}\lesssim\varepsilon^{k}\|f_{R}^{\varepsilon,\mathfrak{c}}\|_{2}\lesssim\|f_{R}^{\varepsilon,\mathfrak{c}}\|_{2}.\]

Collecting all the above estimates, one has

\[\frac{1}{2}\frac{d}{dt}\left\|f_{R}^{\varepsilon,\mathfrak{c}}\right\|_{2}^{2}+\frac{\zeta_{0}}{\varepsilon}\left\|\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}f_{R}^{\varepsilon,\mathfrak{c}}\right\|_{\nu_{\mathfrak{c}}}^{2}\leq C_{\kappa}\varepsilon^{2}\|h_{R}^{\varepsilon,\mathfrak{c}}\|_{\infty,\ell}\cdot\|f_{R}^{\varepsilon,\mathfrak{c}}\|_{2}+C\|f_{R}^{\varepsilon,\mathfrak{c}}\|_{2}^{2}+C\|f_{R}^{\varepsilon,\mathfrak{c}}\|_{2}\]
\[\qquad+C\Big{(}\frac{\kappa^{2}}{\varepsilon}+1\Big{)}\|\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}f_{R}^{\varepsilon,\mathfrak{c}}\|_{\nu_{\mathfrak{c}}}^{2}+C\varepsilon^{k-1}\|h_{R}^{\varepsilon,\mathfrak{c}}\|_{\infty,\ell}\cdot\|f_{R}^{\varepsilon,\mathfrak{c}}\|_{2}^{2}.\]

We choose \(\kappa=\sqrt{\frac{\zeta_{0}}{4C}}\) and then suppose that \(0<\varepsilon\leq\varepsilon_{0}\leq\frac{\zeta_{0}}{4C}\). Thus one gets (6.5). Therefore the proof is completed.

Next we consider the \(L^{\infty}\) estimate for \(h_{R}^{\varepsilon,\mathfrak{c}}\). Recall \(J_{\mathfrak{c}}(p)\) in (1.26). We define

\[\mathcal{L}_{\mathfrak{c}}h:=-J_{\mathfrak{c}}^{-\frac{1}{2}}\{Q_{\mathfrak{c}}(\mathbf{M}_{\mathfrak{c}},\sqrt{J_{\mathfrak{c}}}h)+Q_{\mathfrak{c}}(\sqrt{J_{\mathfrak{c}}}h,\mathbf{M}_{\mathfrak{c}})\}=\nu_{\mathfrak{c}}h-\mathcal{K}_{\mathfrak{c}}h,\]

where \(\mathcal{K}_{\mathfrak{c}}=\mathcal{K}_{\mathfrak{c}2}-\mathcal{K}_{\mathfrak{c}1}\).
More specifically, \(\nu_{\mathfrak{c}}\) is defined in (1.25) and operators \(\mathcal{K}_{\mathfrak{c}1}h\) and \(\mathcal{K}_{\mathfrak{c}2}h\) are defined as \[\mathcal{K}_{\mathfrak{c}1}h :=J_{\mathfrak{c}}^{-\frac{1}{2}}Q_{\mathfrak{c}}^{-}(\mathbf{M }_{\mathfrak{c}},\sqrt{J_{\mathfrak{c}}}h)=\int_{\mathbb{R}^{3}}\int_{ \mathbb{S}^{2}}v_{\phi}\Big{\{}\sqrt{J_{\mathfrak{c}}(q)}\frac{\mathbf{M}_{ \mathfrak{c}}(p)}{\sqrt{J_{\mathfrak{c}}(p)}}h(q)\Big{\}}d\omega dq,\] \[\mathcal{K}_{\mathfrak{c}2}h :=J_{\mathfrak{c}}^{-\frac{1}{2}}\left\{Q_{\mathfrak{c}}^{+}( \mathbf{M}_{\mathfrak{c}},\sqrt{J_{\mathfrak{c}}}h)+Q_{\mathfrak{c}}^{+}( \sqrt{J_{\mathfrak{c}}}h,\mathbf{M}_{\mathfrak{c}})\right\}\] \[=\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}v_{\phi}\Big{\{} \mathbf{M}_{\mathfrak{c}}(p^{\prime})\frac{\sqrt{J_{\mathfrak{c}}(q^{\prime}) }}{\sqrt{J_{\mathfrak{c}}(p)}}h(q^{\prime})\Big{\}}d\omega dq+\int_{\mathbb{R} ^{3}}\int_{\mathbb{S}^{2}}v_{\phi}\Big{\{}\mathbf{M}_{\mathfrak{c}}(q^{ \prime})\frac{\sqrt{J_{\mathfrak{c}}(p^{\prime})}}{\sqrt{J_{\mathfrak{c}}(p)} }h(p^{\prime})\Big{\}}d\omega dq.\] Noting (1.28), by similar arguments as in [47], one can show that \[|\mathcal{K}_{\mathfrak{c}i}(h)|\lesssim\int_{\mathbb{R}^{3}}\hat{k}_{i}(p,q) |h(q)|dq,\quad i=1,2,\] where \[\hat{k}_{1}(p,q)=|p-q|e^{-\delta_{2}|p|}e^{-\delta_{2}|q|},\quad\hat{k}_{2}(p, q)=\frac{1}{|p-q|}e^{-\frac{\delta_{2}}{2}|p-q|}\] with \(\delta_{2}:=\alpha-\frac{1}{2}>0\). We denote \(\hat{k}(p,q):=\hat{k}_{1}(p,q)+\hat{k}_{2}(p,q)\). Then it holds that \[|\mathcal{K}_{\mathfrak{c}}(h)|\lesssim\int_{\mathbb{R}^{3}}\hat{k}(p,q)|h(q )|dq,\quad i=1,2.\] Denote \[\hat{k}_{w}(p,q):=\hat{k}(p,q)\frac{w_{\ell}(p)}{w_{\ell}(q)}.\] By similar arguments as in Lemmas 4.4-4.5, one has \[\int_{\mathbb{R}^{3}}\hat{k}_{w}(p,q)e^{\frac{\delta_{2}}{4}|p-q|}dq+\int_{ \mathbb{R}^{3}}\hat{k}_{w}^{2}(p,q)dq\lesssim\max\Big{\{}\frac{1}{\mathfrak{ c}},\frac{1}{1+|p|}\Big{\}}. 
\tag{6.8}\] For later use, we introduce \[\widehat{\nu}_{\mathfrak{c}}(p):=\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}v_ {\phi}J_{\mathfrak{c}}(q)d\omega dq\cong\nu_{\mathfrak{c}}(p).\] **Lemma 6.4** (\(L^{\infty}\) Estimate).: _Under the assumptions of Lemma 6.3, there exist \(\varepsilon_{0}>0\) and a positive constant \(C>0\), such that for all \(\varepsilon\in(0,\varepsilon_{0}]\) and for any \(\ell\geq 9\), it holds that_ \[\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\mathfrak{c }}(s)\|_{\infty,\ell}\leq C\Big{(}\|\varepsilon^{\frac{3}{2}}h_{0}\|_{\infty, \ell}+\sup_{0\leq s\leq T}\|f_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{2}+ \varepsilon^{k+\frac{5}{2}}\Big{)},\] _where \(C\) is independent of \(\mathfrak{c}\)._ Proof.: Plugging \(F_{R}^{\varepsilon,\mathfrak{c}}=h_{R}^{\varepsilon,\mathfrak{c}}\sqrt{J_{ \mathfrak{c}}}\) into (1.11), one has \[\partial_{t}h_{R}^{\varepsilon,\mathfrak{c}}+\hat{p}\cdot\nabla_{x }h_{R}^{\varepsilon,\mathfrak{c}}+\frac{\nu_{\mathfrak{c}}}{\varepsilon}h_{R}^{ \varepsilon,\mathfrak{c}}=\frac{1}{\varepsilon}\mathcal{K}(h_{R}^{\varepsilon, \mathfrak{c}})+\varepsilon^{k-1}Q_{\mathfrak{c}}(h_{R}^{\varepsilon,\mathfrak{c }}\sqrt{J_{\mathfrak{c}}},\sqrt{J_{\mathfrak{c}}}h_{R}^{\varepsilon,\mathfrak{c }})\] \[\quad+\sum_{i=1}^{2k-1}\varepsilon^{i-1}\frac{1}{\sqrt{J_{ \mathfrak{c}}}}\Big{\{}Q_{\mathfrak{c}}(F_{i}^{\varepsilon},\sqrt{J_{ \mathfrak{c}}}h_{R}^{\varepsilon,\mathfrak{c}})+Q_{\mathfrak{c}}(\sqrt{J_{ \mathfrak{c}}}h_{R}^{\varepsilon,\mathfrak{c}},F_{i}^{\epsilon})\Big{\}}+ \varepsilon^{k}\tilde{A}, \tag{6.9}\] where \[\tilde{A}:=\sum_{\begin{subarray}{c}i+j>2k+1\\ 2\leq i,j\leq 2k-1\end{subarray}}\varepsilon^{i+j-1-2k}\frac{1}{\sqrt{J_{ \mathfrak{c}}}}Q_{\mathfrak{c}}(F_{i}^{\mathfrak{c}},F_{i}^{\mathfrak{c}}).\] Denote \(y_{1}:=x-\hat{p}(t-s)\) and \[\tilde{\nu_{\mathfrak{c}}}(t,s):=\int_{s}^{t}\nu_{\mathfrak{c}}(\mathbf{M}_{ \mathfrak{c}})(\tau,x-\hat{p}(t-\tau),p)d\tau\cong(t-s)\tilde{\nu_{\mathfrak{ c}}}.\] Integrating (6.9) along the backward trajectory, one has \[h_{R}^{\varepsilon,\mathfrak{c}}(t,x,p)\] \[=\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}(t,0)}{\varepsilon }\Big{)}h_{0}(x-\hat{p}t,p)\] \[\quad+\frac{1}{\varepsilon}\int_{0}^{t}\exp\Big{(}-\frac{\tilde{ \nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}\mathcal{K}_{\mathfrak{c}}h_{R}^ {\varepsilon,\mathfrak{c}}(s,y_{1},p)ds\] \[\quad+\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}\frac{\varepsilon^{k-1}}{\sqrt{J_{\mathfrak{c}}}}Q_{ \mathfrak{c}}(h_{R}^{\varepsilon,\mathfrak{c}}\sqrt{J_{\mathfrak{c}}},h_{R}^ {\varepsilon,\mathfrak{c}}\sqrt{J_{\mathfrak{c}}})(s,y_{1},p)ds\] \[\quad+\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}\sum_{i=1}^{2k-1}\varepsilon^{i-1}\frac{1}{\sqrt{J_{ \mathfrak{c}}}}\Big{\{}Q_{\mathfrak{c}}(F_{i}^{\mathfrak{c}},\sqrt{J_{ \mathfrak{c}}}h_{R}^{\varepsilon,\mathfrak{c}})+Q_{\mathfrak{c}}(\sqrt{J_{ \mathfrak{c}}}h_{R}^{\varepsilon,\mathfrak{c}},F_{i}^{\mathfrak{c}})\Big{\}}(s,y_{1},p)ds\] \[\quad+\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}\varepsilon^{k}\tilde{A}(s,y_{1},p)ds\] \[=\sum_{j=1}^{5}\mathcal{J}_{j}. 
\tag{6.10}\] It is clear that \[|\varepsilon^{\frac{3}{2}}w_{\mathfrak{c}}\mathcal{J}_{1}|\leq \|\varepsilon^{\frac{3}{2}}h_{0}\|_{\infty,\ell}.\] For \(\mathcal{J}_{3}\), it follows from Lemma 6.1 that \[|\varepsilon^{\frac{3}{2}}w_{\mathfrak{c}}\mathcal{J}_{3}| \lesssim\varepsilon^{k+\frac{1}{2}}\int_{0}^{t}\exp\Big{(}-\frac{ \tilde{\nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}\Big{|}\frac{w_{\ell}}{ \sqrt{J_{\mathfrak{c}}}}Q_{\mathfrak{c}}(h_{R}^{\varepsilon,\mathfrak{c}} \sqrt{J_{\mathfrak{c}}},h_{R}^{\varepsilon,\mathfrak{c}}\sqrt{J_{\mathfrak{c} }})(s,y_{1},p)\Big{|}ds\] \[\lesssim\varepsilon^{k-\frac{5}{2}}\int_{0}^{t}\exp\Big{(}-\frac {\tilde{\nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}\tilde{\nu_{\mathfrak{ c}}}(p)ds\cdot\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{ \varepsilon,\mathfrak{c}}(s)\|_{\infty,\ell}^{2}\] \[\lesssim\varepsilon^{k-\frac{3}{2}}\sup_{0\leq s\leq T}\| \varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{\infty,\ell}^ {2}.\] Similarly, we have \[|\varepsilon^{\frac{3}{2}}w_{\ell}\mathcal{J}_{4}|\] \[\lesssim\varepsilon^{\frac{3}{2}}\sum_{i=1}^{2k-1}\varepsilon^{i- 1}\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}(t,s)}{\varepsilon }\Big{)}\Big{|}\frac{w_{\ell}}{\sqrt{J_{\mathfrak{c}}}}\Big{\{}Q_{\mathfrak{ c}}(F_{i}^{\mathfrak{c}},\sqrt{J_{\mathfrak{c}}}h_{R}^{\varepsilon, \mathfrak{c}})+Q_{\mathfrak{c}}(\sqrt{J_{\mathfrak{c}}}h_{R}^{\varepsilon, \mathfrak{c}},F_{i}^{\mathfrak{c}})\Big{\}}(s,y_{1},p)\Big{|}ds\] \[\lesssim\sum_{i=1}^{2k-1}\varepsilon^{i-1}\int_{0}^{t}\exp\Big{(} -\frac{\tilde{\nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}\tilde{\nu_{ \mathfrak{c}}}(p)ds\cdot\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{ \varepsilon,\mathfrak{c}}(s)\|_{\infty,\ell}\cdot\sup_{0\leq s\leq T}\Big{\|} \frac{F_{i}^{\mathfrak{c}}(s)}{\sqrt{J_{\mathfrak{c}}}}\Big{\|}_{\infty,\ell}\] \[\lesssim\varepsilon\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}} h_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{\infty,\ell}\] and \[|\varepsilon^{\frac{3}{2}}w_{\ell}\mathcal{J}_{5}| \lesssim\varepsilon^{k+\frac{3}{2}}\sum_{\begin{subarray}{c}i+j \geq 2k+1\\ 2\leq i,j\leq 2k-1\end{subarray}}\varepsilon^{i+j-1-2k}\int_{0}^{t}\exp \Big{(}-\frac{\tilde{\nu_{\epsilon}}(t,s)}{\varepsilon}\Big{)}\Big{|}\frac{w_ {\ell}}{\sqrt{\mathcal{J}_{\epsilon}}}Q_{\epsilon}(F_{i}^{\epsilon},F_{i}^{ \epsilon})\Big{|}ds\] \[\lesssim\varepsilon^{k+\frac{3}{2}}\int_{0}^{t}\exp\Big{(}-\frac {\tilde{\nu_{\epsilon}}(t,s)}{\varepsilon}\Big{)}\tilde{\nu_{\epsilon}}(p)ds \cdot\sup_{0\leq s\leq T}\Big{\|}\frac{F_{i}^{\epsilon}(s)}{\sqrt{\mathcal{J }_{\epsilon}}}\Big{\|}_{\infty,\ell}\cdot\sup_{0\leq s\leq T}\Big{\|}\frac{F_ {i}^{\epsilon}(s)}{\sqrt{\mathcal{J}_{\epsilon}}}\Big{\|}_{\infty,\ell}\] \[\lesssim\varepsilon^{k+\frac{5}{2}}.\] Collecting the above estimates, we have established \[\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{\infty,\ell}\leq C\varepsilon\sup_{0\leq s\leq T}\| \varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{\infty,\ell}+ C\varepsilon^{k-\frac{3}{2}}\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{ \varepsilon,\mathfrak{c}}(s)\|_{\infty,\ell}^{2}\] \[\quad+C\varepsilon^{k+\frac{5}{2}}+C\|\varepsilon^{\frac{3}{2}}h _{0}\|_{\infty,\ell}+Cw_{\ell}(p)\varepsilon^{\frac{3}{2}}|\mathcal{J}_{2}|. 
\tag{6.11}\] To bound the last term \(\mathcal{J}_{2}\), we denote \(y_{2}:=y_{1}-\hat{q}\,(s-s^{\prime})=x-\hat{p}(t-s)-\hat{q}\,(s-s^{\prime})\) and \[\tilde{\nu_{\epsilon}}^{\prime}(s,s^{\prime}):=\int_{s^{\prime}}^{s}\nu_{ \mathfrak{c}}(\mathbf{M}_{\mathfrak{c}})(\tau,y_{1}-\hat{q}(s-\tau),q)d\tau \cong(s-s^{\prime})\tilde{\nu_{\epsilon}}.\] We substitute (6.10) into \(\mathcal{J}_{2}\) to obtain \[|\mathcal{J}_{2}| \lesssim\frac{1}{\varepsilon}\int_{0}^{t}\exp\Big{(}-\frac{ \tilde{\nu_{\epsilon}}(t,s)}{\varepsilon}\Big{)}ds\int_{\mathbb{R}^{3}}\hat {k}(p,q)|h_{R}^{\varepsilon,\mathfrak{c}}(s,y_{1},q)|dq\] \[\lesssim\frac{1}{\varepsilon}\int_{0}^{t}\exp\Big{(}-\frac{ \tilde{\nu_{\epsilon}}(t,s)}{\varepsilon}\Big{)}ds\int_{\mathbb{R}^{3}}\hat {k}(p,q)\exp\Big{(}-\frac{\tilde{\nu_{\epsilon}}^{\prime}(s,0)}{\varepsilon} \Big{)}|h_{0}(y_{1}-\hat{q}s,q)|dq\] \[\quad+\frac{1}{\varepsilon^{2}}\int_{0}^{t}\exp\Big{(}-\frac{ \tilde{\nu_{\epsilon}}(t,s)}{\varepsilon}\Big{)}ds\int_{0}^{s}\exp\Big{(}- \frac{\tilde{\nu_{\epsilon}}^{\prime}(s,s^{\prime})}{\varepsilon}\Big{)}ds^{ \prime}\int_{\mathbb{R}^{3}}\hat{k}(p,q)\big{|}\mathcal{K}_{\mathfrak{c}}h_{R }^{\varepsilon,\mathfrak{c}}(s^{\prime},y_{2},q)\big{|}dq\] \[\quad+\varepsilon^{k-2}\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu _{\epsilon}}(t,s)}{\varepsilon}\Big{)}ds\int_{0}^{s}\exp\Big{(}-\frac{\tilde {\nu_{\epsilon}}^{\prime}(s,s^{\prime})}{\varepsilon}\Big{)}ds^{\prime}\] \[\quad\quad\quad\quad\times\int_{\mathbb{R}^{3}}\hat{k}(p,q)\frac{ 1}{\sqrt{\mathcal{J}_{\mathfrak{c}}}}\Big{|}Q_{\mathfrak{c}}(h_{R}^{ \varepsilon,\mathfrak{c}}\sqrt{\mathcal{J}_{\mathfrak{c}}},h_{R}^{\varepsilon, \mathfrak{c}}\sqrt{\mathcal{J}_{\mathfrak{c}}})(s^{\prime},y_{2},q)\Big{|}dq\] \[\quad+\frac{1}{\varepsilon}\sum_{i=1}^{2k-1}\varepsilon^{i-1} \int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu_{\epsilon}}(t,s)}{\varepsilon}\Big{)} ds\int_{0}^{s}\exp\Big{(}-\frac{\tilde{\nu_{\epsilon}}^{\prime}(s,s^{\prime})}{ \varepsilon}\Big{)}ds^{\prime}\] \[\quad\quad\quad\quad\times\int_{\mathbb{R}^{3}}\hat{k}(p,q)\frac{ 1}{\sqrt{\mathcal{J}_{\mathfrak{c}}}}\Big{|}\Big{\{}Q_{\mathfrak{c}}(F_{i}^{ \epsilon},\sqrt{\mathcal{J}_{\mathfrak{c}}}h_{R}^{\varepsilon,\mathfrak{c}})+Q_ {\mathfrak{c}}(\sqrt{\mathcal{J}_{\mathfrak{c}}}h_{R}^{\varepsilon,\mathfrak{c}},F_{i}^{\epsilon})\Big{\}}(s^{\prime},y_{2},q)\Big{|}dq\] \[\quad+\varepsilon^{k-1}\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu _{\epsilon}}(t,s)}{\varepsilon}\Big{)}ds\int_{0}^{s}\exp\Big{(}-\frac{\tilde{\nu _{\epsilon}}^{\prime}(s,s^{\prime})}{\varepsilon}\Big{)}ds^{\prime}\int_{ \mathbb{R}^{3}}\hat{k}(p,q)|\tilde{A}(s^{\prime},y_{2},q)|dq\] \[=\sum_{j=1}^{5}\mathcal{J}_{2j}.\] By Lemma 4.6, there exists a positive constant \(\nu_{0}\) which is independent of \(\mathfrak{c}\), such that \[\nu_{\mathfrak{c}}(p)\geq\nu_{0},\quad p\in\mathbb{R}^{3}.\] For \(\mathcal{J}_{21}\), one has from (6.8) that \[|\varepsilon^{\frac{3}{2}}w_{\ell}(p)\mathcal{J}_{21}| \lesssim\frac{1}{\varepsilon}\int_{0}^{t}\exp\Big{(}-\frac{\nu_{0 }t}{\varepsilon}\Big{)}ds\int_{\mathbb{R}^{3}}\hat{k}_{w}(p,q)|\varepsilon^{ \frac{3}{2}}w_{\ell}(q)h_{0}(y_{1}-\hat{q}s,q)|dq\] \[\lesssim\|\varepsilon^{\frac{3}{2}}h_{0}\|_{\infty,\ell}.\] Similarly, using Lemma 6.1, we get \[|\varepsilon^{\frac{3}{2}}w_{\ell}(p)\mathcal{J}_{23}| \lesssim\varepsilon^{k-\frac{1}{2}}\int_{0}^{t}\exp\Big{(}-\frac{ \nu_{0}(t-s)}{\varepsilon}\Big{)}ds\int_{\mathbb{R}^{3}}\hat{k}_{w}(p,q)dq\] \[\qquad\qquad\times\int_{0}^{s}\exp\Big{(}-\frac{\tilde{\nu_{ 
\epsilon}}^{\prime}(s,s^{\prime})}{\varepsilon}\Big{)}\frac{w_{\ell}(q)}{ \sqrt{J_{\epsilon}}}\Big{|}Q_{\epsilon}(h_{R}^{\varepsilon,\epsilon}\sqrt{J_ {\epsilon}},h_{R}^{\varepsilon,\epsilon}\sqrt{J_{\epsilon}})(s^{\prime},y_{2},q)\Big{|}ds^{\prime}\] \[\lesssim\varepsilon^{k-\frac{3}{2}}\sup_{0\leq s\leq T}\| \varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\epsilon}(s)\|_{\infty,\ell}^{2}\] and \[|\varepsilon^{\frac{3}{2}}w_{\ell}(p)\mathcal{J}_{24}| \lesssim\varepsilon^{\frac{1}{2}}\sum_{i=1}^{2k-1}\varepsilon^{i-1} \int_{0}^{t}\exp\Big{(}-\frac{\nu_{0}(t-s)}{\varepsilon}\Big{)}ds\int_{ \mathbb{R}^{3}}\hat{k}_{w}(p,q)dq\] \[\qquad\times\int_{0}^{s}\exp\Big{(}-\frac{\tilde{\nu_{\epsilon}} ^{\prime}(s,s^{\prime})}{\varepsilon}\Big{)}\frac{w_{\ell}(q)}{\sqrt{J_{ \epsilon}}}\Big{|}\Big{\{}Q_{\epsilon}(F_{i}^{\epsilon},\sqrt{J_{\epsilon}}h_{ R}^{\varepsilon,\epsilon})+Q_{\epsilon}(\sqrt{J_{\epsilon}}h_{R}^{\varepsilon, \epsilon},F_{i}^{\epsilon})\Big{\}}(s^{\prime},y_{2},q)\Big{|}ds^{\prime}\] \[\lesssim\varepsilon\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2 }}h_{R}^{\varepsilon,\epsilon}(s)\|_{\infty,\ell}.\] For \(\mathcal{J}_{25}\), one has \[|\varepsilon^{\frac{3}{2}}w_{\ell}(p)\mathcal{J}_{25}| \lesssim\varepsilon^{k+\frac{1}{2}}\sum_{\begin{subarray}{c}i+j \geq 2k+1\\ 2\leq i,j\leq 2k-1\end{subarray}}\varepsilon^{i+j-1-2k}\int_{0}^{t}\exp \Big{(}-\frac{\nu_{0}(t-s)}{\varepsilon}\Big{)}ds\int_{\mathbb{R}^{3}}\hat{k} _{w}(p,q)dq\] \[\qquad\times\int_{0}^{s}\exp\Big{(}-\frac{\tilde{\nu_{\epsilon}} ^{\prime}(s,s^{\prime})}{\varepsilon}\Big{)}\frac{w_{\ell}(q)}{\sqrt{J_{ \epsilon}}}|Q_{\epsilon}(F_{i}^{\epsilon},F_{i}^{\epsilon})(s^{\prime},y_{2},q )|ds^{\prime}\] \[\lesssim\varepsilon^{k+\frac{5}{2}}.\] Now we focus on the estimate of \(\mathcal{J}_{22}\). It holds that \[|\varepsilon^{\frac{3}{2}}w_{\ell}(p)\mathcal{J}_{22}| \lesssim\frac{1}{\varepsilon^{2}}\int_{0}^{t}\exp\Big{(}-\frac{ \tilde{\nu_{\epsilon}}(t,s)}{\varepsilon}\Big{)}ds\int_{0}^{s}\exp\Big{(}- \frac{\tilde{\nu_{\epsilon}}^{\prime}(s,s^{\prime})}{\varepsilon}\Big{)}ds^{\prime}\] \[\qquad\times\int_{\mathbb{R}^{3}}\hat{k}_{w}(p,q)dq\int_{\mathbb{ R}^{3}}\hat{k}_{w}(q,q^{\prime})|\varepsilon^{\frac{3}{2}}w_{\ell}(q^{ \prime})h_{R}^{\varepsilon,\epsilon}(s^{\prime},y_{2},q^{\prime})dq^{\prime}.\] We divide the estimate into four cases. _Case 1_: \(|p|\geq N\). Using (6.8), one has \[|\varepsilon^{\frac{3}{2}}w_{\ell}(p)\mathcal{J}_{22}| \lesssim\max\Big{\{}\frac{1}{\epsilon},\frac{1}{1+|p|}\Big{\}} \sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\epsilon}(s) \|_{\infty,\ell}\] \[\lesssim\max\Big{\{}\frac{1}{\epsilon},\frac{1}{N}\Big{\}}\sup_{ 0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\epsilon}(s)\|_{ \infty,\ell}.\] _Case 2_: \(|p|\leq N\), \(|q|\geq 2N\) or \(|q|\leq 2N\), \(|q^{\prime}|\geq 3N\). 
Using (6.8) again, we have \[\frac{1}{\varepsilon^{2}}\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu _{\epsilon}}(t,s)}{\varepsilon}\Big{)}ds\int_{0}^{s}\exp\Big{(}-\frac{\tilde{ \nu_{\epsilon}}^{\prime}(s,s^{\prime})}{\varepsilon}\Big{)}ds^{\prime}\] \[\qquad\times\Big{\{}\iint_{|p|\leq N,|q|\geq 2N}+\iint_{|q|\leq 2N,|q^{ \prime}|\geq 3N}\Big{\}}\] \[\lesssim e^{-\frac{\delta_{2}}{4}N}\sup_{0\leq s\leq T}\| \varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\epsilon}(s)\|_{\infty,\ell} \lesssim\frac{1}{N}\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{ \varepsilon,\epsilon}(s)\|_{\infty,\ell}.\] _Case 3_: For \(s-s^{\prime}\leq\kappa\varepsilon\) and \(|p|\leq N\), \(|q|\leq 2N\), \(|q^{\prime}|\leq 3N\), one has \[\frac{1}{\varepsilon^{2}}\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu _{\epsilon}}(t,s)}{\varepsilon}\Big{)}ds\int_{s-s\kappa\varepsilon}^{s}\exp \Big{(}-\frac{\tilde{\nu_{\epsilon}}^{\prime}(s,s^{\prime})}{\varepsilon}\Big{)} ds^{\prime}\] \[\times\int_{|q|\leq 2N}\hat{k}_{w}(p,q)dq\int_{|q^{\prime}|\leq 3N}\hat{k} _{2}(q,q^{\prime})|\varepsilon^{\frac{3}{2}}w_{\ell}(q^{\prime})h_{R}^{ \varepsilon,\mathfrak{c}}(s^{\prime},y_{2},q^{\prime})|dq^{\prime}\] \[\lesssim\kappa\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^ {\varepsilon,\mathfrak{c}}(s)\|_{\infty,\ell}.\] _Case 4_: For \(s-s^{\prime}\geq\kappa\varepsilon\) and \(|p|\leq N\), \(|q|\leq 2N\), \(|q^{\prime}|\leq 3N\), this is the last remaining case. Using (6.8), one has \[\int_{|q|\leq 2N}\int_{|q^{\prime}|\leq 3N}\hat{k}_{w}(p,q)\hat{k} _{w}(q,q^{\prime})|w_{\ell}(q^{\prime})h_{R}^{\varepsilon,\mathfrak{c}}(s^{ \prime},y_{2},q^{\prime})|dqdq^{\prime}\] \[\leq C_{N}\int_{|q|\leq 2N}\int_{|q^{\prime}|\leq 3N}\hat{k}_{w}(p,q )\hat{k}_{w}(q,q^{\prime})|f_{R}^{\varepsilon,\mathfrak{c}}(s^{\prime},y_{2}, q^{\prime})|dqdq^{\prime}\] \[\leq C_{N}\Big{(}\int_{|q|\leq 2N}\int_{|q^{\prime}|\leq 3N}\hat{k} _{w}^{2}(p,q)\hat{k}_{w}^{2}(q,q^{\prime})dqdq^{\prime}\Big{)}^{\frac{1}{2}}\] \[\qquad\times\Big{(}\int_{|q|\leq 2N}\int_{|q^{\prime}|\leq 3N}|f_{R}^{ \varepsilon,\mathfrak{c}}(s^{\prime},y_{2},q^{\prime})|^{2}dqdq^{\prime} \Big{)}^{\frac{1}{2}}\] \[\leq C_{N}\Big{(}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}|f_{R} ^{\varepsilon,\mathfrak{c}}(s^{\prime},y_{2},q^{\prime})|^{2}\cdot\varepsilon^ {-3}\kappa^{-3}dy_{2}dq^{\prime}\Big{)}^{\frac{1}{2}}\] \[\leq\frac{C_{N,\kappa}}{\varepsilon^{\frac{3}{2}}}\sup_{0\leq s \leq T}\|f_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{2},\] where we have made a change of variables \(q\mapsto y_{2}\) with \[\Big{|}\frac{dy_{2}}{dq}\Big{|}=\frac{\mathfrak{c}^{5}}{(q^{0})^{5}}(s-s^{ \prime})^{3}\geq\frac{\kappa^{3}\varepsilon^{3}}{3^{5}}.\] Here we take \(1\leq N\leq\mathfrak{c}\). 
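For the reader's convenience, we sketch the elementary computation behind this Jacobian (using \(\hat{q}=\mathfrak{c}q/q^{0}\) with \(q^{0}=\sqrt{\mathfrak{c}^{2}+|q|^{2}}\)): since \(y_{2}\) depends on \(q\) only through \(-\hat{q}\,(s-s^{\prime})\), one has

\[\frac{\partial\hat{q}_{i}}{\partial q_{j}}=\frac{\mathfrak{c}}{q^{0}}\Big{(}\delta_{ij}-\frac{q_{i}q_{j}}{(q^{0})^{2}}\Big{)},\qquad\det\Big{(}\frac{\partial\hat{q}}{\partial q}\Big{)}=\Big{(}\frac{\mathfrak{c}}{q^{0}}\Big{)}^{3}\Big{(}1-\frac{|q|^{2}}{(q^{0})^{2}}\Big{)}=\frac{\mathfrak{c}^{5}}{(q^{0})^{5}},\]

and hence \(\big{|}\frac{dy_{2}}{dq}\big{|}=(s-s^{\prime})^{3}\,\mathfrak{c}^{5}/(q^{0})^{5}\). On the region \(|q|\leq 2N\leq 2\mathfrak{c}\) one has \(q^{0}\leq\sqrt{5}\,\mathfrak{c}<3\mathfrak{c}\), which, together with \(s-s^{\prime}\geq\kappa\varepsilon\), gives the stated lower bound \(\kappa^{3}\varepsilon^{3}/3^{5}\).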
Thus we have \[\frac{1}{\varepsilon^{2}}\int_{0}^{t}\exp\Big{(}-\frac{\tilde{ \nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}ds\int_{0}^{s-\kappa\varepsilon} \exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}^{\prime}(s,s^{\prime})}{ \varepsilon}\Big{)}ds^{\prime}\] \[\qquad\times\int_{|q|\leq 2N}\hat{k}_{w}(p,q)dq\int_{|q^{\prime}| \leq 3N}\hat{k}_{2}(q,q^{\prime})|\varepsilon^{\frac{3}{2}}w_{\ell}(q^{\prime})h _{R}^{\varepsilon,\mathfrak{c}}(s^{\prime},y_{2},q^{\prime})|dq^{\prime}\] \[\leq C_{N,\kappa}\sup_{0\leq s\leq T}\|f_{R}^{\varepsilon, \mathfrak{c}}(s)\|_{2}.\] Collecting all the four cases, we obtain \[|\varepsilon^{\frac{3}{2}}w_{\ell}(p)\mathcal{J}_{22}|\leq C\Big{(}\kappa+ \frac{1}{N}\Big{)}\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{ \varepsilon,\mathfrak{c}}(s)\|_{\infty,\ell}+C_{N,\kappa}\sup_{0\leq s\leq T}\| f_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{2}. \tag{6.12}\] Therefore, combining (6.11) and (6.12), one obtains \[\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{\infty,\ell}\leq C\Big{(}\varepsilon+\kappa+\frac{1}{N} \Big{)}\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon, \mathfrak{c}}(s)\|_{\infty,\ell}+C\|\varepsilon^{\frac{3}{2}}h_{0}\|_{\infty,\ell}\] \[\qquad\qquad\qquad+C\varepsilon^{k-\frac{3}{2}}\sup_{0\leq s\leq T }\|\varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{\infty, \ell}^{2}+C\varepsilon^{k+\frac{5}{2}}+C_{N,\kappa}\sup_{0\leq s\leq T}\|f_{R }^{\varepsilon,\mathfrak{c}}(s)\|_{2}. \tag{6.13}\] Choosing \(N\) suitably large and \(\kappa\), \(\varepsilon\) suitably small, one gets from (6.13) that \[\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\mathfrak{c} }(s)\|_{\infty,\ell}\leq C\|\varepsilon^{\frac{3}{2}}h_{0}\|_{\infty,\ell}+C \sup_{0\leq s\leq T}\|f_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{2}+C\varepsilon^{k +\frac{5}{2}}.\] Therefore the proof of Lemma 6.4 is completed. Proof of Theorem 1.1.: With Lemmas 6.3-6.4 in hand, the rest proof is the same as [25, 46]. We omit the details here for brevity. Therefore the proof of Theorem 1.1 is completed. Using Theorem 1.1, we can prove Theorem 1.5 as follows. Proof of Theorem 1.5.: Recall \(\bar{c}_{1}\) and \(\bar{c}_{2}\) in (3.45). Using (1.29), for any \((t,x,p)\in[0,T]\times\mathbb{R}^{3}\times\mathbb{R}^{3}\), one has \[|F^{e,\mathfrak{c}}(t,x,p)-\mathbf{M}_{\mathfrak{c}}(t,x,p)|\lesssim\varepsilon \sqrt{J_{\mathfrak{c}}(p)}\lesssim\varepsilon e^{-\frac{|p|}{2T_{M}}}. \tag{6.14}\] A direct calculation shows that \[\mu(t,x,p)-\mathbf{M}_{\mathfrak{c}}(t,x,p)\] \[=\frac{\rho}{(2\pi\theta)^{\frac{3}{2}}}\exp\Big{\{}-\frac{|p- \mathfrak{u}|^{2}}{2\theta}\Big{\}}-\frac{n_{0}\gamma}{4\pi\mathfrak{c}^{3}K _{2}(\gamma)}\exp\Big{\{}\frac{u^{\mu}p_{\mu}}{T_{0}}\Big{\}}\] \[=\frac{\rho}{(2\pi\theta)^{\frac{3}{2}}}\exp\Big{\{}-\frac{|p- \mathfrak{u}|^{2}}{2\theta}\Big{\}}-\frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2}}} \exp\Big{\{}\frac{\mathfrak{c}^{2}+u^{\mu}p_{\mu}}{T_{0}}\Big{\}}(1+O(\gamma ^{-1}))\] \[=O(\gamma^{-1})\frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2}}}\exp\Big{\{} \frac{\mathfrak{c}^{2}+u^{\mu}p_{\mu}}{T_{0}}\Big{\}}+\Big{(}\frac{\rho}{(2 \pi\theta)^{\frac{3}{2}}}-\frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2}}}\Big{)}\exp \Big{\{}-\frac{|p-\mathfrak{u}|^{2}}{2\theta}\Big{\}}\] \[\quad\quad+\frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2}}}\Big{(}\exp \Big{\{}-\frac{|p-\mathfrak{u}|^{2}}{2\theta}\Big{\}}-\exp\Big{\{}\frac{ \mathfrak{c}^{2}+u^{\mu}p_{\mu}}{T_{0}}\Big{\}}\Big{)}\] \[:=\mathcal{A}_{1}+\mathcal{A}_{2}+\mathcal{A}_{3}. 
\tag{6.15}\] It follows from Proposition (3.8) that \[|\mathcal{A}_{1}|\lesssim\frac{1}{\mathfrak{c}^{2}}e^{-2\bar{c}_{1}|p|},\quad| \mathcal{A}_{2}|\lesssim\frac{1}{\mathfrak{c}^{2}}e^{-\bar{c}_{2}|p|}.\] For \(\mathcal{A}_{3}\), if \(|p|\geq\mathfrak{c}^{\frac{1}{8}}\), one has \[|\mathcal{A}_{3}| \lesssim\exp\Big{\{}-\frac{|p|^{2}}{4\theta}\Big{\}}+\exp\Big{\{} -\frac{|p|}{2T_{0}}\Big{\}}\] \[\lesssim\exp\Big{\{}-\frac{\mathfrak{c}^{\frac{1}{4}}}{8\theta} \Big{\}}\exp\Big{\{}-\frac{|p|^{2}}{8\theta}\Big{\}}+\exp\Big{\{}-\frac{ \mathfrak{c}^{\frac{1}{8}}}{4T_{0}}\Big{\}}\exp\Big{\{}-\frac{|p|}{4T_{0}} \Big{\}}\] \[\lesssim\frac{1}{\mathfrak{c}^{2}}\big{(}e^{-\frac{c_{2}}{2}|p|}+ e^{-\bar{c}_{1}|p|}\big{)}.\] If \(|p|\leq\mathfrak{c}^{\frac{1}{8}}\), it follows from (4.55)-(4.56) that \[|\mathcal{A}_{3}|\leq\frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2}}}\exp\Big{\{}- \frac{|p-\mathfrak{u}|^{2}}{2\theta}\Big{\}}\Big{|}1-\exp\Big{\{}\frac{|p- \mathfrak{u}|^{2}}{2\theta}+\frac{\mathfrak{c}^{2}+u^{\mu}p_{\mu}}{T_{0}} \Big{\}}\Big{|}\lesssim\mathfrak{c}^{-\frac{3}{2}}e^{-\bar{c}_{2}|p|}. \tag{6.16}\] Combining (6.15)-(6.16), one has \[|\mu(t,x,p)-\mathbf{M}_{\mathfrak{c}}(t,x,p)|\lesssim\mathfrak{c}^{-\frac{3}{ 2}}(e^{-\frac{c_{2}}{2}|p|}+e^{-\bar{c}_{1}|p|}). \tag{6.17}\] Using (6.14), (6.17) and taking \[\delta_{0}:=\min\Big{(}\frac{1}{2T_{M}},\,\bar{c}_{1},\,\frac{\bar{c}_{2}}{2} \Big{)}>0,\] one has \[|F^{e,\mathfrak{c}}(t)-\mu(t)|\lesssim\varepsilon e^{-\frac{|p|}{2T_{M}}}+ \mathfrak{c}^{-\frac{3}{2}}(e^{-\frac{c_{2}}{2}|p|}+e^{-\bar{c}_{1}|p|}) \lesssim(\varepsilon+\mathfrak{c}^{-\frac{3}{2}})e^{-\delta_{0}|p|},\] which implies that \[\sup_{0\leq t\leq T}\Big{\|}\big{(}F^{e,\mathfrak{c}}-\mu\big{)}(t)e^{\delta_{0 }|p|}\Big{\|}_{\infty}\lesssim\varepsilon+\mathfrak{c}^{-\frac{3}{2}}.\] Therefore the proof of Theorem 1.5 is completed. ## 7. Appendix: Derivation of the orthonormal basis of \(\mathcal{N}_{\mathfrak{c}}\) In this part, we derive the orthonormal basis of \(\mathcal{N}_{\mathfrak{c}}\). One needs to use (1.13)-(1.14) and Lemma 4.11 frequently. Suppose that \[\chi_{0}^{\mathfrak{c}}=\mathfrak{a}_{0}\sqrt{\mathbf{M}_{\mathfrak{c}}},\quad \chi_{j}^{\mathfrak{c}}=\frac{p_{j}-\mathfrak{a}_{j}}{\mathfrak{b}_{j}}\sqrt{ \mathbf{M}_{\mathfrak{c}}}\ (j=1,2,3),\quad\chi_{4}^{\mathfrak{c}}=\frac{p^{0}/ \mathfrak{c}+\sum_{i=1}^{3}\lambda_{i}p_{i}+\mathfrak{c}}{\zeta}\sqrt{ \mathbf{M}_{\mathfrak{c}}}\] form an orthonormal basis of \(\mathcal{N}_{\mathfrak{c}}\). Using \(\langle\chi_{0}^{\mathfrak{c}},\chi_{0}^{\mathfrak{c}}\rangle=1\), one has \(\mathfrak{a}_{0}=\Big{(}\int_{\mathbb{R}^{3}}\mathbf{M}_{\mathfrak{c}}dp\Big{)} ^{-\frac{1}{2}}=\frac{1}{\sqrt{I^{0}}}\). To compute \(\mathfrak{a}_{j}\), since \(\langle\chi_{0}^{\mathfrak{c}},\chi_{j}^{\mathfrak{c}}\rangle=0\), we have \[0=\int_{\mathbb{R}^{3}}(p_{j}-\mathfrak{a}_{j})\mathbf{M}_{\mathfrak{c}}dp=T^ {0j}-\mathfrak{a}_{j}I^{0},\] which yields that \(\mathfrak{a}_{j}=\frac{T^{0j}}{I^{0}}\). For \(\mathfrak{b}_{j}\), using \(\langle\chi_{j}^{\mathfrak{c}},\chi_{j}^{\mathfrak{c}}\rangle=1\), one has \[\mathfrak{b}_{j}^{2} =\int_{\mathbb{R}^{3}}(p_{j}-\mathfrak{a}_{j})^{2}\mathbf{M}_{ \mathfrak{c}}dp=\int_{\mathbb{R}^{3}}(p_{j}^{2}+\mathfrak{a}_{j}^{2}-2 \mathfrak{a}_{j}p_{j})\mathbf{M}_{\mathfrak{c}}dp\] \[=T^{0jj}+\mathfrak{a}_{j}^{2}I^{0}-2\mathfrak{a}_{j}T^{0j}=T^{0jj }-\frac{(T^{0j})^{2}}{I^{0}},\] which yields that \(\mathfrak{b}_{j}=\sqrt{T^{0jj}-\frac{(T^{0j})^{2}}{I^{0}}}\), \(j=1,2,3\). 
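In the computations of this appendix, the moment notation is used with the following reading, which can be checked against each orthogonality relation above and below (we record it here for convenience):

\[I^{0}=\int_{\mathbb{R}^{3}}\mathbf{M}_{\mathfrak{c}}\,dp,\qquad T^{0j}=\int_{\mathbb{R}^{3}}p_{j}\mathbf{M}_{\mathfrak{c}}\,dp,\qquad T^{0ij}=\int_{\mathbb{R}^{3}}p_{i}p_{j}\mathbf{M}_{\mathfrak{c}}\,dp,\]
\[T^{00}=\int_{\mathbb{R}^{3}}p^{0}\mathbf{M}_{\mathfrak{c}}\,dp,\qquad T^{00i}=\int_{\mathbb{R}^{3}}p^{0}p_{i}\mathbf{M}_{\mathfrak{c}}\,dp,\qquad T^{000}=\int_{\mathbb{R}^{3}}(p^{0})^{2}\mathbf{M}_{\mathfrak{c}}\,dp.\]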
To determine the coefficients \(\lambda_{i}\), \(i=1,2,3\), due to \(\langle\chi_{4}^{\mathfrak{c}},\chi_{0}^{\mathfrak{c}}\rangle=\langle\chi_{4 }^{\mathfrak{c}},\chi_{j}^{\mathfrak{c}}\rangle=0\), we have \[\int_{\mathbb{R}^{3}}(p^{0}/\mathfrak{c}+\sum_{i=1}^{3}\lambda_{i }p_{i}+\mathfrak{c})\mathbf{M}_{\mathfrak{c}}dp =0,\] \[\int_{\mathbb{R}^{3}}(p^{0}/\mathfrak{c}+\sum_{i=1}^{3}\lambda_{i }p_{i}+\mathfrak{c})(p_{j}-\mathfrak{a}_{j})\mathbf{M}_{\mathfrak{c}}dp =0,\ j=1,2,3.\] That is \[\frac{T^{00}}{\mathfrak{c}}+\sum_{i=1}^{3}\lambda_{i}T^{0i}+ \mathfrak{c}I^{0} =0,\] \[\frac{T^{00j}}{\mathfrak{c}}-\frac{\mathfrak{a}_{j}}{\mathfrak{c }}T^{00}+\sum_{i=1}^{3}\lambda_{i}(T^{0ij}-\mathfrak{a}_{j}T^{0i})+\mathfrak{ c}(T^{0j}-\mathfrak{a}_{j}I^{0}) =0,\ j=1,2,3.\] One can rewrite the above linear system as \[\left(\begin{array}{cccc}T^{01}&T^{02}&T^{03}&I^{0}\\ T^{011}-\mathfrak{a}_{1}T^{01}&T^{021}-\mathfrak{a}_{1}T^{02}&T^{031}- \mathfrak{a}_{1}T^{03}&T^{01}-\mathfrak{a}_{1}I^{0}\\ T^{012}-\mathfrak{a}_{2}T^{01}&T^{022}-\mathfrak{a}_{2}T^{02}&T^{032}- \mathfrak{a}_{2}T^{03}&T^{02}-\mathfrak{a}_{2}I^{0}\\ T^{013}-\mathfrak{a}_{3}T^{01}&T^{023}-\mathfrak{a}_{3}T^{02}&T^{033}- \mathfrak{a}_{3}T^{03}&T^{03}-\mathfrak{a}_{3}I^{0}\end{array}\right)\left( \begin{array}{c}\lambda_{1}\\ \lambda_{2}\\ \lambda_{3}\\ \mathfrak{c}\end{array}\right)=\left(\begin{array}{c}-\frac{T^{00}}{ \mathfrak{c}}\\ \frac{\mathfrak{a}_{1}T^{00}}{\mathfrak{c}}-\frac{T^{001}}{\mathfrak{c}}\\ \frac{\mathfrak{a}_{2}T^{00}}{\mathfrak{c}}-\frac{T^{002}}{\mathfrak{c}}\\ \frac{\mathfrak{a}_{3}T^{00}}{\mathfrak{c}}-\frac{T^{003}}{\mathfrak{c}} \end{array}\right). \tag{7.1}\] Denote \[\mathfrak{a}:=\frac{n_{0}u^{0}}{\mathfrak{c}}\frac{K_{3}(\gamma)}{K_{2}(\gamma)},\quad\mathfrak{b}:=\frac{n_{0}u^{0}}{\mathfrak{c}\gamma K_{2}(\gamma)}(6K_{3}( \gamma)+\gamma K_{2}(\gamma)).\] By a tedious calculation, one can transform (7.1) into the following system \[\left(\begin{array}{ccccc}0&0&0&\mathfrak{a}\frac{K_{2}(\gamma)}{K_{3}(\gamma)}- \mathfrak{a}\frac{K_{2}(\gamma)}{K_{3}(\gamma)}(\frac{K_{3}(\gamma)}{K_{2}( \gamma)}-\frac{\mathfrak{b}}{a})\frac{|\mathfrak{u}|^{2}}{T_{0}}\\ \mathfrak{a}T_{0}&0&0&\mathfrak{a}\frac{K_{2}(\gamma)}{K_{3}(\gamma)}(\frac{K_ {3}(\gamma)}{K_{2}(\gamma)}-\frac{\mathfrak{b}}{a})u_{1}\\ 0&\mathfrak{a}T_{0}&0&\mathfrak{a}\frac{K_{2}(\gamma)}{K_{3}(\gamma)}(\frac{K_ {3}(\gamma)}{K_{2}(\gamma)}-\frac{\mathfrak{b}}{a})u_{2}\\ 0&0&\mathfrak{a}T_{0}&\mathfrak{a}\frac{K_{2}(\gamma)}{K_{3}(\gamma)}(\frac{K_ {3}(\gamma)}{K_{2}(\gamma)}-\frac{\mathfrak{b}}{a})u_{3}\end{array}\right) \left(\begin{array}{c}\lambda_{1}\\ \lambda_{2}\\ \lambda_{3}\\ \mathfrak{c}\end{array}\right)=\left(\begin{array}{c}\frac{n_{0}}{\gamma}- \frac{\mathfrak{a}u^{0}}{\mathfrak{c}}-\frac{n_{0}}{\mathfrak{c}}\big{(} \frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{\mathfrak{b}}{a}\big{)}\frac{| \mathfrak{u}|^{2}}{T_{0}}\\ \frac{n_{0}}{\mathfrak{c}}\big{(}\frac{K_{2}(\gamma)}{K_{2}(\gamma)}-\frac{ \mathfrak{b}}{a}\big{)}u_{1}\\ \frac{n_{0}}{\gamma}\big{(}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{ \mathfrak{b}}{a}\big{)}u_{2}\\ \frac{n_{0}}{\gamma}\big{(}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{ \mathfrak{b}}{a}\big{)}u_{3}\end{array}\right). \tag{7.2}\] Observing (5.8), one has \(\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{\mathfrak{b}}{a}<0\), which implies that (7.2) has a unique solution. 
More precisely, we can write it down explicitly \[\left(\begin{array}{c}\lambda_{1}\\ \lambda_{2}\\ \lambda_{3}\\ \mathfrak{c}\end{array}\right)=\frac{1}{\frac{\mathfrak{u}^{0}}{\mathfrak{c}}-\left(\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{\mathfrak{c}}{\gamma}-\frac{K_{2}(\gamma)}{K_{3}(\gamma)}\right)\frac{|\mathfrak{u}|^{2}}{\mathfrak{c}T_{0}}}\left(\begin{array}{c}\left(\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{\mathfrak{c}}{\gamma}-\frac{K_{2}(\gamma)}{K_{3}(\gamma)}\right)\frac{(\mathfrak{u}^{0})^{2}}{\mathfrak{c}T_{0}}u_{1}\\ \left(\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{\mathfrak{c}}{\gamma}-\frac{K_{2}(\gamma)}{K_{3}(\gamma)}\right)\frac{(\mathfrak{u}^{0})^{2}}{\mathfrak{c}T_{0}}u_{2}\\ \left(\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{\mathfrak{c}}{\gamma}-\frac{K_{2}(\gamma)}{K_{3}(\gamma)}\right)\frac{(\mathfrak{u}^{0})^{2}}{\mathfrak{c}T_{0}}u_{3}\\ \frac{1}{\gamma}-\frac{(\mathfrak{u}^{0})^{2}}{\gamma T_{0}}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\left(\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{\mathfrak{c}}{\gamma}-\frac{K_{2}(\gamma)}{K_{3}(\gamma)}\right)\frac{|\mathfrak{u}|^{2}}{\gamma T_{0}}\end{array}\right).\] For \(\zeta\), it follows from \(\langle\chi_{4}^{\mathfrak{c}},\chi_{4}^{\mathfrak{c}}\rangle=1\) that \[\zeta^{2} =\int_{\mathbb{R}^{3}}(p^{0}/\mathfrak{c}+\sum_{i=1}^{3}\lambda_{i}p_{i}+\mathfrak{c})^{2}\mathbf{M}_{\mathfrak{c}}dp\] \[=\int_{\mathbb{R}^{3}}\Big{(}\frac{(p^{0})^{2}}{\mathfrak{c}^{2}}+\mathfrak{c}^{2}+\sum_{i,j=1}^{3}\lambda_{i}\lambda_{j}p_{i}p_{j}+2p^{0}+2\sum_{i=1}^{3}\lambda_{i}\mathfrak{c}p_{i}+2\sum_{i=1}^{3}\frac{\lambda_{i}}{\mathfrak{c}}p^{0}p_{i}\Big{)}\mathbf{M}_{\mathfrak{c}}dp\] \[=\frac{T^{000}}{\mathfrak{c}^{2}}+\mathfrak{c}^{2}I^{0}+\sum_{i,j=1}^{3}\lambda_{i}\lambda_{j}T^{0ij}+2T^{00}+2\sum_{i=1}^{3}\lambda_{i}\mathfrak{c}T^{0i}+2\sum_{i=1}^{3}\frac{\lambda_{i}}{\mathfrak{c}}T^{00i},\] which yields that \[\zeta=\sqrt{\frac{T^{000}}{\mathfrak{c}^{2}}+\mathfrak{c}^{2}I^{0}+\sum_{i,j=1}^{3}\lambda_{i}\lambda_{j}T^{0ij}+2T^{00}+2\sum_{i=1}^{3}\lambda_{i}\mathfrak{c}T^{0i}+2\sum_{i=1}^{3}\frac{\lambda_{i}}{\mathfrak{c}}T^{00i}}.\] Consequently, we obtain the desired orthonormal basis of \(\mathcal{N}_{\mathfrak{c}}\). **Acknowledgments.** Yong Wang's research is partially supported by National Key R&D Program of China No. 2021YFA1000800, National Natural Science Foundation of China No. 12022114, 12288201, CAS Project for Young Scientists in Basic Research, Grant No. YSBR-031, and Youth Innovation Promotion Association of the Chinese Academy of Science No. 2019002. Changguo Xiao's research is partially supported by National Natural Science Foundation of China No. 12361045 and Guangxi Natural Science Foundation (Grant No. 2023GXNSFAA026066). **Conflict of Interest:** The authors declare that they have no conflict of interest.
In relativistic kinetic theory, the hydrodynamic limit and the Newtonian limit are fundamental concepts. We rigorously justify these two independent limits, from the special relativistic Boltzmann equation to the classical Euler equations, without assuming any dependence between the Knudsen number ε and the speed of light c. Convergence rates are also obtained. This is achieved through a Hilbert expansion of the special relativistic Boltzmann equation. New difficulties arise when applying uniform-in-c and uniform-in-ε estimates in the Hilbert expansion; these are overcome by establishing uniform-in-c estimates for the special relativistic Boltzmann equation.
2309.08989
RMP: A Random Mask Pretrain Framework for Motion Prediction
As the pretraining technique is growing in popularity, little work has been done on pretrained learning-based motion prediction methods in autonomous driving. In this paper, we propose a framework to formalize the pretraining task for trajectory prediction of traffic participants. Within our framework, inspired by the random masked model in natural language processing (NLP) and computer vision (CV), objects' positions at random timesteps are masked and then filled in by the learned neural network (NN). By changing the mask profile, our framework can easily switch among a range of motion-related tasks. We show that our proposed pretraining framework is able to deal with noisy inputs and improves motion prediction accuracy and miss rate, especially for objects occluded over time, as shown by evaluations on the Argoverse and NuScenes datasets.
Yi Yang, Qingwen Zhang, Thomas Gilles, Nazre Batool, John Folkesson
2023-09-16T13:09:02
http://arxiv.org/abs/2309.08989v1
# RMP: A Random Mask Pretrain Framework for Motion Prediction ###### Abstract As the pretraining technique is growing in popularity, little work has been done on pretrained learning-based motion prediction methods in autonomous driving. In this paper, we propose a framework to formalize the pretraining task for trajectory prediction of traffic participants. Within our framework, inspired by the random masked model in natural language processing (NLP) and computer vision (CV), objects' positions at random timesteps are masked and then filled in by the learned neural network (NN). By changing the mask profile, our framework can easily switch among a range of motion-related tasks. We show that our proposed pretraining framework is able to deal with noisy inputs and improves motion prediction accuracy and miss rate, especially for objects occluded over time, as shown by evaluations on the Argoverse and NuScenes datasets. ## I Introduction Accurately predicting the motion of road users is essential in autonomous driving systems. This predictive capability provides the planner with a forward-looking perspective on potential movements, thereby enhancing safety measures. While learning-based motion prediction has become increasingly popular in recent research, the exploration of pretraining and self-supervised learning within this field remains relatively limited. The technique of random masking has demonstrated its effectiveness in various fields, such as natural language processing (NLP) and computer vision (CV), as evidenced by models like BERT [1] and Masked Autoencoders [2] in conjunction with Vision Transformers (ViT [3]). Random masking involves concealing a portion of the data (masking) and then tasking the neural network with predicting the hidden elements, thereby creating a nontrivial and beneficial self-supervisory task. This method employs an asymmetric encoder-decoder architecture, which has proven to be particularly powerful regarding training speed with large datasets. Furthermore, it has demonstrated exceptional performance in transfer learning, particularly in tasks related to image processing. Inspired by SceneTransformer [4], the motion prediction task is linked with a mask on the future time-sequential data of road users. As depicted in Fig. 1, the data for all agents can be represented as a grid, with time and agent forming the two axes. In this context, motion prediction becomes a unique task wherein future states are masked [4]. This leads us to the natural question: _Could random mask pretraining be effectively applied to general motion tasks as well?_ These tasks include motion prediction (marginal, conditional, etc.), occlusion handling, and others. We introduce a straightforward yet potent framework for random masking pretraining (RMP) for motion tasks. Our RMP selectively conceals motion patches, allowing the random mask to capture spatial and social correlations among all agents in a given scenario. This universal framework can be readily integrated into numerous motion prediction methodologies. In this paper, we demonstrate its adaptability by incorporating it into several state-of-the-art models, including Autobots [5] and HiVT [6]. We assess the impact of pretraining on performing three different tasks: motion prediction, conditional motion prediction, and occlusion handling. In the case of conditional motion prediction, not only is the historical information of all agents provided, but also the desired trajectory of the ego vehicle.
The network then endeavors to predict the trajectories of all other agents. In addition to classic motion prediction, we also treat occlusion handling as a separate task to evaluate our proposed framework. In real-world scenarios, occlusions are a common occurrence where one or more agents are partially or entirely obscured from view. Under such circumstances, predicting the motion of the occluded agents becomes a complex task that can significantly influence the overall performance of the autonomous driving system, especially with occlusions happening over short distances. This is a nontrivial issue that has often not been specifically focused on in practice. Fig. 1: **Random Masking for Motion Data.** We treat time-sequential data as one dimension and all agents in the scenario as another, with each cell representing the high-dimensional features of an agent (including position, heading, agent type, agent shape, etc.). _Left_: Motion prediction is a special case where all future timesteps are masked (shown in blue) [4]. _Right_: We apply random masking to a scenario, hiding patches for random agents and random time steps for pretraining. _Ego_ stands for the ego autonomous vehicle. For agents whose historical trajectories are partially or heavily occluded, we evaluate the performance of the current state-of-the-art networks with and without masking pretraining in an object-based manner. Our experimental results indicate that motion prediction benefits from transfer learning for generalization and random masking. Our framework demonstrates effective performance on the Argoverse [7] and NuScenes [8] datasets. Our code will be publicly accessible at [https://github.com/KTH-RPL/RMP](https://github.com/KTH-RPL/RMP). In this paper, we make the following contributions: * We introduce a pretraining framework for a range of motion-related tasks. * We design experiments to validate the effectiveness of random masking. * We highlight that occlusion handling remains a challenge for current state-of-the-art methods and demonstrate that our pretraining method enhances performance in this area. ## II Related Work ### _Motion Prediction_ Motion prediction has been explored rapidly in recent years thanks to large open datasets and public benchmarks [7, 8, 9, 10]. Early approaches drew inspiration from successful computer vision techniques, where the map and agents' historical trajectories were rasterized into images using specific color encoding [11, 12, 13]. However, rasterization carries certain limitations, such as the challenge of selecting an optimal Field-Of-View (FOV) due to the high computational cost of high-resolution imaging and the potential for long-distance information loss. An alternative approach to these challenges is using sparse vectors and polygons, as exemplified by VectorNet [14]. Other network architectures that have been explored include Graph Neural Networks [15, 16] and Transformers [4, 17, 6, 18]. The outputs of these representations vary: some generate a set of point trajectories in an end-to-end manner [15, 4, 6], while others generate top-K trajectory samples from anchors [12], heatmaps [19, 20, 21], or kinematic models [22, 23]. Owing to its adaptability, our proposed framework can be effectively incorporated into many of these methods. ### _Self-supervised Learning_ Self-supervised learning methods have garnered substantial interest across various fields, such as NLP and CV [1, 24, 25, 26]. These methods leverage different tasks to initialize network weights in the pretraining phase.
For instance, contrastive learning [27, 28] designs tasks that distinguish between similarities and dissimilarities, utilizing both original data samples and their augmented counterparts. The Masked Autoencoder, proposed by [2], uses a masked encoder to reconstruct missing pixels in images during the pretraining phase, resulting in better performance and a training speed that is four times faster than training from scratch. This technique has inspired applications in a variety of domains, such as video [29, 30], 3D point clouds [31], and visual reinforcement learning in robotics [32]. Self-supervised learning for motion prediction in autonomous driving remains largely unexplored. However, in the past year, a few studies have started investigating this area [33, 34, 35, 36]. Bhattacharyya et al. [36] propose a suite of four pretraining tasks, including lane masking, intersection distance calculation, maneuver classification, and success/failure classification. The work most similar to ours is the recent arXiv preprint [37], which shows results similar to our own on one of the tasks we tested (prediction). Our work here was developed independently of [37]. ### _Conditional Motion Prediction_ Compared to standard motion prediction, conditional motion prediction offers additional information by incorporating specific conditions, such as the intended path of the ego vehicle. For example, the work presented in [38] generates predictions based on hypothetical 'what-if' interactions among road users and lanes. In this way, although their targeted task closely resembles standard motion prediction, it extends the context by incorporating speculative interaction scenarios. Additionally, studies like [39] and [21] adopt a two-step approach in their prediction methodology by first predicting the destination positions, which are then used as conditions for predicting full trajectories. This effectively transforms the prediction task into a conditional one, where the trajectories are predicated on hypothesized destinations. ### _Occlusion Handling_ Handling occlusions in motion prediction is crucial for enhancing the robustness and reliability of autonomous driving systems. A widely adopted representation called the Occupancy Grid Map (OGM) captures the spatial arrangement of obstacles and free space, where each grid cell represents the estimated probability of an agent's presence within. Predicting future OGMs makes it possible to reason about occluded areas, thus offering a more comprehensive understanding of the environment [40, 41]. Nevertheless, these approaches based on OGMs can be computationally expensive, particularly for high-resolution, large, and complex environments. For object-based methods, there has been limited work due to the lack of motion prediction datasets that annotate occluded objects. Most datasets are primarily collected from the ego vehicle's perspective [7, 8]. To help mitigate this, we have post-processed the INTERACTION dataset [10], which was captured from bird's-eye-view drones. This has allowed us to estimate occlusion labels for objects, and we openly share the resulting post-processed dataset for further research in this area. ## III Problem Formulation Consider a scenario including \(N\) agents' trajectories \(A\) over \(T\) timestamps, denoted as \(A_{i}\in\mathbb{R}^{T\times D_{agent}}\), where \(i\in[1,N]\), along with the surrounding road topology \(Map\in\mathbb{R}^{S\times P\times D_{road}}\).
Here, \(S\) represents the number of road segments, \(P\) denotes the number of points within a segment, and \(D\) signifies the vector feature dimension that includes the position coordinates \(x,y\) and the validity mask for both \(D_{agent}\) and \(D_{road}\). If the yaw angle, velocity, and size of the agents are provided in the dataset, they are also added to the feature dimension \(D\). In the context of motion prediction, we are provided with the historical trajectory \(A_{history}\in\mathbb{R}^{T_{obs}\times D_{agent}}\), where \(T_{obs}\) signifies the observed historical timestamps, and our task is to predict the future trajectory \(A_{future}\in\mathbb{R}^{T_{ft}\times D_{agent}}\). Here, it is worth mentioning that occlusion can complicate this task, as \(A_{history}\) may contain many occluded objects with unknown states. In the case of conditional motion prediction, however, additional elements are taken into account. In particular, the historical information is supplemented with the ego vehicle's anticipated future route path \(A_{ego}\in\mathbb{R}^{T_{ft}\times D_{agent}}\) (i.e., \(A_{i}\) with \(i\) the index of the ego vehicle), which forms part of the input. ## IV Methodology In this section, we outline the strategy employed in our study. Fig. 2 provides an illustration of the complete training framework, and the specifics of the random masking application are outlined in the following sub-sections. Fig. 2: The pretraining framework. In the first pretrain phase, all agents' information including the history and future time are concatenated together. Next, random masking is applied. Then, given incomplete information about agents' positions over time (in grey), where some positions are randomly masked (in blue), the network trains to fill in the missing positions. In the fine-tuning phase, there are three tasks that correspond to three special masking cases. Once trained, the pretrained encoder is used for different tasks. ### _Network_ Our approach is an extension of the masked autoencoder [1, 2] to time-sequential trajectory data and aims to provide a simple yet effective framework that is applicable to many motion prediction methodologies with minimal domain-specific knowledge required. The framework can accommodate many network architectures in a two-stage process. In the first stage, different masking strategies are applied to all timestamps, including the history and future timestamps, and for all agents. Given incomplete waypoints, the model tries to predict \(K\) possible completed trajectories. Therefore, we do not need to change the loss function from the original methods. In the second fine-tuning stage, the network combines the pretrained encoder and the task-specific decoder. We test our method on two networks: Autobot-Joint [5] and HiVT [6]. Autobot-Joint [5] is a transformer-based network that uses an axial attention mechanism to learn the temporal and spatial correlations among agents and road topology. HiVT [6] models the local and global context in a translation- and rotation-invariant transformer network. ### _Masking_ By changing the validity mask within the input, the pretraining task can easily be switched among trajectory completion (pretraining task), motion prediction, and conditional prediction. The mask defines which parts can be seen by the network. For the unseen parts, we further set them to zeros to guarantee a clean input for the network. The random masking pretraining incorporates pointwise, patchwise, and time-based strategies, as illustrated in Fig. 3, each serving a distinct purpose. Fig. 3: Different mask sampling strategies: (a) random pointwise masking, (b) random patchwise masking for random agents, (c) random masking in time. All show 75% masking in total (in blue) and the remaining data (in grey) will be fed into the network. The pointwise approach (Fig. 3(a)) primarily facilitates the learning of interpolation and correlation over a short period from noisy data. In contrast, the patchwise method (Fig. 3(b)) fosters an understanding of interactions over extended periods. Inspired by the masked autoencoder approach to video data [30, 29], each agent's trajectory is divided into non-overlapping patches in space and time given a certain timeframe. The size of these patches is chosen randomly, and patches are masked randomly. The time-based strategy (Fig. 3(c)) simulates scenarios where a sensor might fail abruptly, leading to missing data at random timestamps. A minimal code sketch of these sampling strategies is given below.
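The following sketch is our illustration, not the authors' released code; the function names, arguments, and defaults are ours. It shows how the three sampling profiles, as well as the task-specific masks used in fine-tuning, can be realized over an agents-by-time validity grid:

```python
import numpy as np

def random_mask(valid, ratio=0.75, mode="point", max_patch=10, rng=None):
    """Sample a boolean mask (True = hidden) over an [N_agents, T] validity grid."""
    rng = rng or np.random.default_rng()
    n, t = valid.shape
    hide = np.zeros((n, t), dtype=bool)
    if mode == "point":                          # (a) independent cells
        hide = rng.random((n, t)) < ratio
    elif mode == "patch":                        # (b) contiguous spans per agent
        budget = int(ratio * t)
        for i in range(n):
            while hide[i].sum() < budget:
                size = int(rng.integers(1, max_patch + 1))   # random patch size
                start = int(rng.integers(0, t))
                hide[i, start:start + size] = True
    elif mode == "time":                         # (c) whole timesteps, all agents
        cols = rng.choice(t, size=int(ratio * t), replace=False)
        hide[:, cols] = True
    return hide & valid                          # never touch already-invalid cells

def task_mask(valid, t_obs, ego=None):
    """Fine-tuning profiles: prediction hides all future steps; conditional
    prediction additionally reveals the ego row (occlusion handling simply
    feeds the naturally incomplete validity grid). Our sketch, not the paper's code."""
    n, t = valid.shape
    hide = np.zeros((n, t), dtype=bool)
    hide[:, t_obs:] = True                       # future states are unknown
    if ego is not None:
        hide[ego, t_obs:] = False                # ego future trajectory is given
    return hide & valid

# Masked positions are zeroed before being fed to the network:
# positions[hide] = 0.0; valid = valid & ~hide
```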
The three tasks - motion prediction, conditional prediction, and occlusion handling - are three special masking cases (Fig. 2). Each task involves the process of prediction, where future trajectories are treated as unknown and masked out. In conditional motion prediction, alongside the full historical data, the future desired path of the ego vehicle is also provided. For occlusion handling, the input data is often incomplete due to occlusions. Since the three tasks correspond to special cases of masking, they can be carried out by adapting the same network architecture accordingly. ## V Experiments ### _Datasets_ We evaluate the efficacy of our pretraining framework for motion and conditional prediction on two widely used datasets: Argoverse [7] and nuScenes [8]. The Argoverse motion forecasting dataset contains \(205,942\) training sequences and \(39,472\) validation sequences. Each sequence includes the positions of all agents over a 5-second period at \(10\) Hz. The task is to predict the subsequent 3 seconds of trajectory based on the initial 2 seconds of past observations, with HD map information provided. The nuScenes dataset consists of \(32,186\) training and \(9,041\) validation sequences. The objective, in this case, is to predict the future 6 seconds of trajectories at a rate of 2 Hz, given the past 2 seconds of trajectory data. In order to evaluate our model's proficiency in handling occlusions, we leverage the multi-track INTERACTION dataset [10]. This dataset is collected by drones and traffic cameras, which makes it possible to label occluded objects from the perspective of a single vehicle. We auto-labeled occluded objects in the validation dataset based on a randomly designated ego agent. From a bird's-eye view, and given the positions and sizes of all agents, we compute the occupancy grid following [40]. Objects within the occluded region are labeled as _occluded_, as demonstrated in Fig. 4. The network is initially trained using the original training data, after which it is tested on this postprocessed validation dataset. The training uses a bird's-eye view without occlusions, while the validation set includes realistic real-world occlusions as seen from the vehicle's perspective. ### _Masking Strategy_ We have conducted extensive testing to assess the impact of different masking strategies on performance. The results of these ablation experiments are presented in Table I, which displays the outcomes of tests utilizing varying mask ratios and profiles for the pretraining task. Interestingly, for pointwise masking, ratios of 50% and 75% yielded superior results.
Conversely, for both patchwise and time-only masks, a 25% ratio demonstrated the best performance. Among the tested profiles, point masking proved most effective. Regarding frozen encoder weights, the experiments show that the unfrozen encoder achieves better results (Table I). We also test with different encoder sizes. The default Autobot model utilizes 2 sets of axial attention for temporal and social relations (\(\sim\)1,160,320 parameters for the encoder). Despite extending the size to include 4 and 6 sets, larger networks did not result in improved performance, as demonstrated in Table Ic. This could be attributed to the relatively small size of the Argoverse 1 dataset. TABLE I: Ablation experiments on our pretrain framework with the Autobot model on the Argoverse validation dataset. We have evaluated the influence of different mask sampling strategies, fine-tuning with or without frozen pretrained encoder weights, and also the encoder size. _w/[P]_ represents the method with our random masking pretraining. The default setting is highlighted in grey. Fig. 4: Two examples of labeling occluded objects using a ray-tracing occupancy grid map from one vehicle's view. The labeled object track will be used to evaluate the occlusion handling performance. The dark blue occluded agent in the occluded area (in grey grids) is blocked by other visible agents (in cyan), from the ego vehicle's (in teal) view. To ensure a fair comparison between pretraining and training from scratch, we perform experiments over comparable time periods and on identical devices. As an example, the conditional motion prediction results for the Argoverse dataset (Fig. 6) show that pretraining achieves better results and converges faster. Our experiments also show that the same pretrained network can learn other tasks at a faster rate and with better results. ### _Motion Prediction_ We have integrated our framework into the nuScenes (Table II) and Argoverse (Table III) datasets for motion prediction. The results indicate that the implementation of random masking pretraining enhances performance. In nuScenes, our approach achieves comparable results to other state-of-the-art methods. Compared to the baseline, the application of random masking showed marked improvements in metrics including \(minADE_{5}\), \(minADE_{10}\), and the miss rate for the top 5 predictions within 2 meters, with percentage decreases of 3.5%, 6.7%, and 9.1%, respectively. Note that in order to maintain a fair comparison, the Autobot baseline we utilize does not include ensemble operations, as these are not used in our post-processing steps. \begin{table} \begin{tabular}{l c c c} \hline \hline Method & minADE\({}_{5}\) \(\downarrow\) & minADE\({}_{10}\) \(\downarrow\) & Miss Rate (Top 5, 2 m) \(\downarrow\) \\ \hline GOHOME [20] & 1.42 & 1.15 & 0.57 \\ THOMAS [21] & 1.33 & 1.04 & 0.55 \\ PGP [42] & 1.27 & 0.94 & 0.52 \\ FRM [43] & 1.18 & 0.88 & 0.48 \\ \hline Autobot [5] (baseline, w/o ensemble) & 1.43 & 1.05 & 0.66 \\ Autobot w/[P] (ours) & 1.38 (\(-\)3.5\%) & 0.98 (\(-\)6.7\%) & 0.60 (\(-\)9.1\%) \\ \hline \hline \end{tabular} \end{table} TABLE II: Performance comparison of different models on the nuScenes dataset. Here we use the baseline results of Autobot without ensemble to maintain a fair comparison. Fig. 5: Qualitative results with our random mask pretrain framework. Fig. 6: Conditional motion prediction on the Argoverse dataset. Pretraining with fine-tuning is more accurate than training from scratch. The model is Autobot-Joint. The x-axis represents relative wall training time (4\(\times\)A100 GPUs), and the y-axis represents the \(minADE_{6}\). The pretraining is done with 75% pointwise masking. In Argoverse, we incorporate two methods: Autobot-Joint and HiVT. Both of them show a positive impact of masked pretraining, resulting in a decrease of \(minADE_{6}\) and \(minFDE_{6}\) by 3.9% and 4.6% for Autobot, and 4.9% and 1.6% for HiVT. Note that for HiVT, we prioritized speed and trained on four GPUs, resulting in lower performance than training on a single GPU. However, our comparison is conducted under the same environment and settings. ### _Conditional Motion Prediction_ We evaluate conditional motion prediction on the nuScenes and Argoverse datasets, again with Autobot. Given the history information and the ego vehicle's desired future trajectories, the task is to predict all other agents' possible future trajectories. The results for this task are shown in Table IV. For Argoverse, it reduces \(minADE_{6}\) and \(minFDE_{6}\) by 12.0% and 10.2%, respectively. For nuScenes, it reduces \(minADE_{10}\) and \(minFDE_{10}\) by 8.8% and 4.9%. Given that the Argoverse data features higher frequency and more waypoints, it is plausible that random masking exhibits superior performance as the input size expands. ### _Occlusion Handling_ We use the postprocessed validation INTERACTION dataset to evaluate the efficacy of Autobot in complex scenarios as well as the benefits of random masking. The network is trained with regular INTERACTION training data. However, at inference time, the network can only access the agent waypoints annotated as visible. Thus, for the agents that are partially occluded (Section V-A), the network can only see an incomplete history. We then measure how well the network can capture such partially occluded agents' future trajectories. As shown in Table V, the use of random masking enhances the network's capability to predict a partially occluded agent's future trajectory, with improvements exceeding 30% for both \(ADE\) and \(FDE\). The results are not surprising, as the pretraining is a sort of random synthetic occlusion (as opposed to the actual realistic occlusions that we model in the validation set). Therefore, the pretrained network has a considerable advantage over a network simply trained with bird's-eye-view data and no occlusions. ## VI Conclusion In this paper, we propose a simple and effective random mask pretraining framework which facilitates both general motion prediction and conditional motion prediction. Furthermore, our framework largely improves prediction accuracy in occlusion scenarios. Self-supervised learning and masked autoencoders can be explored further with state-of-the-art techniques in the field of motion prediction for autonomous driving. Additionally, exploring new auxiliary tasks within the self-supervised learning domain offers exciting possibilities for further advancements. We think that exploring self-supervised learning may be beneficial as the volume of motion prediction data expands. ## Acknowledgement This work1 was funded by Vinnova, Sweden (research grant). The computations were enabled by the supercomputing resource Berzelius provided by the National Supercomputer Centre at Linköping University and the Knut and Alice Wallenberg foundation, Sweden. Footnote 1: We have used ChatGPT for editing and polishing author-written text.
Little work has been done so far on pretrained learning-based motion prediction methods. In this paper, we propose a framework that formalizes the pretraining task for trajectory prediction of traffic participants. Within this framework, inspired by random masked models in natural language processing (NLP) and computer vision (CV), objects' positions at random timesteps are masked and then filled in by a learned neural network (NN). By changing the mask profile, the framework can easily switch among a range of motion-related tasks. Evaluations on the Argoverse and NuScenes datasets show that the proposed pretraining framework can deal with noisy inputs and improves motion prediction accuracy and miss rate, especially for objects occluded over time.
2309.11200
Rotating Alfvén waves in rotating plasmas
Angular momentum coupling between a rotating magnetized plasma and torsional Alfv\'en waves carrying orbital angular momentum (OAM) is examined. It is not only demonstrated that rotation is the source of Fresnel-Faraday rotation - or orbital Faraday rotation effects - for OAM carrying Alfv\'en waves, but also that angular momentum from an OAM carrying Alfv\'en wave can be transferred to a rotating plasma through the inverse process. For the direct process, the transverse structure angular rotation frequency is derived by considering the dispersion relation for modes with opposite OAM content. For the inverse process, the torque exerted on the plasma is derived as a function of wave and plasma parameters.
J. -M. Rax, R. Gueroult, N. J. Fisch
2023-09-20T10:36:49
http://arxiv.org/abs/2309.11200v1
# Rotating Alfven waves in rotating plasmas ###### Abstract Angular momentum coupling between a rotating magnetized plasma and torsional Alfven waves carrying orbital angular momentum (OAM) is examined. It is not only demonstrated that rotation is the source of Fresnel-Faraday rotation - or orbital Faraday rotation effects - for OAM carrying Alfven waves, but also that angular momentum from an OAM carrying Alfven wave can be transferred to a rotating plasma through the inverse process. For the direct process, the transverse structure angular rotation frequency is derived by considering the dispersion relation for modes with opposite OAM content. For the inverse process, the torque exerted on the plasma is derived as a function of wave and plasma parameters. ## 1 Introduction Understanding the effect of rotation on plasma dynamics is essential to a wide range of applications. Besides original efforts motivated by microwave generation in magnetrons (Brillouin, 1945), it has indeed been shown that rotation could enable new approaches to thermonuclear confinement (Rax _et al._, 2017; Wilcox, 1959; Bekhtenev _et al._, 1980; Ochs & Fisch, 2017; Hassam, 1997; Fetterman & Fisch, 2010, 2008). Rotation has also been found to hold promise for developing plasma mass separation applications (Gueroult _et al._, 2017, 2019), either in pulsed plasma centrifuges (Bonnevier, 1966; Krishnan _et al._, 1981) or in steady-state cross-field rotating plasmas (Ohkawa & Miller, 2002; Shinohara & Horii, 2007; Gueroult _et al._, 2014; Fetterman & Fisch, 2011; Gueroult _et al._, 2014), advanced accelerators (Janes, 1965; Janes _et al._, 1965; Thaury _et al._, 2013; Rax & Robiche, 2010) and thrusters (Gueroult _et al._, 2013). But understanding the effect of rotation on plasma dynamics is also essential in a number of environments. Rotation is for instance key to the structure and stability of a number of astrophysical objects (Kulsrud, 1999; Miesch & Toomre, 2009). In light of this ubiquity, and because plasma waves are widely used both for control and diagnostics in plasmas, it seems desirable to understand what the effect of rotation on wave propagation in plasmas may be (Gueroult _et al._, 2023). In fact the importance of this task was long recognised in geophysics and astrophysics, leading to extensive studies of low frequency MHD waves in rotating plasmas (Lehnert, 1954; Hide, 1969; Acheson
This azimuthal Fresnel drag of OAM carrying waves, which can be viewed as an orbital Faraday rotation of the amplitude, was first derived (Wisniewski-Barker _et al._, 2014) and observed (Franke-Arnold _et al._, 2011) in isotropic, nongyrotropic media. In contrast, propagation of OAM carrying wave in a rotating anisotropic (gyrotropic) medium poses greater difficulty since the polarization state and the wave vector direction - which are independent parameters for a given wave frequency in an isotropic medium - become coupled. Yet, it was recently shown that Faraday-Fresnel Rotation (FFR) is also found for the high frequency magnetized plasma modes that are Whistler-Helicon and Trivelpiece-Gould modes (Rax & Guerout, 2021). For such high frequency modes it was found that the main modifications induced by the plasma rotation are associated with Doppler shift and Coriolis effect in the dispersion relation. Interestingly, we note that the result that rotation is the source of an azimuthal component for the group velocity of low frequency waves in magnetized plasmas when \(\mathbf{\Omega}\cdot\mathbf{k}\neq 0\) was already pointed out in geophysics and astrophysics (Acheson & Hide, 1973), but the connection to a Faraday-Fresnel rotation of the transverse structure of the wave did not seem to have been made. An added complexity for these low frequency modes is that one must, in addition to anisotropy and gyrotropy, consider the strong coupling to the inertial mode (Lighthill, 1980) that then comes into play. Revisiting this problem, we derive here in this study the expression for FFR for low frequency rotating Alfven waves in a rotating magnetized plasma. This paper is organised as follows. After briefly recalling the configuration of interest and previous results in the next section, we construct in section 3 the spectrum of low frequency, small amplitude, fluid waves in a magnetized rotating plasma. The set of linearised Euler and Maxwell equations describes an oscillating Beltrami flow-force free field (Chandrasekhar & Prendergast, 1956) whose components are expressed with a cylindrical Chandrasekhar-Kendall (CK) potential (Chandrasekhar & Kendall, 1957; Yoshida, 1991). Then, in section 4, these orbital angular momentum carrying waves are shown to display a FFR under the influence of the plasma rotation. Section 5 focuses on the inverse problem when the orbital angular momentum of the wave is absorbed by the plasma. We derive in this case the torque exerted by this wave on the fluid driven as a function of the wave and plasma parameters. Finally section 6 summarises the main findings of this study. ## 2 Background In this study we consider a rotating magnetized plasma column with angular velocity \(\mathbf{\Omega}=\Omega\mathbf{e}_{z}\) and static uniform magnetic field \(\mathbf{B}_{0}=B_{0}\mathbf{e}_{\mathbf{z}}\). We write \((r,\theta,z)\) and \((x,y,z)\) cylindrical and Cartesian coordinates on cylindrical \((\mathbf{e}_{r},\mathbf{e}_{\theta},\mathbf{e}_{z})\) and Cartesian \((\mathbf{e}_{r},\mathbf{e}_{\theta},\mathbf{e}_{z})\) basis, respectively. The plasma dynamics is described assuming an inviscid and incompressible fluid model. We classically define the Alfven velocity \(\mathbf{V}\doteq\mathbf{B}_{0}/\sqrt{\mu_{0}\rho}\) where \(\mu_{0}\) is the permittivity of vacuum and \(\rho\) the mass density of the fluid. In the simple case where \(B_{0}=0\) and \(\Omega\neq 0\) the rotating plasma behaves as an ordinary rotating fluid and inertial waves can propagate. 
Taking a phase factor \(\exp j\left(\omega t-k_{\parallel}z-k_{\perp}y\right)\), the dispersion relation for this inertial mode (IM) is (Lighthill, 1980) \[\omega=\pm 2\Omega k_{\parallel}/\sqrt{k_{\parallel}^{2}+k_{\perp}^{2}}. \tag{2.1}\] Conversely, in the case where \(\Omega=0\) but \(B_{0}\neq 0\), Alfven waves can propagate in the magnetized plasma at rest. The dispersion of this torsional mode (TAW) is (Stix, 1992) \[\omega=\pm B_{0}k_{\parallel}/\sqrt{\mu_{0}\rho}=\pm k_{\parallel}V. \tag{2.2}\] Note that compressional Alfven waves (CAW) are not considered here as we are considering an incompressible plasma. The dispersion of uncoupled TAW and IM is plotted in Fig. 1 in the \(\left(k_{\parallel}V/\omega,k_{\perp}V/\omega\right)\) plane for a given frequency \(\omega\). In this figure the grey zones indicate regions of strong coupling between TAW and IM. Note that for convenience we have normalised the wave-vector to \(\omega/V\), even for the unmagnetized IM branch. Figure 1: Uncoupled dispersion of torsional Alfvén waves (TAW) obtained for \(B_{0}\neq 0\) and \(\Omega=0\), and of inertial waves (IM) obtained for \(\Omega\neq 0\) and \(B_{0}=0\). In the more general case where both \(B_{0}\neq 0\) and \(\Omega\neq 0\), a strong coupling between IM and TAW modes rearranges the spectrum and gives rise to two new branches (Lehnert, 1954; Acheson & Hide, 1973). Since, as already pointed out by Acheson & Hide (1973), the group velocity of these modes for waves such that \(\boldsymbol{\Omega}\cdot\mathbf{k}\neq 0\) has an azimuthal component, we expect Fresnel-Faraday Rotation as recently identified for Trivelpiece-Gould and Whistler-Helicon high frequency electronic modes (Rax & Gueroult, 2021). ## 3 Rotating Alfven waves in a rotating plasma In this section we examine the properties of low frequency waves carrying orbital angular momentum in a rotating magnetized plasma. ### Classical modes Two methods can be used to identify and describe the coupling between the angular momentum of a rotating plasma column and the angular momentum of a wave propagating in this rotating magnetized plasma. One is to consider the transformation laws of the various parameters from the lab frame to a rotating frame. The other is to perform the study in the lab frame starting from first principles. Here we will use the first method, similarly to original contributions on MHD waves in rotating conductive fluids (Lehnert, 1954; Hide, 1969), and solve the perfect MHD dynamics to calculate the rotating plasma linear response for the low frequency branches where the coupling between the fields and the particles is large. By working in the co-rotating frame (R) rather than in the laboratory frame (L), both the Coriolis force \(2\boldsymbol{\Omega}\times\mathbf{v}\) and the centrifugal force \(-\boldsymbol{\nabla}\psi\) with \(\psi=-\Omega^{2}r^{2}/2\) must be taken into account.
We model the evolution of the wave velocity field \(\mathbf{v}\left(\mathbf{r},t\right)\) using Euler's equation under the assumption of zero viscosity \[\frac{\partial\mathbf{v}}{\partial t}+\left(\mathbf{v}\cdot\boldsymbol{\nabla}\right)\mathbf{v}+2\boldsymbol{\Omega}\times\mathbf{v}=-\boldsymbol{\nabla}\left(\frac{P}{\rho}+\psi\right)+\frac{1}{\mu_{0}\rho}\left(\boldsymbol{\nabla}\times\mathbf{B}\right)\times\left(\mathbf{B}+\mathbf{B}_{0}\right), \tag{3.1}\] and the evolution of the wave magnetic field \(\mathbf{B}\left(\mathbf{r},t\right)\) using Maxwell-Faraday's equation under the assumption of perfect conductivity \[\frac{\partial\mathbf{B}}{\partial t}=\boldsymbol{\nabla}\times\left[\mathbf{v}\times\left(\mathbf{B}+\mathbf{B}_{0}\right)\right], \tag{3.2}\] where \(\rho\) is the mass density of the fluid and \(P\) is the pressure. These dynamical relations are completed by the flux conservation law \[\boldsymbol{\nabla}\cdot\mathbf{B}=0 \tag{3.3}\] for the magnetic field and the incompressibility relation \[\boldsymbol{\nabla}\cdot\mathbf{v}=0 \tag{3.4}\] for the velocity field. As already mentioned this last relation will restrict the plasma behaviour to the Alfvenic dynamics associated with torsional waves. We then consider a small amplitude magnetohydrodynamic perturbation, propagating along and around the \(z\) axis, described by a magnetic perturbation \[\mathbf{B}\left(r,\theta,z,t\right)=\mathfrak{B}\left(r,\theta,z\right)\exp(j\omega t) \tag{3.5}\] with respect to the uniform static magnetic field \(\mathbf{B}_{0}=B_{0}\mathbf{e}_{z}\). The wave frequency \(\omega\) is assumed smaller than the ion cyclotron frequency and larger than the collision frequency to validate the use of the perfect MHD model Eqs. (3.1, 3.2). The oscillating magnetic wave \(\mathbf{B}\) is associated with an oscillating hydrodynamic velocity perturbation \(\mathbf{v}\) \[\mathbf{v}\left(r,\theta,z,t\right)=\mathbf{u}\left(r,\theta,z\right)\exp(j\omega t), \tag{3.6}\] with respect to the rotating frame velocity equilibrium \(\mathbf{v}_{0}=\mathbf{0}\). The pressure \(P\) balances the centrifugal force at equilibrium \(\boldsymbol{\nabla}\left(P+\rho\psi\right)=\mathbf{0}\) and the pressure perturbation is \(p\left(r,\theta,z\right)\exp j\omega t\). To first order in these perturbations the linearisation of Eqs. (3.1) and (3.2) gives \[j\omega\mathbf{u}+2\boldsymbol{\Omega}\times\mathbf{u}=-\boldsymbol{\nabla}\left(p/\rho\right)+\frac{1}{\mu_{0}\rho}\left(\boldsymbol{\nabla}\times\mathfrak{B}\right)\times\mathbf{B}_{0}, \tag{3.7}\] \[j\omega\mathfrak{B}=\left(\mathbf{B}_{0}\cdot\boldsymbol{\nabla}\right)\mathbf{u}. \tag{3.8}\] Flux conservation and incompressibility provide the two additional conditions \[\boldsymbol{\nabla}\cdot\mathbf{u}=0, \tag{3.9}\] \[\boldsymbol{\nabla}\cdot\mathfrak{B}=0. \tag{3.10}\] Taking the curl of both Eqs. (3.7, 3.8) and eliminating \(\mathfrak{B}\) gives a linear relation for the velocity perturbation \[\omega^{2}\boldsymbol{\nabla}\times\mathbf{u}+2j\omega\left(\boldsymbol{\Omega}\cdot\boldsymbol{\nabla}\right)\mathbf{u}+\left(\mathbf{V}\cdot\boldsymbol{\nabla}\right)^{2}\left(\boldsymbol{\nabla}\times\mathbf{u}\right)=\mathbf{0}.
\tag{3.11}\] Now if one Fourier analyses this velocity perturbation as a superposition of plane waves \[\mathbf{u}\left(\mathbf{r}\right)\exp j\omega t=\mathbf{u}\exp[j\left(\omega t-\mathbf{k}\cdot\mathbf{r}\right)], \tag{3.12}\] that is to say puts the emphasis on the linear momentum dynamics rather than on the angular momentum one, one recovers the two branches of Alfvenic/Inertial perturbations in a rotating plasma (Lehnert, 1954; Acheson & Hide, 1973). Specifically, plugging Eq. (3.12) into Eq. (3.11) and then taking the cross product \(j\mathbf{k}\times\) of this algebraic relation one obtains the dispersion relation \[\omega^{2}-\left(\mathbf{k}\cdot\mathbf{V}\right)^{2}=\pm 2\omega\left(\boldsymbol{\Omega}\cdot\mathbf{k}\right)/\left|\mathbf{k}\right|. \tag{3.13}\] These two branches, which are illustrated in Fig. 2, have been widely investigated within the context of geophysical and astrophysical magnetohydrodynamics models. Figure 2: Coupled dispersion of magnetoinertial waves (MI) and inertial waves (IM). For short wavelengths the \(\Omega=0\) torsional Alfven wave (TAW) splits into inertial (IM) and magneto-inertial (MI) waves. For long wavelengths, that is in the grey zone in Fig. 2, inertial terms dominate the dispersion and the IM mode is found to reduce to its zero rotation behaviour already shown in Fig. 1. Note finally that the torsional Alfven wave is recovered for large \(k_{\perp}\) where a local dispersion becomes valid as opposed to small \(k_{\perp}\) where the large wavelength allows the wave to probe the large scale behaviour of the rotation. ### Beltrami flow Instead of this usual procedure using a full Fourier decomposition as given by Eq. (3.12), we start here by considering a travelling perturbation along \(z\) of the form \[\mathbf{u}\left(r,\theta,z\right)=\mathbf{w}(r,\theta)\exp(-jk_{\parallel}z). \tag{3.14}\] Note that this is analogous to what was already done by Shukla (2012) to study OAM carrying dispersive shear Alfven waves, though in this earlier study the paraxial approximation and a two fluid model were used, and the plasma was considered at rest (_i.e._ non-rotating). Plugging Eq. (3.14) in the dispersion relation for a rotating plasma Eq. (3.11) gives \[\boldsymbol{\nabla}\times\mathbf{u}=\mathcal{K}\mathbf{u} \tag{3.15}\] where we have defined \[\mathcal{K}\left(k_{\parallel},\omega\right)\doteq 2\frac{\Omega}{\omega}k_{\parallel}\left(\frac{k_{\parallel}^{2}V^{2}}{\omega^{2}}-1\right)^{-1}. \tag{3.16}\] From Eq. (3.8) the oscillating magnetic field then writes \[\omega\mathfrak{B}=-\sqrt{\mu_{0}\rho}k_{\parallel}V\mathbf{u}. \tag{3.17}\] The two modes identified in Fig. 2 can be recovered from Eq. (3.16). More specifically, for \(k_{\parallel}V>\omega\) Eq. (3.15) describes an Alfven wave modified by inertial effects. Conversely for \(k_{\parallel}V<\omega\) Eq. (3.15) describes an inertial wave modified by MHD coupling. In the following we will focus on the Alfven wave dynamics and thus assume \(\mathcal{K}>0\). Equation (3.15) is characteristic of a _Beltrami_ flow (Chandrasekhar & Prendergast, 1956).
As such \(\mathbf{u}\) can be written in terms of the so called _Chandrasekhar-Kendall_ (CK) potential \(\Phi\) (Chandrasekhar & Kendall, 1957) as \[\mathbf{u} =\frac{1}{\mathcal{K}}\boldsymbol{\nabla}\times\left(\boldsymbol{\nabla}\times\Phi\mathbf{e}_{z}\right)+\boldsymbol{\nabla}\times\Phi\mathbf{e}_{z}\] \[=-\left[\frac{1}{\mathcal{K}}\boldsymbol{\nabla}\times\mathbf{e}_{z}\times\boldsymbol{\nabla}+\mathbf{e}_{z}\times\boldsymbol{\nabla}\right]\Phi \tag{3.18}\] where the CK potential is solution of the scalar Helmholtz equation \[\Delta\Phi+\mathcal{K}^{2}\Phi=0. \tag{3.19}\] One verifies that the three components of Eq. (3.18) are independent. Before examining the structure of OAM carrying modes through the CK potential, two additional results can be obtained from Eq. (3.16). First, for the Fourier decomposition used above, plugging Eq. (3.13) in Eq. (3.16) gives \[\frac{\mathcal{K}^{2}}{k_{\parallel}^{2}}=1+\frac{k_{\perp}^{2}}{k_{\parallel}^{2}}>1. \tag{3.20}\] Second, we can derive the dimensionless group-velocity dispersion coefficient \[\frac{\omega}{\mathcal{K}}\frac{\partial\mathcal{K}}{\partial\omega}=-\frac{k_{\parallel}}{\mathcal{K}}\frac{\partial\mathcal{K}}{\partial k_{\parallel}}=\frac{k_{\parallel}^{2}V^{2}+\omega^{2}}{k_{\parallel}^{2}V^{2}-\omega^{2}} \tag{3.21}\] which we will use later to make explicit the axial wave-vector difference for two eigenmodes with opposite OAM content. ### Structure of OAM carrying modes Because we are interested in waves carrying orbital angular momentum around \(z\) and linear momentum along \(z\), we search for solutions of the form \[\Phi\left(r,\theta,z\right)=\phi\left(r\right)\exp[-j\left(m\theta+k_{\parallel}z\right)] \tag{3.22}\] where \(m\in\mathbb{Z}\) is the azimuthal mode number associated with the orbital angular momentum of the wave. From Eq. (3.19) the radial amplitude of this rotating CK potential \(\phi(r)\) must be a solution of the Bessel equation \[\frac{1}{r}\frac{d}{dr}\left(r\frac{d\phi}{dr}\right)-\frac{m^{2}}{r^{2}}\phi+\left(\mathcal{K}^{2}-k_{\parallel}^{2}\right)\phi=0. \tag{3.23}\] Since as shown in Eq. (3.20) \(\mathcal{K}^{2}>k_{\parallel}^{2}\), \(\phi(r)\) is in general a combination of Bessel functions of the first and the second kind and order \(m\in\mathbb{Z}\), \(J_{m}\) and \(Y_{m}\). Yet, the finite value of \(\phi\) at \(r=0\) requires restricting the physical solution to Bessel functions of the first kind \(J_{m}\) so that we find \[\phi\left(r\right)=J_{m}\left(\alpha r\right) \tag{3.24}\] with the cylindrical dispersion relation \[\alpha^{2}+k_{\parallel}^{2}=\mathcal{K}^{2}\left(k_{\parallel},\omega\right). \tag{3.25}\] Note that, like the ordinary plane wave Eq. (3.12) used in the standard analysis, the cylindrical Bessel waves Eq. (3.24) cannot be normalised. Putting these pieces together one finally gets \[\mathbf{v} =\left[\frac{1}{\mathcal{K}}\boldsymbol{\nabla}\times\mathbf{e}_{z}\times\boldsymbol{\nabla}+\mathbf{e}_{z}\times\boldsymbol{\nabla}\right]J_{m}\left(\sqrt{\mathcal{K}^{2}-k_{\parallel}^{2}}r\right)\exp[j\left(\omega t-m\theta-k_{\parallel}z\right)]\] \[=-\frac{\omega\mathbf{B}}{\sqrt{\mu_{0}\rho}k_{\parallel}V}. \tag{3.26}\] The components in the plasma frame of a rotating Alfven wave with azimuthal mode number \(m\) thus have an amplitude proportional to a combination of Bessel functions of the first kind and of orders \(m\) and \(m\pm 1\).
All these Bessel functions have the same radial dependence, namely \(\sqrt{\mathcal{K}^{2}\left(k_{\parallel},\omega\right)-k_{\parallel}^{2}}r\), where \(\mathcal{K}\left(k_{\parallel},\omega\right)\) is given by Eq. (3.16). ## 4 Direct rotational Fresnel drag-orbital Faraday rotation Let us now rewrite these perturbations as seen from the laboratory frame. We use the index \(R\) for the rotating plasma rest frame and \(L\) for the laboratory frame. The Eulerian coordinates \(r\) and \(z\) are unchanged through this change of frame of reference, but the azimuthal coordinate \(\theta\) changes, with \[r\big{|}_{L} =\left.r\right|_{R} \tag{4.1}\] \[z\big{|}_{L} =\left.z\right|_{R}\] (4.2) \[\theta\big{|}_{L} =\left.\theta\right|_{R}+\Omega t. \tag{4.3}\] Since the axial wave-vector is unchanged \(\left.k_{\parallel}\right|_{R}=\left.k_{\parallel}\right|_{L}\), the phase of the wave in the plasma rest-frame \[\omega t-k_{\parallel}z\pm m\left.\theta\right|_{R} \tag{4.4}\] becomes \[\left(\omega\mp m\Omega\right)t-k_{\parallel}z\pm m\left.\theta\right|_{L} \tag{4.5}\] in the laboratory frame. Equipped with these transformations we can now describe the conditions to observe Fresnel-Faraday Rotation. For this we consider two CK potentials describing two Alfven modes with opposite OAM content in the rotating frame \(R\) \[\left.\Phi_{+}\right|_{R} =J_{m}\left(\alpha r\right)\exp\left(j\left[\left(\omega-m\Omega\right)t-\left(k_{\parallel}-\delta k_{\parallel}\right)z-m\left.\theta\right|_{R}\right]\right), \tag{4.6}\] \[\left.\Phi_{-}\right|_{R} =J_{-m}\left(\alpha r\right)\exp\left(j\left[\left(\omega+m\Omega\right)t-\left(k_{\parallel}+\delta k_{\parallel}\right)z+m\left.\theta\right|_{R}\right]\right).\] These transform into the CK potentials in the laboratory frame \(L\) \[\left.\Phi_{+}\right|_{L} =J_{m}\left(\alpha r\right)\exp\left(j\left[\omega t-\left(k_{\parallel}-\delta k_{\parallel}\right)z-m\left.\theta\right|_{L}\right]\right), \tag{4.7}\] \[\left.\Phi_{-}\right|_{L} =J_{-m}\left(\alpha r\right)\exp\left(j\left[\omega t-\left(k_{\parallel}+\delta k_{\parallel}\right)z+m\left.\theta\right|_{L}\right]\right),\] as a result of the rotational Doppler shift \(\left.\theta\right|_{L}=\left.\theta\right|_{R}+\Omega t\). These Alfven CK potentials \(\left.\Phi_{\pm}\right|_{L}\) can be driven by a multicoil antenna similar to that used to study Whistler-Helicon modes (Stenzel & Urrutia, 2014, 2015, 2015, 2016; Urrutia & Stenzel, 2015, 2016; Stenzel & Urrutia, 2018; Stenzel, 2019). The radial field pattern is then a superposition of \(+m\) and \(-m\) Bessel amplitudes \(J_{\pm m}\left(\alpha r\right)\) where \(\alpha\) is associated with the radial modulation of the antenna currents (Rax & Gueroult, 2021). The antenna then sets both the radial wave-vector \(\alpha\) and the frequency \(\omega\), whereas the axial wave-vectors are solutions of the rotating frame dispersion relation. From Eq. (3.25) \[\alpha =\sqrt{\mathcal{K}^{2}\left(k_{\parallel}-\delta k_{\parallel},\omega-m\Omega\right)-\left(k_{\parallel}-\delta k_{\parallel}\right)^{2}}, \tag{4.8}\] \[\alpha =\sqrt{\mathcal{K}^{2}\left(k_{\parallel}+\delta k_{\parallel},\omega+m\Omega\right)-\left(k_{\parallel}+\delta k_{\parallel}\right)^{2}}. \tag{4.9}\] Since we assume \(\omega\gg\Omega\) and \(k_{\parallel}\gg\delta k_{\parallel}\) we can Taylor expand Eq. (4.8) and Eq.
(4.9) to get \(\delta k_{\parallel}\), leading to \[\frac{k_{\parallel}}{\mathcal{K}}\delta k_{\parallel}=\frac{\delta k_{\parallel}}{2}\frac{\partial\mathcal{K}\left(\omega,k_{\parallel}\right)}{\partial k_{\parallel}}+\frac{m\Omega}{2}\frac{\partial\mathcal{K}\left(\omega,k_{\parallel}\right)}{\partial\omega}. \tag{4.10}\] Eq. (3.21) can then be used to finally write the axial wave-vector difference \(\delta k_{\parallel}\) for two modes with the same frequency \(\omega\), the same radial amplitude \(\left|J_{m}\left(\alpha r\right)\right|\) and equal but opposite azimuthal number \(\left|m\right|\) as \[\frac{\delta k_{\parallel}}{k_{\parallel}}=\frac{1}{2}m\frac{\Omega}{\omega}\frac{1+\frac{k_{\parallel}^{2}V^{2}}{\omega^{2}}}{1-\frac{k_{\parallel}^{2}}{\mathcal{K}^{2}}+\left(1+\frac{k_{\parallel}^{2}}{\mathcal{K}^{2}}\right)\frac{k_{\parallel}^{2}V^{2}}{\omega^{2}}} \tag{4.11}\] where \(\mathcal{K}\left(k_{\parallel},\omega,\Omega\right)\) is given by Eq. (3.16). This implies that there will be a difference in the axial phase velocity \(\omega/\left(k_{\parallel}\pm\delta k_{\parallel}\right)\) of these two modes, and because these two modes rotate in opposite directions due to their opposite azimuthal mode numbers, the transverse structure of the sum of these modes will rotate. This is the Fresnel drag-Faraday orbital rotation effect. Specifically, if one launches a wave which is a superposition of \(+m\) and \(-m\) modes such that at the antenna location \(z=0\) \[\left.\Phi\right|_{z=0}=J_{m}\left(\alpha r\right)\left(\exp[j\left(\omega t-m\left.\theta\right|_{L}\right)]+\left(-1\right)^{m}\exp[j\left(\omega t+m\left.\theta\right|_{L}\right)]\right),\] the wave transverse amplitude rotates as it propagates along \(z>0\) with an angular velocity along the propagation axis \[\frac{d\theta}{dz}\bigg{|}_{L}=\frac{\delta k_{\parallel}}{m}=\frac{1}{2}\frac{\Omega}{\omega}k_{\parallel}\mathcal{K}^{2}\frac{k_{\parallel}^{2}V^{2}+\omega^{2}}{k_{\parallel}^{2}V^{2}\left(\mathcal{K}^{2}+k_{\parallel}^{2}\right)+\omega^{2}\left(\mathcal{K}^{2}-k_{\parallel}^{2}\right)}. \tag{4.12}\] This CK potential rotation is illustrated in Fig. 3 for the case \(m=4\). Figure 3: Fresnel drag-Faraday rotation of the Chandrasekhar-Kendall potential describing an Alfvén-Beltrami wave with \(m=\pm 4\) after a propagation along a path \(z=\pi/4\left(d\theta/dz\right)\). Eqs. (4.11, 4.12) quantify the direct Faraday-Fresnel effect of Alfven waves in rotating plasmas, completing the similar results previously obtained for Trivelpiece-Gould and Whistler-Helicon modes (Rax & Gueroult, 2021). The \(1/m\) factor in Eq. (4.12) comes from the fact that the image constructed from the superposition of \(\pm m\) modes has a \(2m\)-fold symmetry. To conclude this section, it was shown that besides the Fresnel-Faraday Rotation associated with a phase velocity difference for \(m\) and \(-m\) modes, there can also be a splitting of the envelope of a \((m,-m)\) wave packet if the group velocities for co-rotating \((m)\) and counter-rotating \((-m)\) modes are different (Rax & Gueroult, 2021). We note that this second effect is also present here for Alfven waves in rotating plasmas. Indeed, given a radial wave-vector \(\alpha\) the dispersion relation is \(\mathcal{D}=\mathcal{K}^{2}-k_{\parallel}^{2}-\alpha^{2}=0\), so that from Eq. (3.21) the axial group velocity is given by \[-\frac{\partial\mathcal{D}}{\partial k_{\parallel}}\Big{/}\frac{\partial\mathcal{D}}{\partial\omega}=\frac{k_{\parallel}}{\mathcal{K}\,\partial\mathcal{K}/\partial\omega}+\frac{\omega}{k_{\parallel}}, \tag{4.13}\] and one verifies from Eq. (3.16) that the group velocity for a mode \((k_{\parallel}+\delta k_{\parallel},m)\) and that for a mode \((k_{\parallel}-\delta k_{\parallel},-m)\) are different. Rather than deriving here an explicit formula for the Fresnel-Faraday splitting, we consider in the next section the inverse Fresnel-Faraday effect associated with wave absorption.
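To give a feel for the magnitude of this effect, the following short numerical sketch (our addition; the parameter values are purely illustrative and the function names are ours) evaluates the Beltrami wavenumber of Eq. (3.16) and the image rotation rate of Eq. (4.12):

```python
import numpy as np

def beltrami_K(kpar, omega, Omega, V):
    """Beltrami wavenumber K(k_par, omega) of Eq. (3.16)."""
    return 2.0 * (Omega / omega) * kpar / ((kpar * V / omega) ** 2 - 1.0)

def dtheta_dz(kpar, omega, Omega, V):
    """Transverse-pattern rotation rate dtheta/dz of Eq. (4.12)."""
    K2 = beltrami_K(kpar, omega, Omega, V) ** 2
    num = 0.5 * (Omega / omega) * kpar * K2 * (kpar ** 2 * V ** 2 + omega ** 2)
    den = kpar ** 2 * V ** 2 * (K2 + kpar ** 2) + omega ** 2 * (K2 - kpar ** 2)
    return num / den

# Illustrative numbers only: an Alfvenic branch point (k_par V > omega)
# with slow rotation (Omega << omega).
V, Omega, omega = 1.0e6, 1.0e4, 1.0e5     # Alfven speed, rotation, wave frequency
kpar = 1.05 * omega / V                   # just above the Alfven resonance
K = beltrami_K(kpar, omega, Omega, V)
assert K ** 2 > kpar ** 2                 # real radial wavenumber alpha, Eq. (3.25)
print(dtheta_dz(kpar, omega, Omega, V))   # rad per metre of image rotation
```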
## 5 Inverse rotational Fresnel drag and angular momentum absorption In a perfectly conducting inviscid plasma there is no power absorption. The power exchange between the oscillating electromagnetic field and the plasma is purely reactive. To obtain an irreversible (active) angular momentum absorption, one needs a dissipative mechanism. Two different wave orbital angular momentum absorption mechanisms can be considered. One is resonant collisionless absorption, the other is collisional absorption. The former was recently studied in Rax _et al._ (2023) through quasilinear theory and will not be considered here. Instead, we consider in this section a weakly dissipative plasma where the ideal MHD hypothesis of perfect conductivity is relaxed and the inviscid assumption of zero viscosity no longer applies. In both cases, collisional or collisionless, each time an energy \(\delta\mathcal{U}\) is absorbed by the plasma, an axial angular momentum \(\delta L=m\delta\mathcal{U}/\omega\) is also absorbed by the plasma (Rax _et al._, 2017, 2023). The rate of decay of the wave angular momentum is hence equal to the wave induced density of torque on the plasma \(\Gamma=dL/dt\). In steady-state, this angular momentum transfer is balanced by viscous damping of the velocity shear and Ohmic dissipation of the radial charge polarisation sustaining the rotation. This dissipation is larger in the collisional case considered here than in the collisionless regime considered in Rax _et al._ (2023). Specifically, we introduce two dissipative collisional couplings to our dissipation-less system Eqs. (3.7, 3.8), namely a finite viscosity \(\rho\mu\) and a finite resistivity \(\mu_{0}\eta\). We follow the notation of Taylor (1989) (devoted to Alfven wave helicity absorption) and introduce the magnetic diffusion coefficient \(\eta\) and the kinematic viscosity \(\mu\). Ohm's law then writes \(\mathbf{E}+\mathbf{v}\times\mathbf{B}=\mu_{0}\eta\mathbf{j}\) and the system Eqs. (3.7, 3.8) becomes \[j\omega\mathbf{u}+2\boldsymbol{\Omega}\times\mathbf{u}=-\boldsymbol{\nabla}\left(p/\rho\right)+\frac{1}{\mu_{0}\rho}\left(\boldsymbol{\nabla}\times\mathfrak{B}\right)\times\mathbf{B}_{0}+\mu\Delta\mathbf{u}, \tag{5.1}\] \[j\omega\mathfrak{B}=\left(\mathbf{B}_{0}\cdot\boldsymbol{\nabla}\right)\mathbf{u}+\eta\Delta\mathfrak{B}. \tag{5.2}\] Since we assume weak dissipation, the resistive term \(\eta\Delta\mathbf{B}\) in Maxwell-Faraday's equation and the viscous term \(\mu\Delta\mathbf{u}\) in the Navier-Stokes equation can be evaluated with the dispersive properties of the non dissipative dispersion relation. Within the bounds of this perturbative expansion scheme (\(\mathcal{K}^{2}\eta\ll\omega\) and \(\mathcal{K}^{2}\mu\ll\omega\)), and for the perturbation \(\mathbf{u}\left(r,\theta,z\right)=\mathbf{w}(r,\theta)\exp(-jk_{\parallel}z)\) already given in Eq. (3.14), we get from Eqs. (3.15, 3.17)
the non-dissipative Laplacians \[\varDelta\mathbf{u}=-\mathcal{K}^{2}\mathbf{u}, \tag{5.3}\] \[\varDelta\mathfrak{B}=-\mathcal{K}^{2}\mathfrak{B}. \tag{5.4}\] Plugging these results into Eqs. (5.1, 5.2) yields the system \[j\omega\mathbf{u}+2\boldsymbol{\Omega}\times\mathbf{u}=-\boldsymbol{\nabla}\left(p/\rho\right)+\frac{1}{\mu_{0}\rho}\left(\boldsymbol{\nabla}\times\mathfrak{B}\right)\times\mathbf{B}_{0}-\mathcal{K}^{2}\mu\mathbf{u}, \tag{5.5}\] \[j\omega\mathfrak{B}=\left(\mathbf{B}_{0}\cdot\boldsymbol{\nabla}\right)\mathbf{u}-\mathcal{K}^{2}\eta\mathfrak{B}, \tag{5.6}\] where now viscous and resistive dissipation introduce a local relaxation. We then take the curl of the first equation and eliminate \(\mathfrak{B}\) using the second equation to get \[\left[\left(j\omega+\mathcal{K}^{2}\mu\right)\left(j\omega+\mathcal{K}^{2}\eta\right)\right]\boldsymbol{\nabla}\times\mathbf{u}+2j\left(j\omega+\mathcal{K}^{2}\eta\right)k_{\parallel}\varOmega\mathbf{u}+k_{\parallel}^{2}V^{2}\boldsymbol{\nabla}\times\mathbf{u}=\mathbf{0}. \tag{5.7}\] After some algebra we find that the linearised dissipative regime of the low-frequency velocity and field oscillations is now described by \[\boldsymbol{\nabla}\times\mathbf{u}=\left[\mathcal{K}_{R}\left(k_{\parallel},\omega\right)-j\mathcal{K}_{I}\left(k_{\parallel},\omega\right)\right]\mathbf{u}, \tag{5.8}\] \[\left(\omega-j\mathcal{K}^{2}\eta\right)\mathfrak{B}=-\sqrt{\mu_{0}\rho}\,k_{\parallel}V\mathbf{u}, \tag{5.9}\] rather than by their collisionless counterparts of Sec. 3, where we have defined the two real wave-vectors \(\mathcal{K}_{R}\approx\mathcal{K}\gg\mathcal{K}_{I}\) through \[\mathcal{K}_{R}\left(k_{\parallel},\omega\right)-j\mathcal{K}_{I}\left(k_{\parallel},\omega\right)=2\varOmega\frac{\left(\omega-j\mathcal{K}^{2}\eta\right)k_{\parallel}}{k_{\parallel}^{2}V^{2}-\left(\omega-j\mathcal{K}^{2}\mu\right)\left(\omega-j\mathcal{K}^{2}\eta\right)}. \tag{5.10}\] We then consider an initial value problem with a weakly decaying wave of the form \[\mathbf{v}=\mathbf{u}\exp\left[j\left(\omega+j\nu\right)t\right] \tag{5.11}\] with \(\omega\gg\nu\), and with the structure \[\mathbf{v}=\left[\frac{1}{\mathcal{K}_{R}-j\mathcal{K}_{I}}\boldsymbol{\nabla}\times\left(\mathbf{e}_{z}\times\boldsymbol{\nabla}\right)+\mathbf{e}_{z}\times\boldsymbol{\nabla}\right]J_{m}\left(\alpha r\right)\exp\left(j\left[\left(\omega+j\nu\right)t-m\theta-k_{\parallel}z\right]\right) \tag{5.12}\] where \(\alpha\) is a real number, \(\omega\) and \(k_{\parallel}\) are given, and the damping rate \(\nu\left(\omega,k_{\parallel},\mathcal{K}\right)\) is to be determined from the weak dissipation expansion of the dispersion relation \[\alpha^{2}+k_{\parallel}^{2}=\left[\mathcal{K}_{R}\left(k_{\parallel},\omega+j\nu\right)-j\mathcal{K}_{I}\left(k_{\parallel},\omega+j\nu\right)\right]^{2} \tag{5.13}\] obtained by plugging this solution in Eq. (5.8).
Taylor expanding this last relation for \(\nu\ll\omega\), the lowest order real part gives the collisionless dispersion \[\alpha^{2}\left(k_{\parallel},\omega\right)=\mathcal{K}_{R}^{2}\left(k_{\parallel},\omega\right)-k_{\parallel}^{2}\approx\mathcal{K}^{2}\left(k_{\parallel},\omega\right)-k_{\parallel}^{2}, \tag{5.14}\] while the lowest order imaginary part gives a relation for the decay rate \(\nu\) \[\nu\left(k_{\parallel},\omega\right)\frac{\partial\mathcal{K}_{R}\left(\omega\right)}{\partial\omega}=\mathcal{K}_{I}\left(k_{\parallel},\omega\right)\approx\frac{\mathcal{K}^{3}}{\omega}\left[\eta+\left(\mu+\eta\right)\left(\frac{k_{\parallel}^{2}V^{2}}{\omega^{2}}-1\right)^{-1}\right]. \tag{5.15}\] Here we took \(\partial\mathcal{K}_{R}/\partial\omega\approx\partial\mathcal{K}/\partial\omega\) and used Eq. (5.10). Finally, Eq. (5.15) can be used to write an equation for the evolution of the wave energy density \(\mathcal{U}\) \[\frac{d\mathcal{U}}{dt}=-2\nu\mathcal{U}=-2\mathcal{K}_{I}\left(\frac{\partial\mathcal{K}_{R}}{\partial\omega}\right)^{-1}\mathcal{U}. \tag{5.16}\] For a rotating Alfvén wave, this energy density \(\mathcal{U}\) has three distinct components \[\mathcal{U}=\frac{\left\langle B^{2}\right\rangle}{2\mu_{0}}+\frac{\varepsilon_{0}}{2}\left\langle\left(\mathbf{v}\times\mathbf{B}_{0}\right)^{2}\right\rangle+\frac{\rho}{2}\left\langle v^{2}\right\rangle, \tag{5.17}\] where \(\left\langle\,\cdot\,\right\rangle\) indicates an average over the fast \(\omega\) oscillations. The first term on the right hand side is the magnetic energy, the second term is the electric energy and the third term is the kinetic energy. This energy density can be rewritten using the Alfvén velocity \(V\) and the velocity of light \(c\) as \[\mathcal{U}=\frac{\rho}{2}\left[\left\langle\mathbf{v}^{2}\right\rangle\left(1+\frac{V^{2}}{c^{2}}\left(1+\frac{k_{\parallel}^{2}c^{2}}{\omega^{2}}\right)\right)-\left\langle\left(\mathbf{v}\cdot\frac{\mathbf{V}}{c}\right)^{2}\right\rangle\right]. \tag{5.18}\] Combining Eq. (5.16), Eq. (5.18) and the relation between energy and angular momentum absorption, one finally gets \[\Gamma=2\rho\frac{m}{\omega}\mathcal{K}_{I}\left(\frac{\partial\mathcal{K}_{R}}{\partial\omega}\right)^{-1}\left[\left\langle\mathbf{v}^{2}\right\rangle\left(1+\frac{V^{2}}{c^{2}}\left(1+\frac{k_{\parallel}^{2}c^{2}}{\omega^{2}}\right)\right)-\frac{\left\langle\left(\mathbf{v}\cdot\mathbf{V}\right)^{2}\right\rangle}{c^{2}}\right], \tag{5.19}\] where \(\mathcal{K}_{R}\) and \(\mathcal{K}_{I}\) are given by Eq. (5.10) and \(\mathbf{v}\) is given by Eq. (5.12).
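As an illustration of this weak-dissipation expansion, the sketch below evaluates the complex wave-vector of Eq. (5.10) numerically and compares its imaginary part with the analytic estimate of Eq. (5.15); all parameter values are illustrative assumptions rather than values from this work.

```python
import numpy as np

# Illustrative parameters (assumed values, not taken from this work):
Omega, omega = 2.0e4, 1.0e5   # rotation and wave angular frequencies [rad/s]
V, k_par = 1.0e5, 5.0         # Alfven velocity [m/s] and axial wave number [1/m]
eta, mu = 1.0e3, 5.0e2        # magnetic diffusivity and kinematic viscosity [m^2/s]

# Zeroth-order (collisionless) K, i.e. Eq. (5.10) with eta = mu = 0
K = 2.0 * Omega * omega * k_par / (k_par**2 * V**2 - omega**2)
assert K**2 * eta < 0.1 * omega and K**2 * mu < 0.1 * omega  # weak-dissipation bounds

# Eq. (5.10): complex wave vector K_R - j*K_I, with K^2*eta, K^2*mu at zeroth order
w_eta, w_mu = omega - 1j * K**2 * eta, omega - 1j * K**2 * mu
Kc = 2.0 * Omega * w_eta * k_par / (k_par**2 * V**2 - w_mu * w_eta)
K_R, K_I = Kc.real, -Kc.imag

# Analytic weak-damping estimate of K_I from Eq. (5.15)
K_I_est = (K**3 / omega) * (eta + (mu + eta) / (k_par**2 * V**2 / omega**2 - 1.0))

# Decay rate nu = K_I / (dK_R/domega), derivative taken numerically on the
# collisionless K(omega)
Kw = lambda w: 2.0 * Omega * w * k_par / (k_par**2 * V**2 - w**2)
dw = 1.0e-3 * omega
dK_dw = (Kw(omega + dw) - Kw(omega - dw)) / (2.0 * dw)
nu = K_I / dK_dw

print(f"K_R = {K_R:.6g}, K_I = {K_I:.3g} (estimate {K_I_est:.3g}), nu = {nu:.3g} 1/s")
```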
## 6 Conclusion

Building on previous contributions studying Alfvén waves in rotating plasmas in geophysical and astrophysical settings, we have examined here the dynamics of orbital angular momentum (OAM) carrying torsional Alfvén waves in a rotating plasma. It is found that two new couplings between the orbital angular momentum of the Alfvén waves and the angular momentum of the rotating plasma exist. One is Fresnel-Faraday rotation (FFR), that is, a rotation of the transverse structure of the wave due to the medium's rotation, which had already been predicted for the high-frequency electronic modes that are the Trivelpiece-Gould and Whistler-Helicon modes (Rax & Gueroult, 2021). Extending these earlier contributions, direct FFR for torsional Alfvén waves in a rotating plasma is described by Eqs. (4.11) and (4.12). It is the orbital angular momentum analog of the polarization drag effect for spin angular momentum waves (Jones, 1976; Player, 1976). An important distinction found here, though, is that while rotation did not introduce new high-frequency modes, so that FFR for Trivelpiece-Gould and Whistler-Helicon modes was simply the consequence of the interplay between the Coriolis force and the rotational Doppler shift (Rax & Gueroult, 2021), the strong coupling to the inertial mode that exists for Alfvén waves in rotating plasmas complicates this picture. The second coupling is the inverse effect through which the OAM carrying wave exerts a torque on the plasma. Inverse FFR is described by Eqs. (5.10) and (5.19). This inverse effect is akin to the spin angular momentum inverse Faraday effect, but for the orbital angular momentum of the wave. It is found that for a plasma with non-zero collisional absorption the damping of an OAM carrying wave is the source of a torque on the plasma. Looking ahead, these results suggest that direct FFR could in principle be used to diagnose plasma rotation with Alfvén waves. Conversely, it may be possible to utilise inverse FFR to sustain plasma rotation through Alfvén wave angular momentum absorption. The detailed analysis of these promising prospects is left for future studies.

## Acknowledgments

The authors would like to thank Dr. E. J. Kolmes, I. E. Ochs, M. E. Mlodik and T. Rubin for constructive discussions.

## Funding

This work was supported by the U.S. Department of Energy (N. J. F., ARPA-E Grant No. DE-AR001554) and the French National Research Agency (R. G., grant number ANR-21-CE30-0002). JMR acknowledges Princeton University and the Andlinger Center for Energy and the Environment for the ACEE fellowship which made this work possible.

## Declaration of interests

The authors report no conflict of interest.
We examine the angular momentum coupling between a rotating magnetized plasma and torsional Alfvén waves. We show not only that rotation is a source of Fresnel-Faraday rotation (or orbital Faraday rotation) of OAM-carrying Alfvén waves, but also that angular momentum can be transferred from an OAM-carrying Alfvén wave to the rotating plasma. For the direct process, we derive the angular rotation frequency of the transverse structure by considering the dispersion relations of modes carrying opposite OAM. For the inverse process, we derive the torque exerted on the plasma as a function of the wave and plasma parameters.
2309.09329
A Few-Shot Approach to Dysarthric Speech Intelligibility Level Classification Using Transformers
Dysarthria is a speech disorder that hinders communication due to difficulties in articulating words. Detection of dysarthria is important for several reasons as it can be used to develop a treatment plan and help improve a person's quality of life and ability to communicate effectively. Much of the literature focused on improving ASR systems for dysarthric speech. The objective of the current work is to develop models that can accurately classify the presence of dysarthria and also give information about the intelligibility level using limited data by employing a few-shot approach using a transformer model. This work also aims to tackle the data leakage that is present in previous studies. Our whisper-large-v2 transformer model trained on a subset of the UASpeech dataset containing medium intelligibility level patients achieved an accuracy of 85%, precision of 0.92, recall of 0.8, F1-score of 0.85, and specificity of 0.91. Experimental results also demonstrate that the model trained using the 'words' dataset performed better compared to the model trained on the 'letters' and 'digits' dataset. Moreover, the multiclass model achieved an accuracy of 67%.
Paleti Nikhil Chowdary, Vadlapudi Sai Aravind, Gorantla V N S L Vishnu Vardhan, Menta Sai Akshay, Menta Sai Aashish, Jyothish Lal. G
2023-09-17T17:23:41
http://arxiv.org/abs/2309.09329v1
# A Few-Shot Approach to Dysarthric Speech Intelligibility Level Classification Using Transformers

###### Abstract

Dysarthria is a speech disorder that hinders communication due to difficulties in articulating words. Detection of dysarthria is important for several reasons as it can be used to develop a treatment plan and help improve a person's quality of life and ability to communicate effectively. Much of the literature focused on improving ASR systems for dysarthric speech. The objective of the current work is to develop models that can accurately classify the presence of dysarthria and also give information about the intelligibility level using limited data by employing a few-shot approach using a transformer model. This work also aims to tackle the data leakage that is present in previous studies. Our whisper-large-v2 transformer model trained on a subset of the UASpeech dataset containing medium intelligibility level patients achieved an accuracy of 85%, precision of 0.92, recall of 0.8, F1-score of 0.85, and specificity of 0.91. Experimental results also demonstrate that the model trained using the 'words' dataset performed better compared to the model trained on the 'letters' and 'digits' dataset. Moreover, the multiclass model achieved an accuracy of 67%.

Dysarthria, UA-Speech, Whisper-large-v2, Few-Shot Learning, PEFT, LoRA, Transfer Learning, Voice Pathology

## I Introduction

Dysarthria, a neuro-motor impairment affecting speech articulation and coordination, significantly impacts an individual's ability to produce coherent and intelligible verbal communication. In [1], F. Rudzicz et al. claimed that dysarthria arises from congenital conditions or traumatic events that impact the neuromotor system involved in speech production. The congenital causes of dysarthria encompass conditions like brain asphyxiation during birth, which result in long-term speech impairments. On the other hand, traumatic causes of dysarthria include events such as stroke, cerebral palsy, multiple sclerosis, Parkinson's disease, myasthenia gravis, and amyotrophic lateral sclerosis (ALS). Individuals with dysarthria encounter difficulties related to articulation, speech rate, breath control, resonance, and overall communication [2, 3, 4]. These challenges can result in diminished comprehensibility, limited expressive abilities, and obstacles in social interactions.

The field of dysarthria research has seen advancements in automatic speech recognition (ASR) systems [5] for aiding individuals with dysarthria in communication. However, the automatic classification of dysarthria and its severity levels remains limited. Using the Frenchay Dysarthria Assessment [6], doctors undertake perceptual evaluations of speech to determine the kind and severity of the disease. Subjective assessments by clinicians are costly, time-consuming, and prone to biases, raising concerns about their reliability. This motivates the development of an impartial, objective technique for evaluating dysarthric speech. More and more researchers are employing deep learning and machine learning algorithms to develop automatic dysarthria identification in order to objectively and reliably identify individuals with the condition. Many researchers extract characteristics from voice signals using various feature extraction techniques [7]. For example, Stephanie et al. [8] used the Teager Energy Operator (TEO) and glottal waveform features. Chitralekha et al.
[9] utilized audio descriptors or features that are often used to determine the timbre of musical instruments. Dong et al. [10] and Amlu et al. [11] used MFCC-based features. N. P. Narendra et al. [12] used two sets of glottal features and acoustic features. Then, deep learning and machine learning techniques, including convolutional neural networks (CNNs), artificial neural networks (ANNs), CNN-LSTM (long short-term memory), CNN-GRU (gated recurrent unit), SVM, and other models, are used to detect dysarthria.

This research aims to develop an automatic tool that leverages vocal acoustics to detect the presence of dysarthria and accurately determine its severity level. Additionally, we investigate the efficacy of different speech tasks, such as words, letters, and digits, in training the detection model. Furthermore, we explore the feasibility of employing transformer models in pathology detection, specifically dysarthria, utilizing few-shot transfer learning techniques [13]. The training process utilizes a portion of the UASpeech dataset [14], while the remaining dataset is reserved for testing purposes. Log Mel spectrogram features are extracted from the audio files and are employed for training the Whisper model [15], a large language model trained on 680,000 hours of multilingual audio data procured from the internet. The whisper model family comprises five different models with varying model sizes. The large variant, which has 1550 million parameters, was considered in this research. Considering the computational complexity involved in training models of enormous size, various efficient training approaches were considered, and LoRA [16] was used to make the training process efficient and cost-effective.

The rest of the paper is organized as follows. Section 2 describes related works, while Section 3 gives a detailed description of the methodology used. Section 4 presents the results and discussion, and we conclude in Section 5.

## II Related Works

There have been numerous techniques and models developed to predict the presence of dysarthria. Some of the approaches are discussed in this section, and Table I presents an overview of the literature review.

In [8], Stephanie et al. employed a cross-database training strategy in their study to distinguish speech samples with and without dysarthria. Specifically, they trained their model on the UA-Speech database and evaluated its performance on the AMSDC database. To mitigate the issue of repeated speech samples from the same individual, one channel per participant was randomly selected for analysis. Their analysis contains elements based on the Teager Energy Operator (TEO) and the glottal waveform, in addition to conventional spectral and prosodic aspects. Baseline results employing prosodic features on the UA-Speech dataset maximize word- and participant-level accuracy at 75.3% and 92.9%, respectively. However, the UA-Speech cross-training evaluated on the AMSDC maximizes word- and participant-level accuracy at 71.3% and 90%, respectively, based on TEO features.

In [9], Chitralekha et al. adopted audio descriptors or features commonly employed to characterize the timbre of musical instruments and adapted them for the purpose of their study. They utilized a dataset consisting of dysarthric utterances, including utterances associated with 10 digits and 19 computer commands, collected from all patients. Features based on multi-tapered spectral estimates were calculated and employed for classification.
With the use of the TORGO database and the Universal Access dysarthric speech corpus, an Artificial Neural Network (ANN) was trained to categorize speech into different severity levels. For the UA-Speech corpus and the TORGO database, the average classification accuracy was 96.44% and 98.7%, respectively.

In [10], Dong et al. used features based on MFCC coefficients. They utilized a dataset consisting of dysarthric utterances, including utterances associated with the numbers 1 to 10 and the 26 letters, collected from all patients, and they used Convolutional Neural Networks (CNNs) and Gated Recurrent Units (GRUs) to enable faster dysarthria detection. Their experimental results demonstrate that the CNN-GRU model achieves an accuracy of 98.38%, surpassing the performance of other models like CNN, LSTM, and CNN-LSTM.

In [11], Amlu et al. employ deep neural networks (DNN), convolutional neural networks (CNN), gated recurrent units (GRU), and long short-term memory (LSTM) networks to classify the severity of dysarthric speech. Mel frequency cepstral coefficients (MFCCs) and their derivatives are the characteristics used in this investigation. For the UA-Speech database, they used 4,500 test files and 6,975 training files. Using the UA-Speech corpus and the TORGO database, the findings show that the DNN gave 93.97% accuracy for speaker-dependent scenarios and 49.22% for speaker-independent scenarios.

In [12], N. P. Narendra et al. suggested a unique technique for classifying dysarthric speech from coded telephone voice using glottal characteristics. Each speaker's spoken utterances were utilized, and glottal features were calculated using a glottal inverse filtering technique based on deep neural networks. The openSMILE toolbox is used to integrate glottal information (time- and frequency-domain parameters and PCA-based parameters) with acoustic characteristics. Glottal and acoustic characteristics are used to train both separate and combined support vector machine classifiers. Studies using the TORGO and UA-Speech databases show that the glottal factors produced a classification accuracy range of 63-77%. In [17], Amlu Anna Joshy et al. also classified dysarthria using multi-head attention.

It is clear from the above literature that many methods had data leakage, as audio files from the same patient were split across train and test sets. Furthermore, there has not been much research conducted on few-shot learning techniques for pathology classification, which is important because the amount of audio data for pathology tasks is limited. The novelty of this work lies in exploring the effectiveness of the few-shot learning approach using transformer models like whisper-large-v2 for dysarthria detection and comparing which dataset task ('words' or 'letters and digits') performs best.

## III Methodology

### _Dataset_

The goal of the UA-Speech database [14] is to encourage the creation of user interfaces for talkers who have spastic dysarthria and severe neuromotor diseases. It consists of isolated-word recordings made using a 7-channel microphone array mounted on top of a computer display, from 15 dysarthric speakers and 13 control speakers. The age, gender, and speech intelligibility of the speakers, including the varied intelligibility levels of the dysarthric speakers, are presented in Table II. Each patient has a total of 765 files, comprising 10 digits with 3 repetitions, 26 letters with 3 repetitions, 19 computer commands with 3 repetitions, 100 common words with 3 repetitions, and 300 uncommon words with 1 repetition.
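As a quick check, the per-patient file count quoted above is internally consistent:

\[3\times(10+26+19+100)+1\times 300=465+300=765.\]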
For the various experiments conducted in this study, various subsets of the dataset are considered. First, a dataset is prepared for the purpose of building binary classification models. This dataset is constructed by exclusively using the common words and uncommon words of the speakers. A single repetition of the common words (100 words) and all uncommon words (300 words) are combined together. In order to avoid data leakage, files from two control patients and files from two pathology patients are used for training, and files from all other patients are used for testing. The training set contained a total of 1600 audio files (800 control and 800 pathology) and the test set contained a total of 9,600 files (4400 control and 5200 pathology). Various experiments are conducted by considering pathology patients with various intelligibility levels. A detailed description of this data is presented in Table III.

To determine which dataset task gives better accuracy for multiclass models, a new dataset is created using the letters and numbers audio files. Each patient had 36 files (26 letters + 10 numbers) and, again to avoid data leakage, only two patients were considered in the training set of each class and all other patients were considered in the test set. The training set contained a total of 360 audio files (72 control and 288 pathology) and the test set contained a total of 648 audio files (396 control and 252 pathology). A detailed description of the multiclass dataset is presented in Table V.

All of the input audio samples are resampled to 16,000 Hz for data preprocessing, and a representation of an 80-channel log-magnitude Mel spectrogram is produced on 25-millisecond windows with a stride of 10 milliseconds. The whisper models are trained using this representation of the preprocessed data.

### _Whisper Model_

Whisper [15] is an Automatic Speech Recognition (ASR) system developed by OpenAI. It was trained using 680,000 hours of supervised, multilingual, and multitasking web data. The details of the various architectural parameters of the whisper family of models are presented in Table VI. Whisper was trained with the intention of achieving high-quality results in a zero-shot setting, which makes it very powerful and able to handle a wide range of tasks, including speech recognition. Whisper has an encoder-decoder-based architecture, but since the task at hand requires only the encoder part of the model, we extracted it and added a classification head, as seen in Fig. 1. Using the classification head, given the log Mel spectrogram of speech uttered by a subject, the model will predict the probability that the subject has dysarthria, and also the level of it in the case of multiclass classification.
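A minimal sketch of this pipeline using the HuggingFace `transformers` API is shown below; it is not the authors' code, and the hub checkpoint ID, the mean-pooling choice, and `num_classes` are illustrative assumptions.

```python
# Sketch: 16 kHz audio -> 80-channel log-Mel features -> Whisper encoder
# with an added linear classification head, as described above.
import torch
import torch.nn as nn
from transformers import WhisperFeatureExtractor, WhisperModel

feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-large-v2")
encoder = WhisperModel.from_pretrained("openai/whisper-large-v2").encoder  # decoder unused

class WhisperClassifier(nn.Module):
    def __init__(self, encoder, num_classes):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(encoder.config.d_model, num_classes)  # d_model = 1280

    def forward(self, input_features):
        # input_features: (batch, 80, 3000) log-Mel spectrogram
        hidden = self.encoder(input_features).last_hidden_state  # (batch, frames, 1280)
        pooled = hidden.mean(dim=1)   # average over time frames (a pooling assumption)
        return self.head(pooled)      # logits: presence / intelligibility level

model = WhisperClassifier(encoder, num_classes=2)   # 2 for binary, 5 for multiclass
waveform = torch.zeros(16000).numpy()               # 1 s of placeholder 16 kHz audio
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
logits = model(inputs.input_features)
```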
### _PEFT and LoRA_

Training large language models typically requires huge clusters of GPUs and vast amounts of data. In order to make training accessible for everyone, various techniques have been explored and presented. Parameter-efficient fine-tuning (PEFT) [18] selectively updates a subset of the model's parameters, specifically targeting the most influential ones for the new task. This approach significantly reduces the computational resources required for fine-tuning, resulting in improved efficiency without compromising performance. By focusing on updating only the essential parameters, we ensured effective training while minimizing unnecessary computations.

Among the various methods included in PEFT, LoRA (Low-Rank Adaptation of Large Language Models) [16], developed by Microsoft, is by far the most popular. It is a technique used to fine-tune large language models (LLMs) by freezing most of the parameters and updating only a small subset specific to the task. It achieves parameter reduction by employing singular value decomposition (SVD), which decomposes a matrix into three matrices. By retaining a reduced number of singular values and their corresponding vectors, the LLM can be efficiently fine-tuned while maintaining performance. We utilized INT8 tuning along with PEFT, LoRA, and bitsandbytes [19]. This approach optimized memory usage and improved training efficiency, allowing us to overcome the memory limitations and successfully train our model.

### _Training_

We opted to train the model using a cloud-rented machine provided by Lambda Labs. The system had 30 vCPUs, 200 GiB RAM, and a 1.4 TiB SSD, and it was equipped with an Nvidia A10 GPU with a compute capability of 8.6; it cost about $0.68 per hour at the time of writing. After preprocessing the data, the model was loaded into memory in 8-bit precision and then optimized using LoRA with a projection rank of 32. The optimized model was then trained for 10 epochs using a batch size of 8 and a learning rate of \(10^{-3}\).
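For concreteness, here is a minimal sketch of the 8-bit loading plus LoRA setup just described, using the `peft` and `bitsandbytes` stack; the target modules, `lora_alpha`, and dropout are common choices assumed here, not values reported by the authors.

```python
# Sketch of INT8 loading plus rank-32 LoRA adaptation with `peft`.
import torch
from transformers import WhisperModel
from peft import LoraConfig, get_peft_model

encoder = WhisperModel.from_pretrained(
    "openai/whisper-large-v2",
    load_in_8bit=True,   # INT8 weights via bitsandbytes, as in the paper
    device_map="auto",
).encoder

lora_config = LoraConfig(
    r=32,                                 # projection rank used in the paper
    lora_alpha=64,                        # assumed scaling factor
    lora_dropout=0.05,                    # assumed dropout
    target_modules=["q_proj", "v_proj"],  # attention projections (a common choice)
)
peft_encoder = get_peft_model(encoder, lora_config)
peft_encoder.print_trainable_parameters()  # only the low-rank adapters train

optimizer = torch.optim.AdamW(peft_encoder.parameters(), lr=1e-3)  # lr from the paper
```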
## IV Results and Discussion

We used standard evaluation metrics such as accuracy, precision, recall, and specificity. Accuracy is the percentage of the samples that were correctly classified. Precision is the accuracy of the positive predictions. Recall is the fraction of the positives that were correctly predicted. F1-score is the harmonic mean of precision and recall. Specificity is the fraction of the negatives that were correctly predicted.

\[Accuracy=\frac{TP+TN}{TP+TN+FP+FN}\] \[Recall=\frac{TP}{TP+FN}\] \[Precision=\frac{TP}{TP+FP}\] \[F1\text{-}Score=\frac{2(Precision*Recall)}{Precision+Recall}\] \[Specificity=\frac{TN}{TN+FP}\]

The results obtained from the binary classification experiment are summarized in Table VII. Among the four experiments conducted, the model trained using pathology patients with medium intelligibility levels performed the best, giving an accuracy of 85%, precision of 0.92, recall of 0.8, F1-score of 0.85, and specificity of 0.91. This indicates that the model finds it easier to detect dysarthria if trained on data containing patients with medium intelligibility levels, compared to models trained with other intelligibility levels such as very low, low, and high. Our model achieved around a 10% improvement in accuracy in comparison with the work presented in [8], where a word-level accuracy of 75.3% was reported.

Table VIII and Table IX show the accuracy, precision, recall, F1-score, and specificity of the models trained on the words dataset and on the digits-and-letters dataset, respectively, for multiclass classification, which includes the classes Control, High, Medium, Low, and Very Low. Both models are trained with two patients belonging to each class. The accuracy of multiclass classification using the words dataset is 67%, while the accuracy for multiclass classification using the 'digits' and 'letters' dataset is 58%. The multiclass model trained on the words dataset achieved 9% better accuracy than its counterpart. Analyzing the results from the tables, we can see that both models perform best on the control class compared with the other classes, and both models have a hard time predicting patients from the Low class. Both models have good precision for the High class, but that class has very low scores for the other evaluation metrics.

Fig. 1: Modified Whisper Architecture

## V Conclusion and Future Works

This work explores a few-shot learning approach for dysarthria detection using the encoder of the whisper-large-v2 model. The main contributions of the proposed study are:

* Compared with the previous methods, pathology detection has improved considerably using transformer models, and we are able to demonstrate the potential use of few-shot learning for pathology detection.
* From our study we have determined that, to detect dysarthria, a model trained using patients having medium-level intelligibility performs better.
* We also determined that a dataset built using audio recordings of words will result in better model performance.

Potential future works include determining the minimum number of patients needed to accurately classify dysarthria using few-shot learning; a comparative analysis can also be done using a wide spectrum of deep learning models to determine which architecture performs best.
Dysarthria is a speech disorder that hinders communication due to difficulties in articulating words. Detecting dysarthria is important because it can be used to develop a treatment plan, improve a person's quality of life, and enhance the ability to communicate effectively. Much of the literature has focused on improving ASR systems for dysarthric speech. The objective of the present work is to develop models that can accurately classify the presence of dysarthria and also provide information about the intelligibility level using limited data, by employing a few-shot approach with a transformer model. This work also aims to address the data leakage present in previous studies. The whisper-large-v2 transformer model, trained on a subset of the UASpeech dataset containing patients of medium intelligibility level, achieved an accuracy of 85%, a precision of 0.92, a recall of 0.8, an F1-score of 0.85, and a specificity of 0.91.
2309.11566
SignBank+: Preparing a Multilingual Sign Language Dataset for Machine Translation Using Large Language Models
We introduce SignBank+, a clean version of the SignBank dataset, optimized for machine translation between spoken language text and SignWriting, a phonetic sign language writing system. In addition to previous work that employs complex factorization techniques to enable translation between text and SignWriting, we show that a traditional text-to-text translation approach performs equally effectively on the cleaned SignBank+ dataset. Our evaluation results indicate that models trained on SignBank+ surpass those on the original dataset, establishing a new benchmark for SignWriting-based sign language translation and providing an open resource for future research.
Amit Moryossef, Zifan Jiang
2023-09-20T18:08:28
http://arxiv.org/abs/2309.11566v2
# SignBank+: Multilingual Sign Language Translation Dataset

###### Abstract

This work advances the field of sign language machine translation by focusing on dataset quality and simplification of the translation system. We introduce SignBank+, a clean version of the SignBank dataset, optimized for machine translation. Contrary to previous works that employ complex factorization techniques for translation, we advocate for a simplified text-to-text translation approach. Our evaluation shows that models trained on SignBank+ surpass those on the original dataset, establishing a new benchmark and providing an open resource for future research.

sign language, sign language dataset, sign language translation

## 1 Introduction

Sign Language serves as an indispensable mode of communication for the deaf. Unfortunately, the available methods for translating between signed and spoken languages have been limited in scope and effectiveness. The main objective of this research is to explore technological advancements that can enhance the translation process, focusing on the cleaning and enrichment of an existing sign language dataset, _SignBank_1, a multilingual collection of _puddles_, covering a range of domains.

Footnote 1: [https://www.signbank.org/signpuddle/](https://www.signbank.org/signpuddle/)

The pioneering work of Jiang et al. (2023) set the stage for this task. They presented an approach to translating SignWriting through specialized parsing and factorized machine translation techniques. Motivated by their efforts, this research aims to build upon their foundation by:

1. Undertaking a rigorous data cleaning process and extending the dataset they utilized.
2. Reverting to a simple text-to-text translation mechanism, omitting any factorization.

The hypothesis driving this study is twofold: First, a meticulously curated dataset will enhance the accuracy and reliability of translation models. Second, by simplifying the translation process, it becomes feasible to train a diverse array of models and streamline their deployment. To validate our claims, we compare the translation quality of signed-to-spoken translation using the original and cleaned data to previous work. We show that with our new, cleaner data, we can train standard machine translation models with improved quality over the original data. We share our data openly (available at [https://github.com/sign-language-processing/signbank-plus](https://github.com/sign-language-processing/signbank-plus)) to be used in future machine translation research.

## 2 Background

This work only concerns machine translation between signed and spoken languages where both the input and the output are represented as discrete tokens (i.e., text).

### _Signed-to-Spoken_

Jiang et al. (2023) explore text-to-text sign-to-spoken-language translation, with SignWriting as the chosen sign language notation system. Although SignWriting is usually represented in 2D, they use the 1D Formal SignWriting specification and propose a neural factored machine translation approach to encode sequences of SignWriting graphemes as well as their positions in the 2D space. They verify the proposed approach on the SignBank dataset in both a bilingual setup (American Sign Language to English) and two multilingual setups (4 and 21 language pairs, respectively). They apply several low-resource machine translation techniques used to improve spoken language translation to similarly improve the performance of sign language translation.
Their findings validate the use of an intermediate text representation for signed language translation, and pave the way for including sign language translation in natural language processing research.

### _Spoken-to-Signed_

Jiang et al. (2023) also explore the reverse translation direction, i.e., text-to-SignWriting translation. They conduct experiments under the same conditions as their multilingual SignWriting-to-text (4 language pairs) experiment, and again propose a neural factored machine translation approach to decode the graphemes and their positions separately. They borrow BLEU from spoken language translation to evaluate the predicted graphemes and use the mean absolute error to evaluate the positional numbers.

Walsh et al. (2022) explore Text to HamNoSys (T2H) translation, with HamNoSys as the target sign language notation system. They experiment with direct T2H and Text to Gloss to HamNoSys (T2G2H) on a subset of the data from the MEINE DGS dataset (Hanke et al., 2020), where all glosses are mapped to HamNoSys by a dictionary lookup. They find that direct T2H translation results in higher BLEU (it still needs to be clarified how well BLEU represents the quality of HamNoSys translations, though). They encode HamNoSys with BPE (Sennrich et al., 2016), outperforming character-level and word-level tokenization. They also leverage BERT to create better sentence-level embeddings and use HamNoSys to extract the hand shapes of a sign as additional supervision during training.

### Machine Translation Frameworks

Machine translation has witnessed substantial advancements in recent years, both in terms of model architectures and frameworks that facilitate their training and deployment. When it comes to text-to-text translation, several open-source platforms have emerged, leading to the democratization of machine translation technology. Prominent machine translation frameworks include _OpenNMT_ (Klein et al., 2017), _Sockeye_ (Hieber et al., 2017, 2020), _Joey NMT_ (Kreutzer et al., 2019), and _Fairseq_ (Ott et al., 2019). They are all widely renowned for their simplicity, efficiency, and emphasis on performance, promoting rapid prototyping and thus becoming popular choices among machine translation researchers.

_Bergamot_ (2022) aims to bring machine translation to local clients. Leveraging advancements in _Marian NMT_ (Junczys-Dowmunt et al., 2018), _Bergamot_ provides recipes for fast, local, multilingual machine translation models. It provides an opinionated pipeline and assumes both the source and the target come from spoken languages. It only supports text-to-text translation, and expects a shared source-target vocabulary and a huge amount of data, both uncommon in sign language resources. Despite the project's disadvantages, it is the only one that includes a realistic training pipeline for machine translation deployment.

## 3 Data

In our efforts to improve sign language translation through a text-to-text approach, data quality and quantity are of paramount importance. This section outlines our data curation strategy, encompassing both the data we generate ourselves (§3.1) and the data we clean and expand (§3.2).

### Fingerspelling Data

Fingerspelling is a significant component of signed languages, often used for spelling out names, places, or other words that might not have a designated sign. Given its importance, we embarked on a dedicated data generation process. We collected and annotated fingerspelling for letters and numbers across 22 different signed languages2.
These annotations are largely derived from the fingerspelling keyboard3.

Footnote 2: American, Brazilian, British, Chinese, Danish, Flemish, French, French Belgian, German, Honduran, Irish, Israeli, Italian, Japanese, Mexican, Nicaraguan, Norwegian, Portuguese, Spanish, Swedish, Swiss German, and Thai.

Footnote 3: [https://www.signwriting.org/forums/software/fingkeys/fkey001.html](https://www.signwriting.org/forums/software/fingkeys/fkey001.html)

### SignBank Cleaning and Expansion

The SignBank dataset, while invaluable, includes numerous inconsistencies and imperfections. Multiple non-parallel textual entries were associated with singular signing sequences. For instance, while some entries indicated chapter and page numbers from a book, the actual text was missing. In others, definitions were jumbled with the intended word. In light of these challenges, we initiated the meticulous data-cleaning (§3.2.1) and expansion (§3.2.2) processes detailed below.

#### 3.2.1 Dataset Cleaning

Initially, we manually corrected at least five entries for each puddle. Given the formulaic nature of certain puddles (e.g., the bible), rule-based corrections enabled immediate annotation of multiple entries. Comprehensive rules used in this phase are detailed in Appendix A.1.

Using ChatGPT (OpenAI, 2022), we defined a pseudo-function that takes the number of signs, the language code, and the existing terms, and returns a cleaned, parallel version of the terms: clean(number of signs, language code, terms). An illustration would be the function call clean(1, "sl", ["Koreja (mednarodno)", "Korea", "S125- P1"]) returning ["Koreja", "Korea"]. More detailed examples are available in Appendix B.1.

To ascertain the efficacy of this cleaning method, we employed the gpt-3.5-turbo-0613 model on the manually cleaned samples. By comparing these results to the cleaned dataset, we assessed the quality via the Intersection over Union (IoU)4 between the predicted terms and the annotated terms. We compared multiple settings, with various approaches to cleaning the data:

1. **E0**: No changes.
2. **E1**: Rule-based cleaning (Appendix A.2).
3. **E2**: E1 + ChatGPT with four fixed, manually selected few-shot examples.
4. **E3**: E1 + ChatGPT with five few-shot examples from the same puddle.
5. **E4**: E1 + ChatGPT with four fixed examples and five examples from the same puddle.
6. **E5**: E4 + using gpt-4-0613.

Doing nothing (_E0_) leads to a base IoU of **0.50**. The rule-based approach (_E1_), which conservatively eliminated undesired text entries, provided a slight boost, resulting in an IoU of **0.53**. Incorporating general few-shot examples into the cleaning process (_E2_) significantly increased the IoU to **0.63**. A more targeted approach using five few-shot examples from the same puddle (_E3_) further improved this to an IoU of **0.71**. When combining the general few-shot examples with puddle-specific examples (_E4_), we achieved an IoU of **0.74**. Our best results, however, came from GPT-4 (_E5_), which achieved an IoU of **0.80**.

For cost considerations, the following pricing was assumed: \(\$0.0015/1K\) tokens for gpt-3.5-turbo and \(\$0.03/1K\) tokens for gpt-4, indicating a 20\(\times\) price disparity. Given the average of 714 tokens for _E4_ and _E5_ and around 200K annotations, the projected costs for gpt-3.5-turbo and gpt-4 are approximately \(\$200\) and \(\$4000\), respectively. For financial reasons, we used gpt-3.5-turbo. The final cost ended up being \(\$230.18\), paid to OpenAI.
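For reference, a minimal sketch of the set-based IoU scoring used above; the example terms mirror the clean() illustration, and the scoring function itself is an assumption about the exact implementation.

```python
# A minimal sketch of the set-based IoU used to score the cleaning step:
# predicted vs. manually annotated term lists are compared per entry.
def iou(predicted_terms, annotated_terms):
    predicted, annotated = set(predicted_terms), set(annotated_terms)
    if not predicted and not annotated:
        return 1.0  # both empty: treat as a perfect match
    return len(predicted & annotated) / len(predicted | annotated)

# Hypothetical examples mirroring the clean() illustration above:
print(iou(["Koreja", "Korea"], ["Koreja", "Korea"]))              # 1.0
print(iou(["Koreja", "Korea", "S125- P1"], ["Koreja", "Korea"]))  # ~0.67
```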
#### 3.2.2 Dataset Expansion

Our next objective is to further enrich the dataset by introducing variations for each cleaned term. Variability in language representation can significantly benefit the robustness of machine translation models by providing multiple ways of expressing the same idea. For this, we designed a function, expand(language code, terms), producing expanded terms with proper capitalization. As some terms were in English, outputs for both the specific language and English were generated separately. The prompt is given in Appendix B.2.

For an illustration, consider a term in Swedish such as 'tre'. When passed to our function like so: expand("sv", ["tre"]), the returned output could be {"sv": ["Tre", "3"], "en": ["Three", "3"]}. This means that for the Swedish language ('sv'), the term 'tre' can be represented as 'Tre' or the numeral '3'. The corresponding English translation for the term would be 'Three'. Another example would be the German term for 'father'. The function call expand("de", ["Vater", "father"]) yields {"de": ["Vater", "Vati", "Papa", "Erzeuger"], "en": ["father", "Dad", "Daddy"]}. Here, the term expands to multiple terms in both German and English.

This expansion approach (using gpt-3.5-turbo with 9 fixed few-shot examples), although seemingly straightforward and with a cost similar to that of the cleaning process, introduces vast richness to our dataset. Each term is now associated with multiple representations, thereby enhancing the potential of our model to understand the nuances and variability of language. However, this expansion can also introduce errors, either when expanding terms that were not properly cleaned, or when the expansion itself is wrong. The expansion cost ended up being \(\$299.72\), paid to OpenAI. Evaluating the efficacy of this expansion step is non-trivial, due to the inherent subjectivity involved in determining which expansions are valid or more useful than others. Interested readers are referred to Appendix C for more outputs.

## 4 Data Quality Experiments

To evaluate the quality of our cleaning and expansion, we test their effect on machine translation. We train machine translation models on the original data, on the cleaned data, and on the expanded data, in an imbalanced multilingual setting. For this comparison, we focus on the _signed-to-spoken_ direction, since automatic evaluation of spoken language text is well established. For a development set, in each data scenario, we consider the first 3000 entries. For our test set, we use our manually annotated data from §3.2.1. In the source text, we include tags to indicate the source and target language for the translation. We use sacreBLEU 2.3.1 [15] to evaluate BLEU5 [10] and chrF6 [11]. This comparison is only made to evaluate the quality of the different datasets. Thus, for every framework, we use the default training settings and avoid attempting to optimize with smaller models or different architectures. We posit that better test-set performance in a given framework indicates higher data quality. While we believe that this effect should be highly potent for the _spoken-to-signed_ translation direction, it is not evaluated in this work since there are no human-validated automatic metrics to evaluate SignWriting output.
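For reference, corpus-level scoring with the sacreBLEU Python API looks as follows; the hypothesis and reference strings are illustrative placeholders.

```python
# A minimal sketch of corpus-level BLEU/chrF scoring with sacreBLEU.
from sacrebleu.metrics import BLEU, CHRF

hypotheses = ["the house is small", "three"]
references = [["the house is small", "3"]]  # one reference stream, aligned

bleu, chrf = BLEU(), CHRF()
print(bleu.corpus_score(hypotheses, references))
print(chrf.corpus_score(hypotheses, references))
```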
Footnote 5: BLEU|case:mixed|eff:no|tok:13a|smooth:exp

Footnote 6: chrF2|case:mixed|eff:yes|nc:6|nw:0|space:no

**Sockeye / Fairseq / OpenNMT**: In preprocessing, the SignWriting text is tokenized by splitting its components (symbol, modifiers, and position), and the spoken language text is tokenized using BPE (Sennrich et al., 2016) with 3000 merges. For the cleaned dataset, this results in a smaller vocabulary than for the original dataset, since some unigrams are filtered out. Model training is early-stopped on the validation chrF score (Sockeye), BLEU (Fairseq), and accuracy (OpenNMT) with a patience of 10 epochs.

**Keras** (Chollet et al., 2015): To address the effect of clean data on pre-trained language models, we fine-tune _mT5-small_ (Xue et al., 2021) using Keras and HuggingFace Transformers (Wolf et al., 2020). In this setting, both the source and target texts are tokenized using the _mT5_ tokenizer. Since our source data is extremely out-of-domain relative to the original language model training, we do not expect to see improvements from the pre-trained language model. The model is fine-tuned for up to 20 epochs, early-stopped on validation loss.

## 5 Results

Table 1 shows that despite the different frameworks, pre-trained models, unoptimized modeling, and imbalanced multilingual translation scenarios, performance on the cleaned data is consistently better compared to the original data. This establishes our cleaned data as more useful for signed-to-spoken machine translation.

In the _signed-to-spoken_ translation direction, the use of our expanded data is dubious. If our cleaned data were of perfectly good quality, our expansion could only add noise by introducing multiple targets for the same source. However, since we know that our cleaned data is not perfect, we hypothesize that the additional noise from the data expansion smooths out the noise in the imperfect data by introducing more overlaps between identical translations, thus drowning out the noise. This is very difficult to evaluate. As we vary the target texts in many dimensions (gender, formality, capitalization, script, and form), uncontrolled translation of the test set into the original distribution of these dimensions is improbable, even when disregarding noise coming from wrong expansions. This is reflected in the results. Using the expanded data for pre-training our Sockeye model, then fine-tuning on the cleaned data, gets the model back to the target distribution, with better results of \(31.39\) BLEU and \(31.97\) chrF.

We compare these results to the state of the art. Specifically, we query the API endpoint made available by Jiang et al. (2023) to translate our test set. To some extent, this is an unfair comparison, since they likely saw these exact translation sources in training and since we are evaluating more languages than their model was trained on. And yet, their method achieves \(5.03\) BLEU and \(18.92\) chrF on our test set. Despite their optimization in modeling, our optimization in data quality makes up for sub-par modeling.

## 6 Conclusions

This work introduces a methodology for data cleaning and expansion for low-resource settings such as sign language translation. Its main contribution is the introduction of _SignBank+_, a cleaner and more expansive sign language translation dataset than _SignBank_. The data and baseline model code are publicly available at [https://github.com/sign-language-processing/signbank-plus](https://github.com/sign-language-processing/signbank-plus).
## 7 Future Work

We encourage future work to expand on our efforts and create _SignBank++_. The _clean_ and _expand_ steps can be executed with more, and better, language models. Quality-estimation filtering methods can be created to filter out text pairs that are likely not parallel. Additionally, the input representation could be optimized by encoding SignWriting as images, reducing the token count, or standardizing phoneme order, all of which could improve translation performance. Finally, robust evaluation metrics for spoken-to-signed translation should be created and validated with human judgments.

\begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline & & & \multicolumn{2}{c}{**Sockeye**} & \multicolumn{2}{c}{**Fairseq**} & \multicolumn{2}{c}{**OpenNMT**} & \multicolumn{2}{c}{**Keras (mT5)**} \\ \cline{4-11} Dataset & Training Pairs & Vocab & BLEU & chrF & BLEU & chrF & BLEU & chrF & BLEU & chrF \\ \hline Original & \(521,390\) & \(6,016\) & \(0.2\) & \(8.4\) & \(0.18\) & \(4.74\) & \(0.69\) & \(9.21\) & \(0.07\) & \(6.39\) \\ Cleaned & \(357,574\) & \(5,200\) & \(\mathbf{22.32}\) & \(\mathbf{28.63}\) & \(1.1\) & \(\mathbf{7.59}\) & \(\mathbf{30.6}\) & \(\mathbf{22.46}\) & \(\mathbf{6.02}\) & \(12.35\) \\ Expanded & \(1,027,418\) & \(5,976\) & \(0.55\) & \(7.22\) & \(\mathbf{1.26}\) & \(6.52\) & \(13.38\) & \(13.0\) & \(2.99\) & \(\mathbf{12.49}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Evaluation of the usability of our data for machine translation.
SignBank+ is a clean version of the SignBank dataset, optimized for machine translation between spoken language text and SignWriting, a phonetic sign language writing system. In addition to previous work that employed complex factorization techniques to enable translation between text and SignWriting, we show that a traditional text-to-text translation approach performs just as effectively on the cleaned SignBank+ dataset. Our evaluation results indicate that models trained on SignBank+ surpass those trained on the original dataset, establishing a new benchmark for SignWriting-based sign language translation and providing an open resource for future research.
2309.16209
Ultrafast Polarization Switching in BaTiO$_3$ Nanomaterial: Combined DFT and Coupled Oscillator Study
The challenge of achieving ultrafast switching of electric polarization in ferroelectric materials remains unsolved, as there is no experimental evidence of such switching to date. In this study, we have developed an enhanced model that describes switching within a two-dimensional space of generalized coordinates at THz pulses. Our findings indicate that stable switching in barium titanate cannot be achieved through a single linearly polarized pulse. When the intensity of the linearly polarized pulse reaches a certain threshold, the sample experiences depolarization, but not stable switching. Our study also reveals that phonon friction plays a minor role in the switching dynamics and provides an estimate of the optimal parameters of the perturbing pulse with the lowest intensity that results in depolarization of an initially polarized sample.
Petr Zhilyaev, Kirill Brekhov, Elena Mishina, Christian Tantardini
2023-09-28T07:24:44
http://arxiv.org/abs/2309.16209v2
# Ultrafast Polarization Switching in BaTiO\({}_{3}\) Nanomaterial: Combined DFT and Coupled Oscillator Study

###### Abstract

The challenge of achieving ultrafast switching of electric polarization in ferroelectric materials remains unsolved, as there is no experimental evidence of such switching to date. In this study, we have developed an enhanced model that describes switching within a two-dimensional space of generalized coordinates under THz pulses. Our findings indicate that stable switching in barium titanate cannot be achieved through a single linearly polarized pulse. When the intensity of the linearly polarized pulse reaches a certain threshold, the sample experiences depolarization, but not stable switching. Our study also reveals that phonon friction plays a minor role in the switching dynamics and provides an estimate of the optimal parameters of the perturbing pulse with the lowest intensity that results in depolarization of an initially polarized sample.

## Introduction

Developing non-volatile memory devices with fast writing and reading operations while minimizing power consumption is a challenge in information storage. However, traditional magnetic storage and flash may not be suitable for future fast devices due to their limited operation speed, which is in the milliseconds range. Thus, this challenge can only be addressed by utilizing different physical mechanisms for writing and reading bits. A potential physical mechanism for the write operation is magnetization switching by an ultra-short electromagnetic pulse of the optical or THz range. This mechanism has shown promise in previous studies [1, 2, 3]. Similarly, electric fields can be utilized for ultra-fast polarization switching in ferroelectric materials. Although this possibility has garnered significant attention, it has not yet been observed experimentally. The closest successful result to date, which involved reversible polarization change, was achieved by Mankowsky _et al._ in their work on lithium niobate [4]. Other studies [5, 6, 7, 8, 9, 10] have also explored the selective excitation of lattice vibrations under ultra-short optical or THz pulses, which is essential for achieving practical polarization switching.

The absence of a predictive model poses a significant obstacle to experimentally observing ultra-fast switching of electric polarization. Such a model could provide optimal pulse parameters and answer a series of questions, such as: which normal mode should receive the energy injection; whether energy should be injected directly into the mode that leads to switching or into another strongly coupled mode; whether it is beneficial to use a series of pulses; which pulse polarization is optimal for switching; whether the pulse shape affects switching; and which ferroelectric material is best suited for ultra-fast switching of electric polarization, among others.

In this research, we improved and tested a theoretical model for ultra-fast polarization switching, which has previously been proposed in various studies [4, 11, 12, 13]. To calculate the material constants of ferroelectrics such as oxides and chalcogenides, first-principles methods like Density Functional Theory (DFT) are often utilized [14]. These methods are effective in determining the structure of stable polarized states, energy barriers, the ions' effective charges, polarization values, and the phonon spectrum [12, 15, 16, 17, 18, 19, 20]. Moreover, it is important to highlight that the results of DFT calculations are highly dependent on the chosen exchange-correlation functional [21].
Classical molecular dynamics (MD) simulations enable the examination of ultra-fast polarization switching at an atomistic level [11] and can even take into account domain behavior [22]. The proposed model aims to investigate ultra-fast polarization switching in ferroelectrics. The model utilizes a system of ordinary differential equations (ODEs) to represent the time evolution of the generalized coordinates within a ferroelectric material's elementary cell. The radiation interaction is included by incorporating a perturbation force within the ODEs, which acts for a specific duration. The potential energy surface (PES) is obtained from DFT calculations. Barium titanate (BTO) is used as a test material in this research, as it is a well-studied, prototypical ferroelectric material.

The proposed model primarily builds upon earlier works [12, 23, 24, 25, 26], where a similar approach was employed for polarization switching and structure changes driven by ultra-short pulses. However, two significant modifications were introduced. First, instead of representing the PES in the form of a Taylor series, we directly interpolate the PES using cubic splines. This is because switching results in substantial atomic displacements, leading to high numerical errors in the Taylor series. Second, in terms of generalized coordinates, we consider the polarization mode (\(q_{p}\)), which undergoes the switch, and the normal mode (\(Q_{IR}\)) into which radiation is pumped. In contrast, in previous studies [12] both generalized coordinates were normal modes; that approach contradicts the requirement that the potential must be a scalar, independent of the crystal's symmetry (for more details, please refer to [27]).

The article is structured as follows. In the methods section, we give details of calculating the PES and constructing the system of ODEs. The results and discussion section presents the data obtained for BTO, along with a discussion on metastable switching, effective friction, perturbation duration, and optimal frequency. The conclusion section provides general observations and recommendations for future experiments.

## 2 Computational Details

We take the experimental values of a material's unit cell and relax the atomic positions to obtain the equilibrium structure. Both the ionic relaxation and the calculations of phonon spectra and energies are carried out using the Vienna Ab initio Simulation Package (VASP) [28, 29, 30, 31], employing a plane-wave basis set. The projector augmented-wave (PAW) pseudopotentials with the generalized gradient approximation (PBE) [32] and a cutoff energy of 600 _eV_ are utilized in all calculations. Numerical integration over the Brillouin zone is conducted using an \(8\times 8\times 8\) k-point sampling with a Gamma-centered grid. The phonon dispersion curves are calculated within the framework of Finite Displacements (FD) using the Phonopy code [33]. All corresponding DFT calculations are executed for a perfect \(2\times 2\times 2\) supercell structure. After identifying the normal modes, the PES is calculated as a function of two independent normal-mode generalized coordinates: \(q_{p}\) (polarization mode) and \(Q_{IR}\) (high-frequency mode).
The individual atomic displacements associated with the generalized coordinate \(q_{p}\) can be expressed as:

\[U_{i}=\left(\frac{q_{p}+1}{2}\right)(Z_{i}^{D}-Z_{i}^{U})+Z_{i}^{U} \tag{1}\]

Here, \(U_{i}\) represents the displacement of the \(i\)-th atom, while \(Z_{i}^{U}\) and \(Z_{i}^{D}\) denote the coordinates of the \(i\)-th atom in the direction of polarization, corresponding to the equilibrium positions with positive and negative polarization, respectively. The individual atomic displacements related to the generalized coordinate \(Q_{IR}\) are given by:

\[U_{i}=\frac{Q_{IR}}{\sqrt{m_{i}}}\eta_{i}^{IR} \tag{2}\]

where \(U_{i}\) is the displacement of the \(i\)-th atom, \(m_{i}\) is the atomic mass, and \(\eta_{i}^{IR}\) is the corresponding component of the dimensionless eigenvector of the normal mode.

The PES is interpolated using cubic splines [34] through the points where the DFT energies were obtained, allowing us to define the PES continuously as \(V(q_{p},Q_{IR})\). The dynamic behavior of the coupled generalized coordinates is characterized by a system of associated nonlinear differential equations of motion:

\[\begin{split}\ddot{q}_{p}+\gamma\dot{q}_{p}=&-\,\frac{\partial V(q_{p},Q_{IR})}{\partial q_{p}}\\ \ddot{Q}_{\text{IR}}+\gamma\dot{Q}_{IR}=&-\,\frac{\partial V(q_{p},Q_{IR})}{\partial Q_{IR}}+F(t)\end{split} \tag{3}\]

where \(\gamma\) represents the effective friction coefficient, and \(F(t)\) is the force exerted on the system by the external pulse perturbation. The integration of Eq. (3) is performed using the odeint routine from the SciPy package [34]. We assume that \(F(t)\) takes the following form:

\[F(t)=F_{0}\,\sin(\omega_{d}t)\,\exp\left[-4\ln 2\,\left(\frac{t^{2}}{\tau^{2}}\right)\right] \tag{4}\]

where \(F_{0}\) is the force amplitude, \(\omega_{d}\) is the driving frequency of the perturbation (assumed to equal \(\omega_{IR}\), unless stated otherwise), and \(\tau\) is the pulse duration. A graphical representation of the perturbing force for two pulse durations is given in Fig. 1.

Figure 1: A visual representation of the perturbing force \(F(t)\) for two pulse durations (250 and 50 fs) and a frequency corresponding to the high-frequency optical normal mode (5.3 THz). Note that the frequency is high enough for approximately two oscillations to fit within the 250 fs envelope.
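To make the workflow of Eqs. (1)-(4) concrete, the sketch below tabulates a potential on a \((q_{p},Q_{IR})\) grid, interpolates it with cubic splines, and integrates the damped, driven equations of motion with SciPy's odeint, mirroring the setup described above. The analytic toy double-well potential (tuned to the \(\sim\)12 meV barrier discussed below), the coupling term, and all numerical values are illustrative assumptions standing in for the DFT-derived PES; unit-conversion factors are omitted.

```python
# Minimal sketch of the model pipeline: spline-interpolated PES + Eq. (3)
# integrated under the pulse of Eq. (4). Toy potential and parameters are
# illustrative stand-ins for the 48000-point DFT grid (toy units throughout).
import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import RectBivariateSpline

qp_grid = np.arange(-2.0, 2.0 + 1e-9, 0.05)    # A*sqrt(amu), as in the paper
qir_grid = np.arange(-3.0, 3.0 + 1e-9, 0.05)   # coarser than the 0.01 DFT step
QP, QIR = np.meshgrid(qp_grid, qir_grid, indexing="ij")
OMEGA_IR = 2*np.pi*5.3e-3                      # 5.3 THz mode, rad/fs
# Double well with minima at q_p = +-1 and a 12 meV barrier, plus a weak coupling.
V = -0.024*QP**2 + 0.012*QP**4 + 0.5*OMEGA_IR**2*QIR**2 + 5e-4*QP**2*QIR
pes = RectBivariateSpline(qp_grid, qir_grid, V)  # bicubic by default

GAMMA = 5e-3           # effective friction, 1/fs (assumed)
F0, TAU = 0.02, 250.0  # pulse amplitude and duration (assumed)

def pulse(t):
    """Eq. (4): sine carrier at omega_IR under a Gaussian envelope."""
    return F0*np.sin(OMEGA_IR*t)*np.exp(-4*np.log(2)*t**2/TAU**2)

def rhs(y, t):
    qp, vp, qir, vir = y
    return [vp, -GAMMA*vp - pes.ev(qp, qir, dx=1),              # Eq. (3), line 1
            vir, -GAMMA*vir - pes.ev(qp, qir, dy=1) + pulse(t)]  # Eq. (3), line 2

t = np.linspace(-500.0, 5000.0, 50000)          # fs
traj = odeint(rhs, [1.0, 0.0, 0.0, 0.0], t)     # start in the "up" minimum
print("final q_p:", traj[-1, 0])                # a sign flip signals switching
```

The spline object directly supplies the partial derivatives needed on the right-hand side of Eq. (3), which is the practical advantage of the spline representation over a truncated Taylor expansion.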
## 3 Results and Discussion

The ferroelectric state of BTO occurs in the crystal structure with the tetragonal _P4mm_ space group. We adopt the following experimental unit cell parameters: \(a=3.986\) Å and \(c=4.026\) Å [35]. The primitive unit cell is composed of one barium atom, one titanium atom, and three oxygen atoms (refer to Fig. 2). This structure gives rise to 15 normal modes at the Gamma point, comprising three acoustical and twelve optical branches, the latter being of particular interest to us. The optical normal modes at the Gamma point can be decomposed as \(\Gamma=3A_{1}+B_{1}+4E\). The paraelectric BTO crystal, which has cubic _Pm-3m_ symmetry above about 130 °C, undergoes the transition to the ferroelectric tetragonal _P4mm_ phase through atomic displacements strictly along the c-axis [36].

Figure 2: (**A**) An atomic illustration of the tetragonal ferroelectric phase of barium titanate (BTO), _P4mm_. Electric polarization switching is primarily linked to the motion of the titanium atom along the c-axis: UP (U), in the same direction as the c-axis; NEUTRAL (N), no polarization; DOWN (D), opposite to the c-axis. The figure also illustrates the displacement patterns of the generalized coordinates denoted by \(q_{p}\) and \(Q_{IR}\). (**B**) The energy barrier of BTO separating the two stable states corresponding to nominal downward and upward electric polarization. The barrier height calculated from first principles is approximately 12 meV, which agrees well with results from similar studies.

Consequently, the coupling between normal modes and the motion (\(q_{p}\)) responsible for polarization switching is likely to occur with normal modes that possess large c-axis components in their eigenvectors. In BTO, these are modes 5, 9, and 11, corresponding to frequencies of 5.3, 8.8, and 14.1 THz. Restricting the excitation to these three modes allowed us to avoid the nonlinear coupling between low- and high-frequency modes that is known to affect polarization switching in this material when both are present [37]. In this work, we chose to investigate mode 5 because it represents a typical frequency that can be achieved with modern powerful terahertz radiation sources while avoiding the presence of second harmonics [4].

The PES was computed in the space of the two generalized coordinates (\(q_{p}\), \(Q_{IR}\)), each representing a collective displacement of all atoms in the unit cell (refer to Eqs. 1 and 2). The sampling for \(q_{p}\) was performed in the range from -2.0 to 2.0 with a step of 0.05 (in Å \(\sqrt{amu}\)), while the sampling for \(Q_{IR}\) was carried out in the range from -3.0 to 3.0 with a step of 0.01 (in Å \(\sqrt{amu}\)), resulting in a total of 48000 static DFT calculations. The point representation of the PES was interpolated using cubic splines for solving the system of ODEs. This method offers a more accurate representation of polarization switching compared to a Taylor series expansion, which is only effective in the local vicinity of the expansion point [4, 12].

A PES cross-section along the direction \(Q_{IR}\sim 0\) Å \(\sqrt{amu}\) (shown in Fig. 3) enables the examination of the barrier obtained by linearly interpolating the system's atomic coordinates from the upward to the downward polarization state. From the DFT calculations, the barrier height is found to be \(\sim 12\) meV, which is consistent with other calculations employing the PBE exchange-correlation functional [21].

Figure 3: The barium titanate potential energy surface in the generalized coordinates (\(q_{p},Q_{IR}\)). The heat map displays the energy in eV, measured from the base value of the potential energy at (0, 0).

To analyze the trajectory of the generalized coordinates under a perturbing pulse for differing perturbation amplitudes, a series of calculations was performed (refer to Fig. 4). The effective friction coefficient was set to \(\gamma=0.04\) fs\(^{-1}\). Three distinct scenarios were observed:
1. When the perturbing force is not sufficient, the system remains in the initial minimum, with the trajectory localized nearby (see Fig. 4A).
2. A scenario not typically addressed by other authors [4, 12], but worth noting, involves the system entering the state with the other polarization only to return to its initial state after a period of time due to inertia. Thus, even a sufficiently strong perturbation pulse may not alter the final electric polarization (see Fig. 4B).
3. Upon reaching a specific threshold of the perturbation amplitude, enough energy is transferred into the system to surpass the barrier between the local minima, and the system switches to a state with reversed polarization (see Fig. 4C).

Figure 4: The time evolution of the generalized coordinates under the influence of varying pulse amplitudes, with the trajectory of the generalized coordinates on the potential energy surface shown as a green line. (A) When the perturbation amplitude is relatively small, no switching takes place and the system remains at its initial minimum. (B) The switching may not be "stable": the system can momentarily enter a state with opposite electric polarization but, due to inertia, may return to and remain in the initial minimum, preventing the switching from taking place. (C) If the perturbation amplitude is large enough, switching occurs and the system transitions into a state with reversed electric polarization.

A reversible polarization switch was previously observed in a study [11] in which lead titanate (PTO) was modeled at the atomic level. Therefore, exposing BTO to a single pulse could lead to irreversible switching if the pulse parameters fall within a narrow range. However, even with carefully chosen pulse parameters, irreversible polarization switching might not be achieved due to the chaotic nature of polarization switching [38]. Further research is needed to investigate this hypothesis in detail.

A crucial fitting parameter in the equations that describe the dynamics of the generalized coordinates is the friction coefficient. This coefficient can be estimated through calibration experiments. Nonetheless, several factors can affect it, such as: (1) the dependence of the domain structure on the geometrical dimensions of the ferroelectric sample; (2) the influence of neighboring unit cells (not considered in this work); and (3) the density of local defects. As a result, we conducted calculations in which the friction coefficient was varied over a broad range, analyzing its influence on the threshold switching force and on the switching stability (refer to Fig. 5). Calculations were performed for three pulse durations (250, 350, and 450 fs) and a set of friction coefficients ranging from \(10^{-3}\) to \(10^{-1}\) fs\(^{-1}\). For each calculation we determined whether a switch occurred and whether it was reversible or irreversible. We observed that, for all pulse lengths, stable polarization switching occurs only for friction coefficients between \(10^{-3}\) and \(10^{-2}\) fs\(^{-1}\). Increasing the pulse length lowers the amplitude of the perturbing pulse (\(F_{th}\)) necessary for polarization switching.

Additionally, the frequency of the perturbing pulse was varied in the calculations (see Fig. 6). The lowest threshold amplitude was observed for frequencies in the range of 0.95 \(\omega_{IR}\) to 1.10 \(\omega_{IR}\), where \(\omega_{IR}\) is the eigenfrequency of the perturbed \(Q_{IR}\) mode; in this range the threshold force amplitude is reduced by a factor of approximately 1.6. The observed shift of the optimal frequency results from the coupling with the high-frequency mode and from the presence of a friction term in the equation of motion.
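The three outcomes above lend themselves to a simple automated classification, which is how scans like those in Figs. 4-6 can be produced. The sketch below assumes a one-dimensional toy double well for \(q_{p}\) (the \(Q_{IR}\) mode and the spline PES are omitted, and the pulse drives \(q_{p}\) directly); all grids and parameter values are illustrative.

```python
# Sketch of the outcome classification behind Figs. 4-5, using a 1D toy
# double well for q_p with minima at +-1. Outcomes: "none" = stays in the
# initial well, "returned" = crossed to the other well but came back,
# "switched" = ends with reversed polarization. Toy units throughout.
import numpy as np
from scipy.integrate import odeint

A_WELL = 5.5e-4                   # sets the well curvature near q = +-1
OMEGA = 2*np.pi*5.3e-3            # 5.3 THz drive, rad/fs

def classify(F0, gamma, tau=250.0):
    def rhs(y, t):
        q, v = y
        dV = -2*A_WELL*q + 2*A_WELL*q**3      # toy double-well gradient
        drive = F0*np.sin(OMEGA*t)*np.exp(-4*np.log(2)*t**2/tau**2)
        return [v, -gamma*v - dV + drive]
    t = np.linspace(-500.0, 20000.0, 80000)   # fs, long enough to settle
    q = odeint(rhs, [1.0, 0.0], t)[:, 0]      # start in the "up" well
    if q[-1] < 0.0:
        return "switched"
    return "returned" if q.min() < 0.0 else "none"

for F0 in (1e-4, 5e-4, 2e-3):                 # scan pulse amplitudes
    print(F0, classify(F0, gamma=5e-3))
```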
An analysis of the equation of motion reveals that, for small-amplitude excitations [23, 39], coupling effects cause a renormalization of the optimal frequency \(\omega_{IR}\). The frequency shift of an underdamped oscillator is a well-studied phenomenon [40].

Figure 5: A series of computations was performed, scanning the threshold amplitude of the perturbing pulse (\(F_{th}\)) from 0.0 to 0.3 Å \(\sqrt{amu}\)/fs\(^{2}\) at three different pulse lengths: (A) 250 fs, (B) 350 fs, (C) 400 fs. For each computation set, the friction coefficient (\(\gamma\)) was varied over a wide range of values, from \(10^{-3}\) to \(10^{-1}\) fs\(^{-1}\). In each calculation, the presence or absence of a polarization change was noted: red circles represent calculations where polarization switching did not occur; blue circles indicate instances where the polarization shifted but eventually returned to its original state; and green circles denote calculations where the polarization switched to its opposite value.

Let us also estimate the fluence corresponding to a typical force at which polarization switching occurs. We adopt the smallest noted value (see Fig. 6), which is on the order of \(F_{th}=2.5\times 10^{-4}\) Å\(\sqrt{amu}\)/fs\(^{2}\). This force is equivalent to the acceleration \(F_{th}/\sqrt{m_{Ba^{+}}}=1.7\times 10^{-5}\) Å/fs\(^{2}\) of the \(Ba^{+}\) ion, which would be created by the electric field \(E_{th}=m_{Ba^{+}}\cdot a_{th}/q_{Ba^{+}}=1.2\times 10^{11}\) V/m. Subsequently, the energy density of such a field, \(\epsilon_{0}E_{th}^{2}/2\), is linked to the fluence \(\Phi_{th}\) through \(\epsilon_{0}E_{th}^{2}/2\cdot\Omega=\Phi_{th}\cdot S\), which gives \(\Phi_{th}=\epsilon_{0}E_{th}^{2}h/2\), where \(\Omega\), \(S\), and \(h\) represent the volume, the surface area, and the length of the unit cell in the c direction (see Fig. 2A), respectively, and \(\epsilon_{0}\) is the vacuum permittivity. This simple estimation yields a value of \(\Phi_{th}=250\) mJ/cm\(^{2}\). Being a rather basic analysis, the derived estimate should be approached with caution. For comparison, in the study of lithium niobate (LNO) [4], the onset of polarization switching occurred at fluences of 95 mJ/cm\(^{2}\), which is of the same order of magnitude as our estimate for BTO.

Figure 6: The dependence of the amplitude of the perturbing pulse that causes switching on the frequency of the perturbing pulse. The graph indicates that the lowest threshold force amplitude falls within the range of 0.95 \(\omega_{IR}\) to 1.10 \(\omega_{IR}\). The continuous line is included merely as a visual guide. Pulse duration: 250 fs.

## 4 Conclusions

In this study, a model characterizing ultrafast polarization switching in ferroelectric materials was examined and evaluated, using BTO as a test case. The analysis of the proposed model indicates that there exists an operative range of the friction coefficient in which ultrafast polarization switching has the highest probability of occurring. This probability increases with the pulse duration, and the smallest threshold force amplitude necessary for switching is achieved within the range of 0.95 \(\omega_{IR}\) to 1.10 \(\omega_{IR}\), where \(\omega_{IR}\) represents the normal mode frequency. Polarization switching has been shown to be reversible, and it is probably a random process, meaning that slight changes in the perturbing pulse parameters might lead to an opposite final polarization.
Thus, future refinements of the model should include arbitrary polarization of the perturbing pulse, which may prove difficult to interpret, and the possibility of considering multi-pulse cases, for example involving the depolarizing potential generated by secondary high-frequency pulses, which inject energy into the electronic subsystem and raise the electronic temperature to tens of eV, favoring the switching of polarization, as seen in previous works [4, 41, 42].

## 5 Data Availability

All data generated or analyzed during this study are included in this published article, and supplementary data are available upon request addressed to the corresponding authors.

## Acknowledgments

This work was supported by the Russian Science Foundation grant number 20-72-10178 and Russian Academy of Sciences project number 121032500059-4. The computations were carried out on the supercomputer MVS-10Q at the Joint Supercomputer Center of the Russian Academy of Sciences (JSCC RAS), the supercomputer Zhores (CDISE, Skoltech, Russia) [43], and the Skoltech HPC cluster "ARKUDA".
The challenge of ultrafast switching of electric polarization remains unsolved; to date, there is no experimental evidence for it. In this study, we enhanced a switching model formulated in a two-dimensional space of generalized coordinates under THz pulses. Our results show that stable switching of BaTiO\({}_{3}\) cannot be achieved with a single linearly polarized pulse. When the intensity of a linearly polarized pulse reaches a certain threshold, the sample undergoes depolarization, but not stable switching. Our study also reveals that phonon friction plays a minor role in the switching dynamics, and we estimate the optimal parameters of the perturbing pulse with the lowest intensity that depolarizes an initially polarized sample.
2309.17117
Atmospheric muon suppression for Baikal-GVD cascade analysis
Baikal-GVD (Gigaton Volume Detector) is a neutrino telescope installed at a depth of 1366 m in Lake Baikal. The expedition of 2023 brought the number of optical modules in the array up to 3492 (including experimental strings). These optical modules detect the Cherenkov radiation from secondary charged particles coming from the neutrino interactions. Neutrinos produce different kinds of topologically distinct light signatures. Charged current muon neutrino interactions create an elongated track in the water. Charged and neutral current interactions of other neutrino flavors yield hadronic and electromagnetic cascades. The background in the neutrino cascade channel arises mainly due to discrete stochastic energy losses produced along atmospheric muon tracks. In this paper, an algorithm developed for the selection of cascade events is presented.
V. M. Aynutdinov, V. A. Allakhverdyan, A. D. Avrorin, A. V. Avrorin, Z. Bardačová, I. A. Belolaptikov, E. A. Bondarev, I. V. Borina, N. M. Budnev, V. A. Chadymov, A. S. Chepurnov, V. Y. Dik, G. V. Domogatsky, A. A. Doroshenko, R. Dvornický, A. N. Dyachok, Zh. -A. M. Dzhilkibaev, E. Eckerová, T. V. Elzhov, L. Fajt, V. N. Fomin, A. R. Gafarov, K. V. Golubkov, N. S. Gorshkov, T. I. Gress, K. G. Kebkal, I. V. Kharuk, E. V. Khramov, M. M. Kolbin, S. O. Koligaev, K. V. Konischev, A. V. Korobchenko, A. P. Koshechkin, V. A. Kozhin, M. V. Kruglov, V. F. Kulepov, Y. E. Lemeshev, M. B. Milenin, R. R. Mirgazov, D. V. Naumov, A. S. Nikolaev, D. P. Petukhov, E. N. Pliskovsky, M. I. Rozanov, E. V. Ryabov, G. B. Safronov, D. Seitova, B. A. Shaybonov, M. D. Shelepov, S. D. Shilkin, E. V. Shirokov, F. Šimkovic, A. E. Sirenko, A. V. Skurikhin, A. G. Solovjev, M. N. Sorokovikov, I. Štekl, A. P. Stromakov, O. V. Suvorova, V. A. Tabolenko, B. B. Ulzutuev, Y. V. Yablokova, D. N. Zaborov, S. I. Zavyalov, D. Y. Zvezdov
2023-09-29T10:29:09
http://arxiv.org/abs/2309.17117v1
# Atmospheric muon suppression for Baikal-GVD cascade analysis

###### Abstract:

Baikal-GVD (Gigaton Volume Detector) is a neutrino telescope installed at a depth of 1366 m in Lake Baikal. The expedition of 2023 brought the number of optical modules in the array up to 3492 (including experimental strings). These optical modules detect the Cherenkov radiation from secondary charged particles coming from the neutrino interactions. Neutrinos produce different kinds of topologically distinct light signatures. Charged current muon neutrino interactions create an elongated track in the water. Charged and neutral current interactions of other neutrino flavors yield hadronic and electromagnetic cascades. The background in the neutrino cascade channel arises mainly due to discrete stochastic energy losses produced along atmospheric muon tracks.

## 1 Introduction

Baikal-GVD (Gigaton Volume Detector) is a cubic-kilometer-scale neutrino observatory located in the southern part of Lake Baikal, Siberia. Currently (year 2023), its 3492 light sensors (optical modules, OMs; including experimental strings) detect the Cherenkov light produced by secondary charged particles originating from neutrinos interacting in the Baikal water. A three-dimensional array of OMs organized in so-called clusters is located at a depth of 1366 m, about 3.6 km offshore [1]. The primary purpose of Baikal-GVD is to search for high-energy neutrinos that originate from the same cosmic particle accelerators which produce very high energy cosmic rays. Moreover, the diffuse flux emitted collectively by unresolved astrophysical sources can be observed [2].

Thirty-six OMs are installed on each of the 96 vertical strings, with a distance of 15 m between adjacent OMs, from 750 m to 1275 m below the surface. Most of the strings are arranged into independently operated clusters, each composed of 8 strings. Each OM contains a 10-inch photomultiplier tube (PMT), Hamamatsu R7081-100, housed in a 13-inch glass sphere. A schematic view of the Baikal-GVD detector is shown in Fig. 1.

Baikal-GVD primarily observes two topologically distinct classes of events: tracks and cascades. The charged current interaction of a muon neutrino (\(\nu_{\mu}\)) with matter results in an outgoing muon, which travels a long distance in water and leaves an elongated track signature. The cascade events arise from the interactions of all three neutrino flavours. The charged current (CC) electron neutrino (\(\nu_{e}\)) interactions and the neutral current (NC) electron, muon, and tau neutrino (\(\nu_{e}\), \(\nu_{\mu}\), \(\nu_{\tau}\)) interactions produce detectable cascade light signatures. In the case of cascades, most of the neutrino energy is deposited in a small volume, which results in a nearly spherical event. An advantage of the cascade channel over the track channel is that cascade events typically allow for a better energy resolution, because the events can be fully contained inside the detector. However, it is more difficult in the case of cascades to reconstruct the initial direction of the neutrino.

At angles above the horizon, there is an overwhelming background of muons produced in air showers when cosmic rays enter the Earth's atmosphere.

Figure 1: Left: Baikal-GVD in 2023. The detector is composed of individual clusters, laser stations and experimental strings. The cluster color scheme represents the annual deployment progress. Right: A standard Baikal-GVD cluster with 8 strings.
Muons from the air showers come in bundles containing up to hundreds of muons. In the cascade channel, the main background originates from the discrete stochastic processes along the muon track, resulting from bremsstrahlung, photonuclear processes, or direct electron-positron pair production (see Fig. 2, left). The conventional rejection strategy for the atmospheric muon bundles is to select only events coming from the lower hemisphere (upgoing). However, background cascade events may be wrongly reconstructed as upgoing while being truly downgoing muons. A search for the signal cascades is, therefore, challenging due to the high muon flux from the air showers.

In this work, techniques developed and optimized to separate the neutrino-induced cascades from the background (cascades from atmospheric muon bundles) are discussed. The main difference between a signal and a background cascade is the presence of a muon track. The Cherenkov light from the muon track can change the cascade light signature and influence the cascade reconstruction variables. For the development of the selection methods, we used Monte Carlo (MC) simulations for the part of the 2019 season from April to June (5-cluster configuration). Each selection method provides an output variable, and these variables are fed into a Boosted Decision Tree (BDT) [3]. The selection algorithm was optimized only with single-cluster data. Furthermore, the selected experimental neutrino cascade candidate was searched for among the multicluster events to find an indication of a muon track. Moreover, a preliminary waveform analysis of that interesting event was performed.

## 2 Background Cascades

In this work, MC data sets for signal and background were generated for each cluster separately and used for the development of the selection techniques. The arrangement of each cluster is set up to correspond to the average conditions during the initial phase of the 2019 season (between April 1 and June 30). During this interval, the optical activity (luminescence) of the lake is relatively low, and the noise rates fluctuate from \(\approx\) 15 kHz (for the bottom OMs) to \(\approx\) 50 kHz (for the OMs located at the uppermost sections) [4]. For the comparison of the experimental data and MC samples, we used runs from the same period with an effective livetime of 353 days (combined for all 5 clusters).

Atmospheric (\(\nu^{\rm atm}\)) and astrophysical (\(\nu^{\rm astro}\)) neutrino events (electron and muon flavors only) were considered as signal. In the MC signal simulation dataset, neutrino energies range from 1 TeV to 400 TeV and from 1 TeV to 400 PeV for atmospheric and astrophysical neutrinos, respectively. The energy interval of cosmic ray protons in the background MC sample is from 240 GeV to 100 PeV. After the MC simulations, a reconstruction of the cascade energy and direction was applied. A parallel package [5] has been used for the cascade reconstruction software.

Fig. 2 (right) displays the event rate for the experimental data and MC datasets as a function of the reconstructed cascade energy. The cut applied on the events requires the horizontal distance of the reconstructed vertex position from the centre of the cluster to be no more than \(\rho=100\) m. The rate of events is shown per cluster per year. It can be observed that most of the reconstructed cascade-like events from the experimental data follow the MC background cascades from \(\mu_{\rm atm}\). According to the MC simulations, the background cascades can also reach high energies.
Note also that the event rate from downgoing \(\mu_{\rm atm}\) is almost 4 orders of magnitude higher than the event rate from atmospheric neutrinos. Hence, the development of methods for the selection of neutrino cascades is essential.

## 3 Neutrino Cascade Selection Algorithm

Various selection methods for neutrino cascade events in Baikal-GVD were implemented, tested, and optimized. The technique described here for the selection analysis of signal and background cascades represents an additional step in the cascade selection algorithm [6]. The optimization was performed on the simulated data of background cascades from \(\mu_{\rm atm}\) and signal cascades from \(\nu^{\rm atm}\) and \(\nu^{\rm astro}\) interactions. For the analysis, we only took into account contained cascade-like events reconstructed as upgoing.

First, we developed the nTrackHits method, which differentiates background and signal cascades by leveraging the existence of a muon track within the atmospheric muon bundle and its absence in the neutrino cascade events. The nTrackHits method selects hits with time residuals in the interval:

\[t_{1}<t_{i}-T_{i}^{\rm track}<t_{2}, \tag{1}\]

where \(t_{i}\) is the OM hit time, \(t_{1}=-100\) ns, and \(t_{2}=25\) ns. The expected time \(T_{i}^{\rm track}\), at which the OM is supposed to detect a hit from the muon track, is obtained from:

\[T_{i}^{\rm track}=t_{\rm recoCascade}+({\rm sLong}-{\rm lLong})\cdot\frac{1}{c}+\sqrt{{\rm sPerp}^{2}+{\rm lLong}^{2}}\cdot\frac{1}{c_{\rm w}}, \tag{2}\]

where \(t_{\rm recoCascade}\) is the time of the reconstructed cascade, \(c_{\rm w}\) is the speed of light in water, and \(c\) is the speed of light in vacuum (see Fig. 3, left). The track direction cannot be taken to be the reconstructed cascade direction, because the latter can be misreconstructed due to track hits that were incidentally used in the reconstruction. Therefore, nTrackHits is determined for each muon direction obtained by iterating the reconstructed cascade direction over azimuth and zenith angles in a cone with an apex angle of \(\approx 40^{\circ}\). The dependence of the OM hit time \(t_{i}\) on the OM z position for a reconstructed background cascade from a simulated \(\mu_{\rm atm}\) MC event is displayed in Fig. 3 (right).

Figure 2: Left: The characteristic topology of light emission for a muon track (blue light) with visible stochastic light depositions along the track (background cascades, marked by green boxes). Right: Reconstructed energies for cascade-like events. Black dots display the experimental data from Cluster 1 from season 2019. The predicted \(\mu_{\rm atm}\) background is shown by the green line; the atmospheric neutrinos \(\nu_{e}^{\rm atm}\) and \(\nu_{\mu}^{\rm atm}\) are shown by the red and yellow lines, respectively. Astrophysical \(\nu_{e}^{\rm astro}\) and \(\nu_{\mu}^{\rm astro}\) neutrinos are merged into one dataset and shown by the light blue line.

The nTrackHits method counts the number of hits per event that fulfill the criteria for the muon track. Fig. 4 (left) displays the distribution of such _nTrackHits_ for neutrino-induced cascade-like events (combined MC datasets \(\nu_{e,\mu}^{\rm atm}\) and \(\nu_{e,\mu}^{\rm astro}\), shown by the blue line), background cascades (red line), and experimental data (black points). Signal cascades also have non-zero values of _nTrackHits_, which can be caused by mis-identified hits from the noise or from the cascade itself.
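A minimal sketch of the nTrackHits counting logic of Eqs. (1)-(2) is given below. The decomposition of the OM-vertex geometry into sLong, lLong, and sPerp, and in particular fixing the photon emission point via the Cherenkov angle, is an illustrative assumption about Fig. 3 (left); it is not the collaboration's reconstruction code. The track direction is assumed to be a unit vector taken from the cone scan described above.

```python
# Minimal sketch of the nTrackHits selection of Eqs. (1)-(2): count hits whose
# time residual w.r.t. a muon-track hypothesis lies in (t1, t2).
import numpy as np

C_VAC = 0.2998                       # speed of light in vacuum, m/ns
N_WATER = 1.33
C_WAT = C_VAC / N_WATER              # approximate speed of light in water, m/ns
TAN_CH = np.sqrt(N_WATER**2 - 1.0)   # tangent of the Cherenkov angle

def expected_track_time(t_casc, r_vertex, direction, r_om):
    """Eq. (2): muon flight at c, then photon flight at c_w to the OM."""
    d = np.asarray(r_om) - np.asarray(r_vertex)
    s_long = float(np.dot(d, direction))           # OM coordinate along the track
    s_perp = float(np.linalg.norm(d - s_long*np.asarray(direction)))
    l_long = s_perp / TAN_CH                       # emission point (assumption)
    return t_casc + (s_long - l_long)/C_VAC + np.hypot(s_perp, l_long)/C_WAT

def n_track_hits(hits, t_casc, r_vertex, direction, t1=-100.0, t2=25.0):
    """Count hits (t_i, r_om) with t1 < t_i - T_i^track < t2 (times in ns)."""
    return sum(1 for t_i, r_om in hits
               if t1 < t_i - expected_track_time(t_casc, r_vertex,
                                                 direction, r_om) < t2)
```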
Another method, called BranchRatio, exploits the fact that for the \(\mu_{\rm atm}\) background cascade events mis-reconstructed as upgoing, more hit OMs are located below the z coordinate of the reconstructed cascade position than above it. This method results in the separation variable defined as BranchRatio = \(\frac{n{\rm OMs}^{\rm up}}{n{\rm OMs}^{\rm down}}\).

Neutrino cascades may also be separated from the background using the method referred to as QEarly, inspired by the work of the ANTARES collaboration [7]. The output of QEarly is the ratio of the overall charge of the track hits, \(Q_{\rm nTrackHits}\), and the total charge of the cascade hits, \(Q_{\rm cascadeHits}\) (green band in Fig. 3, right), calculated as \({\rm QEarly}=\log_{10}\left(\frac{Q_{\rm nTrackHits}+a}{Q_{\rm cascadeHits}}\right)\), where \(a=1\) is a constant that prevents QEarly from diverging.

Figure 4: The nTrackHits distribution (left) for the signal cascades (blue line), background cascades (red line), and experimental data (black dots). BDT response score (right) for background cascades (red histogram) and signal cascades (blue).

Figure 3: Left: Geometry components used for the calculation of the predicted OM hit time from the muon track in Eq. (2). Right: The OM z coordinate (on one string) as a function of the OM hit time for a simulated \(\mu_{\rm atm}\) event; each color indicates a different origin of the hit. The red dot corresponds to \(t_{\rm recoCascade}\); track hits are shown in blue, cascade hits in black, and noise hits in pink. The yellow line corresponds to the predicted time \(T_{i}^{\rm track}\) for track hits according to Eq. (2). Green lines show the expected time interval for the cascade hits. Note that track hits are detected earlier with respect to \(t_{\rm recoCascade}\) than the cascade hits.

### Multivariate Analysis

After the selection methods had been developed, five output variables from the cascade reconstruction of the signal and background simulated datasets were chosen for the multivariate event classifier BDT within the TMVA package of the CERN ROOT framework [3]. Only cascades reconstructed as upgoing and contained were used in this BDT analysis. The five variables fed into the BDT are: nTrackHits, BranchRatio, QEarly, the chi-square after the cascade position reconstruction, and the reconstructed zenith angle.

The BDT response score is formed from the five variables given to the TMVA. The BDT response values for signal and background cascades are shown in Fig. 4 (right). The datasets used for the BDT training and testing have almost the same BDT response curves for both background and signal; however, it can be clearly observed that in the case of the background, statistics is the limiting factor. As a result of the BDT analysis, the relative importance of the individual variables is determined. The QEarly method has the highest separation power, while the nTrackHits method is also able to separate signal from background to a considerable extent.

Subsequently, the optimal cut on the BDT response score (0.48) was determined according to the maximal significance and applied to the experimental data. The signal efficiency is \(\approx\) 49% after applying the BDT cut, and the background has been reduced to the order of 1 event per cluster per year. In addition to the BDT cut, we applied a 50 TeV cut on the reconstructed energy to search for high-energy reconstructed cascade events in the experimental data. One well-reconstructed cascade-like event fulfills the criteria imposed on the BDT output value and the reconstructed energy.
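For concreteness, a minimal sketch of the two separation variables defined above is given below; the input containers (a list of hit z positions and summed charges) are hypothetical, chosen only to make the definitions runnable.

```python
# Minimal sketch of the BranchRatio and QEarly separation variables.
import math

def branch_ratio(hit_z_positions, z_cascade):
    """BranchRatio = nOMs^up / nOMs^down relative to the cascade z position."""
    up = sum(1 for z in hit_z_positions if z > z_cascade)
    down = sum(1 for z in hit_z_positions if z <= z_cascade)
    return up / down if down else math.inf

def q_early(q_track_hits, q_cascade_hits, a=1.0):
    """QEarly = log10((Q_nTrackHits + a) / Q_cascadeHits); a keeps it finite."""
    return math.log10((q_track_hits + a) / q_cascade_hits)

print(branch_ratio([-120.0, -80.0, -30.0, 15.0], z_cascade=-10.0))  # 1/3
print(q_early(q_track_hits=0.0, q_cascade_hits=1665.0))             # ~ -3.2
```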
The event was reconstructed as contained in Cluster 1, with an energy of \(E\) = 83.3 TeV and an upgoing direction with zenith angle \(\theta\) = 70.9 deg. For further details on this event, refer to Tab. 1.

\begin{table} \begin{tabular}{||c|c|c|c|c|c|c|c|c|c||} \hline Cl & \(E_{\mathrm{rec}}\) [TeV] & \(\theta\) [\({}^{\circ}\)] & \(\phi\) [\({}^{\circ}\)] & \(\rho\) [m] & L & Q [p.e.] & nHits & nRecoHits & nTrackHits \\ \hline \hline 1 & 83.3 & 70.9 & 4.96 & 47.65 & 1.01 & 1665.01 & 106 & 44 & 1 \\ \hline \end{tabular} \end{table}

Table 1: Reconstructed parameters of the most energetic event found in the 2019 data for Cluster 1: cluster, energy, zenith angle, azimuth angle, distance from the cluster center, likelihood, total charge, total number of hits, number of hits used in the reconstruction, and number of track hits.

### Multicluster Search and Preliminary Waveform Comparison

For the abovementioned upgoing event, there are some indications that it may possibly be of neutrino origin. However, a more complex analysis is required to confidently determine whether that cascade-like event comes from a neutrino interaction. For that purpose, a search for this cascade event in the multicluster regime has been performed. A multicluster event is required to leave a signal in two or more clusters at once. This detection mode can be very useful as a veto for the background cascades produced along muon tracks. In addition to the detection of the background cascade, a long-range muon track can also be detected in other clusters. Therefore, it is more likely that such an event will be found in the multicluster data. For example, a cascade-like event detected in 2019 on Cluster 5 was also found in the multicluster data. This event was reconstructed as downgoing, and it passed the standard reconstruction quality selection criteria. Indeed, Fig. 5 (left) shows that this event was also detected in two other clusters, which suggests that this cascade-like event comes from the \(\mu_{\rm atm}\) background and that the corresponding muon was detected in two additional clusters.

Subsequently, we searched for muons independently reconstructed in the single-cluster track reconstruction mode [8]. For this event, two tracks were reconstructed as downgoing, one of them in Cluster 2 and the second one in Cluster 3. We conclude that this cascade-like event is most probably of background origin. This procedure was also performed for the upgoing event described in Tab. 1, in which case the event was not found among the multicluster events, suggesting that additional muon tracks are not present.

After that, the preliminary waveform analysis was performed. In this analysis, the expected waveforms of the pulses (i.e., time and amplitude) induced by the cascade or the muon track were compared with the actually detected waveforms at the OMs. The analytical form of the real pulse waveform can be described by a Gumbel function, as follows:

\[f(t)=A\times e^{-\left(\frac{t-\mu}{\beta}+e^{-\frac{t-\mu}{\beta}}\right)}, \tag{3}\]

where \(A\) is a scaling amplitude factor, \(\mu\) is the time when the pulse reaches its maximum amplitude, and \(\beta\) is the width of the Gumbel function. The expected amplitude for a hit coming from the cascade, \(A_{i}^{\rm cascade}\), is obtained from pre-calculated MC tables.
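A minimal sketch of the Gumbel template of Eq. (3) follows; in a fit, \(A\), \(\mu\), and \(\beta\) would be adjusted against a recorded pulse, but the parameter values below are illustrative only.

```python
# Minimal sketch of the Gumbel pulse template of Eq. (3); times in ns and
# parameter values are illustrative, not fitted to real OM data.
import numpy as np

def gumbel_pulse(t, A=1.0, mu=0.0, beta=5.0):
    """f(t) = A * exp(-(z + exp(-z))) with z = (t - mu)/beta."""
    z = (np.asarray(t) - mu) / beta
    return A * np.exp(-(z + np.exp(-z)))

t = np.linspace(-20.0, 60.0, 161)
waveform = gumbel_pulse(t, A=12.0, mu=10.0, beta=4.0)
print(t[np.argmax(waveform)])   # peaks at t = mu = 10 ns by construction
```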
The expected amplitude for a track hit from a TeV muon can be estimated according to the following function:

\[A_{i}^{\rm track}\approx e^{-\frac{|\vec{r}_{i}-\vec{r}_{\rm track}|}{\tau}}\cdot\alpha_{i}, \tag{4}\]

where \(|\vec{r}_{i}-\vec{r}_{\rm track}|\) is the distance between the position of the hit OM and the point of light emission on the track (illustrated by the blue line in Fig. 3, left), \(\tau\approx 24\) m is the light absorption length in the deep lake water, and \(\alpha_{i}\) is the relative sensitivity of the \(i\)-th OM as a function of the angle between the OM's vertical axis and the direction of the incoming light. The track direction is assumed to be the direction in which most of the track hits were obtained by the nTrackHits method.

The waveform comparison was done for the upgoing event (Tab. 1). This is illustrated in Fig. 5 (right), where the waveform registered by one particular OM is compared to the cascade and track model predictions. No track hits are observed at the expected times. Hence, in accord with the multicluster analysis, the waveform procedure does not exhibit the presence of a muon track for this upgoing event.

## 4 Conclusion

Several selection methods for neutrino cascades have been developed. They were optimized with Monte Carlo simulations for the 2019 season. The corresponding background rejection variables were used for the training and testing of a Boosted Decision Tree (BDT). The BDT was trained and tested with the contained upgoing neutrino and background cascades. Furthermore, experimental data from the 2019 season were analyzed. In the search for neutrino cascade candidates with higher energy in the experimental data, we applied cuts on the BDT output value and the reconstructed energy. One contained upgoing event was reconstructed with an energy of 83.3 TeV. Moreover, a multicluster analysis of that experimental event and a preliminary waveform comparison were performed. No muon counterparts were found in the multicluster data, and the recorded PMT waveforms do not support a muon track origin of the event. Hence, this event is likely a genuine neutrino event.
Baikal-GVD (Gigaton Volume Detector) is a neutrino telescope installed at a depth of 1366 m in Lake Baikal. The 2023 expedition brought the number of optical modules in the array up to 3492 (including experimental strings). These optical modules detect the Cherenkov radiation induced by secondary charged particles from neutrino interactions. Neutrinos produce different kinds of topologically distinct light signatures. Charged current muon neutrino interactions create an elongated track in the water. Charged and neutral current interactions of the other neutrino flavors produce hadronic and electromagnetic cascades. The background in the neutrino cascade channel arises mainly from discrete stochastic energy losses along atmospheric muon tracks.
2309.07304
The Way We Were: Structural Operational Semantics Research in Perspective
This position paper on the (meta-)theory of Structural Operational Semantics (SOS) is motivated by the following two questions: (1) Is the (meta-)theory of SOS dying out as a research field? (2) If so, is it possible to rejuvenate this field with a redefined purpose? In this article, we will consider possible answers to those questions by first analysing the history of the EXPRESS/SOS workshops and the data concerning the authors and the presentations featured in the editions of those workshops as well as their subject matters. The results of our quantitative and qualitative analyses all indicate a diminishing interest in the theory of SOS as a field of research. Even though `all good things must come to an end', we strive to finish this position paper on an upbeat note by addressing our second motivating question with some optimism. To this end, we use our personal reflections and an analysis of recent trends in two of the flagship conferences in the field of Programming Languages (namely POPL and PLDI) to draw some conclusions on possible future directions that may rejuvenate research on the (meta-)theory of SOS. We hope that our musings will entice members of the research community to breathe new life into a field of research that has been kind to three of the authors of this article.
Luca Aceto, Pierluigi Crescenzi, Anna Ingólfsdóttir, Mohammad Reza Mousavi
2023-09-13T20:50:53
http://arxiv.org/abs/2309.07304v1
# The Way We Were: Structural Operational Semantics Research in Perspective

###### Abstract

This position paper on the (meta-)theory of Structural Operational Semantics (SOS) is motivated by the following two questions:

* Is the (meta-)theory of SOS dying out as a research field?
* If so, is it possible to rejuvenate this field with a redefined purpose?

In this article, we will consider possible answers to those questions by first analysing the history of the EXPRESS/SOS workshops and the data concerning the authors and the presentations featured in the editions of those workshops as well as their subject matters.

The first International Workshop on Structural Operational Semantics (SOS) was held in London, UK, in 2004. The workshop was established as 'a forum for researchers, students and practitioners interested in new developments, and directions for future investigation, in the field of structural operational semantics. One of the specific goals of the workshop was to establish synergies between the concurrency and programming language communities working on the theory and practice of SOS.' At its ninth edition, the SOS workshop joined forces with the nineteenth edition of the International Workshop on Expressiveness in Concurrency (EXPRESS). The joint workshop was meant to cover the broader scope of 'the formal semantics of systems and programming concepts, and on the expressiveness of mathematical models of computation.'

We examined the contributions dedicated to the theory of SOS presented in the EXPRESS/SOS workshop series (and, prior to that, in the SOS workshop) and whether they appeared before or after the merger between the EXPRESS and SOS workshops. We also used the collected data to compute a well-established measure of similarity between the two phases in the life of the SOS workshop, before and after the merger with EXPRESS. Beyond these data- and graph-mining analyses, we reflect on the major results developed in nearly four decades of research on SOS and identify, in our admittedly biased opinion, its strengths and gaps.

The results of our quantitative and qualitative analyses all indicate a diminishing interest in the theory of SOS as a field of research. Even though 'all good things must come to an end', we strive to finish this position paper on an upbeat note by addressing our second motivating question with some optimism. To this end, we use our personal reflections and an analysis of recent trends in two of the flagship conferences in the field of Programming Languages (namely POPL and PLDI) to draw some conclusions on possible future directions that may rejuvenate research on the (meta-)theory of SOS. We hope that our musings will entice members of the research community to breathe new life into a field of research that has been kind to three of the authors of this article.

**Whence this collaboration?** This article is the result of a collaboration between a researcher in the theory of algorithms and their applications, Pierluigi Crescenzi, and three contributors to the theory of SOS. Pierluigi Crescenzi has recently offered data- and graph-mining analyses of conferences such as CONCUR, in cooperation with Luca Aceto in [5], SIROCCO [25] and ICALP--see the presentation available at [https://slides.com/piluc/icalp-50?token=f13BBJ8j](https://slides.com/piluc/icalp-50?token=f13BBJ8j).
All authors thought that it was natural to combine quantitative data- and graph-mining analysis techniques with qualitative domain-specific knowledge to offer a fairly well-rounded perspective on the developments in the (meta-)theory of SOS and its relation to the SOS and EXPRESS/SOS workshops. Both the Java code and the Julia software developed by Pierluigi Crescenzi, which were used for the quantitative analyses reported in this article and the aforementioned earlier ones, are publicly available at the following GitHub repository: [https://github.com/piluc/ConferenceMining](https://github.com/piluc/ConferenceMining). We encourage everyone interested in carrying out data- and graph-mining analyses of conferences to use them!

## 2 Data Collection and Analysis

To set the stage for our reflections on the (meta-)theory of SOS, we have carried out some data analysis on the SOS and EXPRESS/SOS workshops.

### Data Collection

We extracted the following data from all the eleven past editions of the joint EXPRESS/SOS workshop:

1. the authors and titles of contributed talks;
2. invited speakers and the titles of their presentations or papers;
3. the number of submissions and accepted papers; and
4. at least two and at most three subject matter classifiers from the scope of EXPRESS/SOS.

Much of the gathered data was extracted from the tables of contents and proceedings of those editions of the workshop, which are all available in open-access form as volumes of Electronic Proceedings in Theoretical Computer Science (EPTCS), and from the DBLP page devoted to the Workshop on Structural Operational Semantics. In case of missing information regarding the number of submissions, we approached the workshop chairs and gathered that information through personal communication. For the subject matter classification, since general classifications, such as the one by the ACM, were too coarse for our purposes, we manually read the abstracts (and in a few cases the full papers) and identified domain-specific classifiers, using the scope definition of the EXPRESS/SOS workshop. The results of our data collection are publicly available online.

The choice of focusing our analysis on the last eleven editions was motivated by the fact that, since 2012, the SOS workshop has joined forces with the EXPRESS workshop to create a new joint venue. This gave us a consistent view of how the topics featured in the joint workshop have evolved over time and of how (structural) operational semantics has been represented in the joint workshop since 2012. However, using the data we collected, we also took the opportunity to compare the two phases of the SOS workshop: the first as an independent workshop in the period 2004-2011, and the second as EXPRESS/SOS from 2012 till 2022.

### Automatic Analysis

Based on the articles that were archived in the workshop proceedings, we found that

* 194 authors contributed articles to the workshop proceedings since 2004;
* 90 colleagues published papers in the proceedings of the first eight editions of the SOS workshop;
* 122 researchers contributed articles to the joint EXPRESS/SOS workshop in the period 2012-2022;
* 18 authors published papers in the SOS workshop proceedings both before and after the merger with the EXPRESS workshop, which means that there were 104 contributors to EXPRESS/SOS who had never published in the SOS workshop in the period 2004-2011.
The above-mentioned data allow us to compute a measure of similarity between the two phases of the SOS workshop, before and after the merger with EXPRESS, using the Sorensen-Dice index, which is a statistic used to measure the similarity of two samples. Given two sets \(A\) and \(B\), the _Jaccard index_ \(J(A,B)\) is equal to \(\frac{|A\cap B|}{|A\cup B|}\), and the _Sorensen-Dice index_ is equal to \(\frac{2J(A,B)}{1+J(A,B)}\); see [28, 66]. The Sorensen-Dice index for the lists of authors in the two phases of the SOS workshop is roughly 0.17. This value indicates that the SOS workshop is not as similar to the joint EXPRESS/SOS workshop as one might have expected. By way of comparison, quoting from the data- and graph-mining analysis of CONCUR presented in [4], the conference that is most similar to CONCUR is LICS (with Sorensen-Dice index approximately equal to 0.3), followed by TACAS (approximately 0.25), CAV (approximately 0.24), and CSL (approximately 0.21).

Computing the Sorensen-Dice index for SOS 2004-2022 and CONCUR, LICS, PLDI and POPL yields low values of similarity, namely 0.106396 (CONCUR), 0.0622966 (LICS), 0.00585138 (PLDI) and 0.0303169 (POPL). This is due to the fact that the sets of authors of those conferences are much larger than that of the SOS workshop, namely 1475 (CONCUR), 1953 (LICS), 3220 (PLDI) and 1979 (POPL). When quantifying the degree of similarity between a small workshop like SOS and larger conferences, it might be more appropriate to consider the Szymkiewicz-Simpson coefficient (also known as the overlap coefficient) [65, 68, 69, 73]. Given two sets \(A\) and \(B\), the _Szymkiewicz-Simpson coefficient_ is equal to \(\frac{|A\cap B|}{\min(|A|,|B|)}\). The values of that coefficient for the conferences we considered above are roughly 0.45 (CONCUR), 0.34 (LICS), 0.05 (PLDI) and 0.17 (POPL). Those values seem to support the view that SOS is rather similar to CONCUR and LICS, has some similarity with POPL, but is very dissimilar to PLDI.

### Centrality Measures

The _static graph_ (or collaboration graph) of SOS is an undirected graph whose nodes are the authors who presented at least one paper at SOS, and whose edges link two authors who coauthored at least one paper (not necessarily presented at SOS). In other words, this graph is the subgraph of the DBLP collaboration graph induced by the set of SOS authors. Centrality measures have been used as a key tool for understanding social networks, such as the static graph of SOS, and are used to assess the 'importance' of a given node in a network; see, for instance, [35]. Therefore, to quantify the role played by authors who have contributed to the SOS workshop, we have computed the following classic centrality measures on the largest connected component of the static graph of SOS.

* Degree: the number of neighbours of a node in the graph (that is, the number of coauthors).
* Closeness: the reciprocal of the average distance from an author to all other authors in its connected component.
* Betweenness: the fraction of shortest paths between pairs of other authors in its connected component that pass through the given author.

The top ten SOS authors with respect to the above-mentioned three centrality measures are, in decreasing order:

* Degree: Luca Aceto, Anna Ingolfsdottir, Mohammad Reza Mousavi, Nobuko Yoshida, Rob van Glabbeek, Bas Luttik, Wan Fokkink, Michel Reniers, Catuscia Palamidessi, and Rocco De Nicola.
* Closeness: Luca Aceto, Rob van Glabbeek, Nobuko Yoshida, Matthew Hennessy, Catuscia Palamidessi, Anna Ingolfsdottir, Rocco De Nicola, Daniele Gorla, Bas Luttik, and Uwe Nestmann.
* Betweenness: Luca Aceto, Matthew Hennessy, Nobuko Yoshida, Rob van Glabbeek, Rocco De Nicola, Catuscia Palamidessi, Daniele Gorla, Frank de Boer, Bartek Klin, and Uwe Nestmann.

In addition, we also calculated the _temporal closeness_, which is an analogue of closeness that takes the number of years of a collaboration between two authors into account; see the paper [25] for more information on this centrality measure. The top ten SOS authors according to temporal closeness are, in decreasing order: Luca Aceto, Anna Ingolfsdottir, Wan Fokkink, Rocco De Nicola, Catuscia Palamidessi, Bas Luttik, Michel Reniers, Rob van Glabbeek, Jan Friso Groote, and Mohammad Reza Mousavi.

Finally, to get a glimpse of the evolution of the aforementioned measures of similarity and centrality in the two phases of the SOS workshop, we computed them on the static graphs before and after the merger. Before the merger with EXPRESS, the 2004-2011 editions of SOS had a Szymkiewicz-Simpson index of approximately 0.42 with CONCUR, 0.37 with LICS, 0.067 with PLDI and 0.2 with POPL. After the merger with EXPRESS, those figures become 0.512 for CONCUR, 0.352 for LICS, 0.032 for PLDI and 0.152 for POPL. So, from 2012 onwards, SOS has become more similar to CONCUR and even more dissimilar to PLDI and POPL than before.

The top ten authors at the SOS workshop also change before and after the merger. When focusing on the period before the merger, the most central authors are as follows, in decreasing order:

* Degree: Luca Aceto, Michel Reniers, Mohammad Reza Mousavi, Anna Ingolfsdottir, Wan Fokkink, Rocco De Nicola, Jose Meseguer, Rob van Glabbeek, Catuscia Palamidessi, and David de Frutos-Escrig.
* Closeness: Luca Aceto, Anna Ingolfsdottir, Rocco De Nicola, Rob van Glabbeek, Matthew Hennessy, Georgiana Caltais, Mohammad Reza Mousavi, Eugen-Ioan Goriac, Michel Reniers, and Catuscia Palamidessi.
* Betweenness: Rocco De Nicola, Luca Aceto, Catuscia Palamidessi, Jose Meseguer, Frank de Boer, Filippo Bonchi, Matthew Hennessy, Michel Reniers, Rob van Glabbeek, and David de Frutos-Escrig.
* Temporal closeness: Luca Aceto, Anna Ingolfsdottir, Wan Fokkink, Michel Reniers, Mohammad Reza Mousavi, Jose Meseguer, Jan Friso Groote, Rob van Glabbeek, Rocco De Nicola, and Catuscia Palamidessi.

After the merger with EXPRESS, our graph-mining analysis yields the following most central authors, in decreasing order:

* Degree: Nobuko Yoshida, Luca Aceto, Bas Luttik, Rob van Glabbeek, Mohammad Reza Mousavi, Uwe Nestmann, Anna Ingolfsdottir, Jorge Perez, Jos Baeten, and Hans Huttel.
* Closeness: Nobuko Yoshida, Luca Aceto, Rob van Glabbeek, Catuscia Palamidessi, Anna Ingolfsdottir, Bas Luttik, Uwe Nestmann, Mohammad Reza Mousavi, Iain Phillips, and Mariangiola Dezani-Ciancaglini.
* Betweenness: Nobuko Yoshida, Rob van Glabbeek, Daniele Gorla, Luca Aceto, Bas Luttik, Bartek Klin, Uwe Nestmann, Catuscia Palamidessi, Hans Huttel, and Rance Cleaveland.
* Temporal closeness: Luca Aceto, Anna Ingolfsdottir, Bas Luttik, Tim Willemse, Catuscia Palamidessi, Mohammad Reza Mousavi, Jos Baeten, Jan Friso Groote, Jorge Perez, and Rob van Glabbeek.
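For reproducibility, here is a minimal sketch of the three set-similarity coefficients defined above, applied to synthetic author sets that mimic the pre-/post-merger overlap reported earlier (90 and 122 authors, 18 in common); the author labels are made up.

```python
# Minimal sketch of the similarity coefficients used in this section; the
# synthetic author sets below mimic the reported 90/122 authors with 18 shared.
def jaccard(a, b):
    return len(a & b) / len(a | b)

def sorensen_dice(a, b):
    j = jaccard(a, b)
    return 2*j / (1 + j)             # equivalently 2|A & B| / (|A| + |B|)

def overlap(a, b):                    # Szymkiewicz-Simpson coefficient
    return len(a & b) / min(len(a), len(b))

sos = {f"author{i}" for i in range(90)}               # pre-merger SOS authors
express_sos = {f"author{i}" for i in range(72, 194)}  # post-merger, 18 shared
print(round(sorensen_dice(sos, express_sos), 2))      # 0.17, as in the text
print(round(overlap(sos, express_sos), 2))            # 0.2
```

The centrality measures listed above correspond, for instance, to networkx's degree_centrality, closeness_centrality, and betweenness_centrality applied to the static collaboration graph.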
### The Two Lives of the SOS Workshop

As we saw above, the first and the second life of the SOS workshop are not that similar after all, which seems to indicate that the eleven joint editions of the EXPRESS/SOS workshop were more about expressiveness than about structural operational semantics1. To see whether this is really the case, we visually summarise the data we collected in Figure 1 and provide its details below:

Footnote 1: Another possible explanation for the low degree of similarity between the pre- and post-merger incarnations of the SOS workshop is that the community welcomed many new authors from 2012 onwards. This would be a healthy and welcome development and is, in fact, supported by the data we collected. However, the analysis we present in what follows gives some indication that, since 2014, the scientific programme of EXPRESS/SOS has featured only a few papers on structural operational semantics.

* The proceedings of EXPRESS/SOS 2012 included 10 papers, five of which dealt with topics related to operational semantics and its mathematical (meta-)theory--that's 50% of the articles and the largest percentage of SOS contributions to EXPRESS/SOS in the period 2012-2022.
* The proceedings of EXPRESS/SOS 2013 included seven papers, two of which dealt with topics related to operational semantics and its mathematical (meta-)theory--that's 28.5% of the contributions.
* The proceedings of EXPRESS/SOS 2014 included eight papers, two of which (25%) dealt with topics related to the theory of structural operational semantics.
* The proceedings of EXPRESS/SOS 2015 included six papers, one of which (16.7%) dealt with topics related to the theory of structural operational semantics.
* The proceedings of EXPRESS/SOS 2016 included five papers, none of which dealt mainly with operational semantics.
* The proceedings of EXPRESS/SOS 2017 included six papers, one of which (16.7%) dealt mainly with operational semantics.
* The proceedings of EXPRESS/SOS 2018 included seven papers, none of which dealt mainly with operational semantics.
* The proceedings of EXPRESS/SOS 2019 included seven papers, two of which (28.5%) dealt mainly with operational semantics.
* The proceedings of EXPRESS/SOS 2020 included six papers, none of which dealt mainly with operational semantics.
* The proceedings of EXPRESS/SOS 2021 included six papers, none of which dealt mainly with operational semantics.
* The proceedings of EXPRESS/SOS 2022 included eight papers, none of which dealt mainly with operational semantics.

So, only 13 out of the 76 papers published in the proceedings of EXPRESS/SOS since 2012 dealt with topics in SOS theory (17.1% of the published papers). In passing, we also note that 16 out of the 110 presentations at the workshop in the period 2012-2022 were devoted to topics in SOS theory (that is, 14.5% of the workshop presentations). Research in SOS was well represented at EXPRESS/SOS in the first three editions of the joint workshop. However, five of the last seven instalments of the workshop did not include any presentations devoted to topics that were mainly related to structural operational semantics. In particular, EXPRESS/SOS 2020-2022 did not have any talks on the theory and applications of structural operational semantics.

Figure 1: Total number of accepted papers (blue) and number of accepted papers on SOS theory at the EXPRESS/SOS workshop since 2012.
### Reflections on the Analysis Results

Reading through the EXPRESS/SOS contributions relevant to the theory of SOS reveals that the most recent results have mostly focused on two aspects of SOS specifications: foundational aspects concerning the bialgebraic interpretation of SOS due to Turi and Plotkin [71], and the compositionality of quantitative notions of equivalence, such as probabilistic bisimilarity. Below, we provide a more nuanced analysis of this trend.

Another observation is that the diminishing strength in the provision of results on the theory of SOS can be largely attributed to a lack of projects (particularly PhD studentships) in this area. Almost all of the results on the meta-theory of SOS contributed to the EXPRESS/SOS series had a co-author with a PhD project on this topic. A reduction in the number of doctoral students does not bode well for the healthy development of any research field.

## 3 Personal Reflections

Since the appearance of Plotkin's seminal Aarhus technical report [60], reprinted in slightly revised form as a journal paper in [62], with some historical remarks by Plotkin himself in [61], structural operational semantics has arguably become the most widely used approach to defining the semantics of programming and executable specification languages. To our mind, it is as impactful and popular today as it has been for over forty years. Indeed, one would be hard pressed to find papers on the theory of programming and specification languages that do not use structural operational semantics in some way. Moreover, the semantics of full-blown programming or domain-specific languages is still given in that style, reflecting its flexibility and applicability--see, for instance, the paper [45] for a small-step semantics of full Ethereum-virtual-machine bytecode that is formalised in the F* proof assistant [68] and then validated against the official Ethereum test suite.

As Plotkin highlights in his aforementioned piece on the origins of structural operational semantics, the essence of that approach to semantics is that it is _rule based_ and that the rules should be _syntax directed_ in order to support compositional language specifications and reasoning, as in the denotational approach to semantics. Conceptually, this rule-based view of operational semantics naturally led to the development of a theory of SOS language specifications that focused on the rules used in semantic definitions. The gist of that line of research, which can be traced back to de Simone's work [65], was to study _rule formats_ for operational specifications guaranteeing that every program in the specified language affords some semantic property of interest. So, rule formats offered a way to reduce the checking of semantic properties of programs in a language to syntactic checks on the rules used to define the operational semantics of the language.

The literature on what came to be called the 'meta-theory of structural operational semantics' is by now very large, and we cannot do it justice in this paper. We refer the interested reader to the survey articles [7, 59] and to the references therein, as well as to the proceedings of SOS, EXPRESS/SOS, and of conferences such as CONCUR, LICS and POPL, for much more information and recent references. Naturally, since its first edition in 2004, the SOS workshop has served as a venue for the publication of several articles on SOS meta-theory.
Three of the authors of this piece have been amongst the contributors to the development of the fascinating research on rule formats for operational specifications and thoroughly enjoyed doing so. However, we feel that the time has come for a critical appraisal of the strengths, weaknesses and possible future of that line of research and to speculate about whether the data we discussed in Section 2 reflects the musings we present in the rest of this note.

### Strengths

In our, admittedly biased, opinion, research on rule formats for structural operational semantics has led to a wealth of interesting and elegant theoretical results, ranging from those on the meaning of rule-based specifications using rules with negative premises (see, for instance, the articles [14, 41, 19]) to congruence formats for several behavioural equivalences obtained uniformly from their modal characterisations via modal decomposition (see, for example, [12, 35, 33, 34] and the references therein). Early studies of congruence rule formats, such as those reported in the seminal [13, 46], were accompanied by characterisations of the largest congruences included in trace semantics induced by the collection of operators that can be specified in the rule formats studied in those references. After all these years, we still find it amazing that such results could be proved at all! Below we provide a non-exhaustive list of the available meta-theorems with sufficient strength (more than a single paper, with more than one application to a language) and we refer to the past review papers/chapters [7, 59] for a more exhaustive list up to the date of their publication:

* Congruence: proving congruence (compositionality) for various notions of strong [53, 73], weak [33], higher-order [55], data-rich [57], timed [48], and quantitative behavioural equivalences [27, 17, 18]; supporting various syntactic language features such as formal variables and binders [53, 21], as well as semantic features such as negative premises and predicates, terms as labels, and ordering on rules.
* (De-)Compositional reasoning methods: decomposing logical formulae (in the multi-modal \(\mu\)-calculus, also known as Hennessy-Milner logic with recursion [50, 51]) according to the semantics of various operators for various notions of bisimilarity [34, 33, 35] and their quantitative extensions [17, 18]; interestingly, this can lead not only to a reasoning method for checking modal formulae, but can also serve as a recipe for 'generating' congruence formats for different notions of equivalence, once their modal characterisation is defined.
* Axiomatisation and algebraic properties: generating sound and ground-complete axiomatisations for strong bisimilarity [3], as well as weak behavioural equivalences [42], and equivalences with data [38]. An orthogonal line of enquiry considered identifying sufficient conditions guaranteeing various algebraic properties of language operators such as commutativity [58], associativity [24], zero and unit elements [4], and idempotence [2]; we refer to an accessible overview paper [9] summarising such results up to its date of publication.

There have been a number of implementations of such results in tools [8, 56, 72], mostly based on rewriting logic [22].
Several of the theorems from the theory of structural operational semantics have found application in the study of process calculi, reducing the need to prove congruence and axiomatisation results, amongst others, from scratch for each calculus, and have been extended to settings including, for instance, probabilistic and stochastic features (see, for example, [18, 27]), as well as to higher-order calculi, as in the recent [44]. The article [44] belongs to a fruitful and still active line of research, stemming from the seminal work by Turi and Plotkin [71], providing bialgebraic foundations to the theory of structural operational semantics.

The contributions to the work on rule formats and on the meta-theory of structural operational semantics have striven to achieve a reasonably good trade-off between the generality of the technical results and the ease with which they can be applied to specific languages. Ideally, one would always like to have simple syntactic restrictions on rule formats that guarantee desired semantic properties in a wide variety of applications. Indeed, following a Pareto Principle, very often simple rule formats cover many of the languages of interest, and one quickly hits a threshold where complex and hard-to-check definitions are needed to extend the applicability of obtained results. In many cases, the 'curse of generality' led to definitions of rule formats whose constraints are arguably not purely syntactic any more and may even be undecidable. As an example, Klin and Nachyla [49] have shown that it is undecidable whether an operational specification that uses rules with negative premises has a least supported model and whether it has a unique supported model or a stable model. It is also undecidable whether such a specification is complete. As mentioned by Klin and Nachyla in the aforementioned reference, these negative results entail that formats such as the complete ntyft/ntyxt [32] 'are not _bona fide_ syntactic formats, as there is no algorithmic way to tell whether a given specification fits such a format.' So, the pursuit of generality is, to our mind, a double-edged sword and can be seen as both a strength and a weakness of several results on rule formats and the meta-theory of structural operational semantics.

In the context of EXPRESS/SOS, we observed that this tradition of strong theoretical results is dying down: from 2012 to 2017, we counted nine contributions to the foundations of SOS specifications [8, 15, 28, 38, 39, 49, 52, 63, 30], including work on the bialgebraic framework [15, 49, 63], as well as congruence for quantitative notions of equivalence [28, 39, 52, 53] and axiomatisation results [38]; however, this number dropped to only one contribution from 2018 to 2022, on the meaning of SOS specifications and compositionality of equivalences on open terms [43].

In summary, we believe that the study of rule formats and of the meta-theory of structural operational semantics has yielded many elegant results that have been of some use for the working concurrency theorist. However, first, the number of such contributions has significantly dropped in the past few years and, second, one has to wonder whether that line of work has had an impact on the field of programming language semantics. We will offer some musings on that question in the coming section.
### Gaps

To our mind, apart from its intrinsic scientific interest, the theory of structural operational semantics based on rule formats has served the concurrency-theory community well by providing elegant, and often general and deep, results that have both explained the underlying reasons why specific languages enjoyed several semantic properties and served as tools to prove new theorems as instances of a general framework. The use of 'syntactic' rule formats to establish properties of interest about formal systems has also been used in logic. By way of example, Ciabattoni and Leitsch have given algorithmic conditions guaranteeing that some logics enjoy cut elimination [20].

However, despite its undoubted successes, to our mind, the theory of rule formats has not yet had the impact one might have expected on the community working on the theory of programming languages. Perusing the proceedings of the premier conferences in that field indicates that much of the research on programming-language semantics and its applications is done in the context of proof assistants such as Coq [10, 23]2 and on frameworks built on top of those--see, for instance, the highly influential Iris framework for higher-order concurrent separation logic [47].

Footnote 2: Coq is available at [https://coq.inria.fr/](https://coq.inria.fr/).

We speculate that this relative lack of impact might be due to the fact that the theory of structural operational semantics based on rule formats has been mostly developed within the process algebra community. This has naturally led to the development of results and frameworks having process calculi as their main application area. As a consequence, despite some foundational research [6, 31, 57], the development of a widely-applicable theory of rule formats for languages having first-class notions of data and memory, as well as binding constructs, is still somewhat in its infancy. This limits the applicability of the results obtained by the concurrency theory community to mainstream programming languages. Moreover, the software tools embodying the theory of structural operational semantics developed so far have mostly taken the form of prototypes and are arguably not as mature and usable as those produced by groups working on the theory of programming languages [64]. The initial work carried out within the PLanCompS project [11] aimed to address this gap based on the Modular SOS framework pioneered by Mosses [54]; this line of work has been influential and has led to other frameworks such as the iCoLa framework for incremental language development [37].

### Trends and Opportunities

To relate the past strengths to future trends, particularly regarding emerging application areas of operational semantics, we analysed the tables of contents of five past editions of flagship conferences in programming languages: POPL (from 2021 to 2023, inclusive) and PLDI (from 2021 to 2022, inclusive). The aim of the analysis was to find areas where the available strength in the theory of SOS can be exploited. We aimed to be as inclusive as possible and tried to mention any such areas, even if the exploitation of available strength would require a major rework or transformation of ideas and results. Below we provide a raw list of keywords that we encountered in our analysis:

* POPL 2023: Semantics of Probabilistic and Quantum programs, Coq Proof Libraries, Nominal Sets, Co-Algebra and Bisimulation, Multi-Language Semantics, Session types.
* POPL 2022: Session types, Semantics of Probabilistic and Quantum programs, Semantic Substitution and congruence.
* POPL 2021: Semantics of Probabilistic Programs, Nominal Semantics, Hyper-properties and non-interference, functorial semantics.
* PLDI 2022: Information flow analysis, equational and algebraic reasoning (also applied to quantum programs), sound sequentialisation, Kleene algebra, language interoperability, verified compilation (also applied to quantum programs).
* PLDI 2021: Language translation conformance and compiler verification, session types, regular expressions, semantics of probabilistic and quantum programs.

In all the POPL and PLDI editions we reviewed, abstract interpretation (also for quantum programs), analysing weak memory models, and reasoning using separation logics are featured prominently. It appears from our analysis that the following activities may have substantial potential impact:

* semantic meta-theorems about quantitative transition systems (particularly probabilistic and quantum transition systems [16, 30]);
* providing mechanised semantic frameworks, particularly in proof assistants such as Coq;
* defining general semantic frameworks and theorems for different memory models and models of parallelism;
* defining general compositional frameworks for reasoning with separation logics and logics of incorrectness;
* devising algorithms for test-case generation, for instance, for compiler testing, based on a semantic framework.

We hope to see work on some of those topics in the near future, which might lead to a new lease of life for the (meta-)theory of SOS and its applications.

Acknowledgements. We thank Valentina Castiglioni and Peter Mosses for their comments on a draft of this piece. Luca Aceto and Anna Ingolfsdottir were partly supported by the projects 'Open Problems in the Equational Logic of Processes (OPEL)' (grant no. 196050) and 'Mode(l)s of Verification and Monitorability (MoVeMent)' (grant no. 217987) of the Icelandic Research Fund. Mohammad Reza Mousavi has been partially supported by the UKRI Trustworthy Autonomous Systems Node in Verifiability, Grant Award Reference EP/V026801/2, and the EPSRC grant on Verified Simulation for Large Quantum Systems (VSL-Q), Grant Award Reference EP/Y005244/1.
This paper on the (meta-)theory of structural operational semantics (SOS) is motivated by the following two questions: (1) Is the (meta-)theory of SOS declining as a field of research? (2) If so, can that field be rejuvenated with a renewed purpose? In this paper, we consider possible answers to these questions by analysing the history of the EXPRESS/SOS workshops, data on the authors of and presentations given at those workshops, and their themes. The results of our quantitative and qualitative analyses all indicate a diminishing interest in the theory of SOS as a field of research. Although 'all good things come to an end', in order to close this article on a brighter note, we address its second motivating question with some vision. To that end, based on personal experience, we analyse two flagship conferences in the field of programming languages (POP
2306.17792
Towards Improving the Performance of Pre-Trained Speech Models for Low-Resource Languages Through Lateral Inhibition
With the rise of bidirectional encoder representations from Transformer models in natural language processing, the speech community has adopted some of their development methodologies. Therefore, the Wav2Vec models were introduced to reduce the data required to obtain state-of-the-art results. This work leverages this knowledge and improves the performance of the pre-trained speech models by simply replacing the fine-tuning dense layer with a lateral inhibition layer inspired by the biological process. Our experiments on Romanian, a low-resource language, show an average improvement of 12.5% word error rate (WER) using the lateral inhibition layer. In addition, we obtain state-of-the-art results on both the Romanian Speech Corpus and the Robin Technical Acquisition Corpus with 1.78% WER and 29.64% WER, respectively.
Andrei-Marius Avram, Răzvan-Alexandru Smădu, Vasile Păiş, Dumitru-Clementin Cercel, Radu Ion, Dan Tufiş
2023-06-30T16:48:22
http://arxiv.org/abs/2306.17792v1
Towards Improving the Performance of Pre-Trained Speech Models for Low-Resource Languages Through Lateral Inhibition

###### Abstract

With the rise of bidirectional encoder representations from Transformer models in natural language processing, the speech community has adopted some of their development methodologies. Therefore, the Wav2Vec models were introduced to reduce the data required to obtain state-of-the-art results. This work leverages this knowledge and improves the performance of the pre-trained speech models by simply replacing the fine-tuning dense layer with a lateral inhibition layer inspired by the biological process. Our experiments on Romanian, a low-resource language, show an average improvement of 12.5% word error rate (WER) using the lateral inhibition layer. In addition, we obtain state-of-the-art results on both the Romanian Speech Corpus and the Robin Technical Acquisition Corpus with 1.78% WER and 29.64% WER, respectively.

Lateral Inhibition; Romanian Language; Speech Recognition; Wav2Vec 2.0

## I Introduction

Deep neural networks benefit from large amounts of annotated training data. However, annotated data is challenging to obtain in many settings. Except for English, generating the thousands of hours of transcribed audio necessary to train a state-of-the-art speech recognition system is infeasible for most languages worldwide. Self-supervised learning [1] has become the de facto technique for addressing this issue by first learning a general data representation from unlabeled samples and then transferring the accumulated knowledge to a downstream task via fine-tuning [2]. Working with self-supervision on unlabeled speech signals involves challenges similar to those in computer vision. Nevertheless, the research community has continued to build pre-trained models on audio that have pushed the state of the art in speech recognition further. Schneider et al. [3] introduced the Wav2Vec model, which encodes the input audio data into a latent space to create a contextualized representation employing a Transformer encoder [4]. Baevski et al. [5] built Wav2Vec 2.0 on top of the previous work, mainly using the same model architecture while changing the pre-training objective to a discretized contrastive loss similar to the masked language model strategy from natural language processing.

Introduced by Pais [6], the lateral inhibition layer helps the model to learn when the annotated data is scarce. This paper investigates its application to transcribing the human voice from audio files by integrating the lateral inhibition mechanism into a pre-trained automatic speech recognition (ASR) system. We choose the Wav2Vec 2.0 Base model pre-trained on 100k hours of unlabeled audio data extracted from VoxPopuli (i.e., Wav2Vec2.0-VP-100k) [7]. We run our experiments on a low-resource language, namely the Romanian language. Our results for the experimental setup with the lateral inhibition layer show an average improvement of 12.5% in word error rate (WER) across various dataset settings compared with the feed-forward layer. In addition, we obtain state-of-the-art results on the Romanian Speech Corpus (RSC) [8] with 1.78% WER, using less training data than the previous model, and on the Robin Technical Acquisition Speech Corpus (RTASC) [9] with 29.64% WER, using the same training data.
We can summarize our main contributions as follows: (i) applying the technique of neural lateral inhibition to ASR; (ii) performing an analysis of the improvements brought by the lateral inhibition layer; (iii) to the best of our knowledge, creating the first publicly available Romanian Wav2Vec 2.0 model1 (called RoWav2Vec2.0-VP-100k-LI) that was thoroughly evaluated on several benchmarks; and (iv) obtaining state-of-the-art performance on two Romanian ASR datasets.

Footnote 1: [https://huggingface.co/racai](https://huggingface.co/racai)

## II Lateral Inhibition

Inspired by the human brain's biological process of lateral inhibition, the neural lateral inhibition layer has been successfully applied in named entity recognition [6]. In this process, excited neurons reduce the activity of neighboring neurons in the human brain [10]. The process also sharpens perception in the visual cortex under challenging scenarios, such as low-light conditions [11]. Intuitively, we envisage that the new layer should be able to better focus on the actual voice data while possibly removing unwanted noise. Following the original formulation [6], the lateral inhibition layer is described as follows:

\[F(x)=x\cdot Diag(\Theta(x\cdot ZeroDiag(W)+b)) \tag{1}\]

where \(x\) is the input vector of the layer (i.e., the embedding representation produced by the RoWav2Vec2.0-VP-100k-LI model), \(Diag(\cdot)\) denotes a diagonal matrix having the diagonal set to the vector presented as a parameter, \(ZeroDiag(\cdot)\) generates a matrix with the zero value on the diagonal, \(W\) is the weight matrix, \(b\) corresponds to the bias values, and \(\Theta(\cdot)\) is the Heaviside function (see Equation 2).

\[\Theta(x)=\begin{cases}1,&x>0\\ 0,&x\leq 0\end{cases} \tag{2}\]

Following the analogy with the biological process, the Heaviside function determines which values can pass to the next layer. The decision is based on the adjacent values in the supplied embedding representation. Equation 1 is used for the forward pass, with the Heaviside function included, thereby providing a strict pass-or-reject functionality for the input values. However, in the backward pass, the non-differentiable Heaviside function is replaced with the parameterized sigmoid function [12] (see Equation 3, where \(k\) is the scaling parameter). This technique, known as surrogate gradient learning [13], allows using a known derivative (see Equation 4) in the backward pass.

\[\sigma(x)=\frac{1}{1+e^{-kx}} \tag{3}\]

\[\sigma^{\prime}(x)=k\sigma(x)\sigma(-x) \tag{4}\]

## III Experimental Settings

### _Dataset_

The fine-tuning of the RoWav2Vec2.0-VP-100k-LI model was done on a speech dataset composed of ten Romanian corpora with transcribed audio files. The corpora contain recordings from several domains, including Wikipedia, News, Internet, and Legal. The resulting dataset has approximately 300 hours of transcribed speech from 222.7k utterances. It contains both read and spontaneous speech, distributed in an imbalanced manner, with 229 hours of read speech and 71 hours of spontaneous speech. We further split our Romanian speech dataset into five subsets based on the total recording time by randomly sampling audio files without replacement until the desired size was reached: Small (S) - 10 minutes, Medium (M) - 1 hour, Large (L) - 10 hours, Extra Large (XL) - 100 hours, and Extra Extra Large (XXL) - the whole dataset.
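As a concrete illustration of Equations (1)-(4), the following is a minimal PyTorch sketch of a lateral inhibition layer with a surrogate-gradient Heaviside gate. It is our reconstruction from the formulas above, not the authors' released code; the embedding width of 768 (the Wav2Vec 2.0 Base hidden size) and the weight initialization are assumptions.

```python
import torch

class HeavisideSurrogate(torch.autograd.Function):
    """Forward: Heaviside step of Eq. (2). Backward: derivative of the
    parameterized sigmoid, sigma'(x) = k*sigma(x)*sigma(-x), cf. Eqs. (3)-(4)."""

    @staticmethod
    def forward(ctx, x, k):
        ctx.save_for_backward(x)
        ctx.k = k
        return (x > 0).to(x.dtype)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        s = torch.sigmoid(ctx.k * x)
        return grad_out * ctx.k * s * (1 - s), None


class LateralInhibition(torch.nn.Module):
    """F(x) = x * Theta(x @ ZeroDiag(W) + b), cf. Eq. (1); multiplying by
    Diag(gate) is the same as elementwise gating of x."""

    def __init__(self, dim: int = 768, k: float = 10.0):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.empty(dim, dim))
        self.bias = torch.nn.Parameter(torch.zeros(dim))
        torch.nn.init.xavier_uniform_(self.weight)  # assumed initialization
        self.k = k

    def forward(self, x):
        # ZeroDiag(W): zero the diagonal so a feature does not inhibit itself.
        w = self.weight - torch.diag(torch.diag(self.weight))
        gate = HeavisideSurrogate.apply(x @ w + self.bias, self.k)
        return x * gate
```

In the fine-tuning setup described next, such a layer would sit between the contextualized embeddings \(c_{i}\) and the dense projection onto the character vocabulary.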
The split was necessary to evaluate the lateral inhibition performance in more extreme settings, i.e., with fewer labeled audio files.

### _Fine-tuning_

We used the standard fine-tuning mechanism for the Wav2Vec 2.0 model as introduced in the original paper [5]. Therefore, using the raw audio input, we project the contextualized embeddings \(c_{i}\) obtained by the model for each time step \(i\) into a tensor \(y_{i}\) whose dimensions match the number of letters in the Romanian alphabet, plus the space character and the blank token. We project the data using either the standard fully-connected layer or the lateral inhibition layer followed by a dense layer. Using the connectionist temporal classification (CTC) algorithm [16], we compute the loss between the predicted logits and the target labels. We set \(k=10\) for the lateral inhibition layer, which we believe makes the parameterized sigmoid a good enough surrogate for the gradient of the Heaviside function. We employed the Adam method [17] to optimize the loss with a learning rate of \(3\times 10^{-5}\) and a weight decay of \(5\times 10^{-3}\). We fine-tuned each model on two NVIDIA 1080 TI GPUs. Due to GPU memory limitations, we set the batch size to 4 with a gradient accumulation of 8. In addition, we clipped the gradient norm to 2 to improve training stability [18].

## IV Results

### _Romanian ASR_

We evaluate our models, namely RoWav2Vec2.0-VP-100k (i.e., without lateral inhibition) and RoWav2Vec2.0-VP-100k-LI (i.e., with lateral inhibition), on the test set of three corpora: Spontaneous Speech Corpus (SSC) [19], RSC, and RTASC. Compared with previous works on Romanian ASR, the results of the evaluation regarding WER and character error rate (CER) are listed in Table I. In all our experiments, the decoding phase employs a 4-gram KenLM language model [20] trained on the textual part of the corpus for the contemporary Romanian language [21]. Our model with lateral inhibition, trained on the full dataset (i.e., RoWav2Vec2.0-VP-100k-LI-XXL), obtains state-of-the-art performance on the RSC and RTASC corpora, achieving 1.78% WER and 29.64% WER, respectively2. It improves the performance of the best Kaldi [22]-based ASR system, the Time Delay Neural Network - Recurrent Neural Network (TDNN-RNN) [15], by 1.01% WER on RSC, and also the performance of the Romanian DeepSpeech2 model [14] on RTASC by 7.57% WER.

Footnote 2: The large difference in WER between the two corpora comes from the type of utterances found in them: RSC contains common Romanian words and phonemes, while RTASC has more specific utterances from technology, with many words and phonemes borrowed from the English language.

However, our proposed models do not improve the performance on the SSC evaluation set, with our best variant (i.e., RoWav2Vec2.0-VP-100k-LI-XXL) falling 2.24% WER behind the TDNN-RNN architecture. The main driver behind this difference is the shortage of spontaneous speech data in our training corpus compared to the dataset used for training the state of the art. Specifically, the TDNN - Long Short-Term Memory (TDNN-LSTM), the Convolutional Neural Network - TDNN (CNN-TDNN), the TDNN, and the TDNN-RNN were all trained on a dataset with 235 hours of speech, namely 95 hours of read speech data from RSC and 140 hours of dedicated internal spontaneous speech data, similar to the one used in the SSC evaluation set. Meanwhile, we used only 71 hours of spontaneous speech data, approximately half the amount used to train the TDNN-based models.
On the other hand, our training corpus contains a larger amount of read speech data, at the expense of spontaneous speech data. Hence, the performance of our best variant on the RSC evaluation set may have benefited from this fact. However, RoWav2Vec2.0-VP-100k-LI-XL still achieves almost state-of-the-art performance with 1.80% WER on RSC, indicating that our methodology has not benefited too much from the increased amount of read speech data on this test set. Apart from our best model, the rest of the variants performed reasonably well on each evaluation task, given the low amount of available training data. The RoWav2Vec2.0-VP-100k model obtained good results when fine-tuned on the L, XL, and XXL subsets, but the word error rate rapidly increased when the training dataset was switched to the more extreme cases (i.e., the M and S subsets). For instance, on the RSC dataset, the variants fine-tuned on the L, XL, and XXL subsets maintained a fairly good performance, achieving 4.80%, 2.31%, and 2.01% WER, respectively (or 3.95%, 1.80%, and 1.78% WER, respectively, with the lateral inhibition layer). However, the WER increased by more than three times on the RSC M subset and more than eight times on the RSC S subset, with our model obtaining 16.55% and 44.78% WER, respectively (or 13.92% and 35.00% WER with the lateral inhibition layer).

### _Lateral Inhibition Layer Improvements_

We further analyze the improvements brought by the lateral inhibition in the RoWav2Vec2.0-VP-100k-LI models on all five evaluation subsets. An illustration of the difference in performance obtained by our model fine-tuned on all subsets is depicted in Figure 1. We observe that the lateral inhibition layer decreases the error rates of the RoWav2Vec2.0-VP-100k-LI models in all our experiments. We also notice that, on average, the improvements become more significant for the smaller subsets. We believe this results from the increased regularization when the lateral inhibition layer is employed, mainly because it allows the model to focus better on the features of the actual human voice, thereby learning to distinguish speech from noise even when the data is scarce. We also compute the average relative improvement of the lateral inhibition mechanism for all the RoWav2Vec2.0-VP-100k-LI variants on each evaluated corpus. We depict the results in Figure 2. The greatest improvements are achieved on the RSC evaluation subsets, the lateral inhibition layer reducing the WER on average by 17.8% and the CER by 16.1%. The lowest average WER improvement (i.e., 9.0%) is obtained on the RTASC evaluation subsets. Also, the lowest CER improvement (i.e., 11.4%) is obtained on the SSC evaluation subsets. The average improvement over all evaluation subsets is 12.5% for WER and 13.1% for CER.

## V Conclusions

Automatic speech recognition for low-resource languages remains an important research direction. In this work, we applied the recently introduced lateral inhibition layer, which helps speech recognition neural networks better distinguish the human voice from the surrounding noise. We performed experiments on the Romanian language using the RoWav2Vec2.0-VP-100k-LI models and a custom dataset composed of 300 hours of speech. The results showed that the lateral inhibition layer reduces, on average, the WER by 12.5% over all the evaluated test sets.
Furthermore, we achieved state-of-the-art performance on the RSC and RTASC datasets using this mechanism, obtaining 1.78% WER and 29.64% WER, respectively. Future work includes experimenting with the lateral inhibition layer on languages other than Romanian and evaluating it on a speech dataset containing more than 300 hours. In addition, we intend to fine-tune other variants of the Wav2Vec 2.0 model, pre-trained on various datasets and with different methodologies, to validate that our results generalize beyond the pre-trained variant employed in this work.

## Acknowledgements

The research has been funded by the University Politehnica of Bucharest through the PubArt program.
With the rise of bidirectional encoder representations from Transformer models in natural language processing, the speech community has adopted some of their development methodologies. The Wav2Vec models were therefore introduced to reduce the amount of data required to obtain state-of-the-art results. This work leverages that knowledge and improves the performance of pre-trained speech models by replacing the fine-tuning dense layer with a lateral inhibition layer derived from the biological process. Our experiments on Romanian, a low-resource language, show an average improvement of 12.5% in word error rate (WER) when using the lateral inhibition layer. In addition, we obtain 1.78% WER on the Romanian Speech Corpus and 29.64% WER on the Robin Technical Acquisition Corpus, both state-of-the-art results.
2301.13553
Millimetre-wave Radar for Low-Cost 3D Imaging: A Performance Study
Millimetre-wave (mmWave) radars can generate 3D point clouds to represent objects in the scene. However, the accuracy and density of the generated point cloud can be lower than those of a laser sensor. Although researchers have used mmWave radars for various applications, there are few quantitative evaluations of the quality of the point cloud generated by the radar, and there is a lack of a standard on how this quality can be assessed. This work aims to fill the gap in the literature. A radar simulator is built to evaluate the most common data processing chains of 3D point cloud construction and to examine the capability of the mmWave radar as a 3D imaging sensor under various factors. It will be shown that the radar detection can be noisy and have an imbalanced distribution. To address the problem, a novel super-resolution point cloud construction (SRPC) algorithm is proposed to improve the spatial resolution of the point cloud and is shown to be able to produce a more natural point cloud and reduce outliers.
Han Cui, Jiacheng Wu, Naim Dahnoun
2023-01-31T11:08:52
http://arxiv.org/abs/2301.13553v1
# Millimetre-wave Radar for Low-Cost 3D Imaging: A Performance Study

###### Abstract

Millimetre-wave (mmWave) radars can generate 3D point clouds to represent objects in the scene. However, the accuracy and density of the generated point cloud can be lower than those of a laser sensor. Although researchers have used mmWave radars for various applications, there are few quantitative evaluations of the quality of the point cloud generated by the radar, and there is a lack of a standard on how this quality can be assessed. This work aims to fill the gap in the literature. A radar simulator is built to evaluate the most common data processing chains of 3D point cloud construction and to examine the capability of the mmWave radar as a 3D imaging sensor under various factors. It will be shown that the radar detection can be noisy and have an imbalanced distribution. To address the problem, a novel super-resolution point cloud construction (SRPC) algorithm is proposed to improve the spatial resolution of the point cloud and is shown to be able to produce a more natural point cloud and reduce outliers.

mmWave radar, 3D imaging, point cloud

## I Introduction

Millimetre-wave (mmWave) radars have gained increasing popularity in many industries as an emerging type of sensor. The high bandwidth allows them to estimate the distance of an object at centimetre-level resolution, and the short wavelength and antenna size allow multiple antennas to be integrated into a single chip and measure the angle-of-arrival (AoA) of the object using multiple-input multiple-output (MIMO) techniques [1]. Combining the distance and AoA measurements, mmWave radars are able to construct a point cloud to represent the spatial shape of an object [2]. Therefore, mmWave radars can be used as a low-cost 3D imaging sensor, as an alternative to the traditional depth cameras and laser sensors. This allows many computer vision tasks, such as object detection [3], human tracking [4, 5], posture estimation [6, 7], and identification [4], to be addressed using mmWave radars as a non-intrusive solution. However, although many applications have been proposed that rely on the 3D imaging capability of mmWave radars, few researchers have attempted to evaluate the quality of mmWave radars' detection quantitatively, and there is a lack of a standard on how this quality can be assessed.

When detecting objects in a scene, the reflection signal can be seen as a time-delayed version of the transmitted signal. Combining the two signals gives an intermediate frequency (IF) signal, whose frequency and phase are determined by the time-of-flight (ToF) of the signal and, equivalently, the distance between the object and the radar [2]. The distance can be estimated directly from the IF signal of one pair of transmitter and receiver at high resolution, whereas the AoA needs to be estimated from the signal phase over a linearly spaced antenna array. There is rich literature on antenna array-based AoA estimation for traditional radars [8], and the same concepts also apply to mmWave radars. This paper discusses AoA estimation in the context of mmWave radar 3D imaging and reviews the 3D point cloud construction techniques that are commonly used with mmWave radars. This paper presents a purpose-built simulation system that simulates the data acquisition process of a mmWave radar when facing a scene.
Radar data simulation allows researchers to focus on algorithm design and verification, instead of investing too much time in the hardware and real-world data collection. Existing radar simulators are often not designed for 3D imaging and have certain constraints. For example, the system in [9] generates range and Doppler information of the radar rather than the raw data, the system in [10] only supports single-antenna data generation and cannot be used to estimate the AoA, and the system in [11] only supports up to four receivers in one direction and cannot be used for 3D imaging. In this research, a lightweight mmWave radar simulator is designed that supports raw data generation of a multi-antenna mmWave radar, configurable antenna parameters and layout, and customized scene construction using 3D human models with programmable motions.

Using the simulation system, a quantitative evaluation of the radar's capability of imaging a human subject is carried out, as well as an evaluation of the key factors that could affect the output quality, including the data processing chain (DPC), radar antenna configuration, chirp configuration, subject velocity, and signal-to-noise ratio (SNR) in the scene. It will be shown that, although the radar can capture the spatial information of the subject's body shape, the detected point cloud can be noisy, sparse and imbalanced, and can require further processing before being used for higher-level applications. Finally, a novel super-resolution point cloud construction (SRPC) algorithm is proposed to improve the spatial resolution of the point cloud and is shown to be able to produce a more natural point cloud and reduce outliers.

The contribution of this paper can be summarized as follows:

* It presents a simulator of mmWave radar that can simulate the radar data as if it is placed in a real scene. It supports customized 3D models to be imported as the ground truth and provides a framework for evaluating a 3D imaging algorithm quantitatively.
* It presents a systematic study of 3D imaging algorithms using mmWave radars and an evaluation of the key factors that could affect the radar detection. It highlights the challenges of the noisy and imbalanced point cloud.
* It presents a novel SRPC algorithm that can be inserted into the traditional point cloud construction DPC and can improve the quality of the point cloud.

The rest of the paper is organized as follows. Section II discusses the background and related work. Section III introduces the preliminaries of mmWave radars. Section IV presents the details of the simulator. Section V discusses the 3D imaging DPC using mmWave radars. Section VI presents the experimental results based on the simulation system. Section VII presents the novel SRPC algorithm and shows how it can improve the radar detection. Section VIII concludes the work.

## II Background

Traditionally, 3D imaging systems often use depth cameras (like stereo cameras or RGBD cameras) [12] or laser sensors [13], which are able to provide a dense and accurate 3D model of the object in front. However, camera-based systems can be intrusive and limited by the lighting conditions, and laser sensors are often constrained by their high cost (when compared with the cost of a mmWave radar, which is only around €10). Radar-based 2D imaging has also been used widely in applications like security, but such systems provide limited depth information and often rely on a dense antenna array that has a fixed region-of-interest [14].
Radar-based 3D object detection uses radio frequency (RF) signals at certain frequencies to detect objects, and the resolution of the detection largely depends on the available bandwidth. For example, WiFi devices operating at \(5\,\mathrm{GHz}\) with a \(40\,\mathrm{MHz}\) bandwidth can locate people with sub-meter level resolution [15], and ultra-wide band (UWB) devices at higher frequency bands can achieve centimetre-level resolution [16]. With mmWave radars operating at above \(60\,\mathrm{GHz}\), the range resolution can be below \(5\,\mathrm{cm}\)[2] and even at the micrometre level when pointing to a corner reflector [17]. The high resolution has therefore earned mmWave radars great popularity in automotive driving applications, and researchers are actively investigating their usage in computer vision tasks.

Although mmWave radars can be used as 3D imaging sensors, the point cloud is often less accurate and noisier than that of the traditional systems [5]. Many methods have been proposed to improve the detection quality of a mmWave radar, such as [18, 19]. However, as these methods often use different radar configurations and scene setups, it is hard to carry out a quantitative comparison between them, and there is no standard on how to define the quality of the radar detection. This work aims to address the problem by providing a simulation system and a framework for a systematic evaluation of a 3D imaging algorithm.

Radar-based 3D imaging requires measuring the distance and AoA of the object. The distance is often measured through the ToF of the signal, whereas the AoA measurement relies on the use of an antenna array. Since the antennas in the array have different physical locations, the ToF at each antenna will be different, and the AoA of the object can be estimated by investigating the signal difference. This process has been studied in depth for traditional long-range radars [8], and the same principle can be applied to mmWave radars on a smaller scale. There are many algorithms designed for estimating the AoA based on a linearly spaced antenna array, such as the FFT-based method, the beamforming method and the subspace method (more details in Section III-C). These algorithms provide a trade-off between the computational complexity and the angular resolution [8, 20]. However, in contrast to traditional radar systems, where the signal sources are often well-defined and uncorrelated, signal sources in 3D imaging can be one object with a continuous surface, which can require a different DPC. This paper discusses the traditional AoA estimation algorithms in the context of 3D imaging using a mmWave radar and investigates the key factors that would affect the detection result.

## III mmWave Radar Preliminaries

Commercial mmWave radars often implement the frequency modulated continuous wave (FMCW) model. The radar sends a modulated chirp signal, detects the signal reflection from any object, processes the signal and determines the range, velocity, and AoA of the object. The principle of the FMCW radar model has been documented in detail in the literature (e.g. [5]). This section will give a brief discussion of the fundamentals that are necessary for understanding this paper, with a particular focus on the AoA estimation. The radar sends an FMCW signal and receives its reflection from the object in the scene, where the reflection will be a time-delayed version of the transmitted signal.
The two signals are mixed to produce an IF signal, as shown in Equation (1) (more details in Appendix A): \[IF(t)=Ae^{j(\omega_{b}t+\phi_{b})}\ \mathrm{where}\ \omega_{b}=2\pi S\tau,\ \phi_{b}=2\pi f_{0}\tau \tag{1}\] where \(S\) is the slope of the chirp, \(\tau\) is the ToF of the signal, \(f_{0}\) is the starting frequency of the chirp, and \(A\) represents the amplitude of the signal. After obtaining the IF signal, a DPC will be applied to determine the presence of any object. ### _Distance and Velocity Estimation_ For a single object, the frequency \(\omega_{b}\) will be a constant value and the distance of the object can be calculated as \[d=\frac{\tau c}{2}=\frac{\omega_{b}c}{4\pi S} \tag{2}\] When there are multiple reflection sources, the frequencies can be found by applying an FFT over the IF signal, which is referred to as the range-FFT. The velocity can be measured by transmitting multiple chirps at a known interval and calculating the phase difference between the chirps. Assuming the radar transmits a chirp every \(T_{c}\) seconds and a phase difference \(\Delta\phi\) is observed between successive chirps, then the velocity \(v\) of the object can be estimated as: \[v=\frac{\Delta\phi c}{4\pi T_{c}f_{0}} \tag{3}\] When there are multiple reflection sources moving at different velocities, they can be found by applying another FFT over the chirp phases, which is referred to as the Doppler-FFT. ### _AoA Estimation Principle_ The AoA of the object can be estimated by having multiple antennas operating concurrently and by comparing the phase difference between neighbouring receivers. Due to the spatial location difference between the receivers, the signal received at each receiver will have a slight phase difference depending on the relative position of the receivers and the AoA. The AoA can be computed in both azimuth and elevation directions, given that there exists more than one antenna in each direction. The azimuth and elevation angles will be denoted as \(\theta_{a}\) and \(\theta_{e}\), respectively, or \(\theta_{(a,e)}\) when referring to both of them. Assuming there are \(N_{a}\times N_{e}\) linearly spaced receivers in the azimuth and elevation directions, and \(M\) objects in different directions \(\theta_{(a,e)m}\), then each object can be viewed as a signal source and the receiving antenna array will receive a signal (denoted as \(x\)) as a weighted sum of the \(M\) data source: \[x^{(N_{a}\times N_{e})}=\sum_{m=1}^{M}\alpha_{m}s(\theta_{(a,e)m})+n \tag{4}\] where \(s(\theta_{(a,e)m})\) is the steering vector that represents the phase difference between receivers when a signal arrives with angle \(\theta_{(a,e)m}\), \(\alpha\) is an unknown parameter that models the signal transmission from the data source to the receivers, and \(n\) is the noise. The AoA estimation can be modelled as estimating the values of \(\theta_{(a,e)}\) for each object \(m\), given a set of receiver data (\(x\)). For linearly spaced arrays, the receivers are often separated by a small distance \(l\) that is equal to half of the signal wavelength, i.e. \(l=\frac{\lambda}{2}\), to maximize the angle-of-view (AoV) [8]. 
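To make Equations (1)-(3) concrete, the short NumPy sketch below simulates the IF samples of a single moving reflector and then reads off its distance from the range-FFT peak and its velocity from the FFT over the per-chirp phases. The chirp parameters are illustrative assumptions, not those of any particular device.

```python
import numpy as np

c = 3e8
f0, S = 77e9, 30e12                      # start frequency (Hz), slope (Hz/s)
fs, Ns, Nc, Tc = 10e6, 256, 64, 100e-6   # ADC rate, samples, chirps, period
d0, v0 = 2.0, 1.5                        # ground truth: 2 m at 1.5 m/s

t = np.arange(Ns) / fs                                  # fast time
tau = 2 * (d0 + v0 * np.arange(Nc)[:, None] * Tc) / c   # ToF per chirp
IF = np.exp(1j * 2 * np.pi * (S * tau * t + f0 * tau))  # Eq. (1), (Nc, Ns)

rfft = np.fft.fft(IF, axis=1)                  # range-FFT over fast time
k = np.argmax(np.abs(rfft[0]))                 # beat-frequency bin
d_est = (2 * np.pi * k * fs / Ns) * c / (4 * np.pi * S)      # Eq. (2)

dfft = np.fft.fftshift(np.fft.fft(rfft[:, k])) # Doppler-FFT over chirps
dphi = 2 * np.pi * (np.argmax(np.abs(dfft)) - Nc // 2) / Nc  # phase step
v_est = dphi * c / (4 * np.pi * Tc * f0)                     # Eq. (3)
```

Up to bin quantization, `d_est` and `v_est` recover the ground truth. The remaining unknowns for 3D imaging are the phase steps \(\Delta\phi_{a}\) and \(\Delta\phi_{e}\) across the antenna array, which depend on the array geometry considered next.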
When using an array of \(N_{a}\) azimuth receivers and \(N_{e}\) elevation receivers, each subsequent receiver beyond the first one will receive an additional phase change that can be expressed using a 2D steering vector (more details in Appendix B): \[s(\theta_{(a,e)},N_{a},N_{e})=\] \[\begin{bmatrix}1,&...,&e^{j(N_{a}-1)\Delta\phi_{a}}\\ e^{j\Delta\phi_{e}},&...,&e^{j(\Delta\phi_{e}+(N_{a}-1)\Delta\phi_{a})}\\...,&...,&...\\ e^{j(N_{e}-1)\Delta\phi_{e}},&...,&e^{j((N_{e}-1)\Delta\phi_{e}+(N_{a}-1) \Delta\phi_{a})}\end{bmatrix} \tag{5}\] 3D point cloud construction requires the x-y-z coordinates of the object instead of the azimuth and elevation angles. Therefore, the calculation of the exact value of \(\theta_{a}\) and \(\theta_{e}\) is often not required. Let \(d\) denote the distance of the object, then the 3D coordinates of the object can be calculated as (more details in Appendix B): \[x=d\frac{\Delta\phi_{a}}{\pi},\ z=d\frac{\Delta\phi_{e}}{\pi},\ y=\sqrt{d^{2} -x^{2}-z^{2}} \tag{6}\] Given that \(d\) can be obtained from the range-FFT as discussed in Section III-A, the x-y-z coordinates can be obtained if the phase differences \(\Delta\phi_{a}\) and \(\Delta\phi_{e}\) are known. Therefore, the AoA estimation of an object can be considered equivalently as searching for the best matching steering vector \(s(\theta_{(a,e)m})\) of the object. ### _AoA Estimation Algorithms_ In the following sections, some of the most widely-used AoA estimation algorithms will be discussed, including the FFT-based method, conventional beamforming (also known as the Bartlett beamforming or the delay-and-sum beamforming), the minimum variance distortionless response (MVDR) beamforming (also known as the Capon beamforming) [21], and the multiple signal classification (MUSIC) subspace method [22]. The angle-FFT method is a single-snapshot method that can make an estimate based on a single chirp, whereas the other methods are multi-snapshot methods that require a few chirps to make one estimate. The performance of the algorithms depends on several factors, including the antenna layout, number of antennas, chirp configuration, number of snapshots, SNR, environment, etc. #### Iii-C1 Angle-FFT Method The simplest way of estimating \(s(\theta_{(a,e)m})\) of an object \(m\) in Equation (4) is by using correlation between the receiver data \(x\) and the steering vector from the candidate angles. A set of candidate steering vectors \(s(\bar{\theta}_{(a,e)})\) is defined for \(\theta_{a}\in[-\pi,\pi],\theta_{e}\in[-\pi,\pi]\), and the correlation is calculated as \(s(\bar{\theta}_{(a,e)})\cdot x\), which will yield a peak output when \(\bar{\theta}_{(a,e)}\) equals to \(\theta_{(a,e)m}\). This process is equivalent to applying an FFT over the receiver data \(x\), since the steering vector can be considered the same as a set of FFT coefficients, which gives the frequency components in terms of \(\Delta\phi_{a}\) and \(\Delta\phi_{e}\). This FFT is also referred to as the angle-FFT. As an example, Figure 2 shows the antenna layout of the TI IWR6843 radar. It has three transmitters and four receivers, which can form a 12-receiver array when using MIMO techniques [1]. The phase of each virtual receiver is also shown, where \(\varphi\) is the random initial phase of the first receiver. 
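In code, the steering vector of Equation (5) and the coordinate mapping of Equation (6) are only a few lines of NumPy. This is our transcription of the formulas; the clamp inside the square root is an added numerical safeguard against noisy phase estimates.

```python
import numpy as np

def steering_vector(dphi_a, dphi_e, Na, Ne):
    """Eq. (5): entry (q, p) carries the phase q*dphi_e + p*dphi_a for
    elevation row q and azimuth column p of the virtual receiver array."""
    p, q = np.arange(Na), np.arange(Ne)[:, None]
    return np.exp(1j * (q * dphi_e + p * dphi_a))

def to_xyz(d, dphi_a, dphi_e):
    """Eq. (6): Cartesian coordinates from the range d and the phase
    steps, assuming half-wavelength receiver spacing (l = lambda/2)."""
    x = d * dphi_a / np.pi
    z = d * dphi_e / np.pi
    y = np.sqrt(max(d**2 - x**2 - z**2, 0.0))  # clamp against noise
    return x, y, z
```

With these helpers in mind, the received phases of the IWR6843 virtual array in Figure 2 can be unpacked as follows.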
The azimuth receivers will form a signal \(e^{j(\Delta\phi_{a}n+\varphi)}\) and the elevation receivers will form a signal \(e^{j(\Delta\phi_{a}n+2\Delta\phi_{a}+\varphi+\Delta\phi_{e})}\), where \(n\) is the receiver index in each direction. The value of \(\Delta\phi_{a}\) can be obtained by applying an azimuth-FFT over the azimuth receivers (RX1-RX4 and RX9-RX12), which will give the frequency \(\Delta\phi_{a}\) and phase \(\varphi\). Applying an FFT over the elevation receivers will again give the frequency \(\Delta\phi_{a}\), but with phase \(2\Delta\phi_{a}+\varphi+\Delta\phi_{e}\). Hence, the value of \(\Delta\phi_{e}\) can be calculated given \(\Delta\phi_{a}\) and \(\varphi\). An alternative approach to calculate \(\Delta\phi_{e}\) is by applying an elevation-FFT over a set of receivers in the elevation direction. For example, Figure 3 shows the layout of the TI overhead detection sensor (ODS) model, where the receivers form a near-square shape, which allows a 2D angle-FFT to be performed. The ODS models allow a higher elevation resolution at the cost of reduced azimuth resolution.

Fig. 1: Phase difference between two receivers from one signal source.

Fig. 2: IWR6843 radar antenna layout, the virtual receiver array and the received phases.

Fig. 3: IWR6843ODS radar antenna layout, the virtual receiver array and the received phases.

#### III-C2 Beamforming Method

Beamforming methods calculate a set of weights \(w^{(N_{rx}\times\Theta)}\) for the \(N_{rx}\) virtual receivers in the array (both azimuth and elevation), and for all possible angles \(\theta_{(a,e)}\in\Theta\), where \(\theta_{a}\in[-\pi,\pi],\theta_{e}\in[-\pi,\pi]\). When applying a column of weights to the receiver data \(x\), the signal from the direction \(\theta\) will receive constructive interference. By searching all possible angles \(\theta_{(a,e)}\), a power spectrum \(p\) with size \(\Theta\) can be obtained, where a high power in the spectrum indicates that there is a data source in that direction:

\[p=w^{H}x \tag{7}\]

where \(w^{H}\) is the Hermitian transposition of \(w\). The angles of the \(M\) objects can be obtained by taking the \(M\) highest peaks in \(p\) and finding the corresponding entries in \(w\). In the data model shown in Equation (4), signals reflected from objects will be correlated when being received at each receiver, whereas the noise will be uncorrelated. Therefore, one way to extract signal information from \(x\) is by calculating a sensor covariance matrix \(R_{x}\)[8]:

\[R_{x}=E\{x^{H}x\}\approx\frac{1}{N}\sum_{t=1}^{N}x^{H}(t)x(t) \tag{8}\]

where \(E\) represents the statistical expectation and \(x(t)\) represents one snapshot (or one frame) of the receiver data \(x\). When evaluating the beamforming power spectrum using multiple snapshots, the overall power spectrum becomes the statistical expectation of \(p\) in Equation (7) over the snapshots, which gives:

\[P=E\{|w^{H}x|^{2}\}=\frac{1}{N}\sum_{t=1}^{N}w^{H}x(t)x^{H}(t)w=w^{H}R_{x}w \tag{9}\]

Once the beamforming power spectrum is computed, the peaks in the spectrum will correspond to the signal from the objects. There are many algorithms designed for calculating the weights \(w\). The conventional beamforming uses the steering vector directly as the weights, which is conceptually equivalent to the angle-FFT method (or correlation-based method) in Section III-C1:

\[P_{conventional}=s^{H}R_{x}s \tag{10}\]

where \(s\) is the candidate steering vector in the format of Equation (5). There are also adaptive beamforming algorithms that calculate the weights using the signal information embedded in the covariance matrix.
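Equations (8)-(10) translate directly into a grid evaluation; the sketch below (ours, reusing `steering_vector` from above) computes the conventional beamforming spectrum, whose peaks mark the incoming directions.

```python
import numpy as np

def covariance(X):
    """Eq. (8): sensor covariance from snapshots X of shape (N_snap, N_rx)."""
    return X.conj().T @ X / X.shape[0]

def bartlett_spectrum(R, Na, Ne, n_grid=64):
    """Eq. (10): P = s^H R s, evaluated over a (dphi_e, dphi_a) grid."""
    grid = np.linspace(-np.pi, np.pi, n_grid)
    P = np.empty((n_grid, n_grid))
    for i, de in enumerate(grid):
        for j, da in enumerate(grid):
            s = steering_vector(da, de, Na, Ne).ravel()
            P[i, j] = (s.conj() @ R @ s).real
    return grid, P
```

The adaptive spectra follow the same grid-evaluation pattern, with \(R_{x}^{-1}\) or the noise subspace replacing \(R_{x}\) in the inner product.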
For example, the MVDR algorithm aims at minimizing the variance from all other directions while keeping the signal from the candidate direction distortionless [21]:

\[P_{mvdr}=\frac{1}{s^{H}R_{x}^{-1}s} \tag{11}\]

#### III-C3 Subspace Method

The core of the subspace method is that, since the signal \(x\) should contain \(M\) correlated signals and uncorrelated noise, the covariance matrix \(R_{x}\) should have \(M\) non-zero eigenvalues and \(N-M\) zero eigenvalues, where \(N\) is the rank of \(R_{x}\), which is equal to the number of receivers. The eigenvectors corresponding to the \(M\) eigenvalues form the signal subspace, and the eigenvectors corresponding to the zero eigenvalues form the noise subspace. The signal subspace and the noise subspace are orthogonal. One of the most widely-used subspace-based algorithms is the MUSIC algorithm [22]. It searches for steering vectors that are orthogonal to the noise subspace. The power spectrum of the MUSIC algorithm can be written as:

\[P_{music}=\frac{1}{s^{H}UU^{H}s} \tag{12}\]

where \(U\) is the set of eigenvectors corresponding to the zero eigenvalues.

## IV mmWave Radar Simulator

A simulator is designed to verify the discussed algorithms and evaluate the theoretical capability of using a mmWave radar as a 3D sensor. The simulator simulates mmWave radars with one transmitter and one receiving antenna array, which is practically equivalent to a multi-transmitter multi-receiver radar using an appropriate modulation scheme [1]. Any two neighbouring receivers in the array are separated by \(\lambda_{0}/2\), where \(\lambda_{0}\) (approximately \(3.9\,\mathrm{mm}\)) is the wavelength of the mmWave signal at its chirp starting frequency (\(77\,\mathrm{GHz}\)). The simulator simulates the IF signal at each receiver of a mmWave radar when pointing toward a scene. The scene is modelled to have \(M\) points, where each point has a unique x-y-z coordinate and represents the spatial location of the object in the scene. Each point is modelled as a corner reflector and reflects the mmWave signal sent out by the radar with the same reflectivity. The reflection area of the object is modelled by the number of points, i.e. a large object would have a higher number of points. The IF signal at a receiver during one chirp is modelled using Equation (1). Given a certain chirp configuration, the frequency and phase of the IF signal from one point are determined by the distance \(d\) between the point and the receiver. The amplitude of the IF signal is set to be inversely proportional to \(d^{4}\), to simulate the power loss due to distances according to the radar range equation [23]. The final IF signal at a receiver is the accumulated IF signals from all \(M\) points in the scene, with an additional white Gaussian noise \(n\), as shown in Equation (13):

\[IF(t)=\sum_{i=1}^{M}\frac{1}{d_{i}^{4}}e^{j(2\pi S\tau_{i}t+2\pi f_{0}\tau_{i})}+n \tag{13}\]

where \(\tau_{i}\) is the ToF of the signal from the transmitter to the point \(i\) and then to the receiver, and \(S\) is the slope of the chirp. The amplitude of the noise \(n\) is controlled by the desired SNR during the experiment. The signal \(IF(t)\) is sampled into a digital signal of length \(N_{s}\), where \(N_{s}=\) (duration of the chirp) \(\times\) (ADC sampling rate).
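The core of the simulator, Equation (13), is compact enough to sketch directly. The function below is our simplified illustration; in particular, deriving the noise amplitude from a target SNR is our own shorthand for "controlled by the desired SNR".

```python
import numpy as np

def simulate_if(points, tx_pos, rx_pos, t, S, f0, snr_db=20.0, seed=0):
    """Eq. (13): accumulate the IF contributions of all scene points at one
    receiver, with 1/d^4 amplitude decay and additive white Gaussian noise.
    `points` is an (M, 3) array of xyz reflectors in metres; `t` is the ADC
    sampling grid of one chirp."""
    rng = np.random.default_rng(seed)
    c = 3e8
    sig = np.zeros(len(t), dtype=complex)
    for p in points:
        d = np.linalg.norm(p - rx_pos)              # point-receiver distance
        tau = (np.linalg.norm(p - tx_pos) + d) / c  # round-trip ToF
        sig += np.exp(1j * 2 * np.pi * (S * tau * t + f0 * tau)) / d**4
    # White Gaussian noise scaled to reach the desired SNR.
    n_amp = np.sqrt(np.mean(np.abs(sig) ** 2) / 10 ** (snr_db / 10) / 2)
    return sig + n_amp * (rng.standard_normal(len(t)) +
                          1j * rng.standard_normal(len(t)))
```

Calling this once per receiver and per chirp yields the per-frame data layout described next.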
During one chirp, the radar receives a signal that can be represented as a 2D matrix of size \(N_{rx}\times N_{s}\), where \(N_{rx}\) is the number of receivers in the array. One frame includes \(N_{c}\) chirps that form a 3D matrix of size \(N_{rx}\times N_{c}\times N_{s}\), which becomes the input matrix of the point cloud construction algorithm, as shown in the input block of Figure 4. The design of the simulation system makes two assumptions. First, the multipath effect is not considered in this system. While the multipath effect is a long-standing issue that can cause power fading and ghost targets, it highly depends on the scene and the reflectivity of the objects and is hard to incorporate in the model, so it is left as future work. Second, a practical radar often uses multiple transmitters and receivers and an appropriate signal modulation scheme to separate the signals from different transmitters, such as time-division multiplexing and binary phase modulation [1], to achieve an equivalent single-transmitter-multi-receiver system. The simulation system assumes a perfect signal modulation scheme for this purpose and ignores any error or SNR loss that may be introduced during the modulation process.

## V Point Cloud Construction Algorithm

The construction of a point cloud takes an input matrix of size \(N_{rx}\times N_{c}\times N_{s}\) and outputs a 2D matrix \(PC_{K}\) of size \(K\times 3\) (referred to as the output point cloud), where \(K\) is the number of detected points and \(3\) is the x-y-z coordinates. This section studies one of the most common DPCs used on mmWave radars and its variant, which have shown success in many human activity recognition (HAR) systems, like in [4, 6, 24].

### _Data Processing Chains_

Two DPCs are implemented that differ in using a Doppler-FFT or not, as shown in Figure 4.

Fig. 4: Two possible DPCs for mmWave radar point cloud construction.

Both DPCs require a range-FFT over the raw data. The range-FFT identifies the frequency components in the IF signal that correspond to the distance of an object. It transforms the input matrix \(X\) of size \(N_{rx}\times N_{c}\times N_{s}\) into a range matrix \(R\) of size \(N_{rx}\times N_{c}\times N_{s}^{*}\), where \(N_{s}^{*}\) is the length of the range-FFT. The first DPC applies a Doppler-FFT on the data from all the chirps and generates a Range-Doppler heatmap of size \(N_{rx}\times N_{c}^{*}\times N_{s}^{*}\), where \(N_{c}^{*}\) is the length of the Doppler-FFT. Then, it searches for peaks in the Range-Doppler heatmap (using the average of all receivers), extracts the receivers' data for each peak and generates a 2D matrix of size \(K\times N_{rx}\), where \(K\) is the number of detected peaks and, equivalently, the number of detected points. A constant false alarm rate (CFAR) algorithm is used for detecting peaks from the Range-Doppler heatmap. The parameters of the CFAR control the sensitivity of the peak detection and are considered the hyperparameters of the system. Finally, a single-snapshot AoA estimation is applied to each point in the matrix, for a total of \(K\) times, to obtain the x-y-z coordinates of all detected points. The AoA estimation algorithm can be any of the angle-FFT, beamforming or subspace methods. Although the beamforming and subspace methods are multi-snapshot algorithms, the Doppler-FFT implicitly uses the information from all chirps and allows a good estimate of the covariance matrix at the AoA estimation stage. The second DPC does not include a Doppler-FFT.
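The text leaves the CFAR variant open, so as an assumed illustration, here is a one-dimensional cell-averaging CFAR of the kind commonly applied row- and column-wise to the Range-Doppler heatmap; the training/guard sizes and the scale factor are the hyperparameters mentioned above.

```python
import numpy as np

def ca_cfar_1d(power, n_train=8, n_guard=2, scale=4.0):
    """Cell-averaging CFAR: a cell is a detection if its power exceeds
    `scale` times the mean power of the surrounding training cells
    (the guard cells next to the cell under test are excluded)."""
    hits = []
    for i in range(n_train + n_guard, len(power) - n_train - n_guard):
        lead = power[i - n_guard - n_train : i - n_guard]
        lag = power[i + n_guard + 1 : i + n_guard + 1 + n_train]
        if power[i] > scale * (lead.sum() + lag.sum()) / (2 * n_train):
            hits.append(i)
    return hits
```

As noted above, the second DPC dispenses with the Doppler-FFT altogether.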
The second DPC does not include a Doppler-FFT. Instead, it considers the chirps as different snapshots and performs one multi-snapshot estimation for each range bin, for a total of \(N_{s}^{*}\) times. More specifically, the input range matrix of size \(N_{rx}\times N_{c}\times N_{s}^{*}\) is re-arranged into \(N_{s}^{*}\) instances of an \(N_{rx}\times N_{c}\) matrix, and the AoA estimation is applied to each \(N_{rx}\times N_{c}\) matrix using \(N_{c}\) snapshots. The AoA estimation algorithm can be either of the beamforming or subspace methods. Finally, the points detected at each range bin are concatenated into one point cloud. In this research, the angle-FFT, conventional beamforming, MVDR beamforming and MUSIC subspace methods described in Section III-C are studied.

### _Model Order Estimation_

As described in Section III-C2 and Section III-C3, the beamforming and subspace methods include an angle power spectrum computation step, where each peak in the spectrum corresponds to an incoming signal from a point. However, in both DPCs, the expected number of incoming signals is unknown in practice. Therefore, this number needs to be estimated from the signal data. This step is referred to as model order estimation. For this purpose, the covariance matrix of the signal data and its eigenvalues are computed. As described in Section III-C3, the covariance matrix has a size of \(N_{rx}\times N_{rx}\) and a full rank equal to \(N_{rx}\). There should be \(M\) large eigenvalues that correspond to the number of incoming signals and \(N_{rx}-M\) near-zero eigenvalues corresponding to noise. In practice, due to the presence of noise, the difference between these eigenvalues may not be significant. Therefore, the minimum description length (MDL) algorithm [25] is used for estimating the value of \(M\). It fits a statistical model using the eigenvalues and searches for the optimal value of \(M\) that minimizes a cost function. The MDL algorithm is used in both DPCs to estimate the number of incoming signals at the AoA estimation stage. Once the angle power spectrum is calculated, all the local maxima are found and the largest \(M_{mdl}\) peaks are taken as the output, where \(M_{mdl}\) is the value found by the MDL algorithm.

### _Steering Vector Searching_

The beamforming and subspace methods search for the steering vectors that maximize a power function. This search can be carried out using three approaches: an azimuth search followed by an elevation search, a full 2D azimuth-elevation search, or a 2D search using sub-grids. An example of the three approaches is shown in Figure 5.

Fig. 5: Three approaches when searching for the steering vectors. (a) An azimuth search (red) followed by an elevation search (black). (b) A full 2D azimuth-elevation search. (c) A 2D azimuth-elevation search using sub-grids.

In the example, the power spectrum shows the incoming direction of the signal. The space of the spectrum is sampled into a \(17\times 17\) grid and each vertex on the grid represents a candidate AoA to be tested. In the first approach, an azimuth AoA search is performed using the data from the azimuth receivers and steering vectors that only consider the azimuth angle. Then, based on the azimuth AoA output, a secondary search is performed in the elevation direction using the data from all receivers. This approach has the lowest computational cost (34 searches in the example), but the performance can be suboptimal, as the azimuth search may not cover the actual AoA. The second approach performs a 2D search that considers all possible combinations of the azimuth and elevation directions and uses data from all receivers. It is computationally expensive (289 searches in the example) but provides the most accurate estimate. The third approach defines several levels of grids and performs the AoA search at different granularities. It starts the search with a sparse grid, finds the peaks, defines a denser grid around each peak and performs the next search. The process can be repeated iteratively until the desired resolution is achieved. It reduces the computational cost of the second approach significantly, as it skips certain regions of the spectrum (50 searches in the example), at the cost of possibly missing some peaks.
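Putting the pieces together, the following is a minimal sketch of the MUSIC power spectrum of Equation (12) evaluated over a grid of candidate directions, including the sample covariance and the noise-subspace extraction on which the MDL-based model order estimate relies. The steering vector construction, the coordinate convention and all names are illustrative assumptions:

```python
import numpy as np

def steering_vector(rx_pos, az, el, lam):
    """rx_pos: (N_rx, 3) receiver positions in metres; az, el in radians."""
    k = (2 * np.pi / lam) * np.array([np.cos(el) * np.sin(az),
                                      np.cos(el) * np.cos(az),
                                      np.sin(el)])
    return np.exp(1j * rx_pos @ k)

def music_spectrum(snapshots, rx_pos, lam, m, grid):
    """snapshots: (N_rx, N_snap) data matrix; m: model order (e.g. from MDL);
    grid: list of candidate (az, el) pairs. Implements Eq. (12)."""
    n_rx, n_snap = snapshots.shape
    r = snapshots @ snapshots.conj().T / n_snap     # sample covariance R_x
    _, v = np.linalg.eigh(r)                        # eigenvalues in ascending order
    u = v[:, : n_rx - m]                            # noise subspace (smallest ones)
    p = np.empty(len(grid))
    for idx, (az, el) in enumerate(grid):
        s = steering_vector(rx_pos, az, el, lam)
        p[idx] = 1.0 / np.abs(s.conj() @ u @ u.conj().T @ s)
    return p
```

The sub-grid search of approach (c) simply calls `music_spectrum` repeatedly on progressively denser `grid` arguments around the peaks found so far.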
## VI Evaluation

### _Dataset_

The FAUST dataset [26] was used as the ground truth for the simulator, to evaluate the point cloud construction algorithms described above. The dataset contains human body models in the form of watertight triangulated meshes. The meshes were generated by a high-resolution camera system containing stereo cameras, RGB cameras and speckle projectors. The FAUST dataset contains 10 subjects and 30 static postures per subject, of which 10 postures are provided with aligned watertight models, giving 100 models in total. In the simulation, the models were placed at \(2\,\mathrm{m}\) from the radar, facing towards the radar. The height of the radar was set to the middle of each model. A ground truth point cloud was constructed from each model by randomly sampling \(M\) points from the surface of the mesh model, where each point was assumed to be a corner reflector. Some examples of the mesh models and point clouds are shown in Figure 6.

Fig. 6: Some examples of the mesh models and point clouds from the FAUST dataset.

The simulator computed a signal matrix for each point cloud to simulate the IF signal that would be received by the radar when placed in front of a subject, as described by Equation (13). The entire dataset of 100 models was split into 80 training models and 20 test models, where the training data was used for hyperparameter searching in the point cloud construction algorithms, and the test data was used for evaluating the algorithms. When generating the IF signal matrix, there are two sources of randomness: the noise term \(n\) introduced in Equation (13) and the random sampling of the ground truth point cloud from the mesh model. Therefore, all the evaluation processes were repeated 10 times for each mesh model and the average metrics were reported, to minimize any potential effect of the randomness.
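A sketch of the ground-truth construction is given below, assuming the third-party `trimesh` package for mesh loading and uniform surface sampling (any mesh library with surface sampling would do); the axis convention, with y pointing down-range, is also an assumption made for illustration:

```python
import numpy as np
import trimesh  # assumed dependency for mesh loading and surface sampling

def ground_truth_cloud(mesh_path, m=512, rng_dist=2.0):
    """Sample M points uniformly from a FAUST mesh surface and place the
    subject rng_dist metres in front of the radar."""
    mesh = trimesh.load(mesh_path)
    pts, _ = trimesh.sample.sample_surface(mesh, m)   # (M, 3) surface samples
    pts = np.asarray(pts)
    pts = pts - pts.mean(axis=0)    # centre the model so the radar sits mid-height
    pts[:, 1] += rng_dist           # push the subject down-range (y axis assumed)
    return pts
```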
### _Evaluation Metrics_

To evaluate the quality of the point cloud constructed by an algorithm, it is necessary to define evaluation metrics that compare the output point cloud against the ground truth point cloud. Let \(PC_{M}\) denote the ground truth point cloud and \(PC_{K}\) denote the point cloud generated by the radar, which are an \(M\times 3\) matrix and a \(K\times 3\) matrix, respectively. It is important to note that the point cloud construction algorithm can return a number of points different from the ground truth (\(M\neq K\)), and that \(PC_{K}\) can have a non-uniform distribution while \(PC_{M}\) is distributed uniformly on the mesh model. The evaluation metrics should take the two point clouds \(PC_{M}\) and \(PC_{K}\) as input and measure the similarity between them. First, two points are defined to be close to each other if their Euclidean distance is less than a certain distance \(D\). In this research, \(D\) is set to \(10\,\mathrm{cm}\) as an empirical estimate of the error tolerance of a HAR system. Then, the following terms and metrics are defined:

* Precision: the number of points in \(PC_{K}\) that have at least one close point in \(PC_{M}\), divided by \(K\). It evaluates how many points in \(PC_{K}\) are considered accurate.
* Sensitivity/Recall: the number of points in \(PC_{M}\) that have at least one close point in \(PC_{K}\), divided by \(M\). It evaluates how well \(PC_{K}\) covers the space of \(PC_{M}\).
* Fowlkes-Mallows index (FMI): the geometric mean of precision and sensitivity, i.e. \(\sqrt{\text{precision}\times\text{sensitivity}}\).
* Intersection over Union (IoU): establish two regular 3D voxel grids for \(PC_{K}\) and \(PC_{M}\) with the voxel size set to \(10\,\mathrm{cm}\times 10\,\mathrm{cm}\times 10\,\mathrm{cm}\), and consider a voxel occupied if at least one point falls inside it; the IoU is then the number of voxels occupied in both grids divided by the number of voxels occupied in either grid. The IoU evaluates the similarity of the two point clouds at the granularity of the voxel size.

An ideal system should have both high precision and high sensitivity, while the relative importance of the two depends on the application. In this section, the FMI, i.e. the geometric mean of precision and sensitivity, is used to indicate the performance of the system. The IoU also provides a good indication of how well the generated point cloud can represent the scene. However, as the calculation of the IoU is highly sensitive to the voxel size and to outliers, it is used as a secondary metric.
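For reference, all four metrics can be computed directly from the two point clouds. The sketch below assumes SciPy's KD-tree for the nearest-neighbour queries, with \(D\) and the voxel size both set to 10 cm as above:

```python
import numpy as np
from scipy.spatial import cKDTree

def point_cloud_metrics(pc_k, pc_m, d=0.10, voxel=0.10):
    """pc_k: (K, 3) detected cloud; pc_m: (M, 3) ground truth; units in metres."""
    near_gt = cKDTree(pc_m).query(pc_k, k=1)[0] <= d   # detected points near GT
    near_det = cKDTree(pc_k).query(pc_m, k=1)[0] <= d  # GT points covered
    precision = near_gt.mean()
    sensitivity = near_det.mean()
    fmi = np.sqrt(precision * sensitivity)
    # IoU over occupied voxels
    vox_k = {tuple(v) for v in np.floor(pc_k / voxel).astype(int)}
    vox_m = {tuple(v) for v in np.floor(pc_m / voxel).astype(int)}
    iou = len(vox_k & vox_m) / len(vox_k | vox_m)
    return precision, sensitivity, fmi, iou
```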
### _Data Processing Chain and Algorithms_

In the first experiment, the two DPCs combined with different AoA algorithms were evaluated and compared, in terms of the quality of the estimated point cloud and the computational cost. A baseline radar and scene configuration was designed to approximate a typical setup in a common indoor environment, as follows:

* One transmitter and a \(4\times 4\) uniform receiver array.
* The chirp frequency is \(77\,\mathrm{GHz}\) to \(81\,\mathrm{GHz}\), the slope is \(40\,\mathrm{MHz}/\mathrm{us}\), the chirp duration is \(100\,\mathrm{us}\), the ADC sampling rate is \(15\,\mathrm{MHz}\), each frame is \(50\,\mathrm{ms}\) with 50 chirps, and each chirp has 1500 samples (as shown in Figure 7).
* Each human mesh model is sampled into 512 points and placed \(2\,\mathrm{m}\) away from the radar.
* SNR is \(30\,\mathrm{dB}\).
* The subject moves away from the radar at a velocity of \(0.05\,\mathrm{m}/\mathrm{s}\).
* The AoA algorithm uses 512 bins to cover the \(\pm 90^{\circ}\) AoV, i.e. the angular resolution is \(0.35^{\circ}\).

Fig. 7: Chirp configuration of one frame in the baseline setup.

The velocity of the subject is introduced following the assumption that a real person cannot stay absolutely stationary during the measurement. At a velocity of \(0.05\,\mathrm{m}/\mathrm{s}\) and a frame time of \(50\,\mathrm{ms}\), the total displacement is \(2.5\,\mathrm{mm}\) and is considered negligible. The velocity provides a variation of the signal received at different chirps, as otherwise the multi-snapshot AoA estimation algorithms would receive an identical signal at all chirps and would yield a poor performance.

Combining the two DPCs with different AoA estimation algorithms, there are 14 methods in total to be evaluated. For each method, both the 1D search approach and the 2D sub-grid approach described in Section V-C are included. For the 2D angle-FFT method, the full-grid approach is used instead of the sub-grid approach, since the benefit of the lower computational cost is less significant for FFTs. The algorithms are referred to using the format "DPC-Method-1D/2D" throughout the paper. For example, DPC1-Conv-2D refers to the conventional beamforming method in DPC1 that uses a 2D steering vector search. The angle-FFT method is not applicable in DPC2 as it is not a multi-snapshot algorithm. Algorithms in DPC1 include a CFAR peak detection step on the Range-Doppler heatmap, where the optimal parameters for the CFAR were searched on the training dataset. Then, the performance of the algorithms on the test dataset was evaluated and compared. The results are shown in Table I and Table II as FMI and IoU (in % and with the standard deviation in parentheses), respectively.

TABLE I: FMI (standard deviation in parentheses) comparison between the algorithms when using a \(4\times 4\) receiver array and a subject velocity of \(0.05\,\mathrm{m}/\mathrm{s}\).

| FMI in % | Angle-FFT 1D | Angle-FFT 2D | Conv. BF 1D | Conv. BF 2D | MVDR BF 1D | MVDR BF 2D | MUSIC 1D | MUSIC 2D |
|---|---|---|---|---|---|---|---|---|
| DPC1 | 68.3 (7.5) | 68.2 (7.9) | 60.6 (8.7) | 67.2 (7.6) | 67.7 (7.6) | 74.5 (6.7) | 69.7 (6.9) | 77.0 (6.2) |
| DPC2 | NA | NA | 43.7 (7.8) | 46.5 (7.1) | 50.2 (7.6) | 53.1 (7.4) | 52.7 (7.4) | 53.2 (7.0) |

TABLE II: IoU (standard deviation in parentheses) comparison between the algorithms when using a \(4\times 4\) receiver array and a subject velocity of \(0.05\,\mathrm{m}/\mathrm{s}\).

| IoU in % | Angle-FFT 1D | Angle-FFT 2D | Conv. BF 1D | Conv. BF 2D | MVDR BF 1D | MVDR BF 2D | MUSIC 1D | MUSIC 2D |
|---|---|---|---|---|---|---|---|---|
| DPC1 | 21.2 (4.3) | 22.5 (4.6) | 14.6 (3.9) | 20.6 (4.1) | 18.0 (4.4) | 23.4 (4.1) | 19.0 (4.1) | 22.7 (3.5) |
| DPC2 | NA | NA | 11.2 (3.2) | 12.2 (3.0) | 13.2 (3.3) | 14.7 (4.4) | 14.6 (3.2) | 14.6 (3.3) |

There are a few important observations from the experiment. Even though the subject had a low velocity, DPC1 with a Doppler-FFT outperformed the other significantly. One main reason is that, as the number of receivers is much lower than the number of signals, the AoA estimation algorithm can fail to distinguish points at close angles. Instead, these points are identified as one strong signal source. On the contrary, the CFAR peak detection step in DPC1 picks a set of points around the peak that are above the CFAR threshold. As these points also contribute to the point cloud, the output becomes denser and the sensitivity improves. This effect can be observed from the example detection shown in Figure 8.

Fig. 8: Examples of the radar detection using the different algorithms, when using a \(4\times 4\) receiver array and a subject velocity of \(0.05\,\mathrm{m}/\mathrm{s}\).
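As an illustration of the peak detection step discussed above, a minimal one-dimensional cell-averaging CFAR is sketched below. The actual detector operates on the 2D Range-Doppler heatmap, and its training/guard/threshold parameters are the tuned hyperparameters, so the values here are placeholders:

```python
import numpy as np

def ca_cfar_1d(power, n_train=8, n_guard=2, scale=3.0):
    """Cell-averaging CFAR: a bin is a detection if its power exceeds
    `scale` times the mean of the surrounding training cells."""
    n = len(power)
    hits = []
    for i in range(n):
        left = power[max(0, i - n_guard - n_train): max(0, i - n_guard)]
        right = power[min(n, i + n_guard + 1): min(n, i + n_guard + n_train + 1)]
        train = np.concatenate([left, right])
        if train.size and power[i] > scale * train.mean():
            hits.append(i)
    return hits
```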
In terms of the different algorithms, the MVDR and MUSIC methods outperformed the angle-FFT and conventional beamforming methods, at the expense of higher complexity. Meanwhile, all the 2D methods outperformed the 1D methods due to a more fine-grained resolution (as shown earlier in Figure 5). The best performance was achieved with the DPC1-MVDR-2D and DPC1-MUSIC-2D methods, with an FMI of 74.5% and 77.0%, respectively. However, the IoU metrics show that the point clouds were still far from the objective of high-accuracy scene reconstruction, as the highest IoU was only 23.4%. It can be seen from Figure 8 that, while the distribution of the point cloud mostly fitted the subject, the distribution was not even and some body parts (like the hands) received fewer points. Therefore, there is still a big gap before the radar output can be directly used by applications that require high-quality data.

Table III compares the algorithms in terms of execution time. The algorithms were run on the same dataset with the same parameters multiple times. The algorithms were written in Python without any processor-specific optimization and were run on one Intel i7-9700K CPU core. The result is shown as the relative execution time of each algorithm compared with the DPC1-FFT-1D method (the most lightweight method) and normalized by the number of detected points, to give an indication of their relative complexity.

TABLE III: Normalized execution time comparison between the algorithms using the baseline setup.

| Normalized time | Angle-FFT 1D | Angle-FFT 2D | Conv. BF 1D | Conv. BF 2D | MVDR BF 1D | MVDR BF 2D | MUSIC 1D | MUSIC 2D |
|---|---|---|---|---|---|---|---|---|
| DPC1 | 1.00 | 13.42 | 4.38 | 9.32 | 3.51 | 8.99 | 4.02 | 8.85 |
| DPC2 | NA | NA | 5.69 | 12.38 | 5.31 | 10.89 | 5.67 | 10.61 |

All the 2D methods have a higher complexity than the 1D methods. Among the algorithms in DPC1, the 1D angle-FFT method has the lowest computational cost. With the sub-grid optimization, the complexity of the 2D beamforming and MUSIC methods can be kept at around twice that of the 1D methods. The complexity without the sub-grid optimization is expected to be much higher, as can be estimated from the difference between the 2D and 1D angle-FFT methods. Considering both the complexity and the performance, the DPC1-FFT-1D method provides a good trade-off between them. The MVDR and MUSIC methods in DPC1 give the best performance at the cost of 9x execution time and require additional effort in hardware and implementation. It is worth noting that many mmWave radar systems, like [6, 7, 27], are built on the DPC1-FFT-1D method. Therefore, these systems can potentially benefit from a more complex AoA estimation algorithm.

### _Subject Velocity_

The motion of the subject being sensed has a significant impact on the detection output. In DPC1, a higher velocity makes a subject easier to identify in the Range-Doppler heatmap. Due to the relative position differences between the body parts of the subject, they have different radial velocities with respect to the radar, making them distinguishable in the Range-Doppler heatmap. In DPC2, a higher velocity increases the variance of the signal between chirps and allows a better estimate of the data covariance matrix. To verify this reasoning, an experiment was carried out using the same configuration as the baseline setup, except that the velocity of the subject was set to different values from \(0.1\,\mathrm{m}/\mathrm{s}\) to \(1\,\mathrm{m}/\mathrm{s}\). The ground truth point cloud was taken as the average position of the subject during the motion. Table IV and Table V show two examples of the experiment where the subject velocity was set to \(0.5\,\mathrm{m}/\mathrm{s}\) and \(1\,\mathrm{m}/\mathrm{s}\), respectively.
When compared with Table I, all algorithms achieved a 2.6% to 14.5% improvement in terms of the FMI when the subject had an increased velocity. Figure 9 shows the FMI and IoU of the DPC1-MUSIC-2D method with different subject velocities from \(0.1\,\mathrm{m/s}\) to \(1\,\mathrm{m/s}\). An overall positive correlation can be observed between the subject velocity and the detection performance, and the impact is the most obvious at lower velocities (around \(0.5\,\mathrm{m/s}\)). Some examples of the detection at \(1\,\mathrm{m/s}\) are shown in Figure 10.

TABLE IV: Relative FMI difference of the algorithms when using a \(4\times 4\) receiver array and a subject velocity of \(0.5\,\mathrm{m/s}\) in comparison to \(0.05\,\mathrm{m/s}\).

| FMI in % | Angle-FFT 1D | Angle-FFT 2D | Conv. BF 1D | Conv. BF 2D | MVDR BF 1D | MVDR BF 2D | MUSIC 1D | MUSIC 2D |
|---|---|---|---|---|---|---|---|---|
| DPC1 | +8.3 | +11.4 | +12.1 | +10.7 | +10.4 | +8.8 | +10.1 | +7.7 |
| DPC2 | NA | NA | +4.6 | +2.6 | +5.3 | +4.6 | +8.4 | +5.6 |

TABLE V: Relative FMI difference of the algorithms when using a \(4\times 4\) receiver array and a subject velocity of \(1\,\mathrm{m/s}\) in comparison to \(0.05\,\mathrm{m/s}\).

| FMI in % | Angle-FFT 1D | Angle-FFT 2D | Conv. BF 1D | Conv. BF 2D | MVDR BF 1D | MVDR BF 2D | MUSIC 1D | MUSIC 2D |
|---|---|---|---|---|---|---|---|---|
| DPC1 | +9.7 | +13.0 | +14.5 | +12.8 | +12.0 | +9.8 | +11.6 | +8.2 |
| DPC2 | NA | NA | +5.3 | +2.6 | +4.3 | +3.4 | +9.3 | +5.8 |

Fig. 9: FMI and IoU (with errors) of the DPC1-MUSIC-2D algorithm with different subject velocities.

Fig. 10: Examples of the radar detection using the different algorithms, when using a \(4\times 4\) receiver array and a subject velocity of \(1\,\mathrm{m/s}\).

### _SNR_

In a practical environment, a radar system can experience noise from different sources, such as the thermal noise of the radar chip. The SNR also depends on the distance between the radar and the subject, as the signal power drops quickly with distance. In the simulator, the SNR can be controlled by the power of the noise term \(n\) in Equation (13). In this section, the performance of the algorithms in a high SNR environment (\(30\,\mathrm{dB}\)) and in a lower SNR environment (\(5\,\mathrm{dB}\)) is compared. Two experiments were carried out with the subject velocity set to \(0.05\,\mathrm{m/s}\) and \(0.5\,\mathrm{m/s}\), respectively. The results are shown in Table VI and Table VII.

TABLE VI: Performance difference when using a \(4\times 4\) receiver array and a subject velocity of \(0.05\,\mathrm{m/s}\) in a low SNR environment (\(5\,\mathrm{dB}\) in comparison to \(30\,\mathrm{dB}\)).

| FMI in % | Angle-FFT 1D | Angle-FFT 2D | Conv. BF 1D | Conv. BF 2D | MVDR BF 1D | MVDR BF 2D | MUSIC 1D | MUSIC 2D |
|---|---|---|---|---|---|---|---|---|
| DPC1 | -8.1 | -5.8 | -5.5 | -5.6 | -6.4 | -6.3 | -5.7 | -6.2 |
| DPC2 | NA | NA | +2.2 | +2.8 | +1.7 | +2.7 | +1.6 | +2.4 |

TABLE VII: Performance difference when using a \(4\times 4\) receiver array and a subject velocity of \(0.5\,\mathrm{m/s}\) in a low SNR environment (\(5\,\mathrm{dB}\) in comparison to \(30\,\mathrm{dB}\)).

| FMI in % | Angle-FFT 1D | Angle-FFT 2D | Conv. BF 1D | Conv. BF 2D | MVDR BF 1D | MVDR BF 2D | MUSIC 1D | MUSIC 2D |
|---|---|---|---|---|---|---|---|---|
| DPC1 | -8.1 | -5.8 | -5.5 | -5.6 | -6.4 | -6.3 | -5.7 | -6.2 |
| DPC2 | NA | NA | +2.2 | +2.8 | +1.7 | +2.7 | +1.6 | +2.4 |

In the low SNR environment, all the algorithms in DPC1 experienced a similar drop in performance, as expected. However, the algorithms in DPC2 showed a slightly higher performance. The reason is that the higher noise affected the model order estimation step, so the system tends to report a higher number of points. Taking the DPC2-Conv-2D method as an example, the average size of the detected point cloud was found to be 20.3% higher in the low SNR environment than in the higher SNR environment. However, this was still insufficient to reach a performance similar to DPC1.
### _Antenna Layout_

Theoretically, the antenna layout determines the angular resolution that an AoA estimation algorithm can achieve: the more receivers in one direction, the higher the resolution the radar can achieve in that direction [1]. However, this is questionable when the signal sources are spatially close and continuous. Meanwhile, having more antennas also increases the cost of the hardware, as more circuit components, processing units and memory are required. Therefore, it is beneficial to study the relationship between the antenna layout and the output quality and find the optimal trade-off for an application. Common commercial mmWave radars use up to three transmitters and up to four receivers, giving up to twelve virtual receivers as a receiving array. Some radar models are designed for automotive applications and prioritize the azimuth direction, while others are designed for general-purpose applications and have a similar resolution in both the azimuth and elevation directions. In this section, common antenna layouts implemented on the TI radars are evaluated and compared, as well as a few square-shaped antenna layouts that are more common in research projects, as listed in Figure 11. The same radar configuration and scene setup as in Section VI-C were used. The experiment compares the antenna layouts using the DPC1-MUSIC-2D algorithm (the best-performing algorithm). The result is shown in Table VIII.

Fig. 11: The list of receiver layouts being evaluated. (a)-(d) are square antenna arrays. (e)-(g) are non-regular antenna arrays implemented on TI radars.
It can be seen that most antenna layouts had similar performance, except layout (g), which performed worse as it is designed for automotive applications. Layout (e) has a non-uniform antenna distribution that slightly affected its performance. All other layouts showed a similar performance regardless of the antenna size. Therefore, considering the increased hardware cost and computational cost of a larger number of antennas, a small antenna array can be preferable for 3D sensing applications. Figure 12 shows some examples of the detection using different antenna layouts.

Fig. 12: Examples of the radar detection using the different antenna layouts with the baseline setup.

### _Chirp Configuration_

The chirp configuration can have various effects on the distance detection and velocity detection. These factors can indirectly affect the quality of the final point cloud. In this section, three different chirp configurations were tested and compared against the baseline configuration in Section VI-C. The details of the three configurations (named A, B and C) and the resulting performance are shown in Table IX. In each configuration, one parameter is reduced to 80% of its baseline value to evaluate the effect on the output. Configuration A has the chirp slope reduced to 80% and, hence, the effective bandwidth reduced from \(4\,\mathrm{GHz}\) to \(3.2\,\mathrm{GHz}\). Configuration B has the ADC sampling rate reduced to 80%, which reduces the samples per chirp from 1500 to 1200. Configuration C has the number of chirps per frame reduced to 80%, from 50 to 40. All other parameters were kept the same as the baseline, with the DPC1-MUSIC-2D algorithm.

TABLE IX: FMI (standard deviation in parentheses) comparison between four chirp configurations using the DPC1-MUSIC-2D algorithm.

| Chirp Configuration | Baseline | A | B | C |
|---|---|---|---|---|
| Slope of the chirp (MHz/us) | 40 | 32 | 40 | 40 |
| ADC sampling rate (MHz) | 15 | 15 | 12 | 15 |
| Chirps per frame | 50 | 50 | 50 | 40 |
| FMI in % | 77.0 (6.2) | 71.1 (6.8) | 76.5 (6.0) | 70.2 (6.0) |

The results show that the performance is strongly affected by the effective bandwidth and the number of chirps. The former affects the distance resolution of the detection, and the latter affects the Doppler resolution. Reducing either of these parameters reduces the accuracy of the range-Doppler heatmap and the estimation of the covariance matrix. On the other hand, the effect of reducing the ADC sampling rate and the number of samples per chirp is much less significant.

## VII Super-resolution Point Cloud Construction Algorithm

It can be seen from Figure 8 and Figure 10 that the constructed point clouds can be noisy and the distribution of the points can be imbalanced. One major reason is that the point cloud construction relies on the peak detection over the range-Doppler-FFT spectrum, so the distribution of the points is limited by the resolution of the FFT, and the points have a discrete distribution in the range domain (visible as curve-like patterns when viewing the point cloud from the left). Although it is possible to improve this resolution, for example by zero-padding the data before applying the FFT, this would also increase the computational cost and memory consumption. Meanwhile, there are falsely detected points caused by outliers from the peak detection stage. To address these issues and improve the quality of the constructed point cloud, a novel super-resolution point cloud construction (SRPC) algorithm is proposed. The SRPC algorithm aims to improve the distribution of the point cloud and make it span more naturally in the spatial space. The rationale is shown in Figure 13. When detecting peaks in a range-Doppler spectrum or an angle spectrum, a common approach is to take all points above a static or dynamic threshold, where the distribution of the points is limited by the resolution of the original data. An example of this effect is shown in Figure 13a, where the grid represents the resolution of the data and all the detected points must fall on the grid.

Fig. 13: Using the SRPC algorithm to improve the resolution and distribution of the data.
The SRPC algorithm aims to return a set of points that have a higher resolution than the original data and fall more naturally on the distribution curve, as shown in Figure 13b. The algorithm can be broken down into the following steps. First, the power spectrum is upsampled to the desired resolution using linear interpolation. Then, for each of the originally detected points \(i\in[1..K]\), the algorithm randomly samples \(n_{i}\) points around it, with the probability distribution given by the amplitude of the upsampled power spectrum. The value of \(n_{i}\) is calculated as: \[n_{i}=\frac{p_{i}\cdot\alpha_{SRPC}}{th} \tag{14}\] where \(p_{i}\) is the power of the point, \(th\) is the threshold of the peak detection algorithm, and \(\alpha_{SRPC}\) is a global hyperparameter that controls the aggressiveness of the algorithm. The term \(p_{i}\) ensures that a point with higher power is sampled into more points, as the power indicates the confidence that the point represents a real signal source. The parameter \(\alpha_{SRPC}\) amplifies the importance of \(p_{i}\): a higher \(\alpha_{SRPC}\) pushes the distribution of the points towards the peaks of the spectrum and gives a denser distribution. The sampling process is repeated for each point \(i\) to form a new point list. Finally, \(K\) points (the population of the original detection) are randomly selected from the new point list, so that the total number of detected points is kept the same and the computational cost of the rest of the system is not affected. Since the algorithm tends to sample more points at higher power, the distribution of the final points also tends to concentrate around higher powers and, hence, gives a more natural distribution with respect to the power spectrum, overcoming the limitation of the original data resolution. The time complexity of the SRPC algorithm is approximately \(O(K\cdot n_{i})\), where a typical value of \(n_{i}\) falls between 2 and 8.
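A minimal sketch of the SRPC resampling for a 1D power spectrum is given below. Equation (14) fixes \(n_{i}\); the choice of sampling neighbourhood around each original peak (here, plus or minus one original bin) and all names are illustrative assumptions:

```python
import numpy as np

def srpc_resample(peak_idx, spectrum, th, alpha=2.0, up=8):
    """Resample detected peaks at super-resolution (a sketch of Section VII).

    peak_idx: indices of the K originally detected peaks in `spectrum`;
    th: peak detection threshold; alpha: the hyperparameter alpha_SRPC.
    Returns K fractional bin positions drawn around the peaks."""
    if not len(peak_idx):
        return np.array([])
    n = len(spectrum)
    # upsample the power spectrum by linear interpolation
    fine = np.interp(np.linspace(0.0, n - 1.0, n * up), np.arange(n), spectrum)
    candidates = []
    for i in peak_idx:
        n_i = max(1, int(spectrum[i] * alpha / th))   # Eq. (14)
        lo = max(0, (i - 1) * up)                     # assumed neighbourhood of
        hi = min(len(fine), (i + 2) * up)             # +/- one original bin
        w = fine[lo:hi] / fine[lo:hi].sum()           # power as sampling probability
        picks = np.random.choice(np.arange(lo, hi), size=n_i, p=w)
        candidates.extend(picks / up)                 # back to fractional bins
    candidates = np.array(candidates)
    k = len(peak_idx)                                 # keep the original population K
    return np.random.choice(candidates, size=k, replace=len(candidates) < k)
```

Because the sampling weights follow the interpolated power, denser regions of the spectrum attract more of the final points, which is the effect described above.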
When constructing the point cloud, the SRPC is applied when detecting peaks from the range-FFT spectrum and when detecting peaks from the angle spectrum in the AoA estimation step. The former improves the data distribution in the range domain and eliminates the curve-like effect when looking at the point cloud from the left view. The latter improves the data distribution in the angle domain, so that the points tend to span into the space rather than appearing as a dense cluster. Meanwhile, since the points are distributed around higher powers, the probability of outliers is reduced.

To evaluate the proposed SRPC algorithm, it was inserted into the DPC1-FFT-1D and DPC1-MUSIC-2D methods described in Section VI-C, using the baseline setup. The two methods were chosen as they represent the most lightweight algorithm and the most accurate algorithm, respectively. Since the SRPC may produce point clouds of different sizes, to ensure a fair comparison, a fixed number of 512 points was randomly taken from the point cloud generated by each algorithm for the evaluation. The result is shown in Figure 14.

Fig. 14: Examples of point clouds constructed with and without the SRPC algorithm.

After applying the SRPC algorithm, the distribution of the point cloud appeared more natural and better distributed around the ground truth, and the outliers in the original detection were reduced. A quantitative evaluation is shown in Table X.

TABLE X: Performance comparison of two algorithms with and without SRPC.

| | DPC1-FFT-1D FMI | DPC1-FFT-1D IoU | DPC1-MUSIC-2D FMI | DPC1-MUSIC-2D IoU |
|---|---|---|---|---|
| Without SRPC | 64.9 | 20.2 | 72.1 | 22.9 |
| With SRPC | 69.5 | 23.6 | 72.9 | 25.9 |

The performance without SRPC dropped slightly when compared with Table I because the output size was forced to be 512, but both metrics improved after applying SRPC. It is therefore shown that the SRPC algorithm can successfully improve the point distribution, reduce the outliers and produce a more natural point cloud that can potentially be preferable for higher-level applications. Future work includes an efficient hardware implementation of this algorithm on the radar's on-chip processors, so that it can be further verified in real-world scenarios, as well as an evaluation of its effectiveness in higher-level applications like posture estimation.

## VIII Conclusion

In this paper, a mmWave radar simulator is presented. The system is used to evaluate the ability of the mmWave radar as a 3D imaging sensor. A mmWave radar dataset is constructed using the FAUST dataset as the ground truth to provide 3D mesh models of human subjects, from which mmWave radar IF signals are simulated and used to evaluate different point cloud construction algorithms. The FMI and IoU metrics are defined to evaluate the quality of the generated point cloud. The evaluation considers a set of different factors, including the DPCs, AoA estimation algorithms, subject velocity, SNR, antenna layout and chirp configuration. It was found that the DPC combining a range-Doppler-FFT and a single-snapshot AoA estimation algorithm gives better performance. Among all the AoA estimation algorithms, the angle-FFT method gives a good trade-off between high performance and low computational cost, whereas the more advanced AoA estimation algorithms, like MVDR and MUSIC, give the best performance at up to 9x the execution time. The velocity of the subject helps significantly in the detection, as the algorithms are better at detecting a moving subject than a stationary one. When comparing common antenna layouts, large square antenna arrays give the best performance, but the advantage is not significant in a 3D imaging application where the signal sources are spatially close and continuous. It is shown that the performance of the point cloud detection benefits from a higher effective bandwidth and a higher number of chirps per frame. Finally, a novel SRPC algorithm is proposed for improving the resolution and distribution of the point cloud and reducing the probability of outliers.
The algorithm applies to the range-Doppler-FFT peak detection stage and the AoA estimation stage and detects points at a higher resolution that fits the power spectrum better. When evaluating the algorithm using the simulation system, it has been shown that the algorithm can successfully improve the data distribution and produce a more natural point cloud.
Millimeter-wave (mmWave) radars can generate 3D point clouds that represent the objects in a scene. However, the accuracy and density of the generated point clouds can be lower than those of laser sensors. Researchers have used mmWave radars in a variety of applications, but quantitative evaluations of the point cloud quality remain limited, and criteria for assessing that quality are lacking. This work aims to fill this gap in the literature. A radar simulator is built to evaluate the most common data processing chains for 3D point cloud construction and to examine the capability of the mmWave radar as a 3D imaging sensor with respect to a variety of factors. It shows that the radar detection can be noisy and unevenly distributed. To address this issue, a novel super-resolution point cloud construction (SRPC) algorithm
2309.10678
Dialogues with algorithms
In this short paper we focus on human in the loop for rule-based software used for law enforcement. For example, one can think of software that computes fines like tachograph software, software that prepares evidence like DNA sequencing software or social profiling software to patrol in high-risk zones, among others. An important difference between a legal human agent and a software application lies in possible dialogues. A human agent can be interrogated to motivate her decisions. Often such dialogues with software are at best extremely hard and mostly impossible. We observe that the absence of a dialogue can seriously violate civil rights and legal principles like, for example, Transparency or Contestability. Thus, possible dialogues with legal algorithms are at the least highly desirable. Futuristic as this may sound, we observe that in various realms of formal methods, such dialogues are easily obtainable. However, this triggers the usual tension between the expressibility of the dialogue language and the feasibility of the corresponding computations.
Joost J. Joosten
2023-09-19T15:03:12
http://arxiv.org/abs/2309.10678v1
# Dialogues with algorithms

###### Abstract

In this short paper we focus on human in the loop for rule-based software used for law enforcement. For example, one can think of software that computes fines like tachograph software, software that prepares evidence like DNA sequencing software or social profiling software to patrol in high-risk zones, among others. An important difference between a legal human agent and a software application lies in possible dialogues. A human agent can be interrogated to motivate her decisions. Often such dialogues with software are at best extremely hard and mostly impossible. We observe that the absence of a dialogue can seriously violate civil rights and legal principles like, for example, Transparency or Contestability. Thus, possible dialogues with legal algorithms are at the least highly desirable. Futuristic as this may sound, we observe that in various realms of formal methods, such dialogues are easily obtainable. However, this triggers the usual tension between the expressibility of the dialogue language and the feasibility of the corresponding computations.

Keywords: Human in the loop, Legal software, Formal Methods, Dialogues with Software.

## 1 Human in the loop

It is commonly held that during automated legal decision making there should be human oversight and involvement. As a matter of fact, human involvement is anchored in various legal instruments, most notably in the European GDPR [1]; quoting from Article 22.1:

The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

The more recent European AI Act dedicates an entire article to human oversight; quoting from Article 14.1 (Human oversight):

High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use.

However, it is far from clear what such human involvement should look like. Some argue that having a human in the loop is often unjustly considered a magic potion that warrants correct decisions, and that human oversight in general falls short as a solution to the risks of algorithmic decision-making ([3]). Notwithstanding, all scholars agree that some degree of human oversight and involvement is needed, though, again, there is no common notion of what this should look like (see also [4]). Human oversight should minimise or mitigate undesired effects of using AI and automated decision making, like biases/discrimination, nudging, opaqueness or simply errors ([5, 6, 7]). An important parameter in the discussion on correct human involvement in legal automated decision making is the kind of AI that is used. For example, transparency in neural networks is much harder to achieve (if not impossible) than in old-style rule-based AI. In this paper we shall therefore focus on the latter since, as we shall see, recent developments in formal methods can facilitate certain rudimentary forms of interrogation of algorithms about how they perform and what kind of properties they have.

## 2 Black-boxes and dialogues

We decide to focus on classical legal computer programs in this paper, leaving other paradigms like neural networks and the like aside. Thus, we focus on programs that follow our human ideal logical reasoning schemes in an algorithmic fashion.
The resulting legal computer programs often lead to problems. Users or those affected may object to legal software, claiming a loss of transparency, oversight and understanding, and fearing errors. As a matter of fact, all substantially large computer programs do contain errors, be they typos, small design errors, or programmed biases. However, all these objections also hold for human actors in law enforcement, who also err, have personal inclinations and preferences and may be methodologically far from optimal. An essential difference between legal computer programs and human legal actors lies in the possibility of _dialogue_. For example, with a human legal actor there is less fear of loss of transparency. For sure, the legal actor may, and most likely will, be much more knowledgeable than us, but at least we have the feeling that we can pose questions, ask for explanations, and inquire about the basic assumptions. Those acts of interaction are, at the very least, cumbersome with a program and not accessible to the average citizen. In various known cases of erroneous software in the past, we see that interaction took very long and sometimes dragged on for years through various committees and groups of experts. A notorious example is the Dutch SyRI social risk scoring computer algorithm ([8]). SyRI would assign a fraud risk score to citizens, on the basis of which social support could be denied. Only after years of allegations and human tragedy for the individuals involved was it shown and understood that the algorithm was biased and, for example, would not act equally on citizens holding more than one nationality. We can imagine the amount of tragedy and time that could have been saved if, at an early stage, one could have asked the program how it functioned. For example, imagine that \(x\) and \(y\) range over data containers for individual citizens. And imagine that \(\mathcal{A}\) is the set of all attributes2 of those citizens, that \(\mathsf{NrOfPassports}(z)\) is a function that tells how many different passports (nationalities) an individual \(z\) possesses and that \(\mathsf{Score}(z)\) is the score that individual \(z\) obtains by the SyRI algorithm. We can then formalise the question of whether SyRI would be biased against multiple passport holders:

Footnote 2: For the sake of exposition we restrict to unary attributes, like "\(z\) is female", but the example can easily be extended to relations of higher arity.

\[\forall x,y\;\Big((\mathsf{NrOfPassports}(x)\neq\mathsf{NrOfPassports}(y))\wedge\bigwedge_{P\in\mathcal{A}}(P(x)\leftrightarrow P(y))\implies\;\mathsf{Score}(x)=\mathsf{Score}(y)\Big)\]

Clearly, this is a property that either holds or fails for the SyRI software. If citizens had been able to put this question to the program at an early stage, it is likely that the whole affair would have been less painful and time-consuming. In general, one can imagine that enabling dialogues with programs could restore trust and control in the interaction between human actors and software.
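To illustrate how such a query could be answered mechanically on concrete data, the following is a minimal sketch, assuming citizen records are simple attribute dictionaries (all names, including the keys `passports` and `score`, are hypothetical); it searches for counterexamples to the fairness property formalised above:

```python
def passport_bias_witnesses(citizens, attributes):
    """Search for counterexamples to the fairness property: pairs of records
    that agree on every attribute in `attributes` yet differ in passport
    count and in score. An empty result means no violation on this data."""
    witnesses = []
    for i, x in enumerate(citizens):
        for y in citizens[i + 1:]:
            if (all(x[p] == y[p] for p in attributes)
                    and x["passports"] != y["passports"]
                    and x["score"] != y["score"]):
                witnesses.append((x, y))
    return witnesses
```

Of course, an empirical search over stored cases only refutes or fails to refute the property; establishing it for all possible inputs is exactly the kind of question the formal methods of the next section address.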
We wish to stress that merely having access to the source code of a legal program is not a sufficient condition for transparency. Even for IT specialists it is extremely hard to fully understand the exact workings of computer code. As a matter of fact, in its full generality, complete understanding of source code is impossible, since it would imply that we could solve the unsolvable Halting Problem. However, it seems like a bare minimum to at least grant access to the source code, so that citizens that are affected by the functioning of that code may try to understand how the code works. In this regard it is worth mentioning the BOSCO case in Spain. BOSCO is a state-owned computer program that decides who qualifies for social financial support. Supposedly, BOSCO follows a fully determined legal text, yet numerous wrong judgements made by the program have been reported [9]. Notwithstanding substantiated claims of errors, the Spanish administration is still reluctant to disclose the software. This is to be contrasted with French practice and regulation [10], which strive for open software in public administrations. Ponce-Sole points out in [11] that Article 9.3 of the Spanish Constitution prohibits arbitrariness in legal decision making. The very same article, however, also requires that norms be publicly announced. This raises the question whether, if the implementation of a law fills in substantial blanks in that law or reinterprets it, this implementation should be allowed to be proprietary or to remain undisclosed. Note that filling blanks or reinterpreting is typically needed to go from natural language to executable code. Through the examples above, we think it is convincingly showcased that when there is no access to, nor precise understanding of, the source code, transparency is at stake and it becomes extremely hard for citizens to contest automated decisions that concern their rights.

## 3 Formal methods enable dialogues

Engaging in a dialogue with a software program may sound extremely futuristic. In a rudimentary form, however, this is possible and almost already in place. It can, however, only be applied when the software is embedded in an environment of formal methods, like software correctness proofs or model checking. Formal methods refers to a large collection of techniques where mathematics and logic are employed to reason about, most prominently, the correctness of algorithms and software. As we have seen, correctness and robustness are of utmost importance and currently receive much attention. Robustness is discussed in the European AI Act (e.g. [2], Article 15), and it seems that the only way to achieve a serious level of robustness is by employing formal methods. Often, the use of formal methods implicitly opens the door to enabling rudimentary dialogues with software. We shall discuss this for two paradigms: model checking and software synthesis through proof assistants. Let us start with the latter. Proof assistants typically check mathematical proofs for correctness. One may think: aren't mathematical proofs by definition correct? The answer is no. That is to say, most proofs will contain minor errors, though oftentimes these errors can easily be repaired by slightly tweaking the argument. Sometimes mathematicians do not even see the small error, since it is clear how the global logical structure of the argument goes. Another 'error' could be the omission of an easy reasoning step. Proof assistants like Coq, Isabelle or Lean, to mention only a few, are small computer programs that perform a simple task. When they are presented with a mathematical proof, they will check step by step that each alleged larger-scale reasoning step indeed has a proof. Using proof assistants in mathematics has led to various new insights, a few new theorems and numerous detections of flaws in proofs [12].
If the language of a proof assistant is rich enough, one can express software in it and, moreover, one can express substantial software _behaviour_ in it. Thus, in the environment of a proof assistant, one can make claims about the software. We call this a formal specification when the claims fully describe the desired behaviour of the software. Consequently, once a piece of software lives inside a proof assistant environment, this automatically enables questions to be posed about the software: behold, our dialogue. The caveat here is that it will be the programmer/user of the proof assistant who will need to provide a formally verified answer to the question. This means answering the question and proving that the answer is indeed correct. This may feel like falling short of a real dialogue, but at least certainty about the answers will be obtained (provided we accept that the very small proof assistant program itself is correct). Moreover, very little software is currently being obtained through the use of proof assistants, let alone legal software. In this context we mention a project to formalise European (freight) traffic regulation software inside Coq, which has so far resulted in a formally verified time library [13]. Within model checking in law, it seems that dialogues may come somewhat easier. A legal model checking paradigm is described in [14] and would run as follows. In this paradigm, a computable law would be expressed as a formula \(\varphi\) in some formal language \(\mathcal{L}\) that is rich enough to express the law under consideration. Next, we consider data files that describe particular cases against which the law should be tested. The data files are formally viewed as mathematical structures, often called models. Thus, we can consider each data file as a model \(\mathcal{M}\), and a different data file gives rise to a (typically) different model. If we wish to inquire whether the case \(\mathcal{M}\) is legal or not according to the law \(\varphi\), we resort to techniques of model checking and the question boils down to \[\mathcal{M}\models\varphi\ ?\] Thus, given a model \(\mathcal{M}\) and a formula \(\varphi\), does the model \(\mathcal{M}\) make the formula \(\varphi\) true, yes or no? We should stress here that this question in general need not be decidable (recall the Halting Problem), or, if it is decidable, it may not be feasible. The art in legal model checking thus resides in choosing the language \(\mathcal{L}\) rich enough so that various interesting laws can be expressed in it. On the other hand, the language should not be so rich that undecidability or unfeasibility kick in. Once such a balance is found, there are certain benefits of model checking over proof assistants: the same model-checking framework will work for a whole class of laws, whereas even minor tweaks in formally verified legal software may imply enormous tasks for the programmer to generate new proofs. Of course, a model checker does not directly yield error-free software, since the implementation may still contain errors. An optimal situation seems to arise if the model checking algorithm is itself implemented using proof assistants, but let us leave this matter aside here. One can also consider the _consistency question_: is the law \(\varphi\) consistent, that is, is there some situation/model \(\mathcal{M}\) that abides by the law, that is, is there some \(\mathcal{M}\) so that \(\mathcal{M}\models\varphi\)?
Directly related to the consistency question is the _tautology question_: is the law \(\varphi\) true in all possible situations/models? We use the standard notation \[\models\varphi\] for the tautology statement: \(\varphi\) holds true on all models \(\mathcal{M}\). In our setting this is tantamount to saying that the law \(\varphi\) is satisfied in every possible situation \(\mathcal{M}\). It must be observed that the question \(\models\varphi\) looks more complicated than \(\mathcal{M}\models\varphi\) for a particular model. In general this holds true, and the tautology question really is harder than the mere model checking question (whether it is strictly harder often depends on complexity questions like \(\mathsf{P}=\mathsf{NP}\)). Notwithstanding, for various logics, like Linear Temporal Logic, the corresponding tautology question is decidable with not too bad computational properties. Using the tautology question we can now enter into a dialogue with the law, as long as the dialogue is restricted to the linguistic fragment \(\mathcal{L}\). Let \(\psi\) be some property that can be expressed in \(\mathcal{L}\). The question of whether applying the law \(\varphi\) necessarily leads to having the property \(\psi\) can thus be stated as \[\models\varphi\rightarrow\psi.\] We observe the difference between the two paradigms: in the proof assistant environment we could directly ask questions about the software; however, these questions were to be answered and proven by the user. In the model checking environment, we can pose questions about the law \(\varphi\); in this case, however, the questions are automatically answered by the model (tautology) checking algorithm. One can argue that a formalisation \(\varphi\) of a computable law can actually be seen as a program. Up to now, laws are typically written in natural languages, and a formalisation \(\varphi\) in a logic \(\mathcal{L}\) can be seen as a program: a translation of a written computable law into a particular model of computation, with the formal specification being quite similar to a program.
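A toy rendering of the model checking question \(\mathcal{M}\models\varphi\) may make the paradigm concrete: a case is a data record and a computable law is a predicate on records. The rest-period rule below is only a simplified illustration inspired by tachograph-style regulation, not an actual legal text, and all names are hypothetical:

```python
def models(case, law):
    """Model checking M |= phi: does this particular case satisfy the law?"""
    return law(case)

# Illustrative 'computable law': a daily rest period is compliant if it lasts
# at least 11 hours, or at least 9 hours when a reduced rest is still
# available this week.
def rest_law(case):
    return (case["rest_hours"] >= 11
            or (case["rest_hours"] >= 9 and case["reductions_left"] > 0))

cases = [{"rest_hours": 10, "reductions_left": 1},   # one data file per case
         {"rest_hours": 10, "reductions_left": 0}]
print([models(c, rest_law) for c in cases])          # [True, False]
```

The consistency and tautology questions then quantify over all admissible records rather than over a stored list, which is where a genuine model or tautology checker for the chosen logic \(\mathcal{L}\) takes over.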
In this short paper we focus on the human in the loop for rule-based software used in law enforcement. For example, one can think of software that computes fines, such as tachograph software; software that prepares evidence, such as DNA sequencing software; or social profiling software, among others. An important difference between a legal human agent and a software application lies in possible dialogues. A human agent can be interrogated to motivate her decisions. Such dialogues with software are often extremely hard at best and mostly impossible. We observe that the absence of a dialogue can seriously violate civil rights and legal principles such as transparency or contestability. Thus, possible dialogues with legal algorithms are at the least highly desirable. Futuristic as this may sound, it has been observed that in various realms of formal methods such dialogues can be obtained
2309.08037
Gain and Phase: Decentralized Stability Conditions for Power Electronics-Dominated Power Systems
This paper proposes decentralized stability conditions for multi-converter systems based on the combination of the small gain theorem and the small phase theorem. Instead of directly computing the closed-loop dynamics, e.g., eigenvalues of the state-space matrix, or using the generalized Nyquist stability criterion, the proposed stability conditions are more scalable and computationally lighter, which aim at evaluating the closed-loop system stability by comparing the individual converter dynamics with the network dynamics in a decentralized and open-loop manner. Moreover, our approach can handle heterogeneous converters' dynamics and is suitable to analyze large-scale multi-converter power systems that contain grid-following (GFL), grid-forming (GFM) converters, and synchronous generators. Compared with other decentralized stability conditions, e.g., passivity-based stability conditions, the proposed conditions are significantly less conservative and can be generally satisfied in practice across the whole frequency range.
Linbin Huang, Dan Wang, Xiongfei Wang, Huanhai Xin, Ping Ju, Karl H. Johansson, Florian Dörfler
2023-09-14T21:58:50
http://arxiv.org/abs/2309.08037v2
# Gain and Phase: Decentralized Stability Conditions for Power Electronics-Dominated Power Systems ###### Abstract This paper proposes decentralized stability conditions for multi-converter systems based on the combination of the small gain theorem and the small phase theorem. Instead of directly computing the closed-loop dynamics, e.g., eigenvalues of the state-space matrix, or using the generalized Nyquist stability criterion, the proposed stability conditions are more scalable and computationally lighter, which aim at evaluating the closed-loop system stability by comparing the individual converter dynamics with the network dynamics in a decentralized and open-loop manner. Moreover, our approach can handle heterogeneous converters' dynamics and is suitable to analyze large-scale multi-converter systems that contain grid-following (GFL) and grid-forming (GFM) converters. Compared with other decentralized stability conditions, e.g., passivity-based stability conditions, the proposed conditions are significantly less conservative and can be generally satisfied in practice across the whole frequency range. Decentralized stability conditions, grid-forming control, grid-following control, power converters, power systems, small gain theorem, small phase theorem, small signal stability. ## I Introduction Power electronics converters play a significant role in modern power systems, acting as the interfaces between the power grid and renewable energy sources, high-voltage DC transmission systems, smart loads, energy storage systems, etc. The large-scale integration of power converters is changing the power system dynamics, as they have distinct dynamics compared with conventional synchronous generators [1]. Under such a background, new stability problems are emerging, and analyzing the stability of systems integrated with multiple power converters is essential for ensuring the secure operation of power systems [2]. In this paper, we focus on the small-signal stability of multi-converter systems. The small-signal stability analysis of power converters has been an important and popular topic for many years, due to the complicated dynamics in converters caused by the interaction among filters, multiple nested control loops, and the power grid. There have been many well-known methods to evaluate the stability of power converters, such as eigenvalue analysis [3], impedance-based analysis [4, 5, 6, 7], small gain theorem-based analysis [8], and passivity-based analysis [9, 10]. Eigenvalue analysis is based on deriving the state-space matrix of the system, which, in the context of multi-converter systems, requires a detailed, global, and closed-loop model of the whole system. Hence, it may suffer from scalability and dimensionality problems when dealing with large-scale systems. Compared with eigenvalue analysis, impedance-based analysis offers more insights into the system dynamics in a wide frequency range. Moreover, the impedance of the power grid and converters can be measured, so black-box models can be directly used for stability assessment [11]. In multi-converter systems, one may need to build the impedance network for stability analysis [12]. Nonetheless, the stability analysis relies on using the generalized Nyquist stability criterion or deriving the characteristic polynomial of the closed-loop system, which may still suffer from scalability and dimensionality problems. 
As a remedy, if all the converters in the system have homogeneous dynamics, one can mathematically decouple the system into small-scale subsystems, and then use state-space or impedance methods to analyze the subsystems [13, 14, 15]. For instance, Ref. [13] decouples a multi-infeed system that contains homogeneous grid-following (GFL) converters and analyzes the stability from the perspective of grid strength characterized by the generalized short-circuit ratio (gSCR). However, it has been widely acknowledged that GFL converters, which rely on phase-locked loops (PLLs) for grid synchronization, cannot support a power electronics-dominated power system. This is because PLL aims at tracking the grid frequency, and there must be frequency sources in the system such that GFL converters can operate in a stable way. Hence, we need the so-called grid-forming (GFM) converters. Typical GFM control methods include droop control [16, 17], virtual synchronous machines [18], synchronverters [19], virtual oscillator control [20, 21], and so on. The coexistence of GFM and GFL converters makes the stability analysis of multi-converter systems more complicated, and currently it is not clear how to evaluate the stability of large-scale multi-converter systems in a scalable and computationally feasible fashion. Passivity-based analysis can potentially be used to analyze the stability of GFM-GFL hybrid multi-converter systems in a scalable and decentralized manner, i.e., if all the converters are passive, then the interconnected multi-converter system is stable [9], but it may lead to overly conservative results. Moreover, the converter's dynamics in the low-frequency range may not satisfy the passivity condition when the synchronization dynamics are taken into account due to, for instance, the negative resistance effect of PLL [6, 9]. Recent advances in control and systems theory have extended the passivity condition by defining the phases of
This paper proposes decentralized stability conditions for multi-converter systems based on the combination of the small gain theorem and the small phase theorem. Instead of directly computing the closed-loop dynamics, e.g., the eigenvalues of the state-space matrix, or using the generalized Nyquist stability criterion, the proposed stability conditions are more scalable and computationally lighter, aiming to evaluate the stability of the closed-loop system by comparing the dynamics of the individual converters with the network dynamics. Moreover, this approach can handle heterogeneous converter dynamics and makes it possible to analyze large-scale multi-converter power systems containing grid-following (GFL) converters, grid-forming (GFM) converters, and synchronous generators. Compared with other decentralized stability conditions, the proposed conditions are significantly less conservative and can generally be satisfied in practice across the whole frequency range.
2309.13371
Small telescopes being effective: MAGIC or not?
The paper describes the MAGIC multi-mode focal reducer (Monitoring of Active Galaxies by Investigation of their Cores), commissioned on the 1-m Zeiss-1000 telescope of the Special Astrophysical Observatory of the Russian Academy of Sciences in September 2020. Three observational modes are currently realised: photometry, polarimetry, and long-slit spectroscopy. Reducing the focal length makes it possible to obtain a sufficiently large field of view for photometry and a large slit height for spectroscopy of $\sim$12$'$, as well as a large field of view for polarimetry with a quadrupole Wollaston prism of $\sim$6$'$.4. This feature makes the complex study of extended nebulae and galaxies efficient. The MAGIC capabilities are presented in examples of observations of various astronomical objects. The spectral mode in the range of 4000-7200 Å provides the spectral resolution $R \sim$ 1000; for a starlike target up to 14 mag in medium-band filters with a seeing of 1$''$ for 20 minutes of total exposure, the photometry accuracy is better than 0.01 mag and the polarization accuracy is better than 0.6%. Especially for the new focal reducer, an offset guide and a position angle rotation system were implemented. The results of the modernization of the baffle system in the optical scheme of the telescope for the suppression of scattered light are also described.
Victor L. Afanasiev, Eugene A. Malygin, Elena S. Shablovinskaya, Roman I. Uklein, Vladimir R. Amirkhanyan, Alexander E. Perepelitsyn, Irina V. Afanasieva
2023-09-23T13:20:51
http://arxiv.org/abs/2309.13371v1
# Small telescopes being effective: MAGIC or not? ###### Abstract The paper describes the MAGIC multi-mode focal reducer (Monitoring of Active Galaxies by Investigation of their Cores), commissioned on the 1-m Zeiss-1000 telescope of the Special Astrophysical Observatory of the Russian Academy of Sciences in September 2020. Three observational modes are currently realised: photometry, polarimetry, and long-slit spectroscopy. Reducing the focal length makes it possible to obtain a sufficiently large field of view for photometry and a large slit height for spectroscopy of \(\sim\)12\({}^{\prime}\), as well as a large field of view for polarimetry with a quadrupole Wollaston prism of \(\sim\)6\({}^{\prime}\).4. This feature makes the complex study of extended nebulae and galaxies efficient. The MAGIC capabilities are presented in examples of observations of various astronomical objects. The spectral mode in the range of 4000-7200 Å provides the spectral resolution \(R\sim\) 1000; for a starlike target up to 14 mag in medium-band filters with a _seeing_ of 1\({}^{\prime\prime}\) for 20 minutes of total exposure, the photometry accuracy is better than 0.01 mag and the polarization accuracy is better than 0.6%. Especially for the new focal reducer, an offset guide and a position angle rotation system were implemented. The results of the modernization of the baffle system in the optical scheme of the telescope for the suppression of scattered light are also described. keywords: astronomical observing techniques - devices and instruments - telescopes ## 1 Introduction The modern level of astronomical signal registration equipment and control systems allows small telescopes to solve observational tasks that were previously available only to large instruments. The operation of meter-class telescopes is not so strictly regulated by the observation schedule, which makes them more accessible for obtaining long-term observation series. Currently, plenty of monitoring campaigns are organized at small instruments worldwide for observations of relatively bright objects variable on time-scales from hours to years, as, e.g., active galactic nuclei (AGN). However, many of the small Cassegrain telescopes have large focal ratios, leading to a small image scale in the focal plane. The Zeiss-1000 telescope (with primary mirror diameter \(D=1\) m and focal length at the Cassegrain focus \(F=13.3\) m, Komarov et al., 2020) of the Special Astrophysical Observatory of the Russian Academy of Sciences (SAO RAS) also has a large focal ratio of \(F/13.3\). Thus, for a pixel of the linear size \(p=13.5\)\(\mu\)m, the scale in the focal plane is 0\({}^{\prime\prime}\).2/pix, providing oversampled images within the typical _seeing_ \(\beta\approx 1^{\prime\prime}.5\) at SAO (Panchuk & Afanas'ev, 2011). Moreover, when extended objects, e.g., nearby Seyfert galaxies, are of particular interest, the signal-to-noise ratio (S/N) no longer depends on _seeing_1 but scales as S/N \(\sim p\cdot D/F\) (obviously, this is true for optical systems not burdened by scattered light, which significantly reduces S/N). The manufacturing of a focal reducer (Courtes, 1960, 1964) naturally solves these problems. Decreasing the focal ratio from \(F/13.3\) to \(F/6.1\) leads to a larger scale of 0\({}^{\prime\prime}\).45/pix meeting the demands of optimal sampling (e.g. Howell, 2006). 
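The sampling figures quoted above follow from the plate-scale relation scale\([^{\prime\prime}/\mathrm{pix}]=206265\cdot p/F\); a quick check using only the values from the text:

```python
# Plate-scale check: scale ["/pix] = 206265 * pixel size / focal length.
p = 13.5e-6            # pixel size [m]
for F in (13.3, 6.1):  # focal lengths [m] before/after the focal reducer (D = 1 m)
    print(f"F = {F:4.1f} m -> {206265 * p / F:.2f} arcsec/pix")
# gives ~0.21"/pix at F/13.3 and ~0.46"/pix at F/6.1,
# matching the quoted 0".2/pix and 0".45/pix
```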
Moreover, it results in a larger field of view (FoV), important for extended objects, and also in the presence of a parallel beam allowing one to introduce dispersion elements or polarization analyzers. The latter extends the number of available observation modes for flexible reactions to weather conditions and the ability to apply diverse methods of investigation of astrophysical objects. Footnote 1: For star-like objects (S/N) \(\sim D/\beta\). For these reasons, considering the productive experience in the development of multi-mode cameras based on focal reducers over the past few decades [e.g., the focal reducer system for the 1.06-m \(F/14.5\) Cassegrain Zeiss telescope of the Hoher List Observatory (Geyer et al., 1979), the OREAD focal-reducing camera for the 1.27-m \(F/7.6\) McGraw-Hill telescope of the Michigan-Dartmouth-MIT Observatory (Aldering & Bothun, 1991), DFOSC for the Danish 1.54-m \(F/8.6\) telescope at La Silla Observatory (Andersen et al., 1995), and many other devices associated with the widespread use of compact small-sized telescopes with the Ritchey-Chretien system, which have a large aberration-free FoV but are not very fast], as well as our own positive twenty-year experience of operating focal reducers (devices of the SCORPIO family, Spectral Camera with Optical Reducer for Photometrical and Interferometrical Observations, Afanasiev & Moiseev, 2005, 2011) at the 6-m BTA telescope of SAO RAS, we have developed the multi-mode MAGIC focal reducer for the Zeiss-1000 of the SAO RAS, the parameters of which are given in Table 1. This device is aimed at a wide range of observational monitoring tasks within approaches developed at SAO RAS for the last 30 years (Shapovalova et al., 2004, 2019; Uklein et al., 2019; Malygin et al., 2020; Shablovinskaya et al., 2020), unified in the MAGIC project (Monitoring of Active Galaxies by Investigation of their Cores). Among other things, in the case of Zeiss-1000, the construction of the efficient device required additional modification of the telescope components, described in this paper. The paper structure is as follows. Section 2 describes the modernization of the optomechanical scheme of the 1-m Zeiss-1000 telescope of SAO RAS, namely the installation of shielding elements, a rotating platform and an offset guiding system. In Section 3 the MAGIC optomechanical scheme is given together with its characteristics. Section 4 discusses the features of observations in the modes of photometry, polarimetry, and long-slit spectroscopy and provides examples of observations. ## 2 Modernization of the optical-mechanical scheme of the telescope To increase the efficiency and accuracy of observations, we have upgraded the optomechanical scheme of the 1-m Zeiss-1000 telescope, as well as created the MAGIC multi-mode focal reducer. As part of the modernization of the telescope design, we introduced and changed several key components of the system: \(\rightarrow\) [Baffles] \(\rightarrow\) [Rotator + Guide] \(\rightarrow\) [Calibration illumination] \(\rightarrow\) [MAGIC] Arrows imply the path of incoming rays. After reflection on the primary and secondary mirrors of Zeiss-1000, the light is surrounded by baffles, then crosses the automated turntable consisting of the rotator and the offset guide, after which it passes through the calibration illumination module and only then enters the MAGIC entrance pupil. 
The modified components in the scheme complement the MAGIC device; however, they are permanent modifications of the entire optical system of the telescope and also work in conjunction with other devices operating at the Cassegrain focus. Nevertheless, all these modules are independent and separated from each other and can be removed if necessary. At the moment, the rotation and guiding modules are implemented on the telescope and are at the stage of test observations. Further in this section, we briefly describe these components in turn. Being an essential part of the telescope modernization, the module of telecentric illumination with discrete and continuous spectrum sources for spectral calibrations, designed similarly to the concepts implemented for the 6-m BTA telescope adapter (Afanasiev et al., 2017), is under development and will be the subject of an upcoming paper. ### Baffles The Zeiss-1000 telescope (Komarov et al., 2020) is a Ritchey-Chretien aplanat with two hyperbolic mirrors. Due to their design, Cassegrain telescopes are the most vulnerable to parasitic light reaching the detector during observations. Baffles have been installed into the telescope as a system of two surfaces: a truncated cone and a cylinder (near the secondary and primary mirrors, respectively). They are shown on the top panel of Fig. 1 and are called "front" and "rear" baffles. These are the default baffles originally installed into the telescope. This configuration provides an unvignetted field of \(\diameter 106\) mm (\(\sim\)27\(\arcmin\)) at the telescope focal plane. The baffles shield the detector from the most dangerous direct light and also prevent light reflected by the inner surface of the telescope tube from entering the field of view. However, the baffle near the main mirror causes additional scattered light, when direct light is reflected from the inner surface of the baffle at grazing incidence (Maksutov, 1946). Thus, due to light re-reflection on the inner surface of the rear baffle, a complex-shaped spot was formed on the detector, which, in effect, was an additive parasitic signal. We observed it as an intensity drop of the order of \(\sim\)10% at the edges of the calibration flat-field frame (Fig. 2, left panel). The maximum of this "bell" was shifting during observations depending on the position of the telescope tube, which introduced significant errors in obtaining flat field frames and data reduction. When processing scientific frames, scattered light cannot be taken into account, which worsens the accuracy of measurements of faint objects. Since we have a parasitic additive signal, the division of frames by the flat field also introduced a systematic error of about 10% towards the edges of the FoV and decreased the accuracy of high-precision photometric measurements. Moreover, the scattered light must contribute to the instrumental polarization, and its value is heterogeneous over the field. Firstly, we computed the exact solution of the optimum baffle design problem (Terebizh, 2001) for Zeiss-1000 to fully replace the original baffles. Yet, it appeared that the solution led to an unacceptably high linear obstruction coefficient (ratio of diameters of the widest baffle and the entrance pupil) \(\eta\sim 0.46\). Thus, we departed from the exact solution in favour of a more acceptable design. 
To suppress unwanted light, we installed four annular diaphragms with an internal diameter of 185 to 215 mm inside the existing rear baffle (it consists of two parts, with a total height of 1100 mm) and painted the components with high-absorption paint. We also made an additional cylindrical 976 mm high structure with five internal diaphragms, installed between the focal plane of the telescope and the default rear baffle, and passing through the central hole of the telescope's main mirror. A drawing of the baffles with annular diaphragms installed in the Zeiss-1000 optical system of the SAO RAS is shown in Fig. 1. \begin{table} \begin{tabular}{c c} \hline MAGIC main parameters & \\ \hline Input focal ratio of focal reducer & \(F/12.5\) \\ Total focal ratio at the Zeiss-1000 & \(F/6.1\) \\ QE (optics + telescope + CCD) & \(\sim\)50\% \\ Image quality (FWHM) & 0\(\arcsec\).3 \\ Spectral range & 340-990 nm \\ Weight & 23 kg \\ Dimensions & 430 \(\times\) 440 \(\times\) 265 mm \\ \hline CCD system & Andor iKon-L 936 \\ CCD & E2V CCD42-40 (BEX2-DD) \\ Format & 2048 \(\times\) 2048 pix \\ Pixel size & 13.5 \(\times\) 13.5 \(\mu\)m \\ QE & 400-850 nm: \(>\)90\% \\ & 340-980 nm: \(\sim\)40\% \\ Readnoise (min) & 2.2 e\({}^{-}\) \\ \hline Photometry & \\ FoV & 12\({}^{\prime}\) \\ Image scale (binning 1 \(\times\) 1) & 0\(\arcsec\).45/pix \\ Limiting mag (\(V\), 20 min, seeing \(\sim\) 1\(\arcsec\).1) & 22\({}^{\rm{m}}\).5 \\ \hline Stokes polarimetry & \\ FoV & 6\(\arcmin\).4 \(\times\) 6\(\arcmin\).4 \\ Image scale (binning 1 \(\times\) 1) & 0\(\arcsec\).45/pix \\ Accuracy (14 mag, 20 min, seeing \(\sim\) 1\(\arcsec\)) & 0.6\% \\ \hline \multicolumn{2}{c}{Long slit spectroscopy} \\ Spectral range & 400-720 nm \\ Spectral resolution & \(R\sim\) 1000 \\ Slit size & 1\(\arcsec\).7 \(\times\) 12\(\arcmin\) \\ Monochromatic slit image (FWHM) & 3.5 pix \\ Reciprocal dispersion & 0.2 nm/pix \\ \hline \end{tabular} \end{table} Table 1: The main parameters of MAGIC with a CCD on the Zeiss-1000 telescope of the SAO RAS Figure 1: Optical scheme of the Zeiss-1000 telescope after modernization of the baffles. The top panel shows the default front and rear baffles near the secondary and primary mirrors with annular diaphragms installed in the rear one. Also, in the scheme (to the right of the rear baffle) there is an additional construction with diaphragms, which we installed through the main mirror of the telescope. The middle panel indicates the sizes of the installed elements. The bottom panel shows the idea of arranging annular diaphragms described in Danjon & Couder (1935). Dimensions are in millimetres. The idea of annular diaphragms for refractors was described earlier in Danjon & Couder (1935) and is easily adapted to the design of a cylindrical baffle (the idea is visualized in the bottom panel of Fig. 1). Thus, diaphragms surround the useful light beam in the optical path and significantly reduce the level of unwanted light. A comparison of flat field frames obtained from the twilight sky _before_ and _after_ blackening the baffle, installing painted diaphragms in it, and installing an additional structure with diaphragms is shown in Fig. 2 on the left and right panels, respectively. After the upgrade, the intensity of the flat field does not drop at the edges of the FoV, which indicates effective blocking of direct and scattered beams in the telescope tube. ### Rotator and offset guide The rotator, offset guide, calibration illumination as well as baffles are device-independent modules. 
Since the end of 2022, the rotator and offset guide have been used in a test mode with the MAGIC device and are available for use with other devices installed at the Cassegrain focus. The calibration illumination is still under development. Below we briefly outline their necessity and main features. The details of the rotator and offset guiding system will be described in an upcoming paper (Amirkhanyan et al. 2024, in prep.). The Zeiss-1000 telescope was originally equipped with a manual rotator. We have upgraded the original Zeiss-1000 rotator by designing, manufacturing and assembling a construction comprising a large gear, a worm reducer, and a stepper motor with PCB control. Thus, this modification allows one to remotely rotate the devices installed at the Cassegrain focus to any given angle during the night, which makes observations using various methods much more efficient. The accuracy of the angle setting is \(\sim\)0\({}^{\circ}\).5. The offset guide is designed to correct the position of the Zeiss-1000 telescope tube based on images from a guide digital camera mounted on a small geared platform in the space inside the motorized rotator. An additional guiding module turned out to be necessary since the telescope's tracking error does not allow full-fledged exposures of several tens of minutes. Before the start of work on the production of the offset guide, the capabilities of the side telescope guide of the Zeiss-1000 telescope were tested. During guiding through the side telescope, we obtained a systematic drift of \(\sim\)2\({}^{\prime\prime}\).5 per hour, which became the prerequisite for the creation of the offset guide. The rotation of the offset guide platform makes it possible to quickly find available stars for guiding in the FoV of the telescope at the Cassegrain focus. The limiting magnitude of a star for guiding is \(\sim\)14 mag in the \(R\)-band. ## 3 MAGIC description The MAGIC device is a multi-mode focal reducer, allowing a flexible response to changing weather conditions due to several observational modes: direct images, polarimetry and long-slit spectroscopy. MAGIC is installed at the Cassegrain focus of the 1-m Zeiss-1000 telescope and works in conjunction with the components of the optical system described earlier (see Fig. 3), but does not depend on them. The weight of the device without a CCD detector is 23 kg, and the size is 430\(\times\)440\(\times\)265 mm. Figure 2: Comparison of the normalized flat field frames of the twilight sky _before_ (left) and _after_ (right) the installation of annular diaphragms in the default rear baffle and an additional tube, and blackening of the components. The cuts at the bottom correspond to the blue lines in the frames above. The horizontal axes of the bottom cuts correspond to the pixel location along the \(y\)-axis of the frame (the length of the blue line in the angular measure corresponds to 12\({}^{\prime}\)). Frames obtained with a 250 Å-width SED700 filter. The device is designed for an input focal ratio of \(F/12.5\) and, due to the collimator and camera, reduces it to \(F/6.1\), which solves the problem of oversampling for typical modern CCDs in the focus of Cassegrain telescopes and provides an advantage for observing faint extended objects. ### Optical design The optical part of the MAGIC focal reducer consists of a field lens, a collimator and a camera lens. The scheme is shown in Fig. 4. The collimator is a 5-lens apochromat with a focal length of 220 mm and forms the exit pupil of the system. 
The camera lens is a 6-lens apochromat with a focal length of 109 mm, which focuses the resulting image on the CCD detector. All optical surfaces have an anti-reflective coating, which ensures a transmission of each lens \(>\)80%. Figure 4: MAGIC contents: (1, 2) — filter wheels; (3) — collimator; (4) — focusing mechanism of the collimator; (5) — mode changing linear guide carriage; (6) — camera; (7) — the CCD detector. Figure 3: MAGIC in the Cassegrain focus. _Left_: An illustrative scheme with a transparent telescope tube. _Right_: photo of MAGIC and a round flat-field screen in the background. The integral transmission2 of the focal reducer optics, considering the reflection coefficient of the telescope mirrors and CCD efficiency, is shown in Fig. 5 and is QE \(\sim\) 50%. Footnote 2: The quantum efficiency of MAGIC optics and observational modes was measured by on-sky standard stars in medium-band filters with the known pass-bands. The optomechanics of the device allow movable optical elements to be introduced into the optical path. The optical filters can be additionally set in front of the collimator. Also, between the collimator and the camera, a volume phase holographic grism (VPHG) and a double Wollaston prism can be introduced into the parallel beam by moving the linear guide carriage perpendicular to the central axis of the device; other optical elements can also be installed on the carriage. The optical design of MAGIC was calculated in the ZEMAX software environment. The spot diagram in Fig. 6 shows what the calculated image of a point source looks like for a series of wavelengths from 365 nm to 900 nm at various distances from the central axis of the device from 0\({}^{\circ}\) to 0\({}^{\circ}\).12. The calculated polychromatic encircled energy (the fraction of the total energy in the point spread function) is shown in Fig. 7. The quality of the image formed by the optics is no worse than 10 \(\mu\)m in the plane of the CCD detector, which corresponds to FWHM \(\sim\) 0\({}^{\prime\prime}\).3. ### Electro-mechanical scheme In the MAGIC scheme (Fig. 4), the light from the telescope passes through the filter wheels (1) and (2). Each wheel has 9 positions for installing filters with a diameter of no more than 50 mm and a thickness of no more than 5 mm. The first wheel, in addition to optical filters, also includes: * _slit_ -- long slit (width 1\({}^{\prime\prime}\).7, linear width -- 0.11 mm) * _mask_ -- mask for the Wollaston prism (angular dimensions -- 6\({}^{\prime}\).4 \(\times\) 6\({}^{\prime}\).4, linear dimensions -- 25\(\times\)25 mm) * _dots_ -- a matrix of 8\(\times\)8 pinholes with a diameter of 0.1 mm and a step of 3 mm for focusing optics and estimating geometric distortions in polarimetry mode (linear dimensions -- 25\(\times\)25 mm) The zero position in each wheel is always empty, and given the constant presence of \(slit\), \(mask\) and \(dots\), we have 13 positions to install the necessary replaceable filters. Next, there is the collimator (3) with the focusing mechanism (4). At the heart of MAGIC is the mode-changing linear guide carriage for 4 positions (5) with the VPH-grism and the Wollaston prism. The switching time between adjacent carriage positions is 1 min. After the mode carriage, the light passes through the camera (6) to the CCD detector (7). To change the configuration, MAGIC has 4 stepper motors: two -- for rotating the filter wheels (1) and (2) and two more -- for the collimator focusing mechanism (4) and moving the linear guide carriage (5). 
The control program from the onboard PC sends commands to the ATmega8535 microprocessor, which controls the configuration and activates the mechanics of the device. The motors are controlled via the serial port from the graphical user interface (Fig. 10). ### CCD characteristics An Andor iKon-L 936 CCD system with a BEX2-DD type 2048 \(\times\) 2048 pix E2V CCD42-40 with a pixel size of 13.5 \(\times\) 13.5 \(\mu\)m is used as a detector. The mass of the CCD system is 7 kg. The quantum efficiency of this device is \(>\)90% in the range of 400-850 nm (see Fig. 5) and not less than 40% in the range of 340-990 nm, which is the working spectral range of MAGIC due to its optics. We use default air cooling, which makes it possible to conduct observations with a CCD temperature of about \(-\)80\({}^{\circ}\)C. The laboratory measurements of the gain value for the 1\(\times\)1 binning mode used in the observations are presented in Table 2. We use two gain modes, 'low' (\(\times\)1) and 'high' (\(\times\)4), as well as three readout rates for the full frame: 'fast' (4 sec), 'norm' (9 sec) and 'slow' (90 sec). The value of the measured readnoise for these modes is shown in Table 3. Note here that the measured values of CCD gain and readout noise differ significantly from the values provided by the manufacturer (19-28% less than the declared gain and 26-45% less than the declared readnoise, depending on the mode). Notably, there is a common misconception that the statistics of counts (analogue-to-digital units, ADU) in CCDs are Poissonian. This assumption underlies the determination of the gain factor of the analogue-to-digital converter of the CCD registration path (Howell 2006). However, as can be seen in Fig. 8 (and especially on the right panel, where the range of the graph is zoomed in), the dependence of the counts variance on the average registered signal deviates from a strictly linear law. There are periodic fluctuations around a linear dependence. We assume that this is a feature of thick silicon CCD detectors with deep depletion technology. Also, based on the measurements in Fig. 8, we can identify the working ranges of ADU accumulation for observations in various modes (for gain \(\times\)1 and \(\times\)4) of the CCD iKon-L 936, where the signal dispersion behaves in the most acceptable way. It can be concluded that in the (\(\times\)1) low gain mode it is not worth accumulating a signal of more than \(\sim\)20k ADU. On the other hand, for astronomical observations, of particular interest is the registration of weak signals, whose statistics are distorted by the readout noise introduced by the electronics. To study the distortion of counts statistics, a test criterion based on the dispersion index, the so-called Fano factor (Fano, 1947), is used. The application of the method to CCD studies is described in detail by Afanasieva (2016). Figure 5: QE of the system MAGIC+telescope+CCD. Filled black circles with error bars mark transmission measurements of the MAGIC with the Zeiss-1000 telescope mirrors and CCD. Blue squares present the same including the transmission of the quadruple Wollaston prism. The dash-dotted line presents the QE in the spectral mode with the VPHG (including optics+telescope+CCD). The dashed line also shows the quantum efficiency of the CCD for this spectral range. The pass-bands of the medium-band SED filters used to measure QE are plotted with a dotted line. 
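As an illustration of the photon-transfer reasoning above, here is a minimal sketch with synthetic Poisson frames; the gain and readnoise values are taken from Tables 2 and 3, but the frames are simulated, not the laboratory data:

```python
# Photon-transfer sketch: recover the gain [e-/ADU] from the slope of the
# counts variance vs. mean signal, using synthetic Poisson frames.
import numpy as np

rng = np.random.default_rng(0)
gain = 2.8                                   # assumed e-/ADU ('low', cf. Table 2)
read_noise_adu = 2.7 / gain                  # 2.7 e- at the 'slow' rate, cf. Table 3
means, variances = [], []
for electrons in np.linspace(1e3, 2e4, 20):  # mean illumination levels [e-]
    frame = (rng.poisson(electrons, 100_000) / gain
             + rng.normal(0.0, read_noise_adu, 100_000))
    means.append(frame.mean())
    variances.append(frame.var())
slope = np.polyfit(means, variances, 1)[0]   # var[ADU] ~ mean[ADU]/gain + const
print(f"recovered gain: {1.0 / slope:.2f} e-/ADU")  # ~2.8
# In electron units the dispersion index var/mean of such data is ~1,
# the Poisson criterion discussed next.
```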
By definition, the dispersion index is the ratio of the variance of counts to the average value of the registered signal. For a Poisson distribution, this ratio is equal to one, and in practice this holds only over a certain range of registered values. Fig. 9 shows graphs of the dependence of the dispersion index on the magnitude of the registered signal in different modes for the iKon-L 936 CCD. The left and right panels correspond to two gain modes - (\(\times 1\)) low and (\(\times 4\)) high respectively. These studies also provide insight into the optimal choice of exposure time in order to minimize the distortion of counts statistics when observing astrophysical objects using the MAGIC focal reducer. According to the measurements, the best fit to Poisson statistics is achieved when the signal is accumulated in the (\(\times 1\)) low gain mode at a 'slow' readout rate from about a few hundred to \(\sim\)10k ADU. Note here that for both CCD gain modes used (\(\times 1\) and \(\times 4\)) at the 'norm' readout rate, 'sawtooth' beats of the dispersion index are observed. We keep this negative feature in mind during observations. Also, the bottom panels of Fig. 9 show measurements of the deviation from signal linearity, which do not exceed 0.5% in the entire range of signal accumulations used in observations. CCDs with a thick, deep-depletion silicon substrate provide high spectral sensitivity of the detector even in the 1 \(\mu\)m region. A powerful advantage of the iKon-L 936 CCD is the complete absence of interference noise in the red part of the spectrum. Under laboratory conditions, we exposed the CCD illuminated with various wavelengths and could not detect the contribution of the interference pattern, the so-called fringes. Thus, this CCD allows one to efficiently conduct research in the red part of the spectrum at high sensitivity. Additional information about the peculiarities of CCD images in the near-infrared band is given in Appendix A. \begin{table} \begin{tabular}{c c c c c} \hline \hline & & \multicolumn{3}{c}{rate} \\ \cline{3-5} \multicolumn{2}{c}{2048 \(\times\) 2048 pix} & fast (3.0 MHz) & norm (1.0 MHz) & slow (0.1 MHz) \\ \hline \multirow{2}{*}{GAIN} & high (\(\times 4\)) & 0.89 & 0.84 & 0.84 \\ & low (\(\times 1\)) & 3.0 & 2.8 & 2.8 \\ \hline \hline \end{tabular} \end{table} Table 2: Measurement of the gain value (e\({}^{-}\)/ADU) for various Andor iKon-L 936 CCD modes \begin{table} \begin{tabular}{c c c c c} \hline \hline & & \multicolumn{3}{c}{rate} \\ \cline{3-5} \multicolumn{2}{c}{2048 \(\times\) 2048 pix} & fast (3.0 MHz) & norm (1.0 MHz) & slow (0.1 MHz) \\ \hline \multirow{2}{*}{GAIN} & high (\(\times 4\)) & 6.7 \(\pm\) 0.03 & 4.8 \(\pm\) 0.01 & 2.2 \(\pm\) 0.01 \\ & low (\(\times 1\)) & 11.3 \(\pm\) 0.11 & 5.9 \(\pm\) 0.06 & 2.7 \(\pm\) 0.07 \\ \hline \hline \end{tabular} \end{table} Table 3: Measurement of the readnoise (e\({}^{-}\)) for various Andor iKon-L 936 CCD modes Figure 6: Spot Diagram. Circle diameter — 30 microns = 1′′. ### Remote control The control of the device, including the rotator, guide and CCD, is implemented through several compact computers installed on the telescope, which allows remote observations. In observations, we use network access to the onboard computer MR3253S-00F (with Windows 7 as the operating system) made by LEX COMPUTECH in the remote desktop format. The control interface is a graphical shell in the IDL environment MAGIC remote control, a screenshot of which is shown in Fig. 10. 
The upper half of the interface is used to control the CCD detector and edit the information recorded in the FITS header during the observations; the lower half is used to control the MAGIC (setting the observation mode, focusing the collimator, and orientation) and some telescope functions (small tube shifts and focusing). At the end of each exposure, the resulting FITS file is opened for analysis in the FITS-viewer (see Fig. 11) -- here the observer traditionally controls the levels of accumulation and the quality of each frame. Note here that the image in the viewer is flipped along the RA axis. ## 4 Observation modes ### Photometry The photometric mode of observations with the MAGIC device makes it possible to obtain direct images using various light filters, which are introduced into the beam by means of two wheels. The size of the FoV is limited by the size of the round filter and is \(\sim\)12\({}^{\prime}\). Note that for photometry, as well as in other observation modes, we use 1\(\times\)1 CCD binning, which gives an image scale of 0\({}^{\prime\prime}\).45/pix and satisfies the Kotelnikov-Nyquist theorem (the sampling allows us to accurately restore the PSF profile). The device uses narrow-band and medium-band interference SED filters3 (the bandwidths of the SED filters used to measure QE are shown in Fig. 5), as well as broadband glass filters \(BVR_{\rm C}I_{\rm C}\) of the Johnson-Cousins system (Bessell, 1990). In the case of the broadband filters, the following equations for converting instrumental quantities into the _standard photometric system_ were constructed, neglecting the second-order extinction coefficients: Footnote 3: Manufactured by Edmund Optics, [https://www.edmundoptics.com/](https://www.edmundoptics.com/). \[\begin{array}{l}B=b+0.12^{\pm 0.022}(B-V)+22.43^{\pm 0.014}\\ V=v-0.23^{\pm 0.023}(B-V)+22.78^{\pm 0.015}\\ R_{\rm C}=r+0.22^{\pm 0.043}(V-R_{\rm C})+22.75^{\pm 0.017}\\ I_{\rm C}=i+0.05^{\pm 0.022}(V-I_{\rm C})+22.23^{\pm 0.019}\\ \end{array} \tag{1}\] where \(B\), \(V\), \(R_{\rm C}\), \(I_{\rm C}\) are standard magnitudes in the \(B\)-, \(V\)-, \(R_{\rm C}\)- and \(I_{\rm C}\)-bands, and \(b\), \(v\), \(r\), \(i\) are instrumental magnitudes in the filters \(B\), \(V\), \(R_{\rm C}\), \(I_{\rm C}\), reduced to zenith distance z = 0 and calculated as \(-2.5\cdot\lg(N)-\alpha\cdot X\), where \(N\) is the number of counts (ADU) per second acquired in the 2.8 \(e^{-}\)/ADU gain mode, \(\alpha\) is the extinction coefficient, and \(X\) is the air mass. We built the equations from measurements of 36 stars (in the range of colours not exceeding 0.6 mag) in the field NGC7654, which was observed at a zenith distance z \(\sim 18^{\circ}\) on September 22, 2020. The measured extinction coefficients on this night were: \[\begin{array}{l}\alpha_{B}=0^{\rm m}.50\pm 0^{\rm m}.030\\ \alpha_{V}=0^{\rm m}.39\pm 0^{\rm m}.028\\ \alpha_{R_{\rm C}}=0^{\rm m}.29\pm 0^{\rm m}.025\\ \alpha_{I_{\rm C}}=0^{\rm m}.28\pm 0^{\rm m}.039\\ \end{array}\] For our monitoring tasks, typical magnitudes of observed objects are 16 mag in the \(V\)-band. For 10 minutes of total exposure within a typical seeing of about 2\({}^{\prime\prime}\) at SAO, the accuracy for a star-like object is 0.005 mag. For the photometry of faint sources in the \(V\)-band on a single frame with an exposure time of 20 minutes, we achieved \(S/N\approx 4\) for a 22.5 mag object within 1\({}^{\prime\prime}\).1 seeing. 
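A minimal sketch of applying the calibration equations (1): the count rates and airmass below are made-up numbers, while the colour terms, zero points, and extinction coefficients are those quoted above; the colour-coupled pair is solved by simple iteration:

```python
# Photometric calibration sketch following Eq. (1): instrumental -> standard
# magnitudes in B and V; the measurements are hypothetical.
import math

def instrumental(counts, alpha, airmass):
    """Instrumental magnitude reduced to zenith: -2.5*lg(N) - alpha*X."""
    return -2.5 * math.log10(counts) - alpha * airmass

X = 1.15                             # assumed air mass
b = instrumental(1200.0, 0.50, X)    # alpha_B from the text
v = instrumental(2500.0, 0.39, X)    # alpha_V from the text

# B = b + 0.12*(B-V) + 22.43 and V = v - 0.23*(B-V) + 22.78 couple through
# the colour (B-V); iterate from the instrumental colour (contraction ~0.35).
BV = b - v
for _ in range(20):
    B = b + 0.12 * BV + 22.43
    V = v - 0.23 * BV + 22.78
    BV = B - V
print(f"B = {B:.3f}, V = {V:.3f}, B-V = {BV:.3f}")
```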
### Polarimetry In the MAGIC device, the polarization analyzer is installed in a parallel beam. The design of the device involves the use of any type of polarization analyzer - both a classic dichroic polaroid and birefringent prisms. At the moment, we use a double Wollaston prism for tasks of AGN polarimetry. The advantage of this analyzer is the ability to apply the _one-shot polarimetry_ approach, in which a number of images of the FoV sufficient to calculate the Stokes parameters are simultaneously registered on the detector at several angles of electric-vector oscillation. This method minimizes the effect of atmospheric depolarization (for more details see Afanasiev & Amirkhanyan, 2012). We use the quadrupole Wollaston prism, originally described in Geyer et al. (1993). The prism was produced by OPTEL4 and consists of two Wollaston calcite prisms glued together with a total size of 30\(\times\)30\(\times\)16 mm. The antireflection coating applied to the prism optics provides a high transmission of about 90%, which leads to QE \(\sim\) 45% including the contribution of the device optics, CCD and telescope (Fig. 5). To avoid overlapping images in different polarization directions, the prism is used in conjunction with a square mask giving a square FoV in each direction of 6\({}^{\prime}\).4\(\times\)6\({}^{\prime}\).4. Figure 7: Polychromatic Encircled Energy. As an example, Fig. 12 shows a frame of the M1 nebula, obtained with a Wollaston prism for 300 seconds of exposure in the SED600 filter. As can be seen, four directions of polarization are registered on the detector at the angles \(0^{\circ}\), \(90^{\circ}\), \(45^{\circ}\) and \(135^{\circ}\). This makes it possible to calculate three Stokes parameters \(I,Q,U\), which describe the intensity and linear polarization of radiation, as follows: \[I=I_{0}+I_{90}+I_{45}+I_{135},\] \[\frac{Q}{I}=\frac{I_{0}-I_{90}}{I_{0}+I_{90}},\] \[\frac{U}{I}=\frac{I_{45}-I_{135}}{I_{45}+I_{135}},\] where \(I_{0},I_{90}\), \(I_{45},I_{135}\) are the intensities in each direction, respectively. Figure 8: The gain factor is determined from the slope of the dependence of half of the variance of counts on the average value of signal accumulation. _Left_: dependencies are presented for all gain modes and readout rates used in the observations. _Right_: zoomed in on the same dependencies. Figure 9: Measurement of CCD characteristics for all gain modes (_left_: \(\times 4\) high, _right_: \(\times 1\) low) and readout rates. The top panel shows the dependence of the dispersion index on signal accumulation. The lower panel shows the level of non-linearity of signal registration in the entire range of accumulations. Further, for convenience, we will use the notation \(Q\equiv Q/I\) and \(U\equiv U/I\). The degree of polarization \(P\) and the angle of the plane of polarization \(\varphi\) are calculated by the formulas: \[P=\sqrt{Q^{2}+U^{2}},\] \[\varphi=\frac{1}{2}\arctan(U/Q).\] Note that to rotate the Stokes parameters to the celestial plane, the Stokes vector should be multiplied by the rotation matrix of the angle \(-2\,\mathrm{PA}\), where PA is the instrument position angle. Due to the huge image separation, the prism used in MAGIC has its own dispersion, much larger than in the more classic wedged version. Without the use of a filter in white light, the dispersion will decompose the star-like source image into a low-dispersion spectrum of \(>\)40'' in length. 
The use of broadband filters, for example, the \(BVR_{\rm C}I_{\rm C}\) system, with this prism is also not justified, since the distortions introduced by dispersion will be an order of magnitude greater than the seeing. For this reason, observations with this quadrupole Wollaston prism are optimally carried out in medium-band filters. Using the observations of unpolarized standard stars, we estimated the value of the instrumental polarization of the device within the FoV inside the mask. From repeated observations of zero-polarization standards at different positions in the field, as well as from measuring the polarization of the images formed by the _dots_ mask, which we use to correct geometric field distortions, we found that the changes of polarization are stable over time and have a smooth field dependence (Fig. 13). The average value of the degree of polarization \(P\) introduced by the device is 3.5% and varies over the field from 2.3% to 4.5%. The pattern and absolute values of the instrumental polarization do not change with wavelength in the range 6000-7000 Å. Our laboratory tests of the optics and detector with other polarization analyzers introduced into the beam showed that the source of the instrumental polarization is the prism. Figure 10: MAGIC control interface. We have described the \(Q\) and \(U\) changes by 1st-order surfaces (Fig. 14). After correcting observations of unpolarized stars for instrumental polarization using this model, the deviations of the parameters \(Q\) and \(U\) from zero were less than 0.05%. Thus, the correction of instrumental polarization makes it possible to carry out high-precision polarimetric observations. To determine the accuracy of the data obtained in the polarimetric mode, we observed a set of highly polarized standard stars. In Fig. 15 the observed polarization degree \(P\) and polarization angle \(\varphi\) for a set of high-polarization standard stars (after correction for instrumental effects) are plotted against their reference values. The deviations were \(\Delta P=0.18\%\) and \(\Delta\varphi=3^{\circ}\). In general, according to our observations, for a star-like target up to 14 mag in medium-band filters with a seeing of 1'' for 20 minutes of total exposure, the polarization accuracy is better than 0.6%. The large field of view in the one-shot polarimetry mode is an important advantage for polarization observations of extended objects. An example of the results of such observations is shown in Fig. 16. For the Crab Nebula M1, a map of the change in the polarization of the continuum ('amorphous') radiation was obtained, which makes it possible to compare the polarization characteristics of the nebula with its geometry. The measurement of the surface polarization was conducted for methodological purposes and reproduced the results obtained over the extensive history of Crab polarimetric studies initiated by Baade (1956) and subsequently analyzed by Woltjer (1957). Our observations are in agreement with the surface polarization distribution, its degree, and orientation, as previously identified in earlier photographic studies (Baade, 1956; Woltjer, 1957), as well as in the initial CCD observations (Hickson & van den Bergh, 1990) with a large FoV similar to that of MAGIC. These results are also consistent with _HST_ observations using a smaller FoV (Moran et al., 2013). 
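The one-shot Stokes reduction described above is a few lines of arithmetic; in this minimal sketch the four channel fluxes are made-up numbers, and the angle is computed with the conventional \(\frac{1}{2}\arctan(U/Q)\) form (sign and axis conventions vary between instruments):

```python
# One-shot Stokes polarimetry from the four Wollaston channels, following
# the formulas above; the input channel fluxes are hypothetical.
import math

I0, I90, I45, I135 = 1.02, 0.98, 1.05, 0.95   # made-up channel intensities
I = I0 + I90 + I45 + I135                     # total intensity
Q = (I0 - I90) / (I0 + I90)                   # normalized Stokes Q/I
U = (I45 - I135) / (I45 + I135)               # normalized Stokes U/I
P = math.hypot(Q, U)                          # degree of linear polarization
phi = 0.5 * math.degrees(math.atan2(U, Q))    # polarization angle [deg]
print(f"P = {100 * P:.2f}%, phi = {phi:.1f} deg")
```

In practice the fitted first-order \(Q\), \(U\) surfaces are subtracted first, so that the instrumental contribution of the prism does not bias \(P\) and \(\varphi\).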
### Long slit spectroscopy The spectral mode of the MAGIC device is implemented by introducing into the collimated beam (between the camera and the collimator) a direct vision grism VPHG600@500 (600 lines/mm, 500 nm - central wavelength), as well as a slit into the converging beam in front of the collimator. The efficiency of the device in the spectral mode (telescope + optics + grating + CCD) is \(\sim\)16% at maximum (Fig. 5)5. Footnote 5: The efficiency here was also measured by on-sky standard stars. During the observations, the seeing was comparable to the slit width, and the slit losses of \(\sim\)80% are taken into account. The slit dimensions of 0.11 mm \(\times\) 46 mm correspond to the angular dimensions 1\({}^{\prime\prime}\).7 \(\times\) 12\({}^{\prime}\) in the focal plane. The width of the projected monochromatic slit image on the CCD plane is FWHM = 3.5 pix. We chose the slit sizes to achieve the best compromise between optimal CCD sampling, the required _extragalactic_6 spectral resolution, and minimizing light loss at the slit under average SAO weather conditions. In conjunction with the spectral grating, low-resolution spectra are obtained in the range 4000-7200 Å with a reciprocal dispersion of 2 Å/pix and a spectral resolution \(\delta\lambda\sim\) 7-8 Å or, in terms of \(R=\lambda/\delta\lambda\), \(\sim 1000\). Footnote 6: Here is meant a compromise for studies of extragalactic objects between the spectral resolution for typical extragalactic tasks and a denser concentration of light in a single CCD pixel. In Fig. 17 the sequence of obtaining observational material is demonstrated on the example of spectroscopy of the type 1 AGN E1821+643, from setting the object onto the slit (in the direct image mode) to obtaining the processed 1D spectrum. The observations were taken on September 21, 2020. It is interesting to note that in the presented frames, due to such a long slit, several objects are observed simultaneously, including the extended planetary nebula PN K 1-16 (indicated by number 1 in Fig. 17). It is clear that the slit height of 12' allows efficient spectroscopic observations of strongly extended objects, for example, comets. Such a long slit also simplifies sky subtraction when processing spectra. At the moment, the development of a calibration module is underway to obtain auxiliary frames of a spectral flat field and a reference illumination of a He-Ne-Ar lamp for constructing a dispersion curve. However, only slight bending of the device (within \(\pm 1\) pix) makes it possible to use an auxiliary appliance installed on the inside of the telescope dome (see Fig. 3, on the right) to obtain calibration frames, which provides Lambertian scattering under lamp illumination. Figure 11: Viewer interface with frames of the M27 planetary nebula in photometric (_left_, \(t_{\rm exp}\) = 10 s in \(R_{\rm C}\)-band) and spectral (_right_, \(t_{\rm exp}\) = 600 s) modes. Direct image FoV is 12′ \(\times\) 12′, slit height is 12′, slit width is 1′′.7, the wavelength range is 340-740 nm. The frame colours are inverted. ## 5 Conclusions In 2020, the MAGIC multi-mode focal reducer for the 1-m Zeiss-1000 telescope of the SAO RAS was designed, manufactured and put into operation. The device effectively solved the problem of oversampling in the Cassegrain focus, making the optical system faster (from \(F/13.3\) to \(F/6.1\)) and more effective for the study of faint and/or extended objects. The optics of the device form a \(\sim 0^{\prime\prime}\).3 image of a point source and have an integral transmission QE \(\sim 50\%\). 
The ability to observe and quickly switch between observation modes allows one to respond flexibly to changes in weather conditions during the night, as well as to comprehensively explore astrophysical objects. Currently, three observation modes are implemented in the MAGIC device. * Direct images can be taken in the Johnson-Cousins photometric system and in the medium-band interference filters. The photometry FoV is \(\sim\)12\({}^{\prime}\) with a scale of \(0^{\prime\prime}\).45/pix. The filters are set in 2 wheels, each with 9 positions. For 10 minutes of total exposure within a typical seeing of about 2\({}^{\prime\prime}\) at SAO, the accuracy for a star-like object of 16 mag in the \(V\)-band is 0.005 mag. The limiting magnitude in the \(V\)-band (\(S/N\approx 4\)) is 22.5 mag within 1\({}^{\prime\prime}\).1 seeing and a 20-minute exposure. * Image-polarimetry mode provides measurements of intensity and linear polarization in a \(6^{\prime}\).4 \(\times\) \(6^{\prime}\).4 FoV. The introduced instrumental polarization varies over the field and can be compensated with the calculated smooth model. For a star-like target up to 14 mag in medium-band filters with a seeing of 1\({}^{\prime\prime}\) for 20 minutes of total exposure, the accuracy of the intensity measurement is better than 0.01 mag and the polarization accuracy is better than 0.6%. * In long-slit spectroscopy the combination of a 1\({}^{\prime\prime}\).7 \(\times\) 12\({}^{\prime}\) slit and the volume phase holographic disperser VPHG600@500 is used. Low-resolution spectra are obtained in the range 4000-7200 Å with a reciprocal dispersion of 2 Å/pix and a spectral resolution \(\delta\lambda\sim\) 7-8 Å. To use the MAGIC device on the 1-m Zeiss-1000 telescope, the optomechanical scheme of the telescope was upgraded. The modernization of the baffles made it possible to minimize parasitic rays in the telescope tube, correcting the additive noise that occurred in observations. The installation of additional modules - a rotator and an offset guide - helps to solve the problem of accurate telescope guidance and instrument orientation. It is important to note that exactly this optical scheme and design can be used to create universal devices for a wide class of small Cassegrain telescopes with a large focal ratio (\(\lesssim F/8\)) and a large aberration-free FoV. A specific implementation of the MAGIC device is a fairly universal solution to reduce the focal ratio of the system for a large number of both already-built Zeiss-type telescopes and new ones. The realizable efficiency of MAGIC makes it possible to carry out joint monitoring campaigns in conjunction with other focal reducers [see, e.g., results of MAGIC observations in (Shablovinskaya et al. 2023a) obtained together with AFOSC of the 1.82-m Copernico telescope of the Asiago-Cima Ekar observatory and FoReRo-2 of the 2-m telescope of the Rozhen National Astronomical Observatory], as well as to carry out observations applying original methodological approaches [see, e.g., the Stokes polarimetry of blazars with the quadruple Wollaston prism in a two-band filter (Shablovinskaya et al. 2023b)]. ## Acknowledgements MAGIC was the last of many astronomical devices created by Viktor Leonidovich Afanasiev (1947 - 2020). 
We will remember him as a brilliant practising astronomer who deeply understood the experiment - from the formulation of scientific issues, the device creation and development of observational techniques to the obtaining of observational data and its competent interpretation. He loved science and was an ideological inspirer. His contribution to the development of our observatory is invaluable. We are grateful to E.I. Perepelitsyn for the manufacture of optics for the device. The mechanical and optical parts of MAGIC, as well as parts for the modernization of the telescope units, were produced at the SAO breadboard workshops. We also thank the engineers of the 1-m Zeiss-1000 telescope led by V.V. Komarov for constant assistance in the work with the telescope. We thank Dr. Imre Barna Biro for helpful discussions and advice on baffles. We express our gratitude to A.V. Moiseev for providing valuable methodological guidance throughout the study of the device. Also, we appreciate the constructive comments provided by the reviewers, which significantly enhanced the quality of this paper. This work was supported by the Russian Scientific Foundation (grant no. 20-12-00030 "Investigation of geometry and kinematics of ionized gas in active galactic nuclei by polarimetry methods"). Observations with the SAO RAS telescopes are supported by the Ministry of Science and Higher Education of the Russian Federation. Figure 12: Observation of M1 in four directions of polarization (each FoV = \(6^{\prime}\).4) with the quadrupole Wollaston prism in the SED600 filter (\(t_{\rm exp}=300\) s). Figure 13: Instrumental polarization over the field inside the FoV of the quadrupole Wollaston prism. Coordinates in pixels are given along the X and Y axes; the coordinate grid is corrected for geometric distortions. Figure 14: For the Stokes parameters \(Q\) and \(U\), smooth variations over the field inside the square mask are described. Figure 15: Comparison of the measured values of the degree of polarization \(P_{\rm obs}\) (_left_) and the polarization angle \(\varphi_{\rm obs}\) (_right_) with their reference values \(P_{\rm sub}\) and \(\varphi_{\rm sub}\). Figure 16: Results of observations of the M1 nebula: _on the left_, a combined photometric image of the nebula in the \(B\) (blue), \(V\) (green), and SED650 (red) filters; _on the right_ is the polarization map of the nebula obtained with the quadrupole Wollaston prism in the SED600 filter. Figure 17: MAGIC spectroscopy of the E1821+643 quasar: (a) a fragment of a direct image in the \(R_{\rm C}\) filter (\(t_{\rm exp}=10\) sec) with the position of the spectrograph slit into which four objects fall; the arrow indicates the studied quasar; (b) – single spectral frame (\(t_{\rm exp}=600\) sec), containing traces of cosmic particles; (c) – robustly averaged frame (\(t_{\rm exp}=8\times 600\) sec) with geometric correction and subtracted night sky spectrum; (d) – integrated spectrum in the wavelength scale of the quasar E1821+643; marked in the figure: 1 – planetary nebula PN K 1-16; 2 – quasar E1821+643; 3 – star [SPB96] 1882; 4 – field star. ## Data Availability The data underlying this article will be shared on reasonable request to the corresponding author.
The paper describes the MAGIC multi-mode focal reducer (Monitoring of Active Galaxies by Investigation of their Cores), commissioned on the 1-m Zeiss-1000 telescope of the Special Astrophysical Observatory of the Russian Academy of Sciences in September 2020. Three observational modes are currently realised: photometry, polarimetry, and long-slit spectroscopy. Reducing the focal length makes it possible to obtain a sufficiently large field of view for photometry, a large slit height of $\sim$12$'$ for long-slit spectroscopy, and a large field of view of $\sim$6$'$.4 for polarimetry with a quadrupole Wollaston prism, which makes the complex study of extended nebulae and galaxies efficient. The MAGIC capabilities are presented in examples of observations of various astronomical objects.
2301.00249
Minimal surfaces and the new main inequality
We establish the new main inequality as a minimizing criterion for minimal maps to products of $\mathbb{R}$-trees, and the infinitesimal new main inequality as a stability criterion for minimal maps to $\mathbb{R}^n$. Along the way, we develop a new perspective on destabilizing minimal surfaces in $\mathbb{R}^n$, and as a consequence we reprove the instability of some classical minimal surfaces; for example, the Enneper surface.
Vladimir Markovic, Nathaniel Sagman
2022-12-31T16:47:10
http://arxiv.org/abs/2301.00249v2
# Minimal surfaces and the new main inequality ###### Abstract. We establish the new main inequality as a minimizing criterion for minimal maps into products of \(\mathbb{R}\)-trees, and the infinitesimal new main inequality as a stability criterion for minimal maps to \(\mathbb{R}^{n}\). Along the way, we develop a new perspective on destabilizing minimal surfaces in \(\mathbb{R}^{n}\), and as a consequence we reprove the instability of some classical minimal surfaces; for example, the Enneper surface. ## 1. Introduction Let \(S\) be a Riemann surface, \(\phi_{1},\ldots,\phi_{n}\) integrable holomorphic quadratic differentials on \(S\) summing to zero, and \(f_{1},\ldots,f_{n}:S\to S^{\prime}\) mutually homotopic quasiconformal maps to another Riemann surface with Beltrami forms \(\mu_{1},\ldots,\mu_{n}\). If \(\partial S\) is non-empty, we ask that \(f_{1},\ldots,f_{n}\) are mutually homotopic relative to \(\partial S\). The new main inequality holds if: \[\operatorname{Re}\sum_{i=1}^{n}\int_{S}\phi_{i}\cdot\frac{\mu_{i}}{1-|\mu_{i} |^{2}}\leq\sum_{i=1}^{n}\int_{S}|\phi_{i}|\cdot\frac{|\mu_{i}|^{2}}{1-|\mu_{i} |^{2}}. \tag{1}\] For \(n=1\) and \(f_{1}:S\to S\) homotopic to the identity, (1) is always satisfied, and referred to as the Reich-Strebel inequality or the main inequality for quasiconformal maps. The result is a key ingredient in the proof of Teichmuller's uniqueness theorem. The first author introduced the new main inequality in the papers [11] and [12] as a tool to study minimal surfaces in products of hyperbolic surfaces. The outcome of [12] is that there exists a product of Fuchsian representations into \(\operatorname{PSL}(2,\mathbb{R})^{n}\), \(n\geq 3\), with multiple minimal surfaces in the corresponding product of closed hyperbolic surfaces. With Smillie in [13], we gave a new proof of the result from [12]. Then in [17], the second author and Smillie found unstable minimal surfaces for Hitchin representations into Lie groups of rank at least \(3\), disproving a conjecture of Labourie [8]. In this paper we revisit the new main inequality and some aspects of the paper [12], but with applications to minimal maps to products of \(\mathbb{R}\)-trees and to \(\mathbb{R}^{n}\). The results on \(\mathbb{R}\)-trees and \(\mathbb{R}^{n}\) are proved in Sections 3 and 4 respectively, which can be read independently. ### Harmonic maps to \(\mathbb{R}\)-trees Throughout the paper, let \(\Sigma_{g}\) be a closed and oriented surface of genus \(g\geq 2\), and let \(\mathbf{T}_{g}\) be the Teichmuller space of marked Riemann surface structures on \(\Sigma_{g}\). Let \(S\) be a Riemann surface structure on \(\Sigma_{g}\), which lifts to a Riemann surface structure \(\tilde{S}\) on the universal cover, and let \(\operatorname{QD}(S)\) be the space of holomorphic quadratic differentials on \(S\). We review the basics about harmonic maps to \(\mathbb{R}\)-trees in Section 3. Briefly, a non-zero holomorphic quadratic differential gives the data of an \(\mathbb{R}\)-tree \((T,d)\), a representation \(\rho:\pi_{1}(\Sigma_{g})\to\operatorname{Isom}(T,d)\), and a unique \(\rho\)-equivariant harmonic map \(\pi:\tilde{S}\to(T,d).\) From non-zero \(\phi_{1},\ldots,\phi_{n}\in QD(S)\) summing to zero, we assemble the product of \(\mathbb{R}\)-trees, denoted \(X\), and the product of representations \(\rho:\pi_{1}(\Sigma_{g})\to\operatorname{Isom}(X)\). 
The product of the equivarant harmonic maps \(\pi_{i}\) from \(\tilde{S}\) to each individual \(\mathbb{R}\)-tree is a minimal map \(\pi:\tilde{S}\to X\). For any ###### Abstract We consider the following problem: **Problem 1**.: _Let \(\mathcal{B}\) be a bounded domain with boundary \(\partial\mathbb{D}\) and \(\partial\mathbb{D}\) be a bounded domain with boundary \(\partial\mathbb{D}\) and \(\partial\mathbb{D}\) be a bounded domain with boundary \(\partial\mathbb{D}\). Then the problem is formulated as follows:_ \[\begin{split}\text{Re}\sum_{i=1}^{n}\int_{S}\phi_{i}\cdot\frac{ \mu_{i}}{1-|\mu_{i}|^{2}}\leq\sum_{i=1}^{n}\int_{S}|\phi_{i}|\cdot\frac{|\mu_{ i}|^{2}}{1-|\mu_{i}|^{2}}.\end{split}\] _The problem is formulated as follows:_ \[\begin{split}\text{Re}\sum_{i=1}^{n}\int_{S}\phi_{i}\cdot\frac{ \mu_{i}}{1-|\mu_{i}|^{2}}\leq\sum_{i=1}^{n}\int_{S}|\phi_{i}|\cdot\frac{|\mu_{ i}|^{2}}{1-|\mu_{i}|^{2}}.\end{split}\] _The problem is formulated as follows:_ \[\begin{split}\text{Re}\sum_{i=1}^{n}\int_{S}\phi_{i}\cdot\frac{ \mu_{i}}{1-|\mu_{i}|^{2}}\leq\sum_{i=1}^{n}\int_{S}|\phi_{i}|\cdot\frac{|\mu_{ i}|^{2}}{1-|\mu_{i}|^{2}}.\end{split}\] _The problem is formulated as follows:_ \[\begin{split}\text{Re}\sum_{i=1}^{n}\int_{S}\phi_{i}\cdot\frac{ \mu_{i}}{1-|\mu_{i}|^{2}}\leq\sum_{i=1}^{n}\int_{S}|\phi_{i}|\cdot\frac{|\mu_{ i}|^{2}}{1-|\mu_{i}|^{2}}.\end{split}\] _The problem is formulated as follows:_ \[\begin{split}\text{Re}\sum_{i=1}^{n}\int_{S}\phi_{i}\cdot\frac{ \mu_{i}}{1-|\mu_{i}|^{2}}\leq\sum_{i=1}^{n}\int_{S}|\phi_{i}|\cdot\frac{|\mu_{ i}|^{2}}{1-|\mu_{i}|^{2}}.\end{split}\] _The problem is formulated as follows:_ \[\begin{split}\text{Re}\sum_{i=1}^{n}\int_{S}\phi_{i}\cdot\frac{ \mu_{i}}{1-|\mu_{i}|^{2}}\leq\sum_{i=1}^{n}\int_{S}|\phi_{i}|\cdot\frac{|\mu_ {i}|^{2}}{1-|\mu_{i}|^{2}}.\end{split}\] _The problem is formulated as follows:_ \[\begin{split}\text{Re}\sum_{i=1}^{n}\int_{S}\phi_{i}\cdot\frac{ \mu_{i}}{1-|\mu_{i}|^{2}}\leq\sum_{i=1}^{n}\int_{S}|\phi_{i}|\cdot\frac{|\mu_ {i}|^{2}}{1-|\mu_{i}|^{2}}.\end{split}\] _The problem is formulated as follows:_ \[\begin{split}\text{Re}\sum_{i=1}^{n}\int_{S}\phi_{i}\cdot\frac{ \mu_{i}}{1-|\mu_{i}|^{2}}\leq\sum_{i=1}^{n}\int_{S}|\phi_{i}|\cdot\frac{|\mu_ {i}|^{2}}{1-|\mu_{i}|^{2}}.\end{split}\] _The problem is formulated as follows:_ \[\begin{split}\text{Re}\sum_{i=1}^{n}\int_{S}\phi_{i}\cdot\frac{ \mu_{i}}{1-|\mu_{i}|^{2}}\leq\sum_{i=1}^{n}\int_{S}|\phi_{i}|\cdot\frac{|\mu_ {i}|^{2}}{1-|\mu_{i}|^{2}}.\end{split}\] _The problem is formulated as follows:_ \[\begin{split}\text{Re}\sum_{i=1}^{n}\int_{S}\phi_{i}\cdot\frac{ \mu_{i}}{1-|\mu_{i}|^{2}}\leq\sum_{i=1}^{n}\int_{S}|\phi_{i}|\cdot\frac{|\mu_ {i}|^{2}}{1-|\mu_{i}|^{2}}.\end{split}\] _The problem is formulated as follows:_ \[\begin{split}\text{Re}\sum_{i=1}^{n}\int_{S}\phi_{i}\cdot\frac{ \mu_{i}}{1-|\mu_{i}|^{2}}\leq\sum_{i=1}^{n}\int_{S}|\phi_{i}|\cdot\frac{|\mu_ {i}|^{2}}{1-|\mu_{i}|^{2}}.\end{split}\] _The problem is formulated as follows:_ \[\begin{split}\text{Re}\sum_{i=1}^{n}\int_{S}\phi_{i}\cdot\frac{ \mu_{i}}{1-|\mu_{i}|^{2}}\leq\sum_{i=1}^{n}\int_{S}|\phi_{i}|\cdot\frac{|\mu_ {i}|^{2}}{1-|\mu_{i}|^{2}}.\end{split}\] _The problem is formulated as follows:_ \[\begin{split}\text{Re}\sum_{i=1}^{n}\int_{S}\phi_{i}\cdot\frac{\mu_ {i}}{1-|\mu_{i}|^{2}}\leq\sum_{i=1}^{n}\int_{S}|\phi_{i}|\cdot\frac{|\mu_{i}|^{2} }{1-|\mu_{i}|^{2}}.\end{split}\] _The problem is formulated as follows:_ \[\begin{split}\text{Re}\sum_{i=1}^{n}\int_{S}\phi_{i}\cdot\frac{\mu_ 
**Corollary B**.: \(h\) _is stable if and only if for all mutually infinitesimally equivalent functions \(\dot{\mu}_{1},\dots,\dot{\mu}_{n}\in L^{\infty}(\mathbb{D}),\) the infinitesimal new main inequality holds:_

\[-\text{Re}\sum_{i=1}^{n}\int_{\mathbb{D}}\phi_{i}\dot{\mu}_{i}T(\dot{\mu}_{i})dxdy\leq\sum_{i=1}^{n}\int_{\mathbb{D}}|\phi_{i}||\dot{\mu}_{i}|^{2}dxdy. \tag{2}\]

Above and throughout the paper, when integrating over \(\mathbb{D}\) we use \(\phi_{i}\) to denote the associated holomorphic function rather than the differential.

We now give an overview of the second half of the paper. To destabilize a minimal surface, the most common approach is to perturb by normal variations of the image in \(\mathbb{R}^{n}\) that vanish on the boundary. Another option is to precompose the boundary parametrization along a flow of diffeomorphisms of the circle; one then hopes to lower the energy by taking the harmonic extension of the boundary map at each time along the flow. Instead, motivated by Theorem A, we vary a minimal surface \(h=(h_{1},\dots,h_{n})\) by precomposing the harmonic coordinate functions \(h_{i}\) by quasiconformal maps.

Let \(\mathcal{E}(\Omega,g)\) denote the energy of a map \(g\) from a domain \(\Omega\subset\mathbb{C}\) to \(\mathbb{R}\). First order variations of quasiconformal maps can be described by a real vector space \(\mathcal{V}\) whose elements are a particular class of holomorphic functions from \(\mathbb{C}\setminus\mathbb{D}\) to \(\mathbb{C}\). Given \(\varphi\in\mathcal{V}\), it is possible to find a path of \(n\)-tuples of quasiconformal maps \(t\mapsto f_{1}^{t},\dots,f_{n}^{t}:\mathbb{C}\to\mathbb{C}\), all fixing the origin and agreeing on \(\mathbb{C}\setminus\mathbb{D}\) with a holomorphic map \(F^{t}\) that satisfies \(F^{t}(z)=z+t\varphi(z)+o(t)\). Note that \(f_{i}^{t}(\mathbb{D})=F^{t}(\mathbb{D})\) does not depend on \(i\), and the boundary of the minimal surface in \(\mathbb{R}^{n}\) remains fixed if we precompose each \(h_{i}\) by \((f_{i}^{t})^{-1}.\) Suppose that

\[\frac{d^{2}}{dt^{2}}|_{t=0}\sum_{i=1}^{n}\mathcal{E}(f_{i}^{t}(\mathbb{D}),h_{i}\circ(f_{i}^{t})^{-1})<0. \tag{3}\]

Then, because the energy of a map to \(\mathbb{R}^{n}\) is at least the area of the image, \(h\) is unstable.

**Definition 1.2**.: We say that \(h\) is unstable via self-maps, and that \(\varphi\) destabilizes \(h\), if we can choose \(f_{i}^{t}\) so that (3) holds.

Theorem B justifies that varying by self-maps can be done in place of the usual methods. In Section 4.4 we define a real quadratic form \(\mathbf{L}_{h}:\mathcal{V}\to\mathbb{R}\) such that \(\mathbf{L}_{h}(\varphi)<0\) if and only if \(\varphi\) destabilizes.

**Definition 1.3**.: The self-maps index of \(h\), denoted \(\text{Ind}(\mathbf{L}_{h})\), is the maximal dimension of a subspace of \(\mathcal{V}\) on which \(\mathbf{L}_{h}\) is negative definite.

Let \(\text{Ind}(h)\) denote the ordinary index for the area functional.

**Theorem B**.: \(\text{Ind}(\mathbf{L}_{h})=\text{Ind}(h)\)_._

**Remark 1.4**.: The result should have implications for maps from \(\overline{\mathbb{D}}\) to products of \(\mathbb{R}\)-trees, a subject which we don't develop in this paper. Every harmonic function from any Riemann surface arises from a folding of a map to an \(\mathbb{R}\)-tree (see [4] and [13, Section 4.1]). Clearly, self-maps variations lift to variations of maps to \(\mathbb{R}\)-trees.
**Remark 1.5**.: For equivariant minimal maps to \(\mathbb{R}^{n}\), the analogous result is true and proved in [13, Lemma 4.6 and Proposition 4.8] via a different method.

The conditions (1) and (2) are tractable, so we also ask: given a minimal map \(h\) with Weierstrass-Enneper data \(\alpha\) and \(\varphi\in\mathcal{V}\), when does \(\varphi\) destabilize? As in [12, Section 5], define the functional \(\mathcal{F}:C^{1}(\mathbb{D})\to\mathbb{R}\),

\[\mathcal{F}(f)=\text{Re}\int_{\mathbb{D}}f_{z}f_{\overline{z}}+\int_{\mathbb{D}}|f_{\overline{z}}|^{2}.\]

Given a continuous function from \(\partial\mathbb{D}\to\mathbb{C}\), the harmonic extension is the sum of the Poisson extensions of the real and imaginary parts.

**Theorem C**.: _Let \(\varphi\in\mathcal{V}.\) For each \(i\), let \(v_{i}\) be the harmonic extension of \((\frac{\partial}{\partial z}h_{i})\cdot\varphi|_{\partial\mathbb{D}}:\partial\mathbb{D}\to\mathbb{C}\). If_

\[\mathcal{F}_{\alpha}(\varphi):=\sum_{i=1}^{n}\mathcal{F}(v_{i})<0,\]

_then \(\varphi\) destabilizes \(h\)._

In the case of polynomials, we work out the explicit formulas for a particular class of variations. For a polynomial \(p(z)=\sum_{j=0}^{r}a_{j}z^{j}\), an integer \(m\geq 0\), and \(\gamma\in\mathbb{C}^{*}\), set

\[C(p,\gamma,m)=\pi\sum_{j=0}^{m-1}\frac{\operatorname{Re}(\gamma^{2}a_{j}a_{2m-j})+|\gamma|^{2}|a_{j}|^{2}}{m-j}.\]

**Theorem D**.: _For \(i=1,\ldots,n\), let \(p_{i}\) be a polynomial with no zeros on \(\partial\mathbb{D}\), and such that \(\sum_{i=1}^{n}p_{i}^{2}=0.\) On \(\mathbb{D}\), let \(\alpha_{i}\) be the holomorphic \(1\)-form \(\alpha_{i}(z)=p_{i}(z)dz\). Suppose there exists an integer \(m\geq 0\) and \(\gamma\in\mathbb{C}^{*}\) such that_

\[\sum_{i=1}^{n}C(p_{i},\gamma,m)<0.\]

_Then \(\varphi(z)=\gamma z^{-m}\) destabilizes the associated minimal surface in \(\mathbb{R}^{n}\)._

To demonstrate the result, we consider the most well-known unstable minimal surface: the Enneper surface. The Weierstrass-Enneper data \((\alpha_{1},\alpha_{2},\alpha_{3})\) consists of the \(1\)-forms obtained by multiplying the following polynomials on \(\mathbb{C}\) by \(dz\):

\[p_{1}(z)=\frac{1}{2}(1-z^{2})\;,\;p_{2}(z)=\frac{i}{2}(1+z^{2})\;,\;p_{3}(z)=z.\]

We restrict to \(\overline{\mathbb{D}_{r}}=\{z\in\mathbb{C}:|z|\leq r\}\). For \(r<1\), the Enneper surface is strictly minimizing. For \(r=1\), it is strictly minimizing and stable, but not strictly stable. For \(r>1\), Theorem D gives a new and simple proof of Corollary D below.

**Corollary D**.: _For \(r>1\), the Enneper surface restricted to \(\overline{\mathbb{D}_{r}}\) is unstable._

Proof.: Let \(h=(h_{1},h_{2},h_{3}):\mathbb{C}\to\mathbb{R}^{3}\) be the minimal map defining the Enneper surface. We reparametrize \(h|_{\mathbb{D}_{r}}\) to \(\mathbb{D}\) by defining \(h^{r}=(h_{1}^{r},h_{2}^{r},h_{3}^{r})=(h_{1}(r\cdot),h_{2}(r\cdot),h_{3}(r\cdot)).\) The holomorphic derivatives are given by

\[p_{i}^{r}(z)=\frac{\partial}{\partial z}\mathrm{Re}\int_{0}^{rz}\alpha_{i}(w)dw=rp_{i}(rz)\;,\;i=1,2,3.\]

Explicitly,

\[p_{1}^{r}(z)=\frac{r}{2}(1-r^{2}z^{2})\;,\;p_{2}^{r}(z)=\frac{ri}{2}(1+r^{2}z^{2})\;,\;p_{3}^{r}(z)=r^{2}z.\]

We choose \(m=1,\gamma=1\) and find that for \(p(z)=az^{2}+bz+c\),

\[C(p,1,1)=\pi\big{(}|c|^{2}+\mathrm{Re}(ac)\big{)}. \tag{4}\]

Computing the expression (4) for each polynomial,

\[\sum_{i=1}^{3}C(p_{i}^{r},1,1)=\frac{\pi r^{2}}{2}(1-r^{2}).\]

This is negative for \(r>1\), which completes the proof.
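The arithmetic in the proof above is easy to slip on by hand. As a quick symbolic cross-check (our own sketch, assuming the `sympy` library is available; the helper `C` below simply transcribes the definition of \(C(p,\gamma,m)\) from the introduction), one can verify that \(\sum_{i}C(p_{i}^{r},1,1)\) indeed simplifies to \(\frac{\pi r^{2}}{2}(1-r^{2})\):

```python
# Symbolic check of the Enneper computation in the proof of Corollary D:
# sum_i C(p_i^r, 1, 1) should be proportional to r^2 (1 - r^2), hence
# negative exactly when r > 1.
import sympy as sp

r = sp.symbols('r', positive=True)
z = sp.symbols('z')

# Scaled Weierstrass-Enneper polynomials p_i^r(z) = r * p_i(r z).
p1 = sp.expand(sp.Rational(1, 2) * r * (1 - (r * z)**2))
p2 = sp.expand(sp.I / 2 * r * (1 + (r * z)**2))
p3 = sp.expand(r * (r * z))

def C(p, gamma=1, m=1):
    """C(p, gamma, m), transcribed from the definition in the introduction."""
    a = sp.Poly(p, z).all_coeffs()[::-1]  # a[j] = coefficient of z^j
    a += [0] * (2 * m + 1 - len(a))       # pad so that a[2m - j] exists
    total = 0
    for j in range(m):
        total += (sp.re(gamma**2 * a[j] * a[2 * m - j])
                  + abs(gamma)**2 * abs(a[j])**2) / (m - j)
    return sp.pi * total

print(sp.simplify(C(p1) + C(p2) + C(p3)))  # expected: pi*r**2*(1 - r**2)/2
```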
There are other known conditions for minimal surfaces to be unstable. For example, let \(G:\overline{\Omega}\to S^{2}\) be the Gauss map for a minimal surface. A classical result of Schwarz says that if the first Dirichlet eigenvalue for the Laplacian on \(G(\overline{\Omega})\) is less than 2, then the minimal surface is unstable [18] (see also [2]). For the Enneper surface, the stereographic projection of the Gauss map \(G\) is \(g(z)=z\). For \(r>1\), \(G(\overline{\mathbb{D}_{r}})\) is a spherical cap containing the upper hemisphere, and hence the first Dirichlet eigenvalue for the Laplacian is less than 2 (see also [15, §117]).

We must comment that the methods developed here using quasiconformal maps are not strictly necessary to prove Theorems C and D. For these results, the self-maps variations simply provide a new model for computation, which happens to lend itself well to the situation. We explain this point carefully right after proving Theorem C.

### Acknowledgments

Vladimir Markovic is supported by the Simons Investigator Award 409745 from the Simons Foundation. Nathaniel Sagman is funded by the FNR grant O20/14766753, _Convex Surfaces in Hyperbolic Geometry._

## 2. Preliminaries

Let \(S\) be a Riemann surface, not necessarily compact and possibly with boundary. Since we will work with harmonic maps to \(\mathbb{R}\)-trees in Section 3, we define harmonic maps in the metric space context.

### Harmonic and minimal maps

Let \(\nu\) be a smooth metric on \(S\) compatible with the complex structure. Let \((M,d)\) be a complete and non-positively curved (NPC) length space, and \(h:S\to M\) a Lipschitz map. Korevaar-Schoen [7, Theorem 2.3.2] associate to \(h\) a locally \(L^{1}\) measurable metric \(g=g(h)\), defined locally on pairs of Lipschitz vector fields, which plays the role of the pullback metric. If \(h\) is a \(C^{1}\) map to a smooth Riemannian manifold \((M,\sigma)\) and the distance \(d\) is the one induced by \(\sigma\), then \(g(h)\) is represented by the pullback metric \(h^{*}\sigma\). The energy density is the locally \(L^{1}\) function

\[e(h)=\frac{1}{2}\mathrm{trace}_{\nu}g(h), \tag{5}\]

and the total energy, which is allowed to be infinite, is

\[\mathcal{E}(S,h)=\int_{S}e(h)dA, \tag{6}\]

where \(dA\) is the area form of \(\nu\). We comment here that the measurable 2-form \(e(h)dA\) does not depend on the choice of compatible metric \(\nu\), but only on the complex structure.

**Definition 2.1**.: \(h\) is harmonic if it is a critical point for the energy \(h\mapsto\mathcal{E}(S,h)\). If \(\partial S\neq\emptyset\), we ask that \(h\) is critical among other Lipschitz maps with the same boundary values.

Let \(g_{ij}(h)\) be the components of \(g(h)\) in a holomorphic local coordinate \(z=x_{1}+ix_{2}\). The Hopf differential of a map \(h\) is the measurable tensor given in the local coordinate by

\[\phi(h)dz^{2}=\frac{1}{4}(g_{11}(h)(z)-g_{22}(h)(z)-2ig_{12}(h)(z))dz^{2}. \tag{7}\]

In the Riemannian setting, this is

\[\phi(h)(z)=h^{*}\sigma\Big{(}\frac{\partial}{\partial z},\frac{\partial}{\partial z}\Big{)}(z)dz^{2}.\]

When \(h\) is harmonic, even in the metric space setting, the Hopf differential is represented by a holomorphic quadratic differential.

**Definition 2.2**.: The map \(h\) is minimal if it is harmonic and the Hopf differential vanishes identically. In the Riemannian setting, a non-constant minimal map is a branched minimal immersion.
For a harmonic map to a product space, it is clear from definitions (5) and (7) that the energy density and the Hopf differential are the sum of the energy densities and the Hopf differentials of the component maps respectively.

Let \(X\) be a complete NPC length space. Given an action \(\rho:\pi_{1}(\Sigma_{g})\to\operatorname{Isom}(X)\) and a \(\rho\)-equivariant map \(h:\tilde{S}\to X\), the energy density is invariant under the \(\pi_{1}(\Sigma_{g})\) action on \(\tilde{S}\) by deck transformations, and hence descends to a function on \(S\). Total energy is defined as in (6) by integrating the density against the area form on \(S\), and we say that \(h\) is harmonic if it is a critical point of the total energy among other \(\rho\)-equivariant maps. Similarly, \(h\) is minimal if it is harmonic and the Hopf differential, which also descends to \(S\), is zero.

### Quasiconformal maps

For details on results below, we refer the reader to [1].

**Definition 2.3**.: An orientation preserving homeomorphism \(f\) between domains in \(\mathbb{C}\) is quasiconformal if

1. the partial derivatives with respect to the coordinates \(z\) and \(\overline{z}\) exist almost everywhere and can be represented by locally integrable functions \(f_{z}\) and \(f_{\overline{z}}\), and
2. there exists \(k\in[0,1)\) such that \(|f_{\overline{z}}|\leq k|f_{z}|\).

A map between Riemann surfaces \(f:S\to S^{\prime}\) is quasiconformal if any holomorphic local coordinate representation is a quasiconformal map. The Beltrami form is the measurable tensor represented in local coordinates by

\[\mu=\mu(z)\frac{d\overline{z}}{dz}=\frac{f_{\overline{z}}(z)}{f_{z}(z)}\frac{d\overline{z}}{dz}.\]

Although \(\mu(z)\) is not globally defined, the transformation law ensures that the norm \(|\mu(z)|\) is. \(L^{\infty}_{1}(S)\) is defined as the open unit ball of the space of measurable tensors of the form \(\mu(z)\frac{d\overline{z}}{dz}\).

**Theorem 2.4** (Measurable Riemann mapping theorem).: _Let \(\hat{\mathbb{C}}\) be the Riemann sphere and \(\mu\in L^{\infty}_{1}(\hat{\mathbb{C}})\). There exists a quasiconformal homeomorphism \(f^{\mu}:\hat{\mathbb{C}}\to\hat{\mathbb{C}}\) with Beltrami form \(\mu\). \(f^{\mu}\) is unique up to postcomposing by Mobius transformations._

It is important to note that if \(t\mapsto\mu(t)\) is a real analytic path in \(L^{\infty}_{1}(S)\), then \(t\mapsto f^{\mu(t)}\) and its distributional derivatives locally vary real analytically with respect to a suitable norm (see [1, Chapter V]).

For \(\mu\in L^{\infty}_{1}(\mathbb{D})\), we extend \(\mu\) to all of \(\hat{\mathbb{C}}\) by setting \(\mu=0\). There is a unique choice of Mobius transformation so that we can make the definition below.

**Definition 2.5**.: The normal solution to the Beltrami equation for \(\mu\) is the unique solution \(f^{\mu}:\mathbb{C}\to\mathbb{C}\) satisfying \(f^{\mu}(0)=0\) and \(f^{\mu}_{z}(z)-1\in L^{p}(\mathbb{C})\) for all \(p>2\).

Next we state the Reich-Strebel energy formula (originally equation 1.1 in [16]). Here \(S\) is any Riemann surface, \(h:S\to M\) is a Lipschitz map to a metric space of finite total energy, and \(f:S\to S^{\prime}\) is a quasiconformal map between Riemann surfaces. Let \(\mu\) be the Beltrami form of \(f\), \(J_{f^{-1}}\) the Jacobian of \(f^{-1}\), and \(\phi\) the Hopf differential of \(h\), which need not be holomorphic.
One can verify the identity:

\[e(h\circ f^{-1})=(e(h)\circ f^{-1})J_{f^{-1}}+2(e(h)\circ f^{-1})J_{f^{-1}}\frac{(|\mu_{f}|^{2}\circ f^{-1})}{1-(|\mu_{f}|^{2}\circ f^{-1})}-4\text{Re}\Big{(}(\phi(h)\circ f^{-1})J_{f^{-1}}\frac{(\mu_{f}\circ f^{-1})}{1-(|\mu_{f}|^{2}\circ f^{-1})}\Big{)}.\]

Integrating against the area form, we arrive at the proposition below.

**Proposition 2.6**.: _The formula_

\[\mathcal{E}(S^{\prime},h\circ f^{-1})-\mathcal{E}(S,h)=-4\text{Re}\int_{S}\phi(h)\cdot\frac{\mu}{1-|\mu|^{2}}+2\int_{S}e(h)\cdot\frac{|\mu|^{2}}{1-|\mu|^{2}}dA \tag{8}\]

_holds._

When the target is an \(\mathbb{R}\)-tree, which of course includes \(\mathbb{R}\), we'll explain that \(e(h)dA\) is represented by \(2|\phi(h)|\). Consequently, in the cases of interest, the formula (8) involves only \(\phi\) and \(\mu\).

## 3. Minimal maps into products of \(\mathbb{R}\)-trees

In this section, \(S\) is a closed Riemann surface structure on \(\Sigma_{g}\).

### Harmonic maps to \(\mathbb{R}\)-trees

**Definition 3.1**.: An \(\mathbb{R}\)-tree is a length space \((T,d)\) such that any two points are connected by a unique arc, and every arc is a geodesic, isometric to a segment in \(\mathbb{R}\).

A point \(x\in T\) is a vertex if the complement \(T\backslash\{x\}\) has more than two components. Otherwise it is said to lie on an edge.

The vertical (resp. horizontal) foliation of \(\phi\in QD(S)\) is the singular foliation whose leaves are the integral curves of the line field on \(S\backslash\phi^{-1}(0)\) on which \(\phi\) is a positive (resp. negative) real number. The singularities are standard prongs at the zeros, with a zero of order \(k\) corresponding to a prong with \(k+2\) segments. Both foliations come with transverse measures \(|\text{Re}\sqrt{\phi}|\) and \(|\text{Im}\sqrt{\phi}|\) respectively (see [5, Expose 5] for precise definitions). Throughout, we work with the vertical foliation.

Lifting to a singular measured foliation on a universal cover \(\tilde{S}\), we define an equivalence relation on \(\tilde{S}\) by \(x\sim y\) if \(x\) and \(y\) lie on the same leaf. The quotient space \(\tilde{S}/\sim\) is denoted \(T\). Pushing the transverse measure down via the projection \(\pi:\tilde{S}\to T\) yields a distance function \(d\) that turns \((T,d)\) into an \(\mathbb{R}\)-tree, with an induced action \(\rho:\pi_{1}(S)\to\text{Isom}(T,d).\) Under this distance, the projection map \(\pi:\tilde{S}\to(T,d)\) is \(\rho\)-equivariant and harmonic [21, Section 4].

The energy and the Hopf differential of the projection map \(\pi\) can be described explicitly. At a point \(p\in\tilde{S}\) at which \(\phi(p)\neq 0\), the map locally isometrically factors through a segment in \(\mathbb{R}\). In a small enough neighbourhood around that point, \(g(h)\) is represented by the pullback metric of the locally defined map to \(\mathbb{R}\). From this, we see that the energy density and the Hopf differential have continuous representatives equal to \(\nu^{-1}|\phi|/2\) and \(\phi/4\) respectively.

For any other Riemann surface \(S^{\prime}\) representing a point in \(\mathbf{T}_{g}\), there is a unique \(\rho\)-equivariant harmonic map \(\tau:\tilde{S}^{\prime}\to(T,d)\) (see [21]). The energy functional on Teichmuller space \(\mathbf{E}_{\rho}:\mathbf{T}_{g}\to[0,\infty)\) is defined by \(\mathbf{E}_{\rho}(S^{\prime})=\mathcal{E}(S^{\prime},\tau)\).

Now we turn to Theorem A. Suppose that \(\phi_{1},\ldots,\phi_{n}\in QD(S)\) sum to \(0\).
For each \(i\), we have an action of \(\pi_{1}(\Sigma_{g})\) on an \(\mathbb{R}\)-tree \((T_{i},d_{i})\) and an equivariant harmonic projection map \(\pi_{i}:\tilde{S}\to(T_{i},d_{i})\). We assemble the product of \(\mathbb{R}\)-trees \(X\) with the product action \(\rho:\pi_{1}(\Sigma_{g})\to\operatorname{Isom}(X)\) and product map \(\pi=(\pi_{1},\dots,\pi_{n}).\) The energy functional \(\mathbf{E}_{\rho}\) on \(\mathbf{T}_{g}\) for \(\rho\) is the sum of the energy functionals for each component action. \(\pi\) is not only harmonic but also minimal. Theorem A is about determining when \(S\) minimizes \(\mathbf{E}_{\rho}.\)

The new main inequality comes out of the formula (8). Let \(S^{\prime}\) be another Riemann surface structure on \(\Sigma_{g}\) and let \(f_{1},\dots,f_{n}:S\to S^{\prime}\) be mutually homotopic quasiconformal maps with Beltrami forms \(\mu_{i}\). We lift each \(f_{i}\) to a quasiconformal map \(\tilde{f}_{i}\) between the universal covers. Putting previous results in our setting, we have

**Proposition 3.2**.: \(\mathbf{E}_{\rho}(S)=\mathcal{E}(S,\pi)=\sum_{i=1}^{n}\mathcal{E}(S,\pi_{i}),\) _and_

\[\sum_{i=1}^{n}\mathcal{E}(S^{\prime},\pi_{i}\circ\tilde{f}_{i}^{-1})-\sum_{i=1}^{n}\mathcal{E}(S,\pi_{i})=-\text{Re}\sum_{i=1}^{n}\int_{S}\phi_{i}\cdot\frac{\mu_{i}}{1-|\mu_{i}|^{2}}+\sum_{i=1}^{n}\int_{S}|\phi_{i}|\cdot\frac{|\mu_{i}|^{2}}{1-|\mu_{i}|^{2}}.\]

Hence, as we stated in Section 1.1, the new main inequality (1) is equivalent to

\[\mathbf{E}_{\rho}(S)=\sum\mathcal{E}(S,\pi_{i})\leq\sum\mathcal{E}(S^{\prime},\pi_{i}\circ\tilde{f}_{i}^{-1}).\]

One direction of Theorem A is therefore clear: if \(S\) is a global minimum, then (1) holds for any choice of \(f_{1},\dots,f_{n}.\) To prove the harder direction of Theorem A, we show that we can nearly factor harmonic maps to \(\mathbb{R}\)-trees arising from Jenkins-Strebel differentials.

### Jenkins-Strebel differentials and the main proof

Given a singular measured foliation on \(S\), we say that a leaf entering or exiting a singular point is critical, and that a leaf connecting two singular points is a saddle connection. If two leaves connect to the same singular point, we say that they lie on the same critical trajectory. So in particular, if two singular points are connected by a saddle connection, then they lie in the same critical trajectory.

A differential \(\phi\in\operatorname{QD}(S)\) is Jenkins-Strebel if every non-critical leaf of the vertical measured foliation is a closed circle. The complement of the set of critical trajectories is a disjoint union of cylinders \(C_{1},\dots,C_{p}\), each foliated by the vertical leaves. Each cylinder \(C_{k}\) corresponds to the homotopy class of its core curve, say \(\gamma_{k}\), so \(p\) is at most \(3g-3.\) The reader should be aware that it's more common to define the Jenkins-Strebel condition in terms of the horizontal foliation.

The length of any arc connecting the boundaries of the cylinder \(C_{k}\) under the measure \(|\text{Re}\sqrt{\phi}|\) is called the height of the cylinder, and denoted \(h_{k}\). Likewise, the length of any of the leaves under the measure \(|\text{Im}\sqrt{\phi}|\), say \(l_{k}\), is called the length. In a holomorphic coordinate \(z=x+iy\) on \(C_{k}\) that is conformal with respect to the metric \(|\phi|\), vertical leaves are the circles of the form \(\{x_{0}\}\times[0,l_{k}]/\{(x_{0},0)\sim(x_{0},l_{k})\}\), and horizontal leaves are the lines \([0,h_{k}]\times\{y_{0}\}\).
When \(\phi\) is a Jenkins-Strebel differential, the \(\mathbb{R}\)-tree \((T,d)\) is locally compact and a genuine metric tree. The quotient by the action of \(\rho\), which will always be denoted \((G,s)\), is a metric graph. Each edge in \((G,s)\) corresponds to a cylinder \(C_{k}\), and the length of the edge under \(s\) is exactly the height \(h_{k}\). Note the following converse.

**Lemma 3.3**.: _Suppose that \((T,d)\) is a metric tree, and the graph \((G,s)\) has \(p\) edges with lengths \(h_{1},\dots,h_{p}\). Then \(\phi\) is Jenkins-Strebel and has \(p\) cylinders with heights \(h_{1},\dots,h_{p}\)._

Proof.: First, descend to a map \(S\to(T,d)/\rho(\pi_{1}(\Sigma_{g})).\) Locally isometrically factoring the map near the preimage of an edge point, the regular value theorem yields that preimages of edge points, i.e., leaves in the vertical foliation, are closed circles. Points on the same edge correspond to homotopic closed circles. The circles corresponding to an edge foliate the cylinders that make up the decomposition for \(\phi\). By definition of the transverse measure, the height is \(h_{k}\).

In the situation above, the homotopy classes of the \(\gamma_{k}\) are determined by \((T,d)\). For more details, see [19]. We say that \(\phi\) is a maximal Jenkins-Strebel differential if the number of cylinders is \(3g-3\).

**Lemma 3.4**.: _Maximal Jenkins-Strebel differentials are dense in \(\text{QD}(S)\) with respect to the \(L^{1}\) norm._

Proof.: It is foundational that Jenkins-Strebel differentials are dense in \(\text{QD}(S)\) with respect to the \(L^{1}\) norm [3]. It is proved in [10, Theorem 1.6] that any Jenkins-Strebel differential can be approximated in \(L^{1}\) by maximal ones.

The main step in the proof of Theorem A is the lemma below.

**Lemma 3.5** (Nearly factoring harmonic maps).: _Let \(\pi:\tilde{S}\to(T,d)\) be a \(\rho\)-equivariant harmonic map to an \(\mathbb{R}\)-tree arising from a maximal Jenkins-Strebel differential. Let \(S^{\prime}\) be another Riemann surface. Then there exists a sequence of quasiconformal maps \(f_{n}:S\to S^{\prime}\) in the homotopy class of the identity such that_

\[\lim_{n\to\infty}\mathcal{E}(S^{\prime},\pi\circ\tilde{f}_{n}^{-1})=\mathbf{E}_{\rho}(S^{\prime}). \tag{9}\]

The lemma is probably true for any \(\phi\in\text{QD}(S)\), but the proof would be more involved. Our argument for Theorem A requires just the Jenkins-Strebel case. We now prove Theorem A, deferring the proof of Lemma 3.5 to the next two subsections. Resume the notation from the introduction.

Proof of Theorem A.: In view of the comments in Section 3.1, we only need to prove that if the new main inequality always holds for \(\phi_{1},\dots,\phi_{n}\), then \(S\) minimizes \(\mathbf{E}_{\rho}.\) We assume for the sake of contradiction that there exists a Riemann surface \(S^{\prime}\) representing another point in \(\mathbf{T}_{g}\) and an \(\epsilon>0\) such that

\[\mathbf{E}_{\rho}(S^{\prime})+\epsilon<\mathbf{E}_{\rho}(S).\]

Via Lemma 3.4, for each \(i\) we find a sequence of maximal Jenkins-Strebel differentials \((\phi_{i}^{m})_{m=1}^{\infty}\subset\text{QD}(S)\) that approximate \(\phi_{i}\) in the \(L^{1}\) norm. For each \(m\), we have a product of \(\mathbb{R}\)-trees \(X_{m}\) and we let \(\rho_{m}\) be the product action. By Lemma 3.3, the associated quadratic differentials on the Riemann surface \(S^{\prime}\) are all maximal Jenkins-Strebel.
For all \(m\) sufficiently large,

\[\mathbf{E}_{\rho_{m}}(S^{\prime})+\epsilon<\mathbf{E}_{\rho_{m}}(S).\]

Let \(\pi_{i}^{m}\) be the component harmonic maps from \(\tilde{S}\). Fixing a large enough \(m\), by Lemma 3.5 we can find a sequence of quasiconformal maps \(f_{i}^{r}:S\to S^{\prime}\) such that for \(r\) large enough,

\[\sum_{i=1}^{n}\mathcal{E}(S^{\prime},\pi_{i}^{m}\circ(\tilde{f}_{i}^{r})^{-1})+\epsilon<\mathbf{E}_{\rho_{m}}(S). \tag{10}\]

Choose any such large \(r\) and let \(\mu_{i}\) be the Beltrami form of \(f_{i}^{r}\). By Proposition 3.2, (10) is equivalent to

\[\text{Re}\sum_{i=1}^{n}\int_{S}\phi_{i}^{m}\frac{\mu_{i}}{1-|\mu_{i}|^{2}}>\sum_{i=1}^{n}\int_{S}|\phi_{i}^{m}|\frac{|\mu_{i}|^{2}}{1-|\mu_{i}|^{2}}+\epsilon.\]

Taking \(m\to\infty\), an application of the dominated convergence theorem yields

\[\text{Re}\sum_{i=1}^{n}\int_{S}\phi_{i}\frac{\mu_{i}}{1-|\mu_{i}|^{2}}\geq\sum_{i=1}^{n}\int_{S}|\phi_{i}|\frac{|\mu_{i}|^{2}}{1-|\mu_{i}|^{2}}+\epsilon>\sum_{i=1}^{n}\int_{S}|\phi_{i}|\frac{|\mu_{i}|^{2}}{1-|\mu_{i}|^{2}}.\]

That is, the new main inequality fails for \(\mu_{1},\dots,\mu_{n}\). This contradiction establishes the result of Theorem A.

### Model maps between pants

The remainder of the section is devoted to the proof of Lemma 3.5. We first recall Liu's solution to the heights problem [10]. Cutting the surface \(\Sigma_{g}\) along a maximal curve system yields a decomposition of the surface into pairs of pants. Let \(\Sigma_{0,3}\) be an oriented pair of pants. Liu first proves the lemma below.

**Lemma 3.6** (Lemma 2.1 in [10]).: _Let \((h_{1},h_{2},h_{3})\) be positive numbers. For any triple of positive numbers \((l_{1},l_{2},l_{3})\) there is a unique Riemann surface structure \(P\) on \(\Sigma_{0,3}\) that comes with a Jenkins-Strebel differential \(\varphi\) such that each boundary component is a vertical leaf, and the corresponding conformal cylinders \(C_{k},\,k=1,2,3,\) have height \(h_{k}\) and length \(l_{k}\)._

Fix a maximal curve system \(\gamma_{1},\ldots,\gamma_{3g-3}\), heights \(h_{1},\ldots,h_{3g-3}\), and lengths \(l_{1},\ldots,l_{3g-3}\). Using Lemma 3.6, on each pair of pants in the corresponding decomposition we get a Riemann surface structure and a Jenkins-Strebel differential realizing specified heights \(h_{k}/2\) and lengths \(l_{k}\). By conformally welding the pants together along the curves that we originally cut along, we obtain a Riemann surface structure on \(\Sigma_{g}\) and a Jenkins-Strebel differential with heights \(h_{1},\ldots,h_{3g-3}\) and lengths \(l_{1},\ldots,l_{3g-3}\).

To do the welding, we take a curve \(\gamma_{k}\) in two connecting pants \(P_{i}\) and \(P_{j}\), and parametrize \(\gamma_{k}\) in the two pants by some maps \(\delta_{ki}(t)\) and \(\delta_{kj}(t)\), \(t\in[0,1]\), with \(\delta_{ki}(0)=\delta_{ki}(1)\) and \(\delta_{kj}(0)=\delta_{kj}(1)\). We weld the pants by identifying \(\delta_{ki}(t)\) with \(\delta_{kj}(\theta_{k}-t)\), for some \(\theta_{k}\in\mathbb{R}\), and where \(\theta_{k}-t\) is taken mod \(\mathbb{Z}\). With the \(l_{k}\) and \(h_{k}\) specified, the only freedom we have is how much we twist when we do the welding. It is proved in [10, Theorem 2.3] that any pair consisting of a Riemann surface and a maximal Jenkins-Strebel differential is obtained in this fashion.

We construct the maps \(f_{n}\) for Lemma 3.5 by building them on individual pants, and then gluing the maps together and possibly twisting to account for welding.
Lemma 3.7 is the localized version of Lemma 3.5 that applies to \(\Sigma_{0,3}\). One difficulty is that the critical trajectories in pants can be topologically distinct: for the pants in Lemma 3.6, there are three possibilities for \(\varphi\) (see the proof of Lemma 2.1 in [10]).

1. If \(l_{1}<l_{2}+l_{3}\), then \(\varphi\) has two simple zeros connected by three saddle connections.
2. If \(l_{1}=l_{2}+l_{3}\), then \(\varphi\) has a single double zero.
3. If \(l_{1}>l_{2}+l_{3}\), then \(\varphi\) has two simple zeros that each lie on their own loop in the critical trajectory and that are connected by a single saddle connection.

See Figure 1 below. In case (i), we say that the pair of pants has type i.

Figure 1. Cases 1, 2, and 3, arranged from left to right

In the situation above we can define a leaf space projection as usual. There's no need to pass to a covering space: we simply declare two points to be equivalent if they lie on the same leaf. The resulting quotient is a \(1\)-complex consisting of three line segments that have each been glued together at one endpoint. As before, we push the transverse measure down to get a distance function. The metric space is compact; the lengths of the segments are \(h_{1},h_{2},h_{3}\). We can extend the line segments to infinity to obtain an NPC space and apply the formalism from [7].

**Lemma 3.7**.: _For \(i,j\in\{1,2,3\}\), let \(P_{i}\) and \(P_{j}\) be Riemann surface structures of types \(i,j\) on \(\Sigma_{0,3}\) with the same heights and leaf space projections \(\pi_{i}\) and \(\pi_{j}\). There exists a sequence of quasiconformal maps \(f_{n}:P_{i}\to P_{j}\) and a constant \(C>0\) such that_

1. \(\lim_{n\to\infty}e(\pi_{i}\circ f_{n}^{-1})=e(\pi_{j})\) _almost everywhere,_
2. _and_ \(e(\pi_{i}\circ f_{n}^{-1})<C\)_._

Note that since the heights are the same, the quotient graphs are isometric. We write \((G,s)\) for the graph.

Proof.: Let \(\varphi_{i}\) and \(\varphi_{j}\) be the two holomorphic quadratic differentials. Let \(C_{k}^{i}\) and \(C_{k}^{j}\) be the conformal cylinders, \(k=1,2,3\), with core curve classes \(\gamma_{k}^{i}\), \(\gamma_{k}^{j}\). We split the proof into cases.

First, \((i,j)=(1,1).\) Choose an identification of the critical points. Each cylinder \(C_{k}^{i}\), \(C_{k}^{j}\) is bounded by a circle on the critical trajectory that is split into two segments when we remove the critical points. We map the circle for \(C_{k}^{i}\) onto the corresponding circle for \(C_{k}^{j}\) in a way that maps critical points onto critical points according to our identification and is constant speed with respect to the singular metrics \(|\varphi_{i}|\) and \(|\varphi_{j}|\) on the segments. In conformal coordinates on each cylinder \(C_{k}^{i}\), \(C_{k}^{j}\), take the straight horizontal lines from the critical points to the boundary curve, which cut each non-critical leaf into two segments. Each edge point of \((G,s)\) corresponds to a unique non-critical leaf for \(\varphi_{i}\) and a unique non-critical leaf for \(\varphi_{j}\). On each segment of each given non-critical leaf, we define \(f\) to be constant speed with respect to \(|\varphi_{i}|\) and \(|\varphi_{j}|\), mapping intersections with the horizontal line in \(P_{i}\) to the intersections with the line in \(P_{j}\). Since the metrics are smooth, these constant speed maps vary smoothly on the complement of the critical trajectory and the horizontal lines. The resulting map \(f\) is therefore quasiconformal everywhere and smooth almost everywhere.
The map \(f\) satisfies \(\pi_{i}\circ f^{-1}=\pi_{j}\). We set \(f_{n}=f\) for all \(n\).

For \((i,j)=(2,2)\) and \((i,j)=(3,3)\), we can go by essentially the same procedure, since the critical trajectories are the same. Again, we remove critical points and map the resulting segments of the critical trajectories onto each other in a constant speed way. In the \((2,2)\) case, the critical trajectory is split into two segments, and in the \((3,3)\) case it is split into three segments. We then take the horizontal lines from the critical points to the boundaries and remove them. In the \((2,2)\) case there are two cylinders such that removing the line has the effect of turning each circle into a segment, and one cylinder (the one of length \(l_{1}\)) such that each circle is broken into two segments. In the \((3,3)\) case we have the same thing. We then choose constant speed maps between the segments as before.

Next, we treat \((i,j)=(1,2)\). Let \(l_{1},l_{2},l_{3}\) be the lengths of the boundary curves for \(P_{2}\), with \(l_{1}=l_{2}+l_{3}\). Every pair of pants can be obtained by gluing conformal rectangles to get a hexagon and then doubling the hexagon along three boundary curves (see [10, Lemma 2.1] for precise expressions in coordinates). By slightly modifying this construction of \(P_{2}\), we can create a Riemann surface structure \(P_{2}^{n}\) on \(\Sigma_{0,3}\) with the same heights and so that the lengths \(l_{1}^{n},l_{2}^{n},l_{3}^{n}\) satisfy \(l_{2}^{n}=l_{2}\), \(l_{3}^{n}=l_{3}\), and \(l_{1}^{n}=l_{1}-2^{-n}.\) For each \(n\), the case \((i,j)=(1,1)\) gives us a quasiconformal map \(f_{n}:P_{1}\to P_{2}^{n}\) intertwining the harmonic maps from \(P_{1}\) and \(P_{2}^{n}\) to \((G,s)\). We postcompose with the uniformly quasiconformal identity map from \(P_{2}^{n}\to P_{2}\) to turn \(f_{n}\) into a map from \(P_{1}\) to \(P_{2}.\) We assume the choice of identification of critical points in the \((1,1)\) construction is the same for all \(n\).

\(f_{n}\) has two speeds on each circle in the foliation, with speed determined by \(|\varphi_{i}|\) and the Jenkins-Strebel differential on \(P_{2}^{n}\). The horizontal line segments in the construction above depend only on the location of a critical point for the foliation, which is converging with \(n.\) The associated Jenkins-Strebel differentials are converging with the Riemann surface structures (their \(L^{1}\) norms are uniformly bounded, completely determined by the heights and lengths). Hence, all derivatives of \(f_{n}\) are uniformly bounded, in fact locally uniformly bounded below on the complement of the critical trajectory, and therefore \(f_{n}\) converges to a continuous map \(f\) such that \(\pi_{i}=\pi_{j}\circ f\). Moreover, \(\pi_{i}\circ f_{n}^{-1}\) converges to \(\pi_{j}\) in the space of Lipschitz maps from \(P_{1}\to(G,s)\). Both the uniform bound and the convergence of \(e(\pi_{i}\circ f_{n}^{-1})\) come out of the definition of the \(L^{1}\) metric tensor from [7, Theorem 2.3.2].

The case \((i,j)=(2,1)\) is obtained by inverting the process above. Using the solution for \((i,j)=(1,2)\), we have quasiconformal maps \(g_{n}:P_{j}\to P_{i}\) limiting to a continuous map \(g\) that factors \(\pi_{i}\circ g=\pi_{j}\). At each step \(n\) we take \(f_{n}=g_{n}^{-1}\). Although there is no \(C^{0}\) limit, the bounds on the complement of the critical trajectory give that \(\pi_{i}\circ f_{n}^{-1}\) converges to \(\pi_{j}\) in the space of Lipschitz maps from \(P_{1}\to(G,s)\).
Since the critical trajectory has measure zero, the energy density converges pointwise almost everywhere and we have a uniform bound.

The case \((i,j)=(3,2)\) is analogous to the limiting process of the case \((i,j)=(1,2)\), except we replace \(P_{2}\) with pants \(P_{2}^{n}\) such that \(l_{1}^{n}=l_{1}+2^{-n}\), rather than \(l_{1}^{n}=l_{1}-2^{-n}\). Similarly, we invert that procedure to handle \((i,j)=(2,3)\).

We are left to do \((i,j)=(1,3)\) and \((3,1)\). For \((i,j)=(1,3)\) we choose an auxiliary pair of pants \(P_{2}\) of type 2, and compose the maps we obtain using the cases \((i,j)=(1,2)\) and \((i,j)=(2,3)\). By boundedness of derivatives and Beltrami forms away from the critical trajectories, convergence follows the same line of thought as above. Likewise, we compose the previous cases for \((i,j)=(3,1)\).

Figure 2. The bottom map describes the model map near the singular points for \((i,j)=(1,2)\). The map to the upper foliation illustrates the case \((i,j)=(1,1)\) near the singular points, which limits to the bottom map as we shrink the saddle connection.

### Nearly factoring harmonic maps

Equipped with our model maps, we give the proof of Lemma 3.5. From Lemma 3.3, the tree \((T,d)\) gives us the data of a maximal collection of curves \(\gamma_{1},\ldots,\gamma_{3g-3}\) cutting the surface into pants, as well as the heights \(h_{1},\ldots,h_{3g-3}\).

Proof of Lemma 3.5.: Let \(l_{1}^{1},\ldots,l_{3g-3}^{1}\) and \(l_{1}^{2},\ldots,l_{3g-3}^{2}\) be the lengths for the maximal Jenkins-Strebel differentials on \(S\) and \(S^{\prime}\) respectively. We can assume that \(S\) has been built with zero twisting, and we set \(\theta_{1},\ldots,\theta_{3g-3}\) to be the twisting angles for \(S^{\prime}\). We also have pants with Riemann surface structures \(P_{1}^{1},\ldots,P_{2g-2}^{1}\) and \(P_{1}^{2},\ldots,P_{2g-2}^{2}\) on \(S\) and \(S^{\prime}\) respectively. Using Lemma 3.7, we build model maps \(f_{k}^{n}:P_{k}^{1}\to P_{k}^{2}\) between the pants that nearly intertwine the restrictions of the harmonic maps to \((G,s)\).

We need to modify the \(f_{k}^{n}\) to account for the twisting in \(S^{\prime}\), so that we can glue the maps together for a globally defined map. We do the modification near each boundary component of each pair of pants individually. Take pants \(P_{k}^{1}\) and \(P_{k}^{2}\) and boundary curves on each one that we aim to properly identify. In the associated cylinder, choose a very small collar neighbourhood bounded by a non-singular vertical leaf. Working in conformal coordinates in the collar, precompose \(f_{k}^{n}\) with a map that is constant in the horizontal direction and twists with constant speed in an orientation preserving fashion in the vertical direction so as to properly identify the boundary curve in \(P_{k}^{1}\) with the boundary curve in \(P_{k}^{2}\). Since we're constant in the horizontal direction, the map \(\pi\circ(f_{k}^{n})^{-1}\) is unaffected, so points (1) and (2) from Lemma 3.7 continue to hold. Since the twisting is bounded, the map remains quasiconformal. We then glue the new maps on each pair of pants to obtain the map \(f_{n}\). Using points (1) and (2) from Lemma 3.7, an application of the dominated convergence theorem completes the proof.

With the proof of Theorem A complete, we can now comment on why the new main inequality is special to the leaf space projections.
Any equivariant harmonic map to an \(\mathbb{R}\)-tree is the composition of a leaf space projection and a map that folds edges onto each other (see [4] and [13, Section 4.1]). Two harmonic maps to the same \(\mathbb{R}\)-tree can arise from foldings of different leaf spaces. Consequently, the critical leaves for the Hopf differentials can look quite different, and we can't expect to be able to find quasiconformal maps that nearly intertwine the critical leaves, as we did in Lemma 3.7. In this general setting, it should be more promising to study maps to \(\mathbb{R}\)-trees that are nearby. One could perturb a variation of maps so that the critical structure is fixed, which eliminates the issue raised above. The most efficient way to perturb is to use the log-cut off trick, which negligibly affects the second variation of energy, but can force the third variation to blow up. Hence, for other maps to \(\mathbb{R}\)-trees, such as the maps to \(\mathbb{R}^{n}\) in the next section, the best one can hope for is the infinitesimal version of the new main inequality. ## 4. Classical minimal surfaces We return to the setup from Section 1.2: \(h=(h_{1},\ldots,h_{n}):\overline{\mathbb{D}}\to\mathbb{R}^{n}\) is a non-constant admissible minimal map with Weierstrass-Enneper data \(\alpha=(\alpha_{1},\ldots,\alpha_{n})\). We denote the Hopf differential of \(h_{i}\) by \(\phi_{i}=\alpha_{i}^{2}\). We first prove Theorem C, which is then used to prove Theorem B. We conclude with Theorem D. ### Variations by quasiconformal maps To properly begin, we need to explain how to vary quasiconformal maps. **Definition 4.1**.: Beltrami forms \(\mu,\nu\in L^{\infty}_{1}(\mathbb{D})\) are equivalent if the normal solutions \(f^{\mu}\) and \(f^{\nu}\) agree on \(\mathbb{C}\backslash\mathbb{D}\). The universal Teichmuller space \(\mathbf{T}\) has many definitions, and the resulting spaces can all be identified in a reasonable way. The model we take is \(\mathbf{T}=L^{\infty}_{1}(\mathbb{D})/\sim\), where \(\mu\sim\nu\) if \(\mu\) and \(\nu\) are equivalent. **Remark 4.2**.: It is more common to define \(\mathbf{T}\) by taking \(F^{\mu}=f^{\mu}/f^{\mu}(1)\) instead of \(f^{\mu}\). Under our definition, tangent vectors at \([\mu]=[0]\) have a more tractable expression. Tangent vectors in \(T_{[0]}\mathbf{T}\) should arise from functions in \(L^{\infty}(\mathbb{D})\) up to a certain identification. To make this identification explicit, we first recall the operator \(P\), defined on \(L^{p}(\mathbb{C})\), \(2<p<\infty\), by \[P(h)(z)=-\frac{1}{\pi}\int_{\mathbb{C}}h(\zeta)\Big{(}\frac{1}{\zeta-z}-\frac {1}{\zeta}\Big{)}dxdy.\] Secondly, the Beurling transform \(T\) is defined on \(C^{\infty}_{0}(\mathbb{C})\) by the principal value \[T(h)(z)=\lim_{\epsilon\to 0}-\frac{1}{\pi}\int_{|\zeta-z|>\epsilon}\frac{h( \zeta)}{(\zeta-z)^{2}}dxdy,\] and extends continuously to \(L^{p}(\mathbb{C})\), \(1<p<\infty\). For \(h\in L^{\infty}(\mathbb{D})\), we extend to \(\mathbb{C}\) by setting \(h=0\) on \(\mathbb{C}\backslash\mathbb{D}\), and we write \(P(h)\) and \(T(h)\) for \(P\) and \(T\) applied to the extension of \(h\). 
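To get a feel for \(P\), consider \(h=\chi_{\mathbb{D}}\), the indicator function of the disk. The function equal to \(\overline{z}\) on \(\overline{\mathbb{D}}\) and \(1/z\) outside vanishes at \(0\), satisfies \(F_{\overline{z}}=h\), and decays at infinity (compare Proposition 4.6 below), so it must equal \(P(\chi_{\mathbb{D}})\). The snippet below, an illustrative sanity check of ours and not part of the paper (it assumes only `numpy`), approximates the defining integral by Monte Carlo and compares against these closed-form values.

```python
# Monte Carlo sanity check that P applied to the indicator of the unit
# disk gives conj(z) inside the disk and 1/z outside.
import numpy as np

rng = np.random.default_rng(0)
N = 2_000_000
theta = 2 * np.pi * rng.random(N)
rad = np.sqrt(rng.random(N))      # sqrt for uniform sampling by area
zeta = rad * np.exp(1j * theta)   # uniform points on the unit disk

def P_indicator(z):
    # P(h)(z) = -(1/pi) * Integral_D (1/(zeta - z) - 1/zeta) dx dy;
    # Monte Carlo gives Integral_D g ~= pi * mean(g), so the pi's cancel.
    return -np.mean(1.0 / (zeta - z) - 1.0 / zeta)

for z, expected in [(0.3 + 0.2j, 0.3 - 0.2j), (2.0 + 0.0j, 0.5)]:
    print(z, P_indicator(z), expected)  # agree up to sampling error
```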
The normal solution to the Beltrami equation for \(\mu\in L^{\infty}_{1}(\mathbb{D})\) can be written explicitly in terms of \(P\) and \(T\):

\[f^{\mu}(z)=z+P(\mu)(z)+P(\mu T(\mu))(z)+P(\mu T(\mu T(\mu)))(z)+\ldots\]

So, if \(\mu_{t}=t\dot{\mu}+o(t)\) is a variation of Beltrami forms, then the normal solution along the variation is

\[f^{\mu_{t}}=z+tP(\dot{\mu})+o(t).\]

Therefore, \(\dot{\mu},\dot{\nu}\in L^{\infty}(\mathbb{D})\) give the same variation in \(\mathbf{T}\) if and only if \(P(\dot{\mu})=P(\dot{\nu})\) on \(\mathbb{C}\backslash\mathbb{D}\).

**Definition 4.3**.: \(\dot{\mu},\dot{\nu}\in L^{\infty}(\mathbb{D})\) are infinitesimally equivalent if \(P(\dot{\mu})=P(\dot{\nu})\) on \(\mathbb{C}\backslash\mathbb{D}\).

**Definition 4.4**.: The space \(\mathcal{V}\) from the introduction, our model for \(T_{[0]}\mathbf{T}\), is obtained by restricting every function of the form \(P(h)\), \(h\in L^{\infty}(\mathbb{D})\), to \(\mathbb{C}\backslash\mathbb{D}\).

In order to show that we can pick variations with lots of freedom, which we'll do to prove Theorems C and D, we justify the well-known fact below.

**Proposition 4.5**.: _For every \(f\in C^{\infty}_{0}(\mathbb{C})\) that is holomorphic on \(\mathbb{C}\backslash\mathbb{D}\), we can find \(\dot{\mu}\in L^{\infty}(\mathbb{D})\) with \(P(\dot{\mu})=f.\)_

The following basic result can be verified immediately.

**Proposition 4.6**.: _Assume \(h\in C^{\infty}_{0}(\mathbb{C})\). Then \(P(h)\) is smooth, \((P(h))_{\overline{z}}=h\), and \(P(h)(z)\) tends to \(0\) as \(|z|\to\infty\)._

Proof of Proposition 4.5.: Let \(f\in C^{\infty}_{0}(\mathbb{C})\) be holomorphic in \(\mathbb{C}\backslash\mathbb{D}\). Define the function \(\dot{\mu}\) on \(\mathbb{C}\) by \(\dot{\mu}=f_{\overline{z}}.\) By Proposition 4.6, \((P(\dot{\mu}))_{\overline{z}}=f_{\overline{z}}\), so \(f-P(\dot{\mu})\) is an entire function that is bounded, and therefore a constant. Since both \(f(z)\) and \(P(\dot{\mu})(z)\) tend to \(0\) as \(|z|\to\infty\), they are identically equal. Hence, this \(\dot{\mu}\) satisfies \(P(\dot{\mu})=f\).

Now we can formulate our problem more precisely. Recall from Section 2.2 that for harmonic functions to \(\mathbb{R}\), the Reich-Strebel computation gives the following.

**Lemma 4.7**.: _Let \(h:\mathbb{D}\to\mathbb{R}\) be a harmonic function with integrable Hopf differential \(\phi\), and \(f:\mathbb{C}\to\mathbb{C}\) a quasiconformal map with Beltrami form \(\mu\). The formula_

\[\mathcal{E}(h\circ f^{-1})-\mathcal{E}(h)=-4\text{Re}\int_{\mathbb{D}}\phi\cdot\frac{\mu}{1-|\mu|^{2}}dxdy+4\int_{\mathbb{D}}|\phi|\cdot\frac{|\mu|^{2}}{1-|\mu|^{2}}dxdy \tag{11}\]

_holds._

We call paths \(\mu_{i}(t):[0,t_{0}]\to L_{1}^{\infty}(\mathbb{D})\) equivalent if they project to the same path in \(\mathbf{T}\). We fix any \(\varphi\in\mathcal{V}\) and look for mutually equivalent \(C^{2}\) paths \(\mu_{i}^{t}\) tangent at time zero to \(\varphi\) in \(\mathbf{T}\), such that if \(f_{i}^{t}\) is the normal solution at time \(t\), then

\[\frac{d^{2}}{dt^{2}}|_{t=0}\sum_{i=1}^{n}\mathcal{E}(f_{i}^{t}(\mathbb{D}),h_{i}\circ(f_{i}^{t})^{-1})<0.\]

As we noted in the introduction, since energy dominates area, it follows that the variation \(h_{t}=(h_{1}\circ(f_{1}^{t})^{-1},\ldots,h_{n}\circ(f_{n}^{t})^{-1})\) decreases the area to second order.
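Before moving to the second variation, here is a closed-form consistency check of Lemma 4.7 on the simplest possible data: the harmonic function \(h(z)=\operatorname{Re}z\), whose Hopf differential is the constant \(\phi=(h_{z})^{2}=1/4\), precomposed with the inverse of the affine stretch \(f(z)=z+k\overline{z}\), whose Beltrami form is the constant \(\mu=k\). The test data is ours, not the paper's; the script (assuming `numpy`) evaluates both sides of (11).

```python
# Check of (11) for h(z) = Re z (phi = 1/4) and the affine quasiconformal
# map f(z) = z + k*conj(z), whose Beltrami form is the constant mu = k.
import numpy as np

k = 0.3  # any 0 < k < 1 works

# h o f^{-1}(w) = Re(w)/(1 + k), with energy density 1/(2(1 + k)^2) on the
# image ellipse f(D) of area pi(1 - k^2); E(D, h) = pi/2.
lhs = np.pi * (1 - k**2) / (2 * (1 + k) ** 2) - np.pi / 2

# Right-hand side of (11), with phi = 1/4 and |mu| = k on the disk.
phi = 0.25
rhs = 4 * np.pi * phi * (-k + k**2) / (1 - k**2)

print(lhs, rhs, -np.pi * k / (1 + k))  # all three values coincide
```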
### The second variation of energy

In [12, Lemma 3.2] and [12, Proposition 4.2], the first author computes the second variation of the new main inequality. In our context, this is the second variation of the energy. We recap the computation here.

**Proposition 4.8**.: _If \(\dot{\mu}_{1},\ldots,\dot{\mu}_{n}\in L^{\infty}(\mathbb{D})\) are mutually infinitesimally equivalent, then there exist \(C^{2}\) mutually equivalent paths \(\mu_{i}(t):[0,t_{0}]\to L_{1}^{\infty}(\mathbb{D})\) tangent to \(\dot{\mu}_{i}\) at \(t=0\) and with normal solutions \(f_{i}^{t}\) such that_

\[\frac{d^{2}}{dt^{2}}|_{t=0}\sum_{i=1}^{n}\mathcal{E}(h_{i}\circ(f_{i}^{t})^{-1})=4\text{Re}\sum_{i=1}^{n}\int_{\mathbb{D}}\phi_{i}\dot{\mu}_{i}T(\dot{\mu}_{i})dxdy+4\sum_{i=1}^{n}\int_{\mathbb{D}}|\phi_{i}||\dot{\mu}_{i}|^{2}dxdy. \tag{12}\]

Proof.: Let \(\mu_{i}(t)=t\dot{\mu}_{i}+t^{2}\ddot{\mu}_{i}+o(t^{2})\) be mutually equivalent paths with normal solutions \(f_{i}^{t}\). Differentiating the Reich-Strebel formula (11),

\[\frac{1}{4}\frac{d^{2}}{dt^{2}}|_{t=0}\sum_{i=1}^{n}\mathcal{E}(h_{i}\circ(f_{i}^{t})^{-1})=-\text{Re}\sum_{i=1}^{n}\int_{\mathbb{D}}\phi_{i}\ddot{\mu}_{i}dxdy+\sum_{i=1}^{n}\int_{\mathbb{D}}|\phi_{i}||\dot{\mu}_{i}|^{2}dxdy\]

(see [12, Lemma 3.2] for details). Crucially making use of the fact that \(\sum_{i=1}^{n}\phi_{i}=0\), i.e., that \(h\) is a minimal map, it follows from [12, Proposition 4.2] that we can choose mutually equivalent paths such that

\[\text{Re}\sum_{i=1}^{n}\int_{\mathbb{D}}\phi_{i}\ddot{\mu}_{i}dxdy=-\text{Re}\sum_{i=1}^{n}\int_{\mathbb{D}}\phi_{i}\dot{\mu}_{i}T(\dot{\mu}_{i})dxdy.\]

Putting the pieces together gives the result.

**Remark 4.9**.: Up to this point, we have not used that \(\phi_{i}=\alpha_{i}^{2}\). So in particular, Proposition 4.8 holds as well for minimal maps to \(\mathbb{R}\)-trees.

It is computed in [12, Section 6], using the relation \((P(h))_{z}=Th\) (distributionally), that

\[-\text{Re}\sum_{i=1}^{n}\int_{\mathbb{D}}\phi_{i}\dot{\mu}_{i}T(\dot{\mu}_{i})dxdy=\text{Re}\sum_{i=1}^{n}\int_{\mathbb{D}}(\alpha_{i}P(\dot{\mu}_{i}))_{z}(\alpha_{i}P(\dot{\mu}_{i}))_{\overline{z}}dxdy, \tag{13}\]

and

\[\sum_{i=1}^{n}\int_{\mathbb{D}}|\phi_{i}||\dot{\mu}_{i}|^{2}dxdy=\sum_{i=1}^{n}\int_{\mathbb{D}}|(\alpha_{i}P(\dot{\mu}_{i}))_{\overline{z}}|^{2}dxdy. \tag{14}\]

Substituting (13) and (14) into (12), we arrive at the following

**Proposition 4.10**.: _If \(\dot{\mu}_{1},\ldots,\dot{\mu}_{n}\in L^{\infty}(\mathbb{D})\) are mutually infinitesimally equivalent, then there exist \(C^{2}\) mutually equivalent paths \(\mu_{i}(t):[0,t_{0}]\to L^{\infty}_{1}(\mathbb{D})\) tangent to \(\dot{\mu}_{i}\) at \(t=0\) and with normal solutions \(f_{i}^{t}\) such that_

\[\frac{d^{2}}{dt^{2}}|_{t=0}\sum_{i=1}^{n}\mathcal{E}(h_{i}\circ(f_{i}^{t})^{-1})=4\text{Re}\sum_{i=1}^{n}\int_{\mathbb{D}}(\alpha_{i}P(\dot{\mu}_{i}))_{z}(\alpha_{i}P(\dot{\mu}_{i}))_{\overline{z}}dxdy+4\sum_{i=1}^{n}\int_{\mathbb{D}}|(\alpha_{i}P(\dot{\mu}_{i}))_{\overline{z}}|^{2}dxdy\]

\[=4\sum_{i=1}^{n}\mathcal{F}(\alpha_{i}P(\dot{\mu}_{i})),\]

_where \(\mathcal{F}\) is the function from Section 1.2._

### Proof of Theorem C

We continue in the setting above with an admissible \(h\) with Weierstrass-Enneper data \(\alpha=(\alpha_{1},\ldots,\alpha_{n})\), and \(\phi_{i}=\alpha_{i}^{2}\). We fix a variation \(\varphi\in\mathcal{V}\). Proposition 4.10 says that if we can find maps \(P(\dot{\mu}_{1}),\ldots,P(\dot{\mu}_{n})\) on \(\mathbb{D}\) extending to \(\varphi\) on \(\mathbb{C}\setminus\mathbb{D}\) such that \(\sum_{i=1}^{n}\mathcal{F}(\alpha_{i}P(\dot{\mu}_{i}))<0\), then \(\varphi\) destabilizes \(h\).
The first question is: how do we pick \(P(\dot{\mu}_{i})\) with the best chance of destabilizing \(h\)? If we could pick \(P(\dot{\mu}_{i})\) so that there is a choice of quasiconformal maps \(f_{i}^{t}(z)=z+tP(\dot{\mu}_{i})(z)+o(t)\) such that \(h_{i}\circ(f_{i}^{t})^{-1}\) is harmonic, then \(h_{i}\circ(f_{i}^{t})^{-1}\) would minimize the energy over maps with the same boundary values at each time \(t\). Recalling the local pictures from Section 3, picking such \(f_{i}^{t}\) is not in general possible. However, we can still argue heuristically. Given some choice of \(P(\dot{\mu}_{i})\) and accompanying variation of quasiconformal maps \(f_{i}^{t}\), define \(\dot{h}_{i}:\overline{\mathbb{D}}\to\mathbb{R}\) by

\[h_{i}\circ(f_{i}^{t})^{-1}=h_{i}+t\dot{h}_{i}+o(t).\]

Since the Laplacian is linear, if we demand that \(\dot{h}_{i}\) allows a variation of harmonic functions, then \(\dot{h}_{i}\) must be a harmonic function itself. Up to first order, the inverse of \(f_{i}^{t}\) is

\[(f_{i}^{t})^{-1}(z)=z-tP(\dot{\mu}_{i})(z)+o(t).\]

Computing via the chain rule,

\[\dot{h}_{i}=\frac{d}{dt}|_{t=0}h_{i}\circ(f_{i}^{t})^{-1}=-2\text{Re}(\alpha_{i}P(\dot{\mu}_{i})).\]

Let \(v_{i}\) be the harmonic extension of the complex-valued function \((\frac{\partial}{\partial z}h_{i})\cdot\varphi|_{\partial\mathbb{D}}\). If we pretend that we can pick \(P(\dot{\mu}_{i})\) to be \((\frac{\partial}{\partial z}h_{i})^{-1}v_{i}\), then the choice would minimize the map

\[(g_{1},\ldots,g_{n})\mapsto\sum_{i=1}^{n}\mathcal{F}(\alpha_{i}g_{i}),\]

where the \(g_{i}\) range over every map extending \(\varphi\), since the corresponding path \(f_{i}^{t}\) would minimize the second derivative of \(\mathcal{E}(h_{i}\circ(f_{i}^{t})^{-1})\) at time zero. The problem of course is that these choices for \(P(\dot{\mu}_{i})\) blow up at the zeros of \(\frac{\partial}{\partial z}h_{i}\). We're saved by the log cut-off trick, which allows us to smoothly perturb \(v_{i}\) to be zero in a neighbourhood of the zero set of \(\frac{\partial}{\partial z}h_{i}\), so that the division is possible, while only changing the evaluation of \(\mathcal{F}\) by a controlled amount. The computation for the functional \(\mathcal{F}\) is carried out in [12, Section 5].

**Proposition 4.11** (Proposition 5.1 in [12]).: _Let \(Z\subset\mathbb{D}\) be a finite set of points and \(f:\overline{\mathbb{D}}\to\mathbb{C}\) a smooth function. Then for every \(\epsilon>0\), there exists smooth \(g:\overline{\mathbb{D}}\to\mathbb{C}\) such that_

1. \(f(z)=g(z)\) _for_ \(z\) _in a neighbourhood of_ \(\partial\mathbb{D}\)_._
2. \(g(z)=0\) _for_ \(z\) _in some neighbourhood of each_ \(z_{0}\in Z\)_._
3. \(|\mathcal{F}(f)-\mathcal{F}(g)|<\epsilon\)_._

We're ready for the formal proof of the theorem.

Proof of Theorem C.: Suppose

\[\mathcal{F}_{\alpha}(\varphi):=\sum_{i=1}^{n}\mathcal{F}(v_{i})<0.\]

Let \(\epsilon>0\) be small enough so that

\[\mathcal{F}_{\alpha}(\varphi)+\epsilon<0. \tag{15}\]

Let \(Z_{i}\) be the zero set of \(\frac{\partial}{\partial z}h_{i}\), and apply Proposition 4.11 to \((v_{i},Z_{i})\) to find \(g_{i}:\overline{\mathbb{D}}\to\mathbb{C}\) such that \(g_{i}=(\frac{\partial}{\partial z}h_{i})\cdot\varphi\) on \(\partial\mathbb{D}\), and

\[|\mathcal{F}(v_{i})-\mathcal{F}(g_{i})|<\frac{\epsilon}{n}. \tag{16}\]

Via Proposition 4.5, we can choose \(\dot{\mu}_{i}\) so that \(P(\dot{\mu}_{i})=\alpha_{i}^{-1}g_{i}\). By (15) and (16),

\[\sum_{i=1}^{n}\mathcal{F}(g_{i})<0.\]

Theorem C now follows from Proposition 4.10.
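For concreteness, here is the harmonic extension in the only form the explicit computations of Section 4.5 need: boundary functions that are trigonometric polynomials. The tiny helper below is our own illustration (not the paper's notation); it implements the rule that \(e^{ik\theta}\) extends harmonically to \(z^{k}\) for \(k\geq 0\) and to \(\overline{z}^{-k}\) for \(k<0\).

```python
# Harmonic extension of a trigonometric-polynomial boundary function
# sum_k c_k e^{i k theta}: each mode e^{i k theta} extends harmonically to
# z^k (k >= 0) or conj(z)^{-k} (k < 0).
import numpy as np

def harmonic_extension(coeffs, z):
    """coeffs: dict {k: c_k} of Fourier modes; z: point(s) in the disk."""
    return sum(c * (z**k if k >= 0 else np.conj(z) ** (-k))
               for k, c in coeffs.items())

# Example: the boundary values of p(z) * z^{-1}, p(z) = a0 + a1 z + a2 z^2,
# extend to a0*conj(z) + a1 + a2*z, i.e. the function f_{p, gamma, 1} of
# Section 4.5 with gamma = 1 (coefficients below are made up).
a0, a1, a2 = 0.5, 1.0 - 1.0j, 2.0
print(harmonic_extension({-1: a0, 0: a1, 1: a2}, 0.2 + 0.1j))
```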
Theorem C can probably also be proved by using the destabilizing strategy mentioned in the introduction of varying the boundary parametrization and taking harmonic extensions. To understand how to relate the two methods, we need to know how to turn \(\varphi\) into a variation of boundary parametrizations. \(\mathbf{T}\) is also the space of quasisymmetric maps of \(\partial\mathbb{D}\) mod Mobius transformations. In this model, the tangent space at the identity identifies with the Zygmund class of vector fields on \(\partial\mathbb{D}\)[14, Section 2]. Nag finds a beautiful identification of the tangent spaces to the different models in [14, Section 3], which explains how to get a Zygmund vector field out of an admissible holomorphic map on \(\mathbb{C}\backslash\mathbb{D}.\) We gave the proof of Theorem C because it is interesting to see it from our angle, and because elements of the proof will be used toward Theorem B.

### The self-maps index

Continuing in our usual setting and keeping the notation from above, we now prove Theorem B and its corollary.

**Definition 4.12**.: The real quadratic form \(\mathbf{L}_{h}:\mathcal{V}\to\mathbb{R}\) is defined by \(\mathbf{L}_{h}(\varphi)=\sum_{i=1}^{n}\mathcal{F}(v_{i})\), where \(v_{i}\) is the harmonic extension of \((\frac{\partial}{\partial z}h_{i})\cdot\varphi|_{\partial\mathbb{D}}.\) The self-maps index is the maximum dimension of a subspace on which \(\mathbf{L}_{h}\) is negative definite.

Noting that taking the Poisson extension is a linear operation, it is routine to check that \(\mathbf{L}_{h}\) is a real quadratic form.

Let \(m\) be the Euclidean metric on \(\mathbb{R}^{n}\), and denote the volume form by \(dV\). The area of a \(C^{2}\) map \(g\) from a domain \(\Omega\subset\mathbb{C}\) to \(\mathbb{R}^{n}\) is the area of the image \(g(\Omega)\subset\mathbb{R}^{n}\),

\[A(\Omega,g):=\int_{\Omega}g^{*}dV.\]

\(h\) may be only a branched immersion, but it is well-understood that the normal bundle, a priori defined where \(h\) is regular, extends real analytically over the branch points (see, for example, [6, Lemma 1.3]). This extension of the normal bundle is denoted \(N_{h}\subset h^{*}T\mathbb{R}^{n}\). Variations of the image surface are elements of \(\Gamma_{0}(N_{h})\), the space of \(C^{\infty}\) sections of \(N_{h}\) that extend to zero on \(\partial\mathbb{D}\), which we tacitly view as functions \(X:\mathbb{D}\to\mathbb{R}^{n}.\) The second variation of area is defined by a real quadratic form \(\mathbf{Q}_{h}:\Gamma_{0}(N_{h})\to\mathbb{R},\)

\[\mathbf{Q}_{h}(X)=\frac{d^{2}}{dt^{2}}\Big{|}_{t=0}A(\mathbb{D},h+tX)\]

(see [9, Theorem 32] for the well known formula for the right hand side). The usual index \(\operatorname{Ind}(h)\) is the maximal dimension of a subspace on which \(\mathbf{Q}_{h}\) is negative definite. Theorem B is the statement that \(\operatorname{Ind}(\mathbf{L}_{h})=\operatorname{Ind}(h).\)

Before we enter the proof, we recall the following application of the log cut-off trick in its usual form (see [13, Section 4.4] for a detailed explanation).

**Proposition 4.13**.: _Let \(\operatorname{Ind}_{0}(h)\) be the index of \(h\) restricted to variations in \(\Gamma_{0}(N_{h})\) that vanish on a neighbourhood of the critical points of every \(h_{i}\).
Then \(\operatorname{Ind}(h)=\operatorname{Ind}_{0}(h).\)_

Proof of Theorem B.: It was already explained in Section 4.1 that a destabilizing self-maps variation yields a variation of maps \(h_{t}:\overline{\mathbb{D}}\to\mathbb{R}^{n}\) that decreases area to second order. Pulling back the Euclidean metric from \(T\mathbb{R}^{n}\) to \(h^{*}T\mathbb{R}^{n}\) and orthogonally projecting the induced section of \(h^{*}T\mathbb{R}^{n}\) onto \(N_{h}\), we obtain a section \(X\in\Gamma_{0}(N_{h})\) with \(\mathbf{Q}_{h}(X)<0\). To prove the theorem, we need to show that if \(X\in\Gamma_{0}(N_{h})\) vanishes in a neighbourhood of the critical points of every \(h_{i}\) and destabilizes the area of \(h\), then we can find a destabilizing self-maps variation in a way that inverts the process above. For then \(\operatorname{Ind}(\mathbf{L}_{h})=\operatorname{Ind}(\mathbf{Q}_{h})\), and we can appeal to Proposition 4.13. We will apply Theorem C by finding a variation \(\varphi\in\mathcal{V}\) with \(\mathcal{F}_{\alpha}(\varphi)<0\).

Set \(h_{t}=h+tX.\) If \(h\) has branch points, then the pullback metric \(h^{*}m\) is degenerate at those points, and regular elsewhere. \(h^{*}m\) is conformal to the flat metric \(\sigma(z)=|dz|^{2}\) on \(\mathbb{D}\) in the sense that there is a bounded and \(C^{\infty}\) function \(u:\mathbb{D}\to[0,\infty)\) with isolated zeros exactly at the branch points of \(h\), and such that \(h^{*}m=u\sigma.\) Let \(U\) be a neighbourhood of the critical points of the \(h_{i}\) on which \(X\) vanishes. Since \(X=0\) in \(U\), \(h_{t}^{*}m=h^{*}m\) in \(U\). There exists \(t_{0}>0\) such that for \(t<t_{0}\), the degenerate locus of \(h_{t}^{*}m\) is equal to that of \(h^{*}m\). We define a family of non-degenerate \(C^{\infty}\) metrics \((\sigma_{t})_{t<t_{0}}\) on \(\mathbb{D}\) by

\[\sigma_{t}(z)=\begin{cases}\sigma(z),\;z\in U\\ u(z)^{-1}h_{t}^{*}m(z),\;z\in\mathbb{D}\backslash U\end{cases}.\]

We emphasize that \(h_{t}^{*}m\) is not necessarily conformally flat. For each \(t<t_{0}\), by the measurable Riemann mapping theorem, Theorem 2.4, we can find a Jordan domain \(\Omega_{t}\subset\mathbb{C}\) and a quasiconformal homeomorphism \(f_{t}:\mathbb{D}\to\Omega_{t}\) that takes \(\sigma_{t}\) to a conformally flat metric (this is a classical application). Observe that the Beltrami form \(\mu_{t}\) of each \(f_{t}\) extends to \(0\) on \(\partial\mathbb{D}\), since \(X\) extends to \(0\) on \(\partial\mathbb{D}.\) For each \(t\), we extend \(\mu_{t}\) to \(0\) on \(\mathbb{C}\backslash\mathbb{D}.\) We then take the \(L^{\infty}\) function \(\dot{\mu}=\frac{d}{dt}|_{t=0}\mu_{t}\) and the associated tangent vector \(\varphi=P(\dot{\mu})|_{\mathbb{C}\backslash\mathbb{D}}\in\mathcal{V}.\) This is the desired self-maps variation.

Let's now verify the hypothesis of Theorem C for \(\varphi\). Note that for every \(t\), the map \(h_{t}\circ f_{t}^{-1}:\Omega_{t}\to\mathbb{R}^{n}\) is weakly conformal, and the area of \(h_{t}\circ f_{t}^{-1}(\Omega_{t})\) is equal to the area of \(h_{t}(\mathbb{D}).\) Since the maps \(h_{t}\circ f_{t}^{-1}\) are weakly conformal by design,

\[A(\Omega_{t},h_{t}\circ f_{t}^{-1})=\mathcal{E}(\Omega_{t},h_{t}\circ f_{t}^{-1}).\]

Replacing each component of \(h_{t}\circ f_{t}^{-1}\) with the harmonic extension of its boundary map, say \(v_{i}^{t}\), cannot increase the energy. Hence,

\[\sum_{i=1}^{n}\mathcal{E}(\Omega_{t},v_{i}^{t})\leq\mathcal{E}(\Omega_{t},h_{t}\circ f_{t}^{-1})=A(\Omega_{t},h_{t}\circ f_{t}^{-1})=A(\mathbb{D},h_{t}).\]

Taking the second derivative at time zero, we obtain

\[\mathcal{F}_{\alpha}(\varphi)\leq\mathbf{Q}_{h}(X)<0.\]

As discussed, by Theorem C we are done.
Proof of Corollary B.: By Theorem B, \(h\) is stable if and only if \(\mathrm{Ind}(\mathbf{L}_{h})=0.\) By Proposition 4.8, \(\mathrm{Ind}(\mathbf{L}_{h})=0\) if and only if the infinitesimal new main inequality holds for the Hopf differentials of the component maps and all choices of infinitesimally equivalent \(\dot{\mu}_{1},\ldots,\dot{\mu}_{n}.\)

### Explicit destabilizing variations

To conclude the paper, we test the framework we have developed and prove Theorem D. We compute the functional \(\mathcal{F}_{\alpha}(\varphi)\) for polynomial Weierstrass data \(\alpha=(\alpha_{1},\ldots,\alpha_{n})\) and the variation \(\varphi(z)=\gamma z^{-m}.\) Recall from the introduction that, for a polynomial \(p(z)=\sum_{j=0}^{r}a_{j}z^{j}\), \(\gamma\in\mathbb{C}^{*}\), and \(m>0\), we have defined

\[C(p,\gamma,m)=\pi\sum_{j=0}^{m-1}\frac{\mathrm{Re}(\gamma^{2}a_{j}a_{2m-j})+|\gamma|^{2}|a_{j}|^{2}}{m-j}. \tag{17}\]

Setting \(\alpha(z)=p(z)dz,\) the harmonic extension of \(p\cdot\varphi|_{\partial\mathbb{D}}\) is

\[f_{p,\gamma,m}(z)=\gamma(a_{0}\overline{z}^{m}+\cdots+a_{m}+a_{m+1}z+\cdots+a_{r}z^{r-m}).\]

**Lemma 4.14**.: _In the setting above, \(\mathcal{F}(f_{p,\gamma,m})=C(p,\gamma,m).\)_

Proof.: For notation's sake, set \(f=f_{p,\gamma,m}\). We compute the integrals individually. First,

\[|f_{\overline{z}}|^{2}=|\gamma|^{2}\sum_{j=0}^{m-1}|a_{j}|^{2}|z|^{2(m-1-j)}+2|\gamma|^{2}\mathrm{Re}\sum_{j=0}^{m-1}\sum_{k\neq j}a_{j}\overline{a_{k}}\overline{z}^{m-1-j}z^{m-1-k}. \tag{18}\]

Due to \(L^{2}\)-orthogonality of the Fourier basis on \(S^{1}\), the second term on the right in (18) vanishes upon integration:

\[2|\gamma|^{2}\mathrm{Re}\,\sum_{j=0}^{m-1}\sum_{k\neq j}a_{j}\overline{a_{k}}\int_{\mathbb{D}}\overline{z}^{m-1-j}z^{m-1-k}|dz|^{2}=2|\gamma|^{2}\mathrm{Re}\,\sum_{j=0}^{m-1}\sum_{k\neq j}a_{j}\overline{a_{k}}\int_{0}^{1}r^{2m-1-j-k}dr\int_{0}^{2\pi}e^{i\theta(j-k)}d\theta=0.\]

Hence,

\[\int_{\mathbb{D}}|f_{\overline{z}}|^{2}=2\pi|\gamma|^{2}\sum_{j=0}^{m-1}|a_{j}|^{2}\int_{0}^{1}r^{2m-1-2j}dr=\pi|\gamma|^{2}\sum_{j=0}^{m-1}\frac{|a_{j}|^{2}}{m-j}. \tag{19}\]

The product \(f_{z}f_{\overline{z}}\) is a sum of terms of the form \(c_{j,k}\overline{z}^{m-1-j}z^{k-m-1}\) with \(0\leq j\leq m-1\) and \(m+1\leq k\leq r\). Again by \(L^{2}\)-orthogonality, the integration over the disk evaluates to a non-zero number if and only if \(m-1-j=k-m-1\), i.e., \(k=2m-j\). This returns the formula

\[\mathrm{Re}\int_{\mathbb{D}}f_{z}f_{\overline{z}}=\mathrm{Re}\,\gamma^{2}\sum_{j=0}^{m-1}a_{j}a_{2m-j}\int_{\mathbb{D}}|z|^{2(m-1-j)}|dz|^{2}=\pi\,\mathrm{Re}\,\gamma^{2}\sum_{j=0}^{m-1}\frac{a_{j}a_{2m-j}}{m-j}. \tag{20}\]

Putting (19) and (20) together,

\[\mathcal{F}(f)=\pi\sum_{j=0}^{m-1}\frac{\mathrm{Re}(\gamma^{2}a_{j}a_{2m-j})+|\gamma|^{2}|a_{j}|^{2}}{m-j}.\]

Proof of Theorem D.: Apply Theorem C with the variation \(\gamma z^{-m}\), using Lemma 4.14 \(n\) times to obtain the value of \(\mathcal{F}_{\alpha}(\varphi)\).
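To make equation (17) concrete, the following small Python sketch (our illustration, not part of the paper) evaluates \(C(p,\gamma,m)\) directly from the displayed sum; the function name and the sample polynomial are our own choices.

```python
import math

def C(coeffs, gamma, m):
    """Evaluate C(p, gamma, m) from equation (17).

    coeffs: the complex coefficients [a_0, ..., a_r] of p(z) = sum_j a_j z^j;
    gamma: a nonzero complex number; m: a positive integer.
    Coefficients a_k with k > r are treated as zero.
    """
    a = lambda k: coeffs[k] if 0 <= k < len(coeffs) else 0
    total = 0.0
    for j in range(m):
        num = (gamma**2 * a(j) * a(2*m - j)).real + abs(gamma)**2 * abs(a(j))**2
        total += num / (m - j)
    return math.pi * total

# Example: p(z) = 1 + z^2 and m = 1 give C = pi*(Re(gamma^2) + |gamma|^2),
# which vanishes exactly when gamma is purely imaginary.
print(C([1, 0, 1], 1j, 1))   # ~0.0
print(C([1, 0, 1], 1.0, 1))  # ~6.283
```

Per the proofs above, summing these values over the component Weierstrass data gives \(\mathcal{F}_{\alpha}(\varphi)\) for \(\varphi(z)=\gamma z^{-m}\), and a negative total exhibits a destabilizing variation via Theorem C.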
We have studied the destabilization of minimal surfaces in \(\mathbb{R}^n\) from a new point of view, via the correspondence between minimizing maps to products of \(\mathbb{R}\)-trees, with the new main inequality as the criterion for minimization, and minimal maps to \(\mathbb{R}^n\), with the infinitesimal new main inequality as the criterion for stability. As a consequence, we reprove the instability of several classical minimal surfaces, for example the Enneper surface.
2309.03162
On the Line-Separable Unit-Disk Coverage and Related Problems
Given a set $P$ of $n$ points and a set $S$ of $m$ disks in the plane, the disk coverage problem asks for a smallest subset of disks that together cover all points of $P$. The problem is NP-hard. In this paper, we consider a line-separable unit-disk version of the problem where all disks have the same radius and their centers are separated from the points of $P$ by a line $\ell$. We present an $O((n+m)\log(n+m))$ time algorithm for the problem. This improves the previously best result of $O(nm+n\log n)$ time. Our techniques also solve the line-constrained version of the problem, where centers of all disks of $S$ are located on a line $\ell$ while points of $P$ can be anywhere in the plane. Our algorithm runs in $O((n+m)\log(m+n)+m\log m\log n)$ time, which improves the previously best result of $O(nm\log(m+n))$ time. In addition, our results lead to an algorithm of $O(n^3\log n)$ time for a half-plane coverage problem (given $n$ half-planes and $n$ points, find a smallest subset of half-planes covering all points); this improves the previously best algorithm of $O(n^4\log n)$ time. Further, if all half-planes are lower ones, our algorithm runs in $O(n\log n)$ time while the previously best algorithm takes $O(n^2\log n)$ time.
Gang Liu, Haitao Wang
2023-09-06T17:00:38
http://arxiv.org/abs/2309.03162v2
# On the Line-Separable Unit-Disk Coverage and Related Problems

###### Abstract

Given a set \(P\) of \(n\) points and a set \(S\) of \(m\) disks in the plane, the disk coverage problem asks for a smallest subset of disks that together cover all points of \(P\). The problem is NP-hard. In this paper, we consider a line-separable unit-disk version of the problem where all disks have the same radius and their centers are separated from the points of \(P\) by a line \(\ell\). We present an \(m^{2/3}n^{2/3}2^{O(\log^{*}(m+n))}+O((n+m)\log(n+m))\) time algorithm for the problem. This improves the previously best result of \(O(nm+n\log n)\) time. Our techniques also solve the line-constrained version of the problem, where centers of all disks of \(S\) are located on a line \(\ell\) while points of \(P\) can be anywhere in the plane. Our algorithm runs in \(O(m\sqrt{n}+(n+m)\log(n+m))\) time, which improves the previously best result of \(O(nm\log(m+n))\) time. In addition, our results lead to an algorithm of \(n^{10/3}2^{O(\log^{*}n)}\) time for a half-plane coverage problem (given \(n\) half-planes and \(n\) points, find a smallest subset of half-planes covering all points); this improves the previously best algorithm of \(O(n^{4}\log n)\) time. Further, if all half-planes are lower ones, our algorithm runs in \(n^{4/3}2^{O(\log^{*}n)}\) time while the previously best algorithm takes \(O(n^{2}\log n)\) time.

Keywords: disk coverage, line-separable, unit-disk, line-constrained, half-planes

## 1 Introduction

Given a set \(P\) of \(n\) points and a set \(S\) of \(m\) disks in the plane, the _disk coverage_ problem asks for a smallest subset of disks such that every point of \(P\) is covered by at least one disk in the subset. The problem is NP-hard, even if all disks have the same radius [15, 20]. Polynomial time approximation algorithms have been proposed for the problem and many of its variants, e.g., [1, 6, 8, 9, 16, 19]. Polynomial time exact algorithms are known for certain special cases. If all points of \(P\) are inside a strip bounded by two parallel lines and the centers of all disks lie outside the strip, then the problem is solvable in polynomial time [3]. If all disks of \(S\) contain the same point, polynomial time algorithms also exist [12, 13]; in particular, applying the result in [8] (i.e., Corollary 1.7) yields an \(O(mn^{2}(m+n))\) time algorithm.

In order to devise an efficient approximation algorithm for the general coverage problem (without any constraints), the _line-separable_ version was considered in the literature [3, 7, 11], where disk centers are separated from the points by a given line \(\ell\). A polynomial time 4-approximation algorithm is given in [7]. Ambühl et al. [3] derived an exact algorithm of \(O(m^{2}n)\) time. An improved \(O(nm+n\log n)\) time algorithm is presented in [11], and another algorithm in [21] runs in \(O(n\log n+m^{2}\log n)\) time in the worst case.

The _line-constrained_ version of the disk coverage problem has also been studied, where disk centers are on the \(x\)-axis while points of \(P\) can be anywhere in the plane. Pedersen and Wang [21] considered the weighted case in which each disk has a weight and the objective is to minimize the total weight of the disks in the subset that cover all points. Their algorithm runs in \(O((m+n)\log(m+n)+\kappa\log m)\) time, where \(\kappa\) is the number of pairs of disks that intersect and \(\kappa=O(m^{2})\) in the worst case.
They reduced the runtime to \(O((m+n)\log(m+n))\) for the _unit-disk case_, where all disks have the same radius, as well as the \(L_{\infty}\) and \(L_{1}\) cases, where the disks are squares and diamonds, respectively [21]. The 1D problem, where disks become segments on a line and points are on the same line, is also solvable in \(O((m+n)\log(m+n))\) time [21]. Other types of line-constrained coverage problems have also been studied in the literature, e.g., [2, 4, 5, 18].

A related problem is when disks of \(S\) are half-planes. For the weighted case, Chan and Grant [8] proposed an algorithm for the lower-only case where all half-planes are lower ones; their algorithm runs in \(O(n^{4})\) time when \(m=n\). With the observation that a half-plane may be considered as a unit disk of infinite radius, the techniques of [21] solve the problem in \(O(n^{2}\log n)\) time. For the general case where both upper and lower half-planes are present, Har-Peled and Lee [17] solved the problem in \(O(n^{5})\) time. Pedersen and Wang [21] showed that the problem can be reduced to \(O(n^{2})\) instances of the lower-only case problem and thus can be solved in \(O(n^{4}\log n)\) time. To the best of our knowledge, we are not aware of any previous work particularly on the unweighted half-plane coverage problem.

### Our result

We assume that \(\ell\) is the \(x\)-axis and all disk centers are below or on \(\ell\) while all points of \(P\) are above or on \(\ell\). We consider the line-separable version of the disk coverage problem with the following _single-intersection condition_: For any two disks, their boundaries intersect at most once in the half-plane above \(\ell\). Note that this condition is satisfied in both the unit-disk case (see Fig. 1) and the line-constrained case (see Fig. 2; more to explain below). Hence, an algorithm for this line-separable single-intersection case works for both the unit-disk case and the line-constrained case. Note that all problems considered in this paper are unweighted and in the \(L_{2}\) metric.

For the above line-separable single-intersection problem, we give an algorithm of \(O(m\sqrt{n}+(n+m)\log(n+m))\) time in Section 3. Based on observations, we find that some disks are "useless" and thus can be pruned from \(S\). After pruning those useless disks, the remaining disks have a certain property that allows us to reduce the problem to a 1D problem, which can then be easily solved. The overall algorithm is fairly simple conceptually. One challenge, however, is to show the correctness, namely, to prove why those "useless" disks are indeed useless. The proof is rather lengthy and technical. The bottleneck of the algorithm is to find those useless disks, for which we utilize cuttings [10].

#### The line-constrained problem.

Observe that the line-constrained problem, where all disks of \(S\) are centered on a line \(\ell\) while points of \(P\) can be anywhere in the plane, is also a special case of the line-separable single-intersection problem. Indeed, for each point \(p\) of \(P\) below \(\ell\), we could replace \(p\) by its symmetric point with respect to \(\ell\); in this way, we obtain a set of points that are all above \(\ell\). It is not difficult to see that an optimal solution using this new set of points is also an optimal solution for \(P\). Further, since disks are centered on \(\ell\), although their radii may not be equal, boundaries of any two disks intersect at most once above \(\ell\).
Hence, the problem is an instance of the line-separable single-intersection case. As such, applying our algorithm in Section 3 solves the line-constrained problem in \(O(m\sqrt{n}+(n+m)\log(n+m))\) time; this improves the previous algorithm in [21], which runs in \(O(n\log n+m^{2}\log m)\) time in the worst case.

#### The unit-disk case.

To solve the line-separable unit-disk case, the algorithm in Section 3 still works. However, by making use of the property that all disks have the same radius, we further improve the runtime to \(m^{2/3}n^{2/3}2^{O(\log^{*}(m+n))}+O((m+n)\log(m+n))\) in Section 4. This improves the \(O(nm+n\log n)\) time algorithm in [11] as well as the \(O(n\log n+m^{2}\log n)\) time one in [21]. The main idea of the improvement (over the algorithm in Section 3) is to explore the duality of certain subproblems in the algorithm (i.e., consider the corresponding problems on the centers of all unit disks of \(S\) and the unit disks centered at the points of \(P\)). We derive new algorithms for these dual subproblems and then combine them with the algorithms in Section 3 using recursion (the number of recursions is \(O(\log^{*}(n+m))\), and this is why there is a factor \(2^{O(\log^{*}(m+n))}\) in the time complexity).

#### The half-plane coverage problem.

As in [21], our techniques also solve the half-plane coverage problem. Specifically, for the lower-only case, let \(\ell\) be a horizontal line that is below all points of \(P\). If we consider each half-plane as a unit disk of infinite radius with center below \(\ell\), then the problem becomes an instance of the line-separable unit-disk coverage problem. Therefore, applying our result leads to an \(m^{2/3}n^{2/3}2^{O(\log^{*}(m+n))}+O((m+n)\log(m+n))\) time algorithm. When \(m=n\), this is \(n^{4/3}2^{O(\log^{*}n)}\) time, improving the previous algorithm of \(O(n^{2}\log n)\) time [21]. For the general case where both upper and lower half-planes are present, using the method in [21] that reduces the problem to \(O(n^{2})\) instances of the lower-only case, the problem is now solvable in \(m^{2/3}n^{8/3}2^{O(\log^{*}(m+n))}+O(n^{2}(m+n)\log(m+n))\) time. When \(m=n\), this is \(n^{10/3}2^{O(\log^{*}n)}\) time, improving the previous algorithm of \(O(n^{4}\log n)\) time [21].

## 2 Preliminaries

In this section, we introduce some concepts and notation that we will use in the rest of the paper. We follow the notation defined in Section 1, e.g., \(P\), \(S\), \(m\), \(n\), \(\ell\). Without loss of generality, we assume that \(\ell\) is the \(x\)-axis and points of \(P\) are all above or on \(\ell\) while centers of disks of \(S\) are all below or on \(\ell\).

Under this setting, for each disk \(s\in S\), only its portion above \(\ell\) matters for our coverage problem. Hence, unless otherwise stated, a disk \(s\) only refers to its portion above \(\ell\). As such, the boundary of \(s\) consists of an _upper arc_, i.e., the boundary arc of the original disk above \(\ell\), and a _lower segment_, i.e., the intersection of \(s\) with \(\ell\). Notice that \(s\) has a single leftmost (resp., rightmost) point, which is the left (resp., right) endpoint of the lower segment of \(s\).

We assume that each point of \(P\) is covered by at least one disk since otherwise there would be no feasible solution. Our algorithm is able to check whether the assumption is met.
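To fix ideas, here is a minimal Python sketch (our illustration, not part of the paper) of this representation; the class name, fields, and methods are our own choices, assuming each disk is given by its center and radius.

```python
import math
from dataclasses import dataclass

@dataclass
class Disk:
    cx: float  # center x-coordinate
    cy: float  # center y-coordinate; on or below the x-axis, so cy <= 0
    r: float   # radius

    def lower_segment(self):
        """Endpoints (l, r) of the lower segment, i.e., the disk's
        intersection with the x-axis; None if the disk stays below it."""
        if self.r < -self.cy:
            return None
        half = math.sqrt(self.r * self.r - self.cy * self.cy)
        return (self.cx - half, self.cx + half)

    def covers(self, px, py):
        """Whether the (full) disk contains the point (px, py)."""
        return (px - self.cx) ** 2 + (py - self.cy) ** 2 <= self.r * self.r

# The leftmost/rightmost point of a disk is the left/right endpoint of its
# lower segment, so sorting disks as in this section is a sort by l:
disks = [Disk(1.0, 0.0, 1.0), Disk(0.0, -0.5, 1.0)]
disks.sort(key=lambda s: s.lower_segment()[0])
```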
We make a general position assumption that no point of \(P\) lies on the boundary of a disk and no two points of \(A\) have the same \(x\)-coordinate, where \(A\) is the union of \(P\) and the set of the leftmost and rightmost points of all disks. Degenerate cases can be easily handled by standard techniques of perturbation, e.g., [14].

For any point \(p\) in the plane, we denote its \(x\)- and \(y\)-coordinates by \(x(p)\) and \(y(p)\), respectively. We sort all the points of \(P\) in ascending order of their \(x\)-coordinates, resulting in a sorted list \(p_{1},p_{2},\cdots,p_{n}\). We also sort all disks in ascending order of the \(x\)-coordinates of their leftmost points, resulting in a sorted list \(s_{1},s_{2},\cdots,s_{m}\). We use \(S[i,j]\) to denote the subset \(\{s_{i},s_{i+1},\cdots,s_{j}\}\); for convenience, \(S[i,j]=\emptyset\) if \(i>j\). For each disk \(s_{i}\), let \(l_{i}\) and \(r_{i}\) denote its leftmost and rightmost points, respectively. For any disk \(s\), we use \(S_{l}(s)\) (resp., \(S_{r}(s)\)) to denote the set of disks of \(S\) whose leftmost points are to the left (resp., right) of that of \(s\). As such, if the index of \(s\) is \(i\), then \(S_{l}(s)=S[1,i-1]\) and \(S_{r}(s)=S[i+1,m]\). If disk \(s^{\prime}\in S_{l}(s)\), then we also say that \(s^{\prime}\) is _to the left_ of \(s\); similarly, if \(s^{\prime}\in S_{r}(s)\), then \(s^{\prime}\) is _to the right_ of \(s\). For a point \(p_{i}\in P\) and a disk \(s_{k}\in S\), we say that \(p_{i}\) is _vertically above_ \(s_{k}\) if \(p_{i}\) is outside \(s_{k}\) and \(x(l_{k})<x(p_{i})<x(r_{k})\). If \(S^{\prime}\) is a subset of \(S\) that forms a coverage of \(P\), then we call \(S^{\prime}\) a _feasible solution_. If \(S^{\prime}\) is a feasible solution of minimum size, then \(S^{\prime}\) is an _optimal solution_.

#### The non-containment property.

Suppose a disk \(s_{i}\) contains another disk \(s_{j}\). Then \(s_{j}\) is redundant for our problem since any point covered by \(s_{j}\) is also covered by \(s_{i}\). Those redundant disks can be easily identified and removed from \(S\) in \(O(m\log m)\) time (indeed, this is a 1D problem by observing that \(s_{i}\) contains \(s_{j}\) if and only if the lower segment of \(s_{i}\) contains that of \(s_{j}\)). Hence, for solving our problem, we first remove such redundant disks and work on the remaining disks. For simplicity, from now on we assume that no disk of \(S\) contains another. Therefore, \(S\) has the following _non-containment_ property, which our algorithm relies on.

**Observation 1**: (Non-Containment Property) _For any two disks \(s_{i},s_{j}\in S\), \(x(l_{i})<x(l_{j})\) if and only if \(x(r_{i})<x(r_{j})\)._

#### Cuttings.

One algorithmic tool we use is cuttings [10]. Let \(H\) denote the set of the upper arcs of all disks of \(S\). Note that \(|H|=m\). For a parameter \(r\) with \(1\leq r\leq m\), a _\((1/r)\)-cutting_ \(\Xi\) of size \(O(r^{2})\) for \(H\) is a collection of \(O(r^{2})\) constant-complexity cells whose union covers the plane such that for any cell \(\sigma\), \(|H_{\sigma}|\leq m/r\), where \(H_{\sigma}\) is the subset of arcs of \(H\) that intersect the interior of \(\sigma\) (\(H_{\sigma}\) is often called the _conflict list_ in the literature). In our algorithm descriptions, we often use \(S_{\sigma}\), defined as the subset of disks whose upper arcs are in \(H_{\sigma}\). Our algorithm actually uses _hierarchical cuttings_ [10].
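Before detailing hierarchical cuttings, here is a minimal sketch (our illustration, not from the paper) of the redundant-disk removal described under the non-containment property above. It assumes each disk is represented by the endpoints \((l,r)\) of its lower segment.

```python
def remove_contained(segments):
    """Drop every disk whose lower segment lies inside another's.

    segments: list of (l, r) lower-segment endpoints, one per disk.
    Sort by l ascending, breaking ties by r descending; then a segment is
    contained in an earlier one exactly when it does not extend the
    maximum right endpoint seen so far. Total time O(m log m).
    """
    order = sorted(range(len(segments)),
                   key=lambda i: (segments[i][0], -segments[i][1]))
    kept, max_r = [], float("-inf")
    for i in order:
        if segments[i][1] > max_r:
            kept.append(i)
            max_r = segments[i][1]
    return kept  # indices of surviving disks, sorted by leftmost point

print(remove_contained([(0, 4), (1, 3), (2, 6)]))  # [0, 2]
```

The surviving disks then satisfy the non-containment property of Observation 1.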
A cutting \(\Xi^{\prime}\) _\(c\)-refines_ a cutting \(\Xi\) if each cell of \(\Xi^{\prime}\) is contained in a single cell of \(\Xi\) and every cell of \(\Xi\) contains at most \(c\) cells of \(\Xi^{\prime}\). Let \(\Xi_{0}\) denote the cutting whose single cell is the entire plane. We define cuttings \(\{\Xi_{0},\Xi_{1},\ldots,\Xi_{k}\}\), in which each \(\Xi_{i}\), \(1\leq i\leq k\), is a \((1/\rho^{i})\)-cutting of size \(O(\rho^{2i})\) that \(c\)-refines \(\Xi_{i-1}\), for two constants \(\rho\) and \(c\). By setting \(k=\lceil\log_{\rho}r\rceil\), the last cutting \(\Xi_{k}\) is a \((1/r)\)-cutting. The sequence \(\{\Xi_{0},\Xi_{1},\ldots,\Xi_{k}\}\) of cuttings is called a _hierarchical \((1/r)\)-cutting_ of \(H\). For a cell \(\sigma^{\prime}\) of \(\Xi_{i-1}\), \(1\leq i\leq k\), that fully contains a cell \(\sigma\) of \(\Xi_{i}\), we say that \(\sigma^{\prime}\) is the _parent_ of \(\sigma\) and \(\sigma\) is a _child_ of \(\sigma^{\prime}\). Thus the hierarchical \((1/r)\)-cutting can be viewed as a tree structure with \(\Xi_{0}\) as the root. We often use \(\Xi\) to denote the set of all cells in all cuttings \(\Xi_{i}\), \(0\leq i\leq k\).

A hierarchical \((1/r)\)-cutting of \(H\) can be computed in \(O(mr)\) time, e.g., by the algorithm in [22], which adapts Chazelle's algorithm [10] for hyperplanes. The algorithm also produces the conflict lists \(H_{\sigma}\) (and thus \(S_{\sigma}\)) for all cells \(\sigma\in\Xi\), implying that the total size of these conflict lists is bounded by \(O(mr)\). In particular, each cell of the cutting produced by the algorithm of [22] is a (possibly unbounded) _pseudo-trapezoid_ that typically has two vertical line segments as left and right sides, and sub-arcs of arcs of \(H\) as top and bottom sides (see Fig. 3).

Figure 3: Illustrating a pseudo-trapezoid.

## 3 The line-separable single-intersection case

In this section, we present our algorithm for the disk coverage problem in the line-separable single-intersection case. We follow the notation defined in Section 2.

For each disk \(s_{i}\in S\), we define two indices \(a(i)\) and \(b(i)\) of points of \(P\) (where \(p_{a(i)}\) and \(p_{b(i)}\) are not contained in \(s_{i}\)), which are critical to our algorithm.

Definition 1:

* Among all points of \(P\) covered by the union of the disks of \(S[1,i-1]\) but not covered by \(s_{i}\), define \(a(i)\) to be the largest index of these points; if no such point exists, then let \(a(i)=0\).
* Among all points of \(P\) covered by the union of the disks of \(S[i+1,m]\) but not covered by \(s_{i}\), define \(b(i)\) to be the smallest index of these points; if no such point exists, then let \(b(i)=n+1\).

We now describe our algorithm. Although the algorithm description looks simple, it is quite challenging to prove the correctness; we devote Section 3.1 to it. The algorithm implementation, which is also not trivial, is presented in Section 3.2.

#### Algorithm description.

The algorithm has three main steps.

1. We first compute \(a(i)\) and \(b(i)\) for all disks \(s_{i}\in S\). We will show in Section 3.2 that this can be done in \(O(m\sqrt{n}+(n+m)\log(n+m))\) time using cuttings.
2. For each disk \(s_{i}\), if \(a(i)\geq b(i)\), we say that \(s_{i}\) is a _prunable disk_. Let \(S^{*}\) denote the subset of disks of \(S\) that are not prunable. We will prove in Section 3.1 that \(S^{*}\) contains an optimal solution for the coverage problem on \(P\) and \(S\). This means that it suffices to work on \(S^{*}\) and \(P\).
3. We reduce the disk coverage problem on \(S^{*}\) and \(P\) to a 1D coverage problem as follows. For each point of \(P\), we project it vertically onto \(\ell\). Let \(P^{\prime}\) be the set of all projected points. For each disk \(s_{i}\in S^{*}\), we create a line segment on \(\ell\) whose left endpoint has \(x\)-coordinate equal to \(x(p_{a(i)+1})\) and whose right endpoint has \(x\)-coordinate equal to \(x(p_{b(i)-1})\) (if \(a(i)+1=b(i)\), then let the \(x\)-coordinate of the right endpoint be \(x(p_{a(i)+1})\)). Let \(S^{\prime}\) be the set of all segments thus created. We solve the following 1D coverage problem: Find a minimum subset of segments of \(S^{\prime}\) that together cover all points of \(P^{\prime}\). This problem can be easily solved in \(O((|S^{\prime}|+|P^{\prime}|)\log(|S^{\prime}|+|P^{\prime}|))\) time [21],\({}^{1}\) which is \(O((m+n)\log(m+n))\) since \(|P^{\prime}|=n\) and \(|S^{\prime}|\leq m\). Suppose \(S^{\prime}_{1}\) is any optimal solution to the above 1D coverage problem. We create a subset \(S_{1}\) of \(S^{*}\) as follows. For each segment of \(S^{\prime}_{1}\), suppose it is created from a disk \(s_{i}\in S^{*}\); then we add \(s_{i}\) to \(S_{1}\). We will prove in Section 3.1 that \(S_{1}\) is an optimal solution to the coverage problem for \(S^{*}\) and \(P\).

Footnote 1: The algorithm in [21], which uses dynamic programming, is for the weighted case where each segment has a weight. Our problem is simpler since it is an unweighted case. We can use a simple greedy algorithm to solve it.

We summarize the result in the following theorem.

Theorem 3.1: _Given a set \(P\) of \(n\) points and a set \(S\) of \(m\) disks in the plane such that the disk centers are separated from points of \(P\) by a line, and the single-intersection condition is satisfied, the disk coverage problem for \(P\) and \(S\) is solvable in \(O(m\sqrt{n}+(n+m)\log(n+m))\) time._

#### The unit-disk case.

In Section 4, we will reduce the time to \(m^{2/3}n^{2/3}2^{O(\log^{*}(m+n))}+O((n+m)\log(n+m))\) for the unit-disk case. The algorithm is exactly the same as above, except that we compute the \(a(i)\)'s and \(b(i)\)'s in a more efficient way by utilizing the property that all disks have the same radius.

### Algorithm correctness

We now prove the correctness of our algorithm. Lemma 2 justifies the correctness of the second main step. To prove Lemma 2, whose proof is lengthy and technical, we first prove the following Lemma 1. Recall the definitions of \(S_{l}(s)\) and \(S_{r}(s)\) in Section 2.

Lemma 1: _A disk \(s\) is prunable if and only if there exists a point in \(P\) that is outside \(s\) but is covered by both a disk in \(S_{l}(s)\) and a disk in \(S_{r}(s)\)._

Proof: Let \(i\) be the index of \(s\), i.e., \(s=s_{i}\). Our goal is to show that \(s_{i}\) is prunable if and only if there exists a point in \(P\) that is outside \(s_{i}\) but is covered by both a disk in \(S[1,i-1]\) and a disk in \(S[i+1,m]\).

**The "if" direction.** Suppose \(P\) has a point \(p_{t}\) that is outside \(s_{i}\) but is covered by a disk \(s_{k}\) and a disk \(s_{j}\) with \(k<i<j\). Our goal is to prove that \(a(i)\geq b(i)\), meaning that \(s_{i}\) is prunable by definition. Since \(s_{k}\) covers \(p_{t}\) and \(k<i\), by definition we have \(a(i)\geq t\). On the other hand, since \(s_{j}\) covers \(p_{t}\) and \(i<j\), by definition we have \(b(i)\leq t\). As such, we obtain \(a(i)\geq b(i)\).

**The "only if" direction.** Suppose \(s_{i}\) is a prunable disk.
Our goal is to show that there exists a point \(p^{*}\in P\) that is outside \(s_{i}\) but covered by both a disk in \(S[1,i-1]\) and a disk in \(S[i+1,m]\). Since \(s_{i}\) is a prunable disk, we have \(a(i)\geq b(i)\), and further, there are a disk \(s_{k}\) with \(k<i\) that covers \(p_{a(i)}\) and a disk \(s_{j}\) with \(j>i\) that covers \(p_{b(i)}\). Depending on whether \(s_{k}\) covers \(p_{b(i)}\), there are two cases.

1. If \(s_{k}\) covers \(p_{b(i)}\) (see Fig. 5), then \(p_{b(i)}\) is covered by both \(s_{k}\) and \(s_{j}\). Since \(k<i\) and \(j>i\), we can use \(p_{b(i)}\) as our target point \(p^{*}\).
2. If \(s_{k}\) does not cover \(p_{b(i)}\) (see Fig. 5), then since \(s_{k}\) covers \(p_{a(i)}\), we have \(a(i)\neq b(i)\) and thus \(a(i)>b(i)\). Since \(k<j\), \(s_{k}\) covers \(p_{a(i)}\), and \(s_{j}\) covers \(p_{b(i)}\), due to the non-containment property of \(S\), we have \(x(l_{k})<x(l_{j})<x(p_{b(i)})<x(p_{a(i)})<x(r_{k})<x(r_{j})\), implying that the upper arcs of \(s_{k}\) and \(s_{j}\) must intersect, say, at a point \(q\) (see Fig. 5). As \(p_{b(i)}\not\in s_{k}\), \(p_{b(i)}\) must be vertically above \(s_{k}\). This implies that \(x(q)<x(p_{b(i)})\) must hold. Hence, the region of \(s_{k}\) to the right of \(q\) must be inside \(s_{j}\). Since \(x(q)<x(p_{b(i)})<x(p_{a(i)})\) and \(p_{a(i)}\) is in \(s_{k}\), \(p_{a(i)}\) must be in \(s_{j}\) as well. Therefore, \(p_{a(i)}\) is in both \(s_{k}\) and \(s_{j}\). As such, we can use \(p_{a(i)}\) as our target point \(p^{*}\).

The lemma thus follows.

The following observation, which follows immediately from the non-containment property of \(S\), is needed in the proof of Lemma 2.

**Observation 2**: _For any disk \(s\) and a point \(p\) outside \(s\), if \(p\) is covered by both a disk \(s_{i}\in S_{l}(s)\) and a disk \(s_{j}\in S_{r}(s)\), then \(s\subseteq s_{i}\cup s_{j}\) (see Fig. 6)._

**Lemma 2**: \(S^{*}\) _contains an optimal solution for the coverage problem on \(S\) and \(P\)._

Proof: Let \(S_{\rm opt}\) be an optimal solution. Let \(Q\) be the set of all prunable disks, i.e., \(Q=S\setminus S^{*}\). If \(S_{\rm opt}\cap Q=\emptyset\), then \(S_{\rm opt}\subseteq S^{*}\) and thus the lemma trivially follows. In what follows, we assume that \(|S_{\rm opt}\cap Q|\geq 1\).

Pick an arbitrary disk from \(S_{\mathrm{opt}}\cap Q\), denoted \(\hat{s}_{1}\). Below we give a process that can find a disk \(s^{*}\) from \(S^{*}\) to replace \(\hat{s}_{1}\) in \(S_{\mathrm{opt}}\) such that the new set \(S^{1}_{\mathrm{opt}}=\{s^{*}\}\cup S_{\mathrm{opt}}\setminus\{\hat{s}_{1}\}\) still forms a coverage of \(P\) (i.e., \(S^{1}_{\mathrm{opt}}\) is a feasible solution), implying that \(S^{1}_{\mathrm{opt}}\) is also an optimal solution since \(|S^{1}_{\mathrm{opt}}|=|S_{\mathrm{opt}}|\). As \(s^{*}\in S^{*}\), we have \(|S^{1}_{\mathrm{opt}}\cap Q|=|S_{\mathrm{opt}}\cap Q|-1\). If \(S^{1}_{\mathrm{opt}}\cap Q\) is still nonempty, then we can repeat the above process for other disks in \(S^{1}_{\mathrm{opt}}\cap Q\) until we obtain an optimal solution \(S^{*}_{\mathrm{opt}}\) with \(S^{*}_{\mathrm{opt}}\cap Q=\emptyset\), which will prove the lemma.

We now give a process to find a target disk \(s^{*}\). The process involves induction. To help the reader understand it better, we first provide the details of the first two iterations of the process (we will introduce some notation that appears unnecessary for the first two iterations, but these will be needed when we describe the induction).
**The first iteration.** Let \(S^{\prime}_{\mathrm{opt}}=S_{\mathrm{opt}}\setminus\{\hat{s}_{1}\}\). Since \(\hat{s}_{1}\in Q\), by Lemma 1, \(P\) has a point \(\hat{p}_{1}\) outside \(\hat{s}_{1}\) that is covered by a disk \(\hat{s}^{l}_{1}\in S_{l}(\hat{s}_{1})\) and a disk \(\hat{s}^{r}_{1}\in S_{r}(\hat{s}_{1})\). By Observation 2, \(\hat{s}_{1}\subseteq\hat{s}^{l}_{1}\cup\hat{s}^{r}_{1}\). Since \(\hat{p}_{1}\) is outside \(\hat{s}_{1}\) and \(S_{\mathrm{opt}}=S^{\prime}_{\mathrm{opt}}\cup\{\hat{s}_{1}\}\) forms a coverage of \(P\), \(S^{\prime}_{\mathrm{opt}}\) must have a disk \(s\) that covers \(\hat{p}_{1}\). Clearly, \(s\) is either in \(S_{l}(\hat{s}_{1})\) or in \(S_{r}(\hat{s}_{1})\). Without loss of generality, we assume that \(s\in S_{r}(\hat{s}_{1})\). Since \(\hat{s}^{r}_{1}\) refers to an arbitrary disk of \(S_{r}(\hat{s}_{1})\) that covers \(\hat{p}_{1}\) and \(s\) is also a disk of \(S_{r}(\hat{s}_{1})\) that covers \(\hat{p}_{1}\), for notational convenience, we let \(\hat{s}^{r}_{1}\) refer to \(s\). As such, \(\hat{s}^{r}_{1}\) is in \(S^{\prime}_{\mathrm{opt}}\).

Consider the disk \(\hat{s}^{l}_{1}\). Since \(\hat{s}_{1}\subseteq\hat{s}^{l}_{1}\cup\hat{s}^{r}_{1}\) and \(\hat{s}^{r}_{1}\) is in \(S^{\prime}_{\mathrm{opt}}\), it is not difficult to see that the area covered by the union of the disks of \(S_{\mathrm{opt}}\) is contained in the area covered by the union of the disks of \(S^{\prime}_{\mathrm{opt}}\cup\{\hat{s}^{l}_{1}\}\), and thus \(S^{\prime}_{\mathrm{opt}}\cup\{\hat{s}^{l}_{1}\}\) is a feasible solution. As such, if \(\hat{s}^{l}_{1}\not\in Q\), then we can use \(\hat{s}^{l}_{1}\) as our target disk \(s^{*}\) and our process (for finding \(s^{*}\)) is done. In what follows, we assume \(\hat{s}^{l}_{1}\in Q\).

For any subset \(S^{\prime}\) of \(S\), we define \(\mathcal{R}(S^{\prime})\) as the area covered by the union of the disks of \(S^{\prime}\), i.e., \(\mathcal{R}(S^{\prime})=\bigcup_{s\in S^{\prime}}s\). We let \(\hat{s}_{2}=\hat{s}^{l}_{1}\). Define \(A_{1}=\{\hat{s}^{r}_{1}\}\). According to the above discussion, we have \(A_{1}\subseteq S^{\prime}_{\mathrm{opt}}\), \(\hat{s}_{1}\subseteq\mathcal{R}(A_{1})\cup\hat{s}_{2}\), and \(S^{\prime}_{\mathrm{opt}}\cup\{\hat{s}_{2}\}\) is a feasible solution.

**The second iteration.** We are now entering the second iteration of our process. First notice that \(\hat{s}_{2}\) cannot be \(\hat{s}_{1}\), since \(\hat{s}_{2}=\hat{s}^{l}_{1}\in S_{l}(\hat{s}_{1})\). Our goal in this iteration is to find a _candidate disk_ \(s^{\prime}\) to replace \(\hat{s}_{2}\) so that \(S^{\prime}_{\mathrm{opt}}\cup\{s^{\prime}\}\) also forms a coverage of \(P\). Consequently, if \(s^{\prime}\not\in Q\), then we can use \(s^{\prime}\) as our target \(s^{*}\); otherwise, we need to guarantee \(s^{\prime}\neq\hat{s}_{1}\) so that our process will not enter an infinite loop. The discussion here is more involved than in the first iteration.

Since \(\hat{s}_{2}\in Q\), by Lemma 1, \(P\) has a point \(\hat{p}_{2}\) outside \(\hat{s}_{2}\) that is covered by a disk \(\hat{s}^{l}_{2}\in S_{l}(\hat{s}_{2})\) and a disk \(\hat{s}^{r}_{2}\in S_{r}(\hat{s}_{2})\). By Observation 2, \(\hat{s}_{2}\subseteq\hat{s}^{l}_{2}\cup\hat{s}^{r}_{2}\). Depending on whether \(\hat{p}_{2}\) is in \(\mathcal{R}(A_{1})\), there are two cases.

* If \(\hat{p}_{2}\not\in\mathcal{R}(A_{1})\), then since \(\hat{p}_{2}\not\in\hat{s}_{2}\) and \(\hat{s}_{1}\subseteq\mathcal{R}(A_{1})\cup\hat{s}_{2}\), we obtain that \(\hat{p}_{2}\not\in\hat{s}_{1}\).
We can now basically repeat our argument from the first iteration. Since \(\hat{p}_{2}\) is outside \(\hat{s}_{2}\) and \(S^{\prime}_{\mathrm{opt}}\cup\{\hat{s}_{2}\}\) is a feasible solution, \(S^{\prime}_{\mathrm{opt}}\) must have a disk \(s\) that covers \(\hat{p}_{2}\). Clearly, \(s\) is either in \(S_{l}(\hat{s}_{2})\) or in \(S_{r}(\hat{s}_{2})\). Without loss of generality, we assume that \(s\in S_{r}(\hat{s}_{2})\). Since \(\hat{s}^{r}_{2}\) refers to an arbitrary disk of \(S_{r}(\hat{s}_{2})\) that covers \(\hat{p}_{2}\) and \(s\) is also a disk of \(S_{r}(\hat{s}_{2})\) that covers \(\hat{p}_{2}\), for notational convenience, we let \(\hat{s}^{r}_{2}\) refer to \(s\). As such, \(\hat{s}^{r}_{2}\) is in \(S^{\prime}_{\mathrm{opt}}\). We let \(\hat{s}^{l}_{2}\) be our candidate disk, which satisfies our need as discussed above for \(s^{\prime}\). Indeed, since \(S^{\prime}_{\mathrm{opt}}\cup\{\hat{s}_{2}\}\) is a feasible solution, \(\hat{s}_{2}\subseteq\hat{s}^{l}_{2}\cup\hat{s}^{r}_{2}\), and \(\hat{s}^{r}_{2}\in S^{\prime}_{\mathrm{opt}}\), we obtain that \(S^{\prime}_{\mathrm{opt}}\cup\{\hat{s}^{l}_{2}\}\) also forms a coverage of \(P\). Further, since \(\hat{s}^{l}_{2}\) contains \(\hat{p}_{2}\) while \(\hat{s}_{1}\) does not, we know that \(\hat{s}^{l}_{2}\neq\hat{s}_{1}\). Therefore, if \(\hat{s}^{l}_{2}\not\in Q\), then we can use \(\hat{s}^{l}_{2}\) as our target \(s^{*}\) and we are done with the process. Otherwise, we let \(\hat{s}_{3}=\hat{s}^{l}_{2}\) and then enter the third iteration. In this case, we let \(A_{2}=A_{1}\cup\{\hat{s}^{r}_{2}\}\). According to the above discussion, we have \(A_{2}\subseteq S^{\prime}_{\mathrm{opt}}\), \(\hat{s}_{2}\subseteq\mathcal{R}(A_{2})\cup\hat{s}_{3}\), and \(\{\hat{s}_{3}\}\cup S^{\prime}_{\mathrm{opt}}\) is a feasible solution.

* If \(\hat{p}_{2}\in\mathcal{R}(A_{1})\), then we let \(\hat{s}^{l}_{2}\) be our candidate disk. We show below that it satisfies our need as discussed above for \(s^{\prime}\), i.e., \(\{\hat{s}^{l}_{2}\}\cup S^{\prime}_{\mathrm{opt}}\) forms a coverage of \(P\). Indeed, since \(A_{1}=\{\hat{s}_{1}^{r}\}\) and \(\hat{p}_{2}\in\mathcal{R}(A_{1})\), \(\hat{p}_{2}\) is inside \(\hat{s}_{1}^{r}\). Since \(\hat{s}_{1}^{r}\) is to the right of \(\hat{s}_{1}\), and \(\hat{s}_{2}\), which is \(\hat{s}_{1}^{l}\), is to the left of \(\hat{s}_{1}\), we obtain that \(\hat{s}_{1}^{r}\) is to the right of \(\hat{s}_{2}\), i.e., \(\hat{s}_{1}^{r}\in S_{r}(\hat{s}_{2})\). Since \(\hat{s}_{2}^{l}\) contains \(\hat{p}_{2}\), \(\hat{s}_{2}^{l}\in S_{l}(\hat{s}_{2})\), \(\hat{s}_{1}^{r}\) contains \(\hat{p}_{2}\), and \(\hat{s}_{1}^{r}\in S_{r}(\hat{s}_{2})\), by Observation 2, we obtain that \(\hat{s}_{2}\subseteq\hat{s}_{2}^{l}\cup\hat{s}_{1}^{r}\), i.e., \(\hat{s}_{2}\subseteq\hat{s}_{2}^{l}\cup\mathcal{R}(A_{1})\). Since \(S_{\mathrm{opt}}^{\prime}\cup\{\hat{s}_{2}\}\) is a feasible solution and \(A_{1}\subseteq S_{\mathrm{opt}}^{\prime}\), it follows that \(\{\hat{s}_{2}^{l}\}\cup S_{\mathrm{opt}}^{\prime}\) is also a feasible solution. On the other hand, since \(\hat{s}_{2}^{l}\) is in \(S_{l}(\hat{s}_{2})\) while \(\hat{s}_{2}\) (which is \(\hat{s}_{1}^{l}\)) is in \(S_{l}(\hat{s}_{1})\), we know that \(\hat{s}_{2}^{l}\) is in \(S_{l}(\hat{s}_{1})\) and thus \(\hat{s}_{2}^{l}\neq\hat{s}_{1}\). As such, if \(\hat{s}_{2}^{l}\not\in Q\), we can use \(\hat{s}_{2}^{l}\) as our target \(s^{*}\) and we are done with the process. Otherwise, we let \(\hat{s}_{3}=\hat{s}_{2}^{l}\) and continue with the third iteration.
In this case, we let \(A_{2}=A_{1}\). According to the above discussion, we have \(A_{2}\subseteq S_{\mathrm{opt}}^{\prime}\), \(\hat{s}_{2}\subseteq\mathcal{R}(A_{2})\cup\hat{s}_{3}\), and \(\{\hat{s}_{3}\}\cup S_{\mathrm{opt}}^{\prime}\) is a feasible solution.

This finishes the second iteration of the process.

#### Inductive step.

In general, suppose that we are entering the \(i\)-th iteration of the process with disk \(\hat{s}_{i}\in Q\), \(i\geq 2\). We make the following inductive hypothesis for \(i\).

1. We have disks \(\hat{s}_{k}\in Q\) for all \(k=1,2,\ldots,i-1\) from the previous \(i-1\) iterations such that \(\hat{s}_{i}\neq\hat{s}_{k}\) for any \(1\leq k\leq i-1\).
2. We have subsets \(A_{k}\) for all \(k=1,2,\ldots,i-1\) such that \(A_{1}\subseteq A_{2}\subseteq\cdots\subseteq A_{i-1}\subseteq S_{\mathrm{opt}}^{\prime}\), and \(\hat{s}_{k}\subseteq\mathcal{R}(A_{k})\cup\hat{s}_{k+1}\) holds for each \(1\leq k\leq i-1\).
3. For any \(1\leq k\leq i\), \(\{\hat{s}_{k}\}\cup S_{\mathrm{opt}}^{\prime}\) is a feasible solution.

Our above discussion showed that the hypothesis holds for \(i=2\) and \(i=3\). Next we discuss the \(i\)-th iteration for any general \(i\). Our goal is to find a candidate disk \(\hat{s}_{i+1}\) so that \(S_{\mathrm{opt}}^{\prime}\cup\{\hat{s}_{i+1}\}\) is a feasible solution and the inductive hypothesis holds for \(i+1\).

Since \(\hat{s}_{i}\in Q\), by Lemma 1, \(P\) has a point \(\hat{p}_{i}\) outside \(\hat{s}_{i}\) that is covered by a disk \(\hat{s}_{i}^{l}\in S_{l}(\hat{s}_{i})\) and a disk \(\hat{s}_{i}^{r}\in S_{r}(\hat{s}_{i})\). By Observation 2, \(\hat{s}_{i}\subseteq\hat{s}_{i}^{l}\cup\hat{s}_{i}^{r}\). Depending on whether \(\hat{p}_{i}\) is in \(\mathcal{R}(A_{i-1})\), there are two cases.

1. If \(\hat{p}_{i}\not\in\mathcal{R}(A_{i-1})\), then since \(\hat{p}_{i}\) is outside \(\hat{s}_{i}\) and \(S_{\mathrm{opt}}^{\prime}\cup\{\hat{s}_{i}\}\) is a feasible solution, \(S_{\mathrm{opt}}^{\prime}\) must have a disk \(s\) that covers \(\hat{p}_{i}\). Clearly, \(s\) is either in \(S_{l}(\hat{s}_{i})\) or in \(S_{r}(\hat{s}_{i})\). Without loss of generality, we assume that \(s\in S_{r}(\hat{s}_{i})\). Since \(\hat{s}_{i}^{r}\) refers to an arbitrary disk of \(S_{r}(\hat{s}_{i})\) that covers \(\hat{p}_{i}\) and \(s\) is also a disk of \(S_{r}(\hat{s}_{i})\) that covers \(\hat{p}_{i}\), for notational convenience, we let \(\hat{s}_{i}^{r}\) refer to \(s\). As such, \(\hat{s}_{i}^{r}\) is in \(S_{\mathrm{opt}}^{\prime}\). We let \(\hat{s}_{i+1}\) be \(\hat{s}_{i}^{l}\) and define \(A_{i}=A_{i-1}\cup\{\hat{s}_{i}^{r}\}\). We argue below that the inductive hypothesis holds.
   * Indeed, since \(\{\hat{s}_{i}\}\cup S_{\mathrm{opt}}^{\prime}\) is a feasible solution, \(\hat{s}_{i}\subseteq\hat{s}_{i}^{l}\cup\hat{s}_{i}^{r}\), \(\hat{s}_{i}^{r}\in S_{\mathrm{opt}}^{\prime}\), and \(\hat{s}_{i+1}=\hat{s}_{i}^{l}\), we obtain that \(\{\hat{s}_{i+1}\}\cup S_{\mathrm{opt}}^{\prime}\) is a feasible solution. This proves the third statement of the hypothesis.
   * Since \(A_{i}=A_{i-1}\cup\{\hat{s}_{i}^{r}\}\), \(A_{i-1}\subseteq S_{\mathrm{opt}}^{\prime}\) by the inductive hypothesis, and \(\hat{s}_{i}^{r}\in S_{\mathrm{opt}}^{\prime}\), we obtain \(A_{i}\subseteq S_{\mathrm{opt}}^{\prime}\). Further, since \(\hat{s}_{i}\subseteq\hat{s}_{i}^{l}\cup\hat{s}_{i}^{r}\), \(\hat{s}_{i}^{r}\in A_{i}\), and \(\hat{s}_{i+1}=\hat{s}_{i}^{l}\), we have \(\hat{s}_{i}\subseteq\mathcal{R}(A_{i})\cup\hat{s}_{i+1}\). This proves the second statement of the hypothesis.
   * For any disk \(\hat{s}_{k}\) with \(1\leq k\leq i-1\), to prove the first statement of the hypothesis, we need to show that \(\hat{s}_{k}\neq\hat{s}_{i+1}\). To this end, since \(\hat{p}_{i}\in\hat{s}_{i+1}\), it suffices to show that \(\hat{p}_{i}\not\in\hat{s}_{k}\). Indeed, by the inductive hypothesis, \(\hat{s}_{k}\subseteq\mathcal{R}(A_{k})\cup\hat{s}_{k+1}\) and \(\hat{s}_{k+1}\subseteq\mathcal{R}(A_{k+1})\cup\hat{s}_{k+2}\). Hence, \(\hat{s}_{k}\subseteq\mathcal{R}(A_{k})\cup\mathcal{R}(A_{k+1})\cup\hat{s}_{k+2}\). As \(\mathcal{R}(A_{k})\subseteq\mathcal{R}(A_{k+1})\), we obtain \(\hat{s}_{k}\subseteq\mathcal{R}(A_{k+1})\cup\hat{s}_{k+2}\). Following the same argument, we can derive \(\hat{s}_{k}\subseteq\mathcal{R}(A_{i-1})\cup\hat{s}_{i}\). Now that \(\hat{p}_{i}\not\in\mathcal{R}(A_{i-1})\) and \(\hat{p}_{i}\not\in\hat{s}_{i}\), we obtain \(\hat{p}_{i}\not\in\hat{s}_{k}\).

2. If \(\hat{p}_{i}\in\mathcal{R}(A_{i-1})\), then \(\hat{p}_{i}\) is covered by a disk of \(A_{i-1}\), say \(s\). As \(\hat{p}_{i}\not\in\hat{s}_{i}\), \(s\neq\hat{s}_{i}\) and thus \(s\) is in either \(S_{l}(\hat{s}_{i})\) or \(S_{r}(\hat{s}_{i})\). Without loss of generality, we assume that \(s\in S_{r}(\hat{s}_{i})\). Since \(\hat{s}_{i}^{l}\in S_{l}(\hat{s}_{i})\) covers \(\hat{p}_{i}\) and \(s\in S_{r}(\hat{s}_{i})\) also covers \(\hat{p}_{i}\), by Observation 2 we have \(\hat{s}_{i}\subseteq\hat{s}_{i}^{l}\cup s\). We let \(\hat{s}_{i+1}\) be \(\hat{s}_{i}^{l}\) and define \(A_{i}=A_{i-1}\). We argue below that the inductive hypothesis holds.
   * Indeed, since \(\{\hat{s}_{i}\}\cup S_{\mathrm{opt}}^{\prime}\) is a feasible solution, \(\hat{s}_{i}\subseteq\hat{s}_{i}^{l}\cup s\), \(s\in A_{i-1}\subseteq S_{\mathrm{opt}}^{\prime}\), and \(\hat{s}_{i+1}=\hat{s}_{i}^{l}\), we obtain that \(\{\hat{s}_{i+1}\}\cup S_{\mathrm{opt}}^{\prime}\) is a feasible solution. This proves the third statement of the hypothesis.
   * Since \(A_{i-1}\subseteq S^{\prime}_{\rm opt}\) by the inductive hypothesis and \(A_{i}=A_{i-1}\), we have \(A_{i}\subseteq S^{\prime}_{\rm opt}\). As discussed above, \(\hat{s}_{i}\subseteq\hat{s}_{i}^{l}\cup s\). Since \(s\in A_{i-1}=A_{i}\) and \(\hat{s}_{i+1}=\hat{s}_{i}^{l}\), we obtain that \(\hat{s}_{i}\subseteq\mathcal{R}(A_{i})\cup\hat{s}_{i+1}\). This proves the second statement of the hypothesis.
   * For any disk \(\hat{s}_{k}\) with \(1\leq k\leq i-1\), to prove the first statement of the hypothesis, we need to show that \(\hat{s}_{k}\neq\hat{s}_{i+1}\). By hypothesis, we know that \(\hat{s}_{k}\neq\hat{s}_{i}\), implying that \(\hat{s}_{k}\in S_{l}(\hat{s}_{i})\) or \(\hat{s}_{k}\in S_{r}(\hat{s}_{i})\). If \(\hat{s}_{k}\in S_{r}(\hat{s}_{i})\), since \(\hat{s}_{i+1}=\hat{s}_{i}^{l}\in S_{l}(\hat{s}_{i})\), it is obviously true that \(\hat{s}_{k}\neq\hat{s}_{i+1}\). In the following, we assume that \(\hat{s}_{k}\in S_{l}(\hat{s}_{i})\), and we will prove that \(\hat{s}_{k}\) does not contain \(\hat{p}_{i}\), which implies that \(\hat{s}_{k}\neq\hat{s}_{i+1}\) as \(\hat{p}_{i}\in\hat{s}_{i+1}\). First of all, since \(\hat{p}_{i}\) is covered by both \(s\in S_{r}(\hat{s}_{i})\) and \(\hat{s}_{i+1}\in S_{l}(\hat{s}_{i})\), it must hold that \(x(\hat{l}_{i})<x(\hat{p}_{i})<x(\hat{r}_{i})\), where \(\hat{l}_{i}\) and \(\hat{r}_{i}\) are the left and right endpoints of the lower segment of \(\hat{s}_{i}\) (i.e., the segment \(\hat{s}_{i}\cap\ell\)), respectively (see Fig. 7). Hence, since \(s\in S_{r}(\hat{s}_{i})\) and \(\hat{p}_{i}\in s\), the upper arcs of \(s\) and \(\hat{s}_{i}\) must cross each other, say, at a point \(q\). As \(\hat{p}_{i}\) is in \(s\) but not in \(\hat{s}_{i}\), we have \(x(q)<x(\hat{p}_{i})\). Recall that \(\{\hat{s}_{i}\}\cup S^{\prime}_{\rm opt}\) is a feasible solution. Also, \(S^{\prime}_{\rm opt}\) cannot be a feasible solution, since that would contradict the fact that \(S_{\rm opt}\) is an optimal solution as \(|S_{\rm opt}|=|S^{\prime}_{\rm opt}|+1\). This implies that \(\hat{s}_{i}\) is not contained in \(\mathcal{R}(S^{\prime}_{\rm opt})\). Since all disk centers are below \(\ell\), at least one point, say, \(q^{\prime}\), on the upper arc of \(\hat{s}_{i}\) is not in \(\mathcal{R}(S^{\prime}_{\rm opt})\).
As \(s\in S^{\prime}_{\rm opt}\), \(q^{\prime}\) is not in \(s\). Therefore, \(x(q^{\prime})<x(q)\) must hold. As \(x(q)<x(\hat{p}_{i})\), we have \(x(q^{\prime})<x(\hat{p}_{i})\) (see Fig. 7). Recall that our goal is to prove that \(\hat{p}_{i}\not\in\hat{s}_{k}\). Let \(\hat{l}_{k}\) and \(\hat{r}_{k}\) be the left and right endpoints of the lower segment of \(\hat{s}_{k}\), respectively. If \(x(\hat{r}_{k})\leq x(q^{\prime})\), then since \(x(q^{\prime})<x(\hat{p}_{i})\), it is obviously true that \(\hat{s}_{k}\) does not contain \(\hat{p}_{i}\). We thus assume that \(x(\hat{r}_{k})>x(q^{\prime})\) (see Fig. 7). Since \(\hat{s}_{k}\in S_{l}(\hat{s}_{i})\), we have \(x(\hat{l}_{k})<x(\hat{l}_{i})\). As \(x(\hat{l}_{i})<x(q^{\prime})\), we obtain that \(x(\hat{l}_{k})<x(q^{\prime})\). Since \(x(\hat{l}_{k})<x(q^{\prime})<x(\hat{r}_{k})\), the vertical line through \(q^{\prime}\) must intersect the upper arc of \(\hat{s}_{k}\) at a point, say, \(q_{1}\) (see Fig. 7). We claim that \(y(q_{1})\leq y(q^{\prime})\). Indeed, since \(q^{\prime}\) is not inside \(\mathcal{R}(S^{\prime}_{\rm opt})\), \(q^{\prime}\) is on the upper envelope of all disks of \(S^{\prime}_{\rm opt}\cup\{\hat{s}_{i}\}\), denoted by \(\mathcal{U}\). Recall that we have proved above (in the first case, where \(\hat{p}_{i}\not\in\mathcal{R}(A_{i-1})\)) that \(\hat{s}_{k}\subseteq\mathcal{R}(S^{\prime}_{\rm opt})\cup\hat{s}_{i}\). Therefore, the upper arc of \(\hat{s}_{k}\) must be no higher than \(\mathcal{U}\). As \(q^{\prime}\in\mathcal{U}\), it follows that \(y(q_{1})\leq y(q^{\prime})\). Since \(x(\hat{l}_{k})<x(\hat{l}_{i})<x(q^{\prime})<x(\hat{r}_{k})\), due to the non-containment property, the upper arcs of \(\hat{s}_{k}\) and \(\hat{s}_{i}\) must cross each other at a single point, say, \(z\). Because \(y(q_{1})\leq y(q^{\prime})\), it holds that \(x(z)\leq x(q_{1})\). As such, the region of \(\hat{s}_{k}\) to the right of \(q_{1}\) must be inside \(\hat{s}_{i}\) (see Fig. 7). Recall that \(x(q_{1})=x(q^{\prime})<x(\hat{p}_{i})\). As \(\hat{p}_{i}\not\in\hat{s}_{i}\), \(\hat{p}_{i}\) cannot be in \(\hat{s}_{k}\).

This proves that the inductive hypothesis still holds for \(i+1\).

The inductive hypothesis implies that each iteration of the process always finds a new candidate disk \(\hat{s}_{i}\) such that \(S^{\prime}_{\rm opt}\cup\{\hat{s}_{i}\}\) is a feasible solution. If \(\hat{s}_{i}\not\in Q\), then we can use \(\hat{s}_{i}\) as our target disk \(s^{*}\) and we are done with the process. Otherwise, we continue with the next iteration. Since each iteration finds a new candidate disk (that was never used before) and \(|Q|\) is finite, eventually we will find a candidate disk \(\hat{s}_{i}\) not in \(Q\). This completes the proof of the lemma.

It remains to prove the correctness of the third main step of our algorithm. For each disk \(s_{i}\in S^{*}\), \(a(i)<b(i)\) by definition; define \(P(s_{i})=\{p_{j}\ |\ a(i)<j<b(i)\}\).

Lemma 3: _All points of \(P(s_{i})\) are inside \(s_{i}\)._

Proof: Assume to the contrary that a point \(p_{k}\in P(s_{i})\) is not in \(s_{i}\). By definition, \(a(i)<k<b(i)\). Recall that each point of \(P\) is covered by a disk of \(S\). Let \(s\) be a disk of \(S\) that covers \(p_{k}\). Since \(s\neq s_{i}\), \(s\) is either in \(S_{l}(s_{i})\) or in \(S_{r}(s_{i})\). In the former case, by the definition of \(a(i)\), \(a(i)\geq k\), which contradicts \(a(i)<k\). In the latter case, by the definition of \(b(i)\), \(b(i)\leq k\), which contradicts \(k<b(i)\).
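As an aside, footnote 1 noted that the unweighted 1D coverage problem produced by the third main step can be solved by a simple greedy algorithm. Here is a minimal sketch of such a greedy sweep (our illustration; the paper only cites [21] and does not spell out the routine), assuming the point coordinates are given sorted.

```python
def cover_points_1d(points, segments):
    """Greedy minimum coverage of points on a line by segments.

    points: sorted x-coordinates of P'; segments: list of (l, r) pairs, S'.
    Returns indices of a smallest covering subset, or None if some point
    cannot be covered. Runs in O((n + m) log m) time due to the sort.
    """
    order = sorted(range(len(segments)), key=lambda i: segments[i][0])
    chosen = []
    j = 0                       # next segment, in left-endpoint order
    best_r, best_i = float("-inf"), -1
    k = 0                       # next uncovered point
    while k < len(points):
        x = points[k]
        # among segments starting at or before x, keep the farthest-reaching
        while j < len(order) and segments[order[j]][0] <= x:
            if segments[order[j]][1] > best_r:
                best_r, best_i = segments[order[j]][1], order[j]
            j += 1
        if best_r < x:
            return None         # no segment can cover x
        chosen.append(best_i)
        while k < len(points) and points[k] <= best_r:
            k += 1              # skip everything the chosen segment covers
        best_r = float("-inf")  # segments passed so far cannot help later
    return chosen

print(cover_points_1d([1, 2, 5], [(0, 1.5), (1, 3), (4, 6)]))  # [1, 2]
```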
The following lemma justifies the correctness of the third main step of our algorithm.

Lemma 4: _Suppose \(S_{\mathit{opt}}\) is an optimal solution for the coverage problem on \(S^{*}\) and \(P\), and \(s_{i}\) is a disk in \(S_{\mathit{opt}}\). Then, any point of \(P\setminus P(s_{i})\) must be covered by a disk of \(S_{\mathit{opt}}\setminus\{s_{i}\}\)._

Proof: Let \(p\) be a point in \(P\setminus P(s_{i})\). If \(p\not\in s_{i}\), then since disks of \(S_{\mathit{opt}}\) form a coverage of \(P\), there must be a disk of \(S_{\mathit{opt}}\setminus\{s_{i}\}\) that covers \(p\). In the following, we assume that \(p\in s_{i}\). Since \(p\not\in P(s_{i})\), by Lemma 3, either \(x(p)\leq x(p_{a(i)})\) or \(x(p)\geq x(p_{b(i)})\). Below we only discuss the former case as the latter case is symmetric.

Since \(S_{\mathit{opt}}\) is an optimal solution, \(S_{\mathit{opt}}\) must have a disk \(s\) that covers \(p_{a(i)}\) (see Fig. 8). By definition, \(p_{a(i)}\) is not in \(s_{i}\). Hence, \(s\neq s_{i}\) and thus \(s\) is either in \(S_{l}(s_{i})\) or in \(S_{r}(s_{i})\). We claim that \(s\) must be in \(S_{l}(s_{i})\). Indeed, assume to the contrary that \(s\in S_{r}(s_{i})\). Then, by the definition of \(b(i)\), \(b(i)\leq a(i)\) must hold, which contradicts \(a(i)<b(i)\).

Since \(s\in S_{l}(s_{i})\), we next prove that \(s\) must cover \(p\), which will prove the lemma. Indeed, since \(x(p)\leq x(p_{a(i)})\), \(p\) is inside \(s_{i}\), and \(s\in S_{l}(s_{i})\), due to the non-containment property of \(S\), the upper arcs of \(s_{i}\) and \(s\) must intersect at a single point, say, \(q\) (see Fig. 8). Further, since \(p_{a(i)}\) is in \(s\) but not in \(s_{i}\), \(x(p_{a(i)})\leq x(q)\) must hold. This implies that the region of \(s_{i}\) to the left of \(p_{a(i)}\) is inside \(s\). As \(p\) is in \(s_{i}\) and \(x(p)\leq x(p_{a(i)})\), \(p\) must be inside \(s\).

Figure 8: Illustrating the proof of Lemma 4.

In light of the preceding two lemmas, when deciding which points a disk \(s_{i}\) of a solution needs to cover, it suffices to consider only the points of \(P(s_{i})\). This establishes the correctness of the third main step of our algorithm.

### Algorithm implementation

In this section, we show that the first main step of the algorithm can be implemented in \(O(m\sqrt{n}+(n+m)\log(n+m))\) time. The goal is to compute \(a(i)\) and \(b(i)\) for all disks \(s_{i}\in S\). We only discuss how to compute \(a(i)\) since computing \(b(i)\) can be done analogously. To this end, we start with the following definition.

Definition 2: For each point \(p\in P\), define \(\gamma(p)\) as the smallest index \(k\) such that the disk \(s_{k}\) covers \(p\).

One reason we introduce \(\gamma(p)\) is due to the following observation.

**Observation 3**: _For any disk \(s_{i}\in S\) and any point \(p\in P\) that is outside \(s_{i}\), there is a disk in \(S_{l}(s_{i})\) covering \(p\) if and only if \(\gamma(p)<i\)._

Our algorithm for computing \(a(i)\) relies on \(\gamma(p)\) for all \(p\in P\). Therefore, we first present an algorithm in the following lemma to compute \(\gamma(p)\).

Lemma 5: _There is an algorithm that can compute \(\gamma(p)\) for all \(p\in P\) in \(O(m\sqrt{n}+(m+n)\log(m+n))\) time._

Proof: Let \(H\) be the set of the upper arcs of all disks. As discussed in Section 2, we compute a hierarchical \((1/r)\)-cutting \(\Xi_{0},\ldots,\Xi_{k}\) for \(H\) in \(O(mr)\) time [10, 22], for a parameter \(r\in[1,m]\) to be determined later.
We follow the notation about cuttings as in Section 2, e.g., \(\Xi\), \(H_{\sigma}\), \(S_{\sigma}\), etc. Recall that \(\Xi\) denotes the set of all cells of all cuttings \(\Xi_{i}\), \(i=0,1,\ldots,k\). As discussed in Section 2, the cutting algorithm [10, 22] also computes the conflict lists \(H_{\sigma}\) (and thus \(S_{\sigma}\)) for all cells \(\sigma\in\Xi\). Also, \(\sum_{\sigma\in\Xi}|S_{\sigma}|=O(mr)\).

For each \(i\) with \(1\leq i\leq k\), for each cell \(\sigma\in\Xi_{i}\), let \(S(\sigma)\) be the set of disks that contain \(\sigma\) but do not contain \(\sigma^{\prime}\), where \(\sigma^{\prime}\) is the parent cell of \(\sigma\) (which is in \(\Xi_{i-1}\)). Note that \(\Xi_{0}\) consists of a single cell \(\sigma^{*}\) that is the entire plane, and thus we simply let \(S(\sigma^{*})=\emptyset\) as no disk contains the entire plane. We can compute \(S(\sigma)\) for all cells \(\sigma\in\Xi\) in \(O(mr)\) time as follows. For each \(i\) with \(1\leq i\leq k\), for each cell \(\sigma^{\prime}\in\Xi_{i-1}\), recall that \(S_{\sigma^{\prime}}\) is available from the cutting algorithm. For each disk \(s\) of \(S_{\sigma^{\prime}}\), for each child cell \(\sigma\) of \(\sigma^{\prime}\), we check whether \(s\) contains \(\sigma\); if yes, we add \(s\) to \(S(\sigma)\). As such, since the total size of \(S_{\sigma}\) of all cells \(\sigma\) of \(\Xi\) is \(O(mr)\) and each cell has \(O(1)\) children, the total time for computing \(S(\sigma)\) for all cells \(\sigma\in\Xi\) is \(O(mr)\). For each cell \(\sigma\), by slightly abusing the notation, we define \(\gamma(\sigma)\) as the smallest index of the disks in \(S(\sigma)\). After the sets \(S(\sigma)\) are computed, the indices \(\gamma(\sigma)\) for all cells \(\sigma\in\Xi\) can be computed in additional \(O(mr)\) time.

Next, we run the following _point location step_ for each point \(p\in P\) to compute \(\gamma(p)\). Initially, we set \(\gamma(p)=m+1\). Starting from the only cell of \(\Xi_{0}\), we locate the cell \(\sigma_{i}\) that contains \(p\) in each cutting \(\Xi_{i}\). This can be done in \(O(\log r)\) time as each cell contains \(O(1)\) children and \(k=O(\log r)\). For each such cell \(\sigma_{i}\), we update \(\gamma(p)=\min\{\gamma(p),\gamma(\sigma_{i})\}\). As such, the point location step on \(p\) takes \(O(\log r)\) time. The total time for all points of \(P\) is \(O(n\log r)\).

In addition, we do the following processing for the cell \(\sigma_{k}\) of the last cutting \(\Xi_{k}\) that contains each \(p\in P\). For each disk \(s_{j}\in S_{\sigma_{k}}\), we check whether \(s_{j}\) contains \(p\). If yes, we update \(\gamma(p)=\min\{\gamma(p),j\}\). After that, \(\gamma(p)\) is correctly computed. As \(|S_{\sigma_{k}}|\leq m/r\), this additional step for each point \(p\) takes \(O(m/r)\) time. Therefore, the total time of this step for all points of \(P\) is \(O(nm/r)\).

In summary, computing \(\gamma(p)\) for all \(p\in P\) takes \(O(mr+n\log r+nm/r)\) time. Setting \(r=\min\{\sqrt{n},m\}\) leads to the lemma.

The following lemma finally computes \(a(i)\).

Lemma 6: _Computing \(a(i)\) for all disks \(s_{i}\in S\) can be done in \(O(m\sqrt{n}+(m+n)\log(m+n))\) time._

Proof: We first compute \(\gamma(p)\) for all \(p\in P\) by Lemma 5. Let \(H\) be the set of the upper arcs of all disks. As discussed in Section 2, we compute a hierarchical \((1/r)\)-cutting \(\Xi_{0},\ldots,\Xi_{k}\) for \(H\) in \(O(mr)\) time [10, 22], for a parameter \(r\in[1,m]\) to be determined later.
We follow the notation about cuttings as in Section 2, e.g., \(\Xi\), \(H_{\sigma}\), \(S_{\sigma}\), etc. Recall that \(\Xi\) denotes the set of all cells of all cuttings \(\Xi_{i}\), \(i=0,1,\ldots,k\). For each cell \(\sigma\in\Xi\), let \(P(\sigma)\) denote the set of points of \(P\) inside \(\sigma\), i.e., \(P(\sigma)=P\cap\sigma\). We can compute \(P(\sigma)\) for all cells \(\sigma\in\Xi\) in \(O(n\log r)\) time by the point location step as discussed in Lemma 5. Note that the total size of \(P(\sigma)\) for all cells \(\sigma\in\Xi\) is also \(O(n\log r)\). In addition, if we invoke the point location step for points of \(P\) following their index order, then the points in each \(P(\sigma)\) are obtained sorted in their index order without affecting the \(O(n\log r)\) time complexity.

We need to perform a _pruning procedure_ on \(P(\sigma)\) for each cell \(\sigma\in\Xi\). Before we describe it, we first explain the motivation. Our algorithm for computing \(a(i)\) needs to solve the following subproblem. Given a disk \(s_{i}\) and a cell \(\sigma\in\Xi\) such that \(\sigma\) does not intersect \(s_{i}\), the problem is to compute \(a_{\sigma}(i)\), which is defined as the largest index \(k\) of a point \(p_{k}\) of \(P(\sigma)\) with \(\gamma(p_{k})<i\) (if no such \(k\) exists, then \(a_{\sigma}(i)=0\)). In light of Observation 3, \(a_{\sigma}(i)\) is the largest index \(k\) of a point \(p_{k}\) of \(P(\sigma)\) such that \(S_{l}(s_{i})\) has a disk covering \(p_{k}\).

To solve the subproblem, consider two points \(p_{k}\) and \(p_{j}\) in \(P(\sigma)\) with \(k<j\). A _key observation_ is that if \(\gamma(p_{k})\geq\gamma(p_{j})\), then \(a_{\sigma}(i)\neq k\) holds for any such disk \(s_{i}\) with \(s_{i}\cap\sigma=\emptyset\), and thus \(p_{k}\) can simply be ignored. Indeed, assume to the contrary that \(a_{\sigma}(i)=k\). Then, we have \(\gamma(p_{k})<i\). Hence, \(\gamma(p_{j})<i\). By definition, we then obtain \(a_{\sigma}(i)\geq j>k\), which contradicts \(a_{\sigma}(i)=k\).

In light of the key observation, to facilitate computing \(a_{\sigma}(i)\) for all such disks \(s_{i}\), we first perform the following pruning procedure. The algorithm maintains a stack \(A\) of points of \(P(\sigma)\). Initially, \(A=\emptyset\). We process the points of \(P(\sigma)\) in their index order (recall that they are already sorted). Suppose we are processing a point \(p\in P(\sigma)\). Let \(p^{\prime}\) be the point at the top of the stack. If \(A=\emptyset\) or if \(\gamma(p^{\prime})<\gamma(p)\), then we push \(p\) onto \(A\). Otherwise, we pop \(p^{\prime}\) out of \(A\) (we say that \(p^{\prime}\) is pruned) and repeat the above. After all points of \(P(\sigma)\) are processed, let \(P^{\prime}(\sigma)\) denote the set of points in the stack. Due to the pruning, the points of \(P^{\prime}(\sigma)\) are sorted by both their indices and their \(\gamma(\cdot)\) values. Clearly, the pruning procedure runs in \(O(|P(\sigma)|)\) time.

We use \(P^{\prime}(\sigma)\) in the following way. Recall that we wish to compute \(a_{\sigma}(i)\). Let \(k\) be the largest index of \(p_{k}\in P^{\prime}(\sigma)\) such that \(\gamma(p_{k})<i\). Then, the above key observation and our pruning procedure guarantee that \(a_{\sigma}(i)=k\). Hence, we could compute \(a_{\sigma}(i)\) by a binary search on \(P^{\prime}(\sigma)\) using \(i\), the index of the disk. However, doing a binary search for each disk would make the total runtime of the algorithm have one more logarithmic factor.
To improve this, we use the following strategy. For each cell \(\sigma\), suppose \(S^{\prime}(\sigma)\) is a set of disks \(s_{i}\) (with \(s_{i}\cap\sigma=\emptyset\)) for which we need to compute \(a_{\sigma}(i)\) with respect to \(\sigma\) (the exact definition of \(S^{\prime}(\sigma)\) will be given later). Then, we search \(P^{\prime}(\sigma)\) with all disks of \(S^{\prime}(\sigma)\) together, using a procedure similar to that for merging two sorted lists in merge-sort. In this way, the total time is linear in \(|P^{\prime}(\sigma)|+|S^{\prime}(\sigma)|\) (in contrast, the time would be \(O(|S^{\prime}(\sigma)|\cdot\log|P^{\prime}(\sigma)|)\) if we did a binary search for each disk of \(S^{\prime}(\sigma)\)).

We are now ready to describe our overall algorithm for computing \(a(i)\). The above has computed \(P(\sigma)\) for all cells \(\sigma\in\Xi\), whose total size is \(O(n\log r)\). We run the pruning procedure on \(P(\sigma)\) for every cell \(\sigma\in\Xi\) to compute \(P^{\prime}(\sigma)\); this takes \(O(n\log r)\) time in total as \(\sum_{\sigma\in\Xi}|P(\sigma)|=O(n\log r)\).

For each cell \(\sigma\in\Xi\), we define \(S^{\prime}(\sigma)\) as the subset of disks that do not intersect \(\sigma\) but whose upper arcs intersect the parent cell of \(\sigma\). We can compute \(S^{\prime}(\sigma)\) for all cells \(\sigma\in\Xi\) in \(O(mr)\) time as follows. Initially, we set \(S^{\prime}(\sigma)=\emptyset\). Then, for each \(0\leq i\leq k-1\), for each cell \(\sigma^{\prime}\in\Xi_{i}\), for each disk \(s\in S_{\sigma^{\prime}}\), for each child \(\sigma\) of \(\sigma^{\prime}\), if \(s\) does not intersect \(\sigma\), then we add \(s\) to \(S^{\prime}(\sigma)\). As each cell \(\sigma^{\prime}\) has \(O(1)\) children and \(\sum_{\sigma\in\Xi}|S_{\sigma}|=O(mr)\), it takes \(O(mr)\) time to compute \(S^{\prime}(\sigma)\) for all cells \(\sigma\in\Xi\). This also implies \(\sum_{\sigma\in\Xi}|S^{\prime}(\sigma)|=O(mr)\).

Now that we have \(P^{\prime}(\sigma)\) and \(S^{\prime}(\sigma)\) available for all cells \(\sigma\in\Xi\), we compute \(a(i)\) for all disks \(s_{i}\) as follows. Initially, we set \(a(i)=0\). Then, for each cell \(\sigma\), we perform a search with \(P^{\prime}(\sigma)\) and \(S^{\prime}(\sigma)\) to compute \(a_{\sigma}(i)\) for all disks \(s_{i}\in S^{\prime}(\sigma)\) using the procedure discussed above, which takes \(O(|P^{\prime}(\sigma)|+|S^{\prime}(\sigma)|)\) time. Then, for each disk \(s_{i}\in S^{\prime}(\sigma)\), we update \(a(i)=\max\{a(i),a_{\sigma}(i)\}\). Since \(\sum_{\sigma\in\Xi}|P^{\prime}(\sigma)|=O(n\log r)\) and \(\sum_{\sigma\in\Xi}|S^{\prime}(\sigma)|=O(mr)\), processing all cells \(\sigma\) of \(\Xi\) as above takes \(O(mr+n\log r)\) time in total.

Finally, for each cell \(\sigma\) of the last cutting \(\Xi_{k}\), we perform the following additional processing: for each disk \(s_{i}\in S_{\sigma}\), for each point \(p_{j}\in P(\sigma)\), if \(p_{j}\) is outside \(s_{i}\) and \(\gamma(p_{j})<i\), then we update \(a(i)=\max\{a(i),j\}\). After that, the values \(a(i)\) for all disks \(s_{i}\in S\) are correctly computed. Since \(|S_{\sigma}|\leq m/r\) for each cell \(\sigma\in\Xi_{k}\), we spend \(O(m/r)\) time on each point \(p\in P(\sigma)\). As \(\sum_{\sigma\in\Xi_{k}}|P(\sigma)|=n\), the total time of this additional processing for all cells \(\sigma\in\Xi_{k}\) is \(O(nm/r)\).

In summary, we can compute \(a(i)\) for all disks \(s_{i}\in S\) in \(O(mr+n\log r+nm/r)\) time in total. Setting \(r=\min\{\sqrt{n},m\}\) leads to the lemma.
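The merge-style batch search used in the above proof admits an equally short sketch (again illustrative Python, consuming the pruned list from the previous sketch together with a sorted list of disk indices; names are ours):

```python
def batch_a_sigma(pruned, disk_indices):
    """Compute a_sigma(i) for every disk index i in disk_indices.

    pruned: output of prune(), increasing in both index and gamma.
    disk_indices: sorted indices i of the disks in S'(sigma).
    Returns dict i -> a_sigma(i), with 0 if no p_k has gamma(p_k) < i.
    Runs in O(|pruned| + |disk_indices|), like merging two sorted lists.
    """
    result = {}
    j = -1  # position of the last point with gamma < current i
    for i in disk_indices:
        # Advance the pointer while the next point still qualifies.
        while j + 1 < len(pruned) and pruned[j + 1][1] < i:
            j += 1
        result[i] = pruned[j][0] if j >= 0 else 0
    return result

print(batch_a_sigma([(3, 4), (4, 9)], [2, 5, 10]))  # {2: 0, 5: 3, 10: 4}
```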
## 4 The unit-disk case

In this section, we consider the unit-disk case where all disks of \(S\) have the same radius. As remarked right after Theorem 3.1, our algorithm is the same as described in Section 3, except that we are able to compute \(a(i)\)'s and \(b(i)\)'s more efficiently, in \(m^{2/3}n^{2/3}2^{O(\log^{*}(m+n))}+O((n+m)\log(n+m))\) time. This is achieved by exploiting the property that all disks have the same radius. In the following, we only discuss how to compute \(a(i)\)'s because the case of \(b(i)\)'s is similar.

For each point \(p_{i}\in P\), define \(\tilde{p}_{i}\) as the unit disk centered at \(p_{i}\); we call \(\tilde{p}_{i}\) the _dual disk_ of \(p_{i}\). For each disk \(s_{i}\), let \(\tilde{s}_{i}\) denote the center of \(s_{i}\); we call \(\tilde{s}_{i}\) the _dual point_ of \(s_{i}\). Let \(\tilde{P}\) denote the set of all dual disks and \(\tilde{S}\) the set of all dual points. Since all disks of \(S\) have the same radius, we have the following easy observation.

**Observation 4**: _A disk \(s_{i}\in S\) contains a point \(p_{j}\in P\) if and only if the dual point \(\tilde{s}_{i}\) is contained in the dual disk \(\tilde{p}_{j}\)._

Our new algorithm for computing \(a(i)\)'s for the unit-disk case relies on exploiting the "duality" in Observation 4. Recall from Section 3.2 that the algorithm for computing \(a(i)\)'s involves two steps: (1) compute \(\gamma(p)\)'s for all points \(p\in P\) (i.e., Lemma 5); (2) compute \(a(i)\)'s for all \(s_{i}\in S\) (i.e., Lemma 6). We give new algorithms for both steps in the following two subsections, respectively.

### Computing \(\gamma(p)\)'s

We first introduce the following definition \(\tilde{\gamma}(\cdot)\), which is "dual" to \(\gamma(\cdot)\).

Definition 3: For each dual disk \(\tilde{p}\in\tilde{P}\), define \(\tilde{\gamma}(\tilde{p})\) as the smallest index \(k\) such that \(\tilde{p}\) contains the dual point \(\tilde{s}_{k}\).

The following observation follows immediately from Observation 4.

**Observation 5**: _For each point \(p_{i}\in P\), \(\gamma(p_{i})=\tilde{\gamma}(\tilde{p}_{i})\)._

Observation 5 implies that computing \(\gamma(p_{i})\) for all points \(p_{i}\in P\) is equivalent to computing \(\tilde{\gamma}(\tilde{p}_{i})\) for all dual disks \(\tilde{p}_{i}\in\tilde{P}\). To compute them, we will present two recursive algorithms and then combine them to obtain our final algorithm. The first algorithm computes \(\gamma(p_{i})\)'s using \(P\) and \(S\), while the second one computes \(\tilde{\gamma}(\tilde{p}_{i})\)'s using \(\tilde{P}\) and \(\tilde{S}\). The combined algorithm will run the two algorithms alternately using recursion.

#### 4.1.1 The first algorithm.

This algorithm follows the same framework as that for Lemma 5, but when processing the cells \(\sigma\) of the last cutting \(\Xi_{k}\), instead of using brute force, we form subproblems and solve them recursively. We follow the notation from Lemma 5. Let \(H\) be the set of the upper arcs of all disks of \(S\). We compute a hierarchical \((1/r)\)-cutting \(\Xi_{0},\ldots,\Xi_{k}\) for \(H\) in \(O(mr)\) time [10, 22], for a parameter \(r\in[1,m]\) to be determined later. Let \(\Xi\) denote the set of all cells of all cuttings \(\Xi_{i}\), \(i=0,1,\ldots,k\). As in Lemma 5, we compute \(S(\sigma)\) and \(\gamma(\sigma)\) for all cells \(\sigma\in\Xi\), which takes \(O(mr)\) time. Next, we run the point location step for each point \(p\in P\) as in Lemma 5. Initially, we set \(\gamma(p)=m+1\).
Starting from \(\Xi_{0}\), we locate the cell \(\sigma_{i}\) that contains \(p\) in each cutting \(\Xi_{i}\). For each \(\sigma_{i}\), we update \(\gamma(p)=\min\{\gamma(p),\gamma(\sigma_{i})\}\). This point location step also computes \(P(\sigma)\) for all cells \(\sigma\in\Xi\), where \(P(\sigma)\) denotes the set of points of \(P\) in \(\sigma\). The total time of the point locations for all points \(p\in P\) is \(O(n\log r)\).

Finally, we do the following additional processing for the last cutting \(\Xi_{k}\). For each cell \(\sigma\in\Xi_{k}\), if \(|P(\sigma)|>n/r^{2}\), we partition \(P(\sigma)\) into subsets of sizes between \(n/(2r^{2})\) and \(n/r^{2}\), called _standard subsets_ (if \(|P(\sigma)|\leq n/r^{2}\), then \(P(\sigma)\) itself is a standard subset). As \(\Xi_{k}\) has \(O(r^{2})\) cells and \(\sum_{\sigma\in\Xi_{k}}|P(\sigma)|=n\), the total number of standard subsets over all cells \(\sigma\in\Xi_{k}\) is \(O(r^{2})\). Recall that \(S_{\sigma}\) is the subset of disks whose upper arcs intersect \(\sigma\). For each standard subset \(P_{1}(\sigma)\) of \(P(\sigma)\), we form a subproblem on \((P_{1}(\sigma),S_{\sigma})\): compute \(\gamma_{\sigma}(p)\) for all points \(p\in P_{1}(\sigma)\) with respect to the disks of \(S_{\sigma}\), where \(\gamma_{\sigma}(p)\) is defined to be the smallest index of the disks of \(S_{\sigma}\) covering \(p\). After the subproblem is solved, we update \(\gamma(p)=\min\{\gamma(p),\gamma_{\sigma}(p)\}\) for each point \(p\in P_{1}(\sigma)\). This computes \(\gamma(p)\) correctly. Note that there are \(O(r^{2})\) subproblems in total, and in each subproblem \(|P_{1}(\sigma)|\leq n/r^{2}\) and \(|S_{\sigma}|\leq m/r\). If we use \(T(n,m)\) to denote the runtime of the entire algorithm on the original problem \((P,S)\), then we obtain the following recurrence relation:

\[T(n,m)=O(mr+n\log r)+O(r^{2})\cdot T(n/r^{2},m/r). \tag{1}\]

**The second algorithm.** In the second algorithm, we compute \(\tilde{\gamma}(\tilde{p})\) for all dual disks \(\tilde{p}\in\tilde{P}\). Recall that all dual disks have their centers above \(\ell\); therefore, each dual disk has a "lower arc" below \(\ell\). Let \(\tilde{H}\) denote the set of the lower arcs of all dual disks. We compute a hierarchical \((1/r)\)-cutting \(\Xi_{0},\ldots,\Xi_{k}\) for \(\tilde{H}\) in \(O(nr)\) time [10, 22], for a parameter \(r\in[1,n]\) to be determined later. We use \(\Xi\) to denote the set of all cells of all cuttings \(\Xi_{i}\), \(i=0,1,\ldots,k\). For each cell \(\sigma\in\Xi\), let \(\tilde{P}_{\sigma}\) denote the set of dual disks whose lower arcs intersect \(\sigma\), let \(\tilde{S}(\sigma)\) denote the subset of dual points of \(\tilde{S}\) inside \(\sigma\), and, slightly abusing the notation, let \(\tilde{\gamma}(\sigma)\) denote the minimum index over all dual points of \(\tilde{S}(\sigma)\). We can compute \(\tilde{S}(\sigma)\) as well as \(\tilde{\gamma}(\sigma)\) for all cells \(\sigma\in\Xi\) in \(O(m\log r)\) time by point locations as in the first algorithm.

We now compute \(\tilde{\gamma}(\tilde{p})\)'s. Initially, we set each \(\tilde{\gamma}(\tilde{p})=m+1\). For each \(1\leq i\leq k\), for each cell \(\sigma^{\prime}\in\Xi_{i-1}\), for each dual disk \(\tilde{p}\in\tilde{P}_{\sigma^{\prime}}\), for each child \(\sigma\in\Xi_{i}\) of \(\sigma^{\prime}\), if \(\tilde{p}\) contains \(\sigma\), then we update \(\tilde{\gamma}(\tilde{p})=\min\{\tilde{\gamma}(\tilde{p}),\tilde{\gamma}(\sigma)\}\).
Since \(\sum_{\sigma\in\Xi}|\tilde{P}_{\sigma}|=O(nr)\) and each cell has \(O(1)\) children, the total time of this procedure is \(O(nr)\).

Finally, we do the following additional processing for the last cutting \(\Xi_{k}\). For each cell \(\sigma\in\Xi_{k}\), as in the first algorithm, if \(|\tilde{S}(\sigma)|>m/r^{2}\), we partition \(\tilde{S}(\sigma)\) into _standard subsets_ of sizes between \(m/(2r^{2})\) and \(m/r^{2}\). The total number of standard subsets is \(O(r^{2})\). For each standard subset \(\tilde{S}_{1}(\sigma)\) of \(\tilde{S}(\sigma)\), we form a subproblem on \((\tilde{P}_{\sigma},\tilde{S}_{1}(\sigma))\): compute \(\tilde{\gamma}_{\sigma}(\tilde{p})\) for all dual disks \(\tilde{p}\in\tilde{P}_{\sigma}\) with respect to the dual points of \(\tilde{S}_{1}(\sigma)\), where \(\tilde{\gamma}_{\sigma}(\tilde{p})\) is the smallest index of the dual points of \(\tilde{S}_{1}(\sigma)\) contained in \(\tilde{p}\). After the subproblem is solved, we update \(\tilde{\gamma}(\tilde{p})=\min\{\tilde{\gamma}(\tilde{p}),\tilde{\gamma}_{\sigma}(\tilde{p})\}\) for each \(\tilde{p}\in\tilde{P}_{\sigma}\). This computes \(\tilde{\gamma}(\tilde{p})\) correctly. Note that there are \(O(r^{2})\) subproblems in total, and in each subproblem \(|\tilde{S}_{1}(\sigma)|\leq m/r^{2}\) and \(|\tilde{P}_{\sigma}|\leq n/r\).

Recall that \(T(n,m)\) refers to our problem of computing \(\gamma(p)\)'s on \((P,S)\), which is equivalent to computing \(\tilde{\gamma}(\tilde{p})\)'s on \((\tilde{P},\tilde{S})\) by Observation 5. Hence, we can also obtain the following recurrence relation using the second algorithm:

\[T(n,m)=O(nr+m\log r)+O(r^{2})\cdot T(n/r,m/r^{2}). \tag{2}\]

**Combining the two algorithms.** We now combine the two algorithms to compute \(\gamma(p)\)'s for all \(p\in P\). We first discuss the _symmetric case_ where \(m=n\) (if \(m\neq n\), it is the _asymmetric case_). If we apply (1) and then (2) using the same \(r\), we obtain the following recurrence:

\[T(n,n)=O(nr\log r)+O(r^{4})\cdot T(n/r^{3},n/r^{3}).\]

Setting \(r=n^{1/3}/\log n\) leads to

\[T(n,n)=O(n^{4/3})+O((n/\log^{3}n)^{4/3})\cdot T(\log^{3}n,\log^{3}n),\]

which solves to \(T(n,n)=n^{4/3}2^{O(\log^{*}n)}\).

We next tackle the asymmetric case using the above symmetric case result. Depending on whether \(m\geq n\), there are two cases.

1. If \(m\geq n\), depending on whether \(m<n^{2}\), there are two subcases.
   1. If \(m<n^{2}\), then set \(r=m/n\) so that \(n/r=m/r^{2}\). Applying (2) with \(r=m/n\) and solving each subproblem \(T(n/r,m/r^{2})\) using the symmetric case result gives \(T(n,m)=m^{2/3}n^{2/3}2^{O(\log^{*}m)}\).
   2. If \(m\geq n^{2}\), then applying (2) with \(r=n\) gives \(T(n,m)=O(n^{2}+m\log n)+O(n^{2})\cdot T(1,m/n^{2})\). Clearly, we have \(T(1,m/n^{2})=O(m/n^{2})\). Hence, we obtain \(T(n,m)=O(m\log n)\) since \(m\geq n^{2}\).

   Hence, in the case where \(m\geq n\) we have \(T(n,m)=O(m\log n)+m^{2/3}n^{2/3}2^{O(\log^{*}m)}\).
2. If \(m<n\), the analysis is similar (using (1) instead) and we obtain \(T(n,m)=O(n\log m)+m^{2/3}n^{2/3}2^{O(\log^{*}n)}\).

In summary, computing \(\gamma(p)\)'s for all points \(p\in P\) can be done in \(O((n+m)\log(m+n))+m^{2/3}n^{2/3}2^{O(\log^{*}(n+m))}\) time.

### Computing \(a(i)\)'s

With the \(\gamma(p)\) values computed above, we now describe our algorithm for computing \(a(i)\)'s for all disks \(s_{i}\in S\). As in Section 4.1, we first introduce the following definition, which is "dual" to \(a(i)\).
Definition 4: For each dual point \(\tilde{s}_{i}\in\tilde{S}\), define \(\tilde{a}(i)\) as the largest index \(k\) of a dual disk \(\tilde{p}_{k}\in\tilde{P}\) such that \(\tilde{p}_{k}\) contains a dual point \(\tilde{s}_{j}\) with \(j<i\) but does not contain \(\tilde{s}_{i}\).

Based on Observation 4, we have the following lemma.

Lemma 7: _For each \(1\leq i\leq m\), \(a(i)=\tilde{a}(i)\)._

Proof: Consider the point \(p_{k}\in P\) with \(k=a(i)\). By definition, \(p_{k}\) is outside \(s_{i}\) and \(S\) has a disk \(s_{j}\) with \(j<i\) that covers \(p_{k}\). Then, by Observation 4, the dual disk \(\tilde{p}_{k}\) contains the dual point \(\tilde{s}_{j}\) but does not contain the dual point \(\tilde{s}_{i}\). By definition, it must hold that \(\tilde{a}(i)\geq k=a(i)\). Analogously, we can prove that \(a(i)\geq\tilde{a}(i)\).

Lemma 7 implies that computing \(a(i)\) for all disks \(s_{i}\in S\) is equivalent to computing \(\tilde{a}(i)\) for all dual points \(\tilde{s}_{i}\in\tilde{S}\). To compute them, as in Section 4.1, we will present two recursive algorithms and then combine them. The first algorithm computes \(a(i)\)'s using \(P\) and \(S\), while the second one computes \(\tilde{a}(i)\)'s using \(\tilde{P}\) and \(\tilde{S}\). The combined algorithm will run the two algorithms alternately using recursion. In what follows, we assume that \(\gamma(p)\) for all points \(p\in P\) and \(\tilde{\gamma}(\tilde{p})\) for all \(\tilde{p}\in\tilde{P}\) have been computed.

#### 4.2.1 The first algorithm.

The first algorithm follows the framework of Lemma 6 but uses recursion when we process the last cutting \(\Xi_{k}\). Here we only discuss the additional processing for the cells of the last cutting \(\Xi_{k}\); the rest of the algorithm is the same as before, which takes \(O(mr+n\log r)\) time in total. We follow the notation in the proof of Lemma 6.

For each cell \(\sigma\in\Xi_{k}\), if \(|P(\sigma)|>n/r^{2}\), we partition \(P(\sigma)\) into _standard subsets_ of sizes between \(n/(2r^{2})\) and \(n/r^{2}\). Recall that \(S_{\sigma}\) is the subset of disks of \(S\) whose upper arcs intersect \(\sigma\). For each standard subset \(P_{1}(\sigma)\) of \(P(\sigma)\), we form a subproblem on \((P_{1}(\sigma),S_{\sigma})\): compute \(a_{\sigma}(i)\) for all disks \(s_{i}\in S_{\sigma}\) with respect to the points of \(P_{1}(\sigma)\), where \(a_{\sigma}(i)\) is the largest index \(k\) of a point \(p_{k}\in P_{1}(\sigma)\) that is outside \(s_{i}\) but is covered by a disk \(s_{j}\) with \(j<i\). After the subproblem is solved, we update \(a(i)=\max\{a(i),a_{\sigma}(i)\}\) for each disk \(s_{i}\in S_{\sigma}\). This computes \(a(i)\) correctly. Note that there are \(O(r^{2})\) subproblems in total, and in each subproblem \(|P_{1}(\sigma)|\leq n/r^{2}\) and \(|S_{\sigma}|\leq m/r\). If we use \(T(n,m)\) to denote the runtime of the entire algorithm on the original problem \((P,S)\), then we obtain the following recurrence relation:

\[T(n,m)=O(mr+n\log r)+O(r^{2})\cdot T(n/r^{2},m/r). \tag{3}\]

**The second algorithm.** In the second algorithm, we compute \(\tilde{a}(i)\) for all dual points \(\tilde{s}_{i}\in\tilde{S}\). Recall that all dual disks have their centers above \(\ell\); therefore, each dual disk has a "lower arc" below \(\ell\). Let \(\tilde{H}\) denote the set of the lower arcs of all dual disks. We compute a hierarchical \((1/r)\)-cutting \(\Xi_{0},\ldots,\Xi_{k}\) for \(\tilde{H}\) in \(O(nr)\) time [10, 22], for a parameter \(r\in[1,n]\) to be determined later.
We use \(\Xi\) to denote the set of all cells of all cuttings \(\Xi_{i}\), \(i=0,1,\ldots,k\). For each cell \(\sigma\in\Xi\), let \(\tilde{P}_{\sigma}\) denote the set of dual disks whose lower arcs intersect \(\sigma\), and let \(\tilde{S}(\sigma)\) be the set of dual points of \(\tilde{S}\) inside \(\sigma\). We can compute \(\tilde{S}(\sigma)\) for all cells of \(\Xi\) in \(O(m\log r)\) time using point locations as discussed before. Also, \(\sum_{\sigma\in\Xi}|\tilde{S}(\sigma)|=O(m\log r)\). In addition, the points in each \(\tilde{S}(\sigma)\) can be sorted in index order if we invoke the point location step on the dual points of \(\tilde{S}\) in their index order; the total time is still \(O(m\log r)\).

For each cell \(\sigma\in\Xi\), we define \(\tilde{P}(\sigma)\) in the same way as \(S^{\prime}(\sigma)\) in the proof of Lemma 6. Specifically, \(\tilde{P}(\sigma)\) is the subset of dual disks of \(\tilde{P}\) that do not intersect \(\sigma\) but whose lower arcs intersect the parent cell of \(\sigma\). As in Lemma 6, \(\tilde{P}(\sigma)\) for all \(\sigma\in\Xi\) can be computed in \(O(nr)\) time and \(\sum_{\sigma\in\Xi}|\tilde{P}(\sigma)|=O(nr)\).

Now consider the following problem on \(\tilde{S}(\sigma)\) and \(\tilde{P}(\sigma)\). For each dual point \(\tilde{s}_{i}\in\tilde{S}(\sigma)\), we want to compute \(\tilde{a}_{\sigma}(i)\), which is the largest index \(k\) of a dual disk \(\tilde{p}_{k}\in\tilde{P}(\sigma)\) that contains a dual point \(\tilde{s}_{j}\) with \(j<i\) (note that \(\tilde{p}_{k}\) does not contain \(\tilde{s}_{i}\), since \(\tilde{p}_{k}\) does not intersect \(\sigma\)). After solving the problem, we update \(\tilde{a}(i)=\max\{\tilde{a}(i),\tilde{a}_{\sigma}(i)\}\). To solve the problem, first notice that \(\tilde{p}_{k}\) contains a dual point \(\tilde{s}_{j}\) with \(j<i\) if and only if \(\tilde{\gamma}(\tilde{p}_{k})<i\). Then, consider two dual disks \(\tilde{p}_{k}\) and \(\tilde{p}_{j}\) in \(\tilde{P}(\sigma)\) with \(k<j\). A _key observation_ is that if \(\tilde{\gamma}(\tilde{p}_{k})\geq\tilde{\gamma}(\tilde{p}_{j})\), then \(\tilde{a}_{\sigma}(i)\neq k\) holds for any dual point \(\tilde{s}_{i}\in\tilde{S}(\sigma)\) (and thus \(\tilde{p}_{k}\) can be ignored; this echoes the key observation in Lemma 6).

Using the key observation, as in the proof of Lemma 6, we run a pruning procedure on \(\tilde{P}(\sigma)\) to obtain a subset \(\tilde{P}^{\prime}(\sigma)\) of dual disks that are sorted both by their indices and by their \(\tilde{\gamma}(\cdot)\) values. The pruning procedure takes \(O(|\tilde{P}(\sigma)|)\) time if the dual disks of \(\tilde{P}(\sigma)\) are already sorted by their indices. We can produce the sorted lists \(\tilde{P}(\sigma)\) for all cells \(\sigma\in\Xi\) in \(O(nr)\) time as follows. First, for each dual disk \(\tilde{p}_{i}\), we create a list \(L_{i}\) that contains all cells \(\sigma\in\Xi\) with \(\tilde{p}_{i}\in\tilde{P}(\sigma)\). This can be done in \(O(nr)\) time by traversing the conflict lists of all cells, since each \(\tilde{P}(\sigma)\) is extracted from the conflict list of the parent cell of \(\sigma\). Second, we process the lists \(L_{1},L_{2},\ldots,L_{n}\) in this order. For each list \(L_{i}\), for each cell \(\sigma\in L_{i}\), we add \(\tilde{p}_{i}\) to the rear of a list \(L(\sigma)\) for \(\sigma\) (initially, \(L(\sigma)=\emptyset\)). Once all lists \(L_{1},L_{2},\ldots,L_{n}\) are processed as above, \(L(\sigma)\) contains the dual disks of \(\tilde{P}(\sigma)\) sorted by their indices. The total time of this sorting algorithm is linear in \(\sum_{\sigma\in\Xi}|\tilde{P}_{\sigma}|\), which is \(O(nr)\). After the pruning procedure, we proceed with \(\tilde{P}^{\prime}(\sigma)\) as follows.
Let \(k\) be the largest index of a dual disk \(\tilde{p}_{k}\in\tilde{P}^{\prime}(\sigma)\) such that \(\tilde{\gamma}(\tilde{p}_{k})<i\); then we have \(\tilde{a}_{\sigma}(i)=k\). As such, we can scan the two lists \(\tilde{P}^{\prime}(\sigma)\) and \(\tilde{S}(\sigma)\) simultaneously (recall that the dual points of \(\tilde{S}(\sigma)\) are also sorted by their indices), which computes \(\tilde{a}_{\sigma}(i)\) for all dual points \(\tilde{s}_{i}\in\tilde{S}(\sigma)\) in \(O(|\tilde{P}^{\prime}(\sigma)|+|\tilde{S}(\sigma)|)\) time. As \(\sum_{\sigma\in\Xi}|\tilde{P}^{\prime}(\sigma)|=O(nr)\) and \(\sum_{\sigma\in\Xi}|\tilde{S}(\sigma)|=O(m\log r)\), the total time for doing this for all cells \(\sigma\in\Xi\) is \(O(m\log r+nr)\).

Finally, we do the following additional processing for the last cutting \(\Xi_{k}\). For each cell \(\sigma\in\Xi_{k}\), if \(|\tilde{S}(\sigma)|>m/r^{2}\), we partition \(\tilde{S}(\sigma)\) into _standard subsets_ of sizes between \(m/(2r^{2})\) and \(m/r^{2}\). Recall that \(\tilde{P}_{\sigma}\) is the subset of dual disks whose lower arcs intersect \(\sigma\). For each standard subset \(\tilde{S}_{1}(\sigma)\) of \(\tilde{S}(\sigma)\), we form a subproblem on \((\tilde{P}_{\sigma},\tilde{S}_{1}(\sigma))\): compute \(\tilde{a}_{\sigma}(i)\) for all dual points \(\tilde{s}_{i}\in\tilde{S}_{1}(\sigma)\) with respect to the dual disks of \(\tilde{P}_{\sigma}\), where \(\tilde{a}_{\sigma}(i)\) is the largest index \(k\) of a dual disk \(\tilde{p}_{k}\in\tilde{P}_{\sigma}\) that contains a dual point \(\tilde{s}_{j}\) with \(j<i\) but does not contain \(\tilde{s}_{i}\). After the subproblem is solved, we update \(\tilde{a}(i)=\max\{\tilde{a}(i),\tilde{a}_{\sigma}(i)\}\) for each dual point \(\tilde{s}_{i}\in\tilde{S}_{1}(\sigma)\). This computes \(\tilde{a}(i)\) correctly. Note that there are \(O(r^{2})\) subproblems in total, and in each subproblem \(|\tilde{P}_{\sigma}|\leq n/r\) and \(|\tilde{S}_{1}(\sigma)|\leq m/r^{2}\).

Recall that \(T(n,m)\) refers to our problem of computing \(a(i)\)'s on \((P,S)\), which is equivalent to computing \(\tilde{a}(i)\)'s on \((\tilde{P},\tilde{S})\) by Lemma 7. Hence, we can obtain the following recurrence relation using the second algorithm:

\[T(n,m)=O(nr+m\log r)+O(r^{2})\cdot T(n/r,m/r^{2}). \tag{4}\]

**Combining the two algorithms.** Following exactly the same approach and analysis as in Section 4.1 and using (3) and (4), we obtain a combined algorithm that computes \(a(i)\) for all disks \(s_{i}\in S\) in \(O((n+m)\log(n+m))+m^{2/3}n^{2/3}2^{O(\log^{*}(n+m))}\) time. We summarize our result in the following theorem.

Theorem 4.1: _Given a set \(P\) of \(n\) points and a set \(S\) of \(m\) unit disks in the plane such that the disk centers are separated from points of \(P\) by a line, the disk coverage problem for \(P\) and \(S\) is solvable in \(O((n+m)\log(n+m))+m^{2/3}n^{2/3}2^{O(\log^{*}(n+m))}\) time._
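For reference and testing, the quantities \(\gamma(p)\) and \(a(i)\) that drive Sections 3 and 4 can be computed directly by brute force in \(O(nm)\) time. The sketch below (illustrative Python, with disks given as (x, y, radius) triples and 1-based indices as in the text, not code from the paper) can serve as a correctness check for the faster algorithms:

```python
import math

def covers(disk, p):
    """True if point p = (x, y) lies inside disk = (cx, cy, r)."""
    cx, cy, r = disk
    return math.hypot(p[0] - cx, p[1] - cy) <= r

def gamma_bruteforce(points, disks):
    """gamma(p): smallest index i with p covered by s_i, else m + 1."""
    m = len(disks)
    return [next((i + 1 for i, s in enumerate(disks) if covers(s, p)), m + 1)
            for p in points]

def a_bruteforce(points, disks):
    """a(i): largest index k of a point p_k outside s_i with
    gamma(p_k) < i (i.e., p_k covered by some s_j, j < i); else 0."""
    gam = gamma_bruteforce(points, disks)
    out = []
    for i, s in enumerate(disks, start=1):
        ks = [k + 1 for k, p in enumerate(points)
              if not covers(s, p) and gam[k] < i]
        out.append(max(ks, default=0))
    return out
```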
Given a set \(P\) of \(n\) points and a set \(S\) of \(m\) disks in the plane, the disk coverage problem asks for a subset of disks of minimum cardinality whose union covers all points of \(P\); the problem is NP-hard in general. This paper studies the line-separable unit-disk version of the problem, in which all disks have the same radius and their centers are separated from the points of \(P\) by a line \(\ell\). The problem is solved in \(O((n+m)\log(n+m))+m^{2/3}n^{2/3}2^{O(\log^{*}(n+m))}\) time, improving the previous \(O(nm+n\log n)\)-time result. The techniques also solve the line-constrained version, in which the centers of all disks of \(S\) lie on a line \(\ell\) while the points of \(P\) can be anywhere in the plane.
2310.00508
Analytical Modeling of Parameter Imbalance in Permanent Magnet Synchronous Machines
This paper presents a systematic and comprehensive analysis of the impact of parameter imbalance in permanent magnet synchronous machines. Analytical models that reveal the effects of imbalance are obtained for each parameter. Thereafter, the models are verified for accuracy by comparison with complex simulations that closely represent true machine behavior. Such models may be utilized for developing (general) algorithms for detection, learning and mitigation of the negative effects of parameter imbalance including current (and thus torque) pulsations during real-time operation.
Prerit Pramod
2023-09-30T22:07:06
http://arxiv.org/abs/2310.00508v1
# Analytical Modeling of Parameter Imbalance in Permanent Magnet Synchronous Machines

###### Abstract

This paper presents a systematic and comprehensive analysis of the impact of parameter imbalance in permanent magnet synchronous machines. Analytical models that reveal the effects of imbalance are obtained for each parameter. Thereafter, the models are verified for accuracy by comparison with complex simulations that closely represent true machine behavior. Such models may be utilized for developing (general) algorithms for detection, learning and mitigation of the negative effects of parameter imbalance, including current (and thus torque) pulsations, during real-time operation.

## Background

Industrial applications such as electric power steering (EPS) [1, 2, 3, 4, 5] that involve mass manufacturing of electric machines, including permanent magnet synchronous machines (PMSM) [6, 7], switched reluctance machines (SRM) [8, 9, 10, 11, 12], and permanent magnet DC machines (PMDC) [13], must maintain tight control over the part-to-part variation as well as the intra-part balance of machine parameters. However, very tight control of such variations and imbalances is not practical, since it results in high-volume rejection of manufactured parts and thus unnecessary costs. Imbalance of machine parameters results in non-ideal current and thus torque control, i.e., undesirable current and torque pulsations are observed. This effect is significantly magnified when feedforward current control [14, 15, 16, 17, 18, 19] is employed as opposed to feedback control [20, 21, 22, 23, 24, 25, 26, 27, 28, 29], although even the latter suffers from this situation due to bandwidth and maximum bus voltage limitations.

While the effect of parameter imbalance is somewhat understood, a detailed analysis of it is still lacking. A systematic and comprehensive analysis of the impact of parameter imbalance in PMSMs is presented here. Analytical (mathematical) models that reveal the effects of imbalance are obtained for each parameter. Such mathematical models expand the ability to capture mathematically non-ideal behavior that is typically not included in conventional formulations [30, 31]. Thereafter, the models are verified for accuracy by comparison with simulations that closely represent true machine behavior. Such models may be utilized for developing (general) algorithms for detection, learning and mitigation of the negative effects of parameter imbalance, including current (and thus torque) pulsations, during real-time operation [32, 33, 34, 35]. Note that the focus of this write-up is on modeling of the actual machine. The behavior of the motor drive system during actual operation, where the motor control system interacts with the electric machine, is not presented here.

## Description

The mathematical model of a 3-phase PMSM in the stationary or abc reference frame consists of the electrical and magnetic relationships, i.e., the voltage-to-current relationship and the current-to-torque expression, respectively. The electrical circuit equations are expressed as follows.
\[\begin{split} V_{a}&=R_{a}I_{a}+\dot{\lambda}_{a}\\ V_{b}&=R_{b}I_{b}+\dot{\lambda}_{b}\\ V_{c}&=R_{c}I_{c}+\dot{\lambda}_{c}\\ \lambda_{a}&=L_{a}I_{a}-M_{ab}I_{b}-M_{ac}I_{c}-\lambda_{am}\cos\theta\\ \lambda_{b}&=L_{b}I_{b}-M_{ba}I_{a}-M_{bc}I_{c}-\lambda_{bm}\cos(\theta-\beta)\\ \lambda_{c}&=L_{c}I_{c}-M_{ca}I_{a}-M_{cb}I_{b}-\lambda_{cm}\cos(\theta-2\beta)\end{split} \tag{1}\]

where \(V_{x}\) and \(I_{x}\) are the phase voltages and currents for phase \(x\); \(R_{x}\), \(L_{x}\) and \(\lambda_{xm}\) are the phase resistance, self-inductance and permanent magnet flux linkage respectively; and \(M_{xy}\) represents the mutual inductance of phase \(x\) due to current in phase \(y\). \(\beta\) is the spatial angle difference between the different phases of the electric machine and is equal to \(\frac{2\pi}{n}\), with \(n\) being the number of phases. The electromagnetic torque is obtained from the currents and flux linkages as follows.

\[\begin{split} T_{e}&=\frac{\partial W^{\prime}}{\partial\theta}\\ W^{\prime}&=\sum_{x=a,b,c}\int\lambda_{x}\,dI_{x}\end{split} \tag{2}\]

where \(T_{e}\) represents the electromagnetic torque and \(W^{\prime}\) is the magnetic co-energy, while \(\theta\) is the electrical (phase) position of the motor. Thus, for modeling the mismatch or imbalance between phases, the parameters may be written as follows.

\[\begin{split} R_{x}&=R+\Delta R_{x}\\ L_{x}&=L+\Delta L_{x}\\ M_{xy}&=M+\Delta M_{xy}\\ \lambda_{xm}&=\lambda_{m}+\Delta\lambda_{x}\end{split} \tag{3}\]

where the \(\Delta A_{x}\) term represents the deviation of the value of parameter \(A\) for phase \(x\) from the nominal value. For mathematical convenience, the lowest of the parameter values among all the phases may be chosen as the nominal value. In this way, one of the error terms is always zero. The individual error terms may then be obtained by averaging the deviation of the individual phase parameters from the nominal value. In general, the phase voltage equations are converted to the synchronously rotating or dq reference frame using the commonly known Clarke and Park transforms, which are expressed (in combined form) as follows.

\[h_{dq0}=T_{f}h_{abc},\qquad T_{f}=\frac{2}{3}\begin{bmatrix}\cos\theta&\cos(\theta-\beta)&\cos(\theta-2\beta)\\ \sin\theta&\sin(\theta-\beta)&\sin(\theta-2\beta)\\ \frac{1}{2}&\frac{1}{2}&\frac{1}{2}\end{bmatrix} \tag{4}\]

where \(h\) may represent the voltage or current. The inverse Clarke and Park transforms (again in combined form) are expressed as follows.

\[h_{abc}=T_{i}h_{dq0},\qquad T_{i}=T_{f}^{-1}=\begin{bmatrix}\cos\theta&\sin\theta&1\\ \cos(\theta-\beta)&\sin(\theta-\beta)&1\\ \cos(\theta-2\beta)&\sin(\theta-2\beta)&1\end{bmatrix} \tag{5}\]

With matched or equal phase parameters, the Park transform results in machine equations that are independent of position. These ideal equations are commonly used for the purposes of modeling, estimation and control in most industrial motor drive control systems. In order to obtain the analytical model, all the parameters are assumed to be different (as explained above). The general phase voltage equations are then transformed into the dq frame using the transformation matrices. This results in the following voltage equations.

\[\begin{split} V_{d}&=V_{di}+\Delta V_{dR}+\Delta V_{d\lambda}+\Delta V_{dLM}\\ V_{q}&=V_{qi}+\Delta V_{qR}+\Delta V_{q\lambda}+\Delta V_{qLM}\end{split} \tag{6}\]

where the subscript \(i\) represents the ideal (position-independent) equations.
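As a quick numerical sanity check of the combined transforms (4) and (5), the matrices may be built directly. The sketch below (illustrative Python with numpy, not part of the original paper) verifies that \(T_{i}=T_{f}^{-1}\) and that a balanced three-phase current maps to position-independent dq components:

```python
import numpy as np

def T_f(theta, beta=2 * np.pi / 3):
    """Combined Clarke-Park transform, eq. (4)."""
    a = [theta, theta - beta, theta - 2 * beta]
    return (2 / 3) * np.array([np.cos(a), np.sin(a), [0.5, 0.5, 0.5]])

def T_i(theta, beta=2 * np.pi / 3):
    """Combined inverse transform, eq. (5); rows [cos, sin, 1]."""
    a = [theta, theta - beta, theta - 2 * beta]
    return np.column_stack([np.cos(a), np.sin(a), np.ones(3)])

# T_i is the exact inverse of T_f.
assert np.allclose(T_f(0.7) @ T_i(0.7), np.eye(3))

# Balanced phase currents -> constant (position-independent) dq currents.
I, delta, beta = 10.0, 0.3, 2 * np.pi / 3
for theta in np.linspace(0, 2 * np.pi, 5):
    i_abc = I * np.cos([theta + delta, theta + delta - beta,
                        theta + delta - 2 * beta])
    i_dq0 = T_f(theta) @ i_abc
    # With this sign convention, i_d = I cos(delta), i_q = -I sin(delta).
    assert np.allclose(i_dq0, [I * np.cos(delta), -I * np.sin(delta), 0.0])
```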
The additional voltage terms, referenced by \(\Delta V\), are obtained by applying the transformation considering the error terms due to the imbalance. The individual voltage terms that arise due to resistance, permanent magnet flux linkage and inductance imbalance are represented by subscripts \(R\), \(\lambda\) and \(LM\) respectively. The derivation for obtaining these terms for each parameter individually is presented in the following description. The ideal dq frame model for non-salient pole machines is specified below.

\[\begin{split} V_{di}&=RI_{d}+(L+M)\big(\dot{I}_{d}+\omega_{e}I_{q}\big)\\ V_{qi}&=RI_{q}+(L+M)\big(\dot{I}_{q}-\omega_{e}I_{d}\big)+\omega_{e}\lambda_{m}\\ T_{e}&=\frac{3}{2}\frac{N_{p}}{2}\lambda_{m}I_{q}\end{split} \tag{7}\]

The ideal dq model for salient pole machines consists of separate d- and q-axis inductances and is specified here for reference as follows.

\[\begin{split} V_{di}&=RI_{d}+\omega_{e}L_{q}I_{q}+L_{d}\dot{I}_{d}\\ V_{qi}&=RI_{q}-\omega_{e}L_{d}I_{d}+L_{q}\dot{I}_{q}+\omega_{e}\lambda_{m}\\ T_{e}&=\frac{3}{2}\frac{N_{p}}{2}\big(\lambda_{m}+\big(L_{q}-L_{d}\big)I_{d}\big)I_{q}\end{split} \tag{8}\]

Note that the torque expressions for modeling imbalance of different parameters are not shown here. However, they can be easily obtained by following the same idea as the voltage-current derivations. It is important to understand that the models presented here are general (plant) models that describe the machine behavior and are not influenced by the control strategy whatsoever. Further, the models are valid for all synchronous machines, including wound-rotor machines with field current windings.

**Resistance Imbalance**

The additional voltage terms obtained as a result of resistance imbalance are specified below.

\[\begin{split}&\frac{3}{2}\Delta V_{dR}=\Delta R_{a}I_{a}\cos\theta+\Delta R_{b}I_{b}\cos(\theta-\beta)+\Delta R_{c}I_{c}\cos(\theta-2\beta)\\ &\frac{3}{2}\Delta V_{qR}=\Delta R_{a}I_{a}\sin\theta+\Delta R_{b}I_{b}\sin(\theta-\beta)+\Delta R_{c}I_{c}\sin(\theta-2\beta)\\ &\Delta V_{dR}=\frac{\Delta R}{3}I_{d}+K_{R}\cos(2\theta+\phi_{R})I_{d}+K_{R}\sin(2\theta+\phi_{R})I_{q}+(\ldots)I_{0}\\ &\Delta V_{qR}=K_{R}\sin(2\theta+\phi_{R})I_{d}-K_{R}\cos(2\theta+\phi_{R})I_{q}\\ &K_{R}=\frac{1}{3}\sqrt{\Delta R_{a}^{2}+\Delta R_{b}^{2}+\Delta R_{c}^{2}-\Delta R_{a}\Delta R_{b}-\Delta R_{b}\Delta R_{c}-\Delta R_{c}\Delta R_{a}}\\ &\phi_{R}=\tan^{-1}\left(\frac{\sqrt{3}(-\Delta R_{b}+\Delta R_{c})}{2\Delta R_{a}-\Delta R_{b}-\Delta R_{c}}\right)\end{split} \tag{9}\]

A block diagram representation of the effect of resistance imbalance is shown in Figure 1. A comparison of the analytical prediction of resistance imbalance with a detailed simulation model having high accuracy for describing true machine behavior is shown in Figure 2.

Figure 1: Block diagram representation of analytical model for resistance imbalance.

Figure 2: Results illustrating accuracy of analytical model for resistance imbalance.

### Permanent Magnet Flux Linkage Imbalance

The additional voltage terms obtained as a result of permanent magnet flux linkage imbalance are as follows.
\[\begin{split}\frac{3}{2}\Delta V_{d\lambda}&=\omega_{e}\Delta\lambda_{am}\sin\theta\cos\theta+\omega_{e}\Delta\lambda_{bm}\sin(\theta-\beta)\cos(\theta-\beta)+\omega_{e}\Delta\lambda_{cm}\sin(\theta-2\beta)\cos(\theta-2\beta)\\ \frac{3}{2}\Delta V_{q\lambda}&=\omega_{e}\Delta\lambda_{am}\sin^{2}\theta+\omega_{e}\Delta\lambda_{bm}\sin^{2}(\theta-\beta)+\omega_{e}\Delta\lambda_{cm}\sin^{2}(\theta-2\beta)\\ \Delta V_{d\lambda}&=\omega_{e}K_{\lambda}\sin(2\theta+\phi_{\lambda})\\ \Delta V_{q\lambda}&=\frac{\omega_{e}}{3}(\Delta\lambda_{am}+\Delta\lambda_{bm}+\Delta\lambda_{cm})-\omega_{e}K_{\lambda}\cos(2\theta+\phi_{\lambda})\\ K_{\lambda}&=\frac{1}{3}\sqrt{\Delta\lambda_{a}^{2}+\Delta\lambda_{b}^{2}+\Delta\lambda_{c}^{2}-\Delta\lambda_{a}\Delta\lambda_{b}-\Delta\lambda_{b}\Delta\lambda_{c}-\Delta\lambda_{c}\Delta\lambda_{a}}\\ \phi_{\lambda}&=\tan^{-1}\left(\frac{\sqrt{3}(-\Delta\lambda_{b}+\Delta\lambda_{c})}{2\Delta\lambda_{a}-\Delta\lambda_{b}-\Delta\lambda_{c}}\right)\end{split} \tag{10}\]

A block diagram representation of the effect of permanent magnet flux linkage imbalance is shown in Figure 3. A comparison of the analytical prediction of permanent magnet flux linkage imbalance with a detailed simulation model having high accuracy for describing true machine behavior is shown in Figure 4.

Figure 3: Block diagram representation of analytical model for permanent magnet flux linkage imbalance.

Figure 4: Results illustrating accuracy of analytical model for permanent magnet flux linkage imbalance.

**Inductance Imbalance**

The additional voltage terms obtained as a result of inductance (including both self and mutual inductance) imbalance are specified below.

\[\begin{split}\frac{3}{2}\Delta V_{dLM}&=\big(p(L_{a}I_{a}-M_{ab}I_{b}-M_{ac}I_{c})\big)\cos\theta+\big(p(L_{b}I_{b}-M_{ba}I_{a}-M_{bc}I_{c})\big)\cos(\theta-\beta)+\big(p(L_{c}I_{c}-M_{ca}I_{a}-M_{cb}I_{b})\big)\cos(\theta-2\beta)\\ \frac{3}{2}\Delta V_{qLM}&=\big(p(L_{a}I_{a}-M_{ab}I_{b}-M_{ac}I_{c})\big)\sin\theta+\big(p(L_{b}I_{b}-M_{ba}I_{a}-M_{bc}I_{c})\big)\sin(\theta-\beta)+\big(p(L_{c}I_{c}-M_{ca}I_{a}-M_{cb}I_{b})\big)\sin(\theta-2\beta)\end{split} \tag{11}\]

where \(p\) represents the derivative operator. This is the general expression for all permanent magnet synchronous machines (PMSMs). In the case of salient pole PMSMs, both the self and mutual inductance terms are position dependent, and so the derivative operation needs to be carried out accordingly. For non-salient pole machines, the inductances may be assumed to be position independent.
\[\begin{split}\Delta V_{dLM}&=\big(\Delta L+\Delta M+K_{L}\cos(2\theta+\phi_{L})-K_{M}\cos(2\theta+\phi_{M})\big)\big(\dot{I}_{d}+\omega_{e}I_{q}\big)+\big(K_{L}\sin(2\theta+\phi_{L})-K_{M}\sin(2\theta+\phi_{M})\big)\big(-\omega_{e}I_{d}+\dot{I}_{q}\big)\\ \Delta V_{qLM}&=\big(K_{L}\sin(2\theta+\phi_{L})-K_{M}\sin(2\theta+\phi_{M})\big)\big(\dot{I}_{d}+\omega_{e}I_{q}\big)+\big(\Delta L+\Delta M+K_{L}\cos(2\theta+\phi_{L})+K_{M}\cos(2\theta+\phi_{M})\big)\big(-\omega_{e}I_{d}+\dot{I}_{q}\big)\end{split} \tag{12}\]

\[\begin{split} K_{L}&=\frac{1}{3}\sqrt{\Delta L_{a}^{2}+\Delta L_{b}^{2}+\Delta L_{c}^{2}-\Delta L_{a}\Delta L_{b}-\Delta L_{a}\Delta L_{c}-\Delta L_{b}\Delta L_{c}}\\ \phi_{L}&=\tan^{-1}\left(\frac{\sqrt{3}(-\Delta L_{b}+\Delta L_{c})}{2\Delta L_{a}-\Delta L_{b}-\Delta L_{c}}\right)\end{split} \tag{13}\]

\[\begin{split} K_{M}&=\frac{2}{3}\sqrt{M_{ab}^{2}+M_{bc}^{2}+M_{ca}^{2}-M_{ab}M_{ac}-M_{ab}M_{cb}-M_{ac}M_{cb}}\\ \phi_{M}&=\tan^{-1}\left(\frac{\sqrt{3}(-M_{ab}+M_{ac})}{-M_{ab}-M_{ac}+2M_{cb}}\right)\end{split} \tag{14}\]

A block diagram representation of the effect of inductance imbalance is shown in Figure 5. A comparison of the analytical prediction of inductance imbalance with a detailed simulation model having high accuracy for describing true machine behavior is shown in Figure 6.

Figure 5: Block diagram representation of analytical model for inductance imbalance.

Figure 6: Results illustrating accuracy of analytical model for inductance imbalance.

Note that the above derivations concerning inductance imbalance are only valid for non-salient pole PMSMs. While the derivation and results for modeling salient pole machines are not shown here, it is easy to extend the idea presented here to obtain those as well. As mentioned earlier, for salient pole machines, additional terms will be introduced due to the existence of second-order position-dependent terms in the stationary-frame self and mutual inductances, and therefore the derivative operator must be applied appropriately to correctly determine the desired inductance imbalance model for salient pole synchronous machines.

## Conclusions

This paper has presented analytical models capturing the effects of imbalance for all the different parameters of PMSMs. These models are not commonly known and may be used to develop algorithms (that may be implemented at the manufacturing end of line or in the controller software for real-time operation) for the detection, identification, learning and mitigation of the negative effects of parameter imbalance in PMSM machines.
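As an illustration of how the closed-form constants above may be exercised, the following toy check (illustrative Python, not the paper's verification code, and assuming balanced sinusoidal currents with \(I_{d}=1\), \(I_{q}=I_{0}=0\)) evaluates the definitional phase-domain expression for \(\Delta V_{dR}\) in (9) and compares its DC and second-harmonic content against \((\Delta R_{a}+\Delta R_{b}+\Delta R_{c})/3\) and \(K_{R}\):

```python
import numpy as np

# Hypothetical per-phase resistance deviations (ohms), chosen arbitrarily.
dRa, dRb, dRc = 0.02, 0.0, 0.05
beta = 2 * np.pi / 3
theta = np.linspace(0, 2 * np.pi, 4096, endpoint=False)

# Phase currents for I_d = 1, I_q = I_0 = 0, via the inverse transform (5).
Ia, Ib, Ic = np.cos(theta), np.cos(theta - beta), np.cos(theta - 2 * beta)

# Definitional form of Delta V_dR from the first line of (9).
dVd = (2 / 3) * (dRa * Ia * np.cos(theta)
                 + dRb * Ib * np.cos(theta - beta)
                 + dRc * Ic * np.cos(theta - 2 * beta))

# Closed-form second-harmonic magnitude K_R from (9).
K_R = np.sqrt(dRa**2 + dRb**2 + dRc**2
              - dRa * dRb - dRb * dRc - dRc * dRa) / 3

dc = dVd.mean()                                      # DC component
h2 = 2 * np.abs(np.mean(dVd * np.exp(-2j * theta)))  # 2nd-harmonic amplitude
assert np.isclose(dc, (dRa + dRb + dRc) / 3)         # matches Delta R / 3
assert np.isclose(h2, K_R)                           # matches K_R * I_d
```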
This paper systematically and comprehensively analyzes the impact of parameter imbalance in permanent magnet synchronous machines. Analytical models revealing the effects of imbalance are developed for each parameter, and their accuracy is verified by comparison with complex simulations. Such models can be used to develop (general) algorithms for the detection, learning, and mitigation of the negative effects of parameter imbalance during real-time operation.
2309.15700
Tracking Snake-like Robots in the Wild Using Only a Single Camera
Robot navigation within complex environments requires precise state estimation and localization to ensure robust and safe operations. For ambulating mobile robots like robot snakes, traditional methods for sensing require multiple embedded sensors or markers, leading to increased complexity, cost, and increased points of failure. Alternatively, deploying an external camera in the environment is very easy to do, and marker-less state estimation of the robot from this camera's images is an ideal solution: both simple and cost-effective. However, the challenge in this process is in tracking the robot under larger environments where the cameras may be moved around without extrinsic calibration, or maybe when in motion (e.g., a drone following the robot). The scenario itself presents a complex challenge: single-image reconstruction of robot poses under noisy observations. In this paper, we address the problem of tracking ambulatory mobile robots from a single camera. The method combines differentiable rendering with the Kalman filter. This synergy allows for simultaneous estimation of the robot's joint angle and pose while also providing state uncertainty which could be used later on for robust control. We demonstrate the efficacy of our approach on a snake-like robot in both stationary and non-stationary (moving) cameras, validating its performance in both structured and unstructured scenarios. The results achieved show an average error of 0.05 m in localizing the robot's base position and 6 degrees in joint state estimation. We believe this novel technique opens up possibilities for enhanced robot mobility and navigation in future exploratory and search-and-rescue missions.
Jingpei Lu, Florian Richter, Shan Lin, Michael C. Yip
2023-09-27T14:42:30
http://arxiv.org/abs/2309.15700v1
# Tracking Snake-like Robots in the Wild using only a Single Camera

###### Abstract

Robot navigation within complex environments requires precise state estimation and localization to ensure robust and safe operations. For ambulating mobile robots like robot snakes, traditional methods for sensing require multiple embedded sensors or markers, leading to increased complexity, cost, and increased points of failure. Alternatively, deploying an external camera in the environment is very easy to do, and marker-less state estimation of the robot from this camera's images is an ideal solution: both simple and cost-effective. However, the challenge in this process is in tracking the robot under larger environments where the cameras may be moved around without extrinsic calibration, or may even be in motion (e.g., a drone following the robot). The scenario itself presents a complex challenge: single-image reconstruction of robot poses under noisy observations. In this paper, we address the problem of tracking ambulatory mobile robots from a single camera. The method combines differentiable rendering with the Kalman filter. This synergy allows for simultaneous estimation of the robot's joint angle and pose while also providing state uncertainty which could be used later on for robust control. We demonstrate the efficacy of our approach on a snake-like robot with both stationary and non-stationary (moving) cameras, validating its performance in both structured and unstructured scenarios. The results achieved show an average error of \(0.05\) m in localizing the robot's base position and \(6\) degrees in joint state estimation. We believe this novel technique opens up possibilities for enhanced robot mobility and navigation in future exploratory and search-and-rescue missions.

## I Introduction

Unlike their stationary counterparts, mobile robots are designed to navigate through the physical world in environments that are often too treacherous for humans, such as the deep sea [1] and even other planets [2]. With mobile robots acting as surrogates for humans, exploration for research and search-and-rescue missions in extreme environments are conducted without risking human lives [3]. A growing class of mobile robots involves ambulatory systems. These ambulatory mobile robots (AMRs) have specialized articulated robotic designs for enhanced mobility and stability on uneven ground in order to navigate broader terrains. AMRs include but are not limited to quadruped robots [4], flying drones [5], and snake-like and serpentine robots [6, 7]. To ensure the safe operation of AMRs in complex environments, various sensors are integrated into their systems. These sensors aid in localizing the robot and understanding its surroundings, though this can introduce increased complexity in real-world deployments. A more streamlined approach involves tracking AMRs using cameras. Cameras, given their ease of installation and portability, are better suited for navigating challenging terrains. For example, in the Mars 2020 NASA mission, the Mars Helicopter utilized onboard cameras to scout the landscape and guide the Perseverance rover's exploration. As we look to the future, exploratory and search-and-rescue missions will likely involve collaborative efforts between multiple robots, and the ability to track one robot using a camera mounted on another will be crucial. In this paper, we address the problem of tracking snake-like robots from a single camera.

Fig. 1: A snake-like robot, Arcsnake [7], is tracked on camera in the outdoor environment by a hovering drone.
Along the lines of the Mars Helicopter's mission, we aim to bring robot state estimation from camera data to snake-like robots, and by extension, other AMRs, to aid in future exploratory missions. By estimating the pose and state of an AMR, drones can provide more detailed guidance when mapping the environment [8]. Our focus is on snake robots that draw inspiration from biological snakes [9] and are currently funded by NASA for exploration on extraterrestrial planetary bodies [10]. Toward this end, we recognize a fundamental need for being able to track AMRs using only a monocular camera. These techniques will also become foundational in the future for deploying robots in search-and-rescue missions or leveraging autonomous robot teams for work in the remote wilderness.

The overall tracking approach involves first a method for automatic robot mask generation. Leveraging this mask, we present a tracking technique that seamlessly integrates differentiable rendering with the Kalman filter, ensuring precise online state estimation. We conduct experiments in both laboratory and outdoor environments (Figure 1). Through both qualitative and quantitative evaluations, we demonstrate the effectiveness of our method in different scenarios. Our contributions are threefold:

* We present the first work on marker-less state estimation for a snake robot from a single monocular camera.
* Our method combines differentiable rendering with a Kalman filter, and simultaneously estimates the joint angle and the pose of a snake robot.
* We validate the effectiveness of the algorithm on a snake robot in both structured and unstructured environments, achieving a localization accuracy of 0.05 m for the robot base position and 0.11 rad on the robot's joint states.

## II Previous Work

### _Robot Localization from Single Camera_

Localizing the robot is crucial for a wide range of robotic applications, especially when relying on a single camera, which presents unique challenges. One popular approach is to use fiducial markers as 2D point features [11, 12]. For articulated robots like a snake robot, the 3D positions of the markers can be calculated using robot kinematics, and the robot pose can be derived by solving a Perspective-n-Point problem [13, 14, 15, 16]. As the field evolved, there was a shift towards marker-less pose estimation. Initial efforts in this direction utilized depth cameras to localize articulated robots [17, 18, 19, 20]. With the rise of Deep Neural Networks (DNNs), a new paradigm emerged. DNNs, with their advantage of extracting point features without the need for markers, have significantly enhanced the performance of marker-less pose estimation for articulated robots [21, 22, 23, 24, 25]. Beyond keypoint-based methods, recent works [26, 27] have demonstrated the potential of rendering-based methods. Benefiting from the dense correspondence provided by robot masks, rendering-based methods achieve state-of-the-art performance on robot pose estimation; however, they suffer from slow processing speed. In this work, we adopt a rendering-based approach for robot state estimation. Instead of relying purely on the rendering, we integrate image moments with a Kalman Filter, aiming to utilize temporal information to achieve precise and fast online inference using a single camera.
### _Snake Robot State Estimation_

For a broader category of mobile robots, the primary focus of state estimation has been on localizing the robot within its surroundings. For instance, Milella et al. [28] utilize visually distinctive features on stereo images for localization. Several other works [29, 30, 31] have proposed methods that take into account the environment dynamics and potential measurement errors to enhance localization accuracy. However, in the realm of snake robots, state estimation becomes even more intricate due to the need to consider joint angles for accurate 3D space modeling. Historically, state estimation for snake robots has relied on the robot's internal proprioceptive sensors, as highlighted by works like Rollinson et al. [32, 33]. Filtering methods, like the Unscented Kalman Filter and the Extended Kalman Filter [34, 35], have then been employed to account for measurement error in real-time estimation. In this work, we seek to estimate both the position and the joint angles of the snake robot using only images. This approach not only simplifies the estimation process but also enhances the robot's adaptability in outdoor scenarios.

```
Input : Initialized robot state \(\mathbf{x}_{0|0},\Sigma_{0|0}\)
Output : Estimated robot state \(\mathbf{x}_{t|t},\Sigma_{t|t}\)
 1  while receive new image \(\mathbb{I}_{t}\) do
        // Motion Model
 2      \(\mathbf{x}_{t|t-1},\Sigma_{t|t-1}\leftarrow motionModel(\mathbf{x}_{t-1|t-1},\mathbf{v}_{t-1},\Sigma_{t-1|t-1})\)
        // Observation from Image
 3      \(\mathbb{M}_{t}^{ref}\gets f_{seg}(\mathbb{I}_{t})\)
 4      \(\mathbf{m}_{t}\gets computeMoments(\mathbb{M}_{t}^{ref})\)
        // Observation Model
 5      \(\mathcal{M}_{t|t-1}\gets reconstructMesh(\mathbf{x}_{t|t-1})\)
 6      \(\mathbb{M}_{t|t-1}^{pred}\gets renderPrediction(\mathbf{x}_{t|t-1},\mathcal{M}_{t|t-1})\)
 7      \(\hat{\mathbf{m}}_{t}\gets computeMoments(\mathbb{M}_{t|t-1}^{pred})\)
 8      \(H_{t}=\frac{\partial\hat{\mathbf{m}}_{t}}{\partial\mathbf{x}_{t|t-1}}\)
        // Compute the Residual
 9      \(\mathbf{y}_{t}=\mathbf{m}_{t}-\hat{\mathbf{m}}_{t}\)
        // Update Belief
10      \(K_{t}=\Sigma_{t|t-1}H_{t}^{\top}(H_{t}\Sigma_{t|t-1}H_{t}^{\top})^{-1}\)
11      \(\mathbf{x}_{t|t}=\mathbf{x}_{t|t-1}+K_{t}\mathbf{y}_{t}\)
12      \(\Sigma_{t|t}=(I-K_{t}H_{t})\Sigma_{t|t-1}\)
        // Refine with Image Loss
13      for number of refinement steps do
14          \(\mathcal{M}_{t|t}\gets reconstructMesh(\mathbf{x}_{t|t})\)
15          \(\mathbb{M}_{t|t}^{pred}\gets renderPrediction(\mathbf{x}_{t|t},\mathcal{M}_{t|t})\)
16          \(\mathcal{L}_{t}\gets computeLoss(\mathbb{M}_{t|t}^{pred},\mathbb{M}_{t}^{ref})\)
17          \(\mathbf{x}_{t|t}=\mathbf{x}_{t|t}-\lambda\frac{\partial\mathcal{L}_{t}}{\partial\mathbf{x}_{t|t}}\)
        // Update Velocity
18      \(\mathbf{v}_{t}\gets computeVelocity(\mathbf{x}_{t|t},\mathbf{x}_{t-1|t-1})\)
```

**Algorithm 1** Online State Estimation

## III Methodology

The overall proposed approach follows an online state estimation method combining differentiable rendering of a robot mask, with image moment prediction, a robot motion model, and a Kalman filter to estimate the joint angle and the pose of a mobile robot from a single camera. The method additionally includes refinement steps and velocity update steps to enhance the accuracy of the estimation, as well as model transfer techniques to reduce computation and memory costs so that the method can run on modest hardware. The details follow in the next section, and Algorithm 1 outlines the main steps of the method.
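As a minimal illustration of the belief-update step of Algorithm 1 (lines 10-12), the following numpy sketch applies the gain formula as printed in the listing; the state layout, names, and the small regularizer are our own assumptions, not the authors' implementation (note the listing's gain omits an explicit measurement-noise term, which we reproduce as written):

```python
import numpy as np

def ekf_update(x, Sigma, y, H, eps=1e-9):
    """Belief update following lines 10-12 of Algorithm 1.

    x: (d,) state mean, Sigma: (d, d) covariance,
    y: (k,) residual m_t - m_hat_t, H: (k, d) observation Jacobian.
    eps * I regularizes the innovation matrix, since the listing uses
    K = Sigma H^T (H Sigma H^T)^{-1} with no measurement-noise term.
    """
    S = H @ Sigma @ H.T + eps * np.eye(H.shape[0])
    K = Sigma @ H.T @ np.linalg.inv(S)          # Kalman gain (line 10)
    x_new = x + K @ y                            # mean update (line 11)
    Sigma_new = (np.eye(len(x)) - K @ H) @ Sigma  # covariance (line 12)
    return x_new, Sigma_new

# Toy usage: 3-dimensional state belief, 2-dimensional centroid residual.
x, Sigma = np.zeros(3), np.eye(3)
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
x, Sigma = ekf_update(x, Sigma, y=np.array([0.2, -0.1]), H=H)
```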
### _Motion Model with Belief Propagation_ For AMR navigation, the robot state, denoted by \(\mathbf{x}_{t}\), can encapsulate various attributes such as joint angles, camera-to-robot transformations, and other necessary parameters at time \(t\). In this work, we define the robot state as \(\mathbf{x}:=[\theta,\mathbf{q},\mathbf{b}]\), where \(\theta\in\mathbb{R}^{N}\) is the robot joint angle, \(\mathbf{q}\) is the quaternion, and \(\mathbf{b}\) is the translational vector. The quaternion and the translational vector are parametrizations of the \(\mathbf{T}_{b}^{c}\in SE(3)\), which is the robot pose in the camera frame. The next state of the robot is predicted with a motion model, based on its previous state and velocity. This prediction phase provides a rough direction for belief propagation. We will model the robot's motion using a simple linear relationship: \[\mathbf{b}_{t|t-1}=\mathbf{b}_{t-1|t-1}+\mathbf{v}_{t-1}\Delta t \tag{1}\] where we try to predict the position of the robot \(\mathbf{b}_{t|t-1}\) at time \(t\) by considering the previous robot position \(\mathbf{b}_{t-1|t-1}\), the velocity \(\mathbf{v}_{t-1}\), and the time step \(\Delta t\). We will make the assumption that there is negligible process noise (i.e., imperfections in the system's motion model are negligible as compared to observation noise), leading to the following expression for the propagation of the covariance matrix: \[\Sigma_{t|t-1}=F_{t}\Sigma_{t-1|t-1}F_{t}^{\top} \tag{2}\] In this case, \(F_{t}\) is the identity matrix, reflecting our assumption that the motion model follows a linear relationship without any non-linear or stochastic effects. ### _Automatic Mask Generation for Segmentation_ The proposed state estimation algorithm requires segmenting the robot from images, but manually labeling the robot masks can be highly time-consuming. Recently, the zero-shot generalizable segmentation model, Segment Anything Model (SAM) [36], allows automatic robot mask generation with simple bounding box prompts. Given the binary robot mask of the previous frame, \(\mathbb{M}_{t-1}\in\mathbb{R}^{H\times W}\), the bounding box prompt for the current frame, \(\mathcal{B}_{t}:=(u_{min},v_{min},u_{max},v_{max})\), is estimated by a mask-to-box operation, \[(u_{min},v_{min}) =\min\{(u,v)\,|\,\mathbb{M}_{t-1}[u,v]\neq 0\} \tag{3}\] \[(u_{max},v_{max}) =\max\{(u,v)\,|\,\mathbb{M}_{t-1}[u,v]\neq 0\} \tag{4}\] Then, the SAM is utilized to generate the robot mask of the current frame, given the bounding box prompt \(\mathcal{B}_{t}\), as shown in Fig. 2. To ensure the robustness of the bounding box prompt, the robot mask is dilated before performing the mask-to-box operation. Using SAM for robot mask generation can, however, be slow as SAM is not optimized for real-time application (around 0.5 seconds per frame using a single Nvidia GeForce RTX 4090 GPU). To achieve real-time performance, we utilize the robot masks generated from SAM to train a lightweight neural network for segmentation. Specifically, we employ DeepLabV3+ [37], a popular semantic segmentation architecture, to segment the robot from RGB images during the online estimation process. By training DeepLabV3+ with the generated masks, we ensure that our system can segment the robot in real-time with modest memory and computation requirements, effectively enabling realistic deployment in the wild. 
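For concreteness, a minimal sketch of the mask-to-box operation (3)-(4) on a binary numpy mask is given below; the fixed padding stands in for the dilation step, and all names (and the (u, v) = (column, row) convention) are illustrative assumptions, not the authors' code:

```python
import numpy as np

def mask_to_box(mask, pad=10):
    """Bounding-box prompt B_t from the previous binary mask, eqs. (3)-(4).

    mask: (H, W) array, nonzero on robot pixels.
    pad: margin in pixels, a simple stand-in for the dilation step.
    Returns (u_min, v_min, u_max, v_max), or None for an empty mask.
    """
    vs, us = np.nonzero(mask)  # row (v) and column (u) pixel indices
    if us.size == 0:
        return None
    h, w = mask.shape
    return (max(int(us.min()) - pad, 0), max(int(vs.min()) - pad, 0),
            min(int(us.max()) + pad, w - 1), min(int(vs.max()) + pad, h - 1))

# Toy usage on a 720x1280 mask containing a small blob.
mask = np.zeros((720, 1280), dtype=np.uint8)
mask[100:120, 300:340] = 1
print(mask_to_box(mask))  # (290, 90, 349, 129)
```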
### _Observation Model for Belief Propagation_

In this section, we introduce the mapping from the predicted robot states \(\mathbf{x}_{t|t-1}\) to the observation of the image moment [38] \(\hat{\mathbf{m}}_{t}\) in the proposed Algorithm 1. Given the predicted robot states \(\mathbf{x}_{t|t-1}\), which include the joint angle and robot pose, we first reconstruct the robot mesh by interconnecting individual robot body parts through forward kinematics. For a snake-like (serpentine) robot, we approximate each individual robot body part as a cylinder with the dimensions mentioned in [39, 7]. Given a mesh vertex \(\mathbf{r}^{n}\in\mathbb{R}^{3}\) on the \(n\)-th robot link, this vertex undergoes a transformation into the robot base frame considering the joint angle:

\[\overline{\mathbf{r}}^{b}=\mathbf{T}_{n}^{b}(\theta)\overline{\mathbf{r}}^{n} \tag{5}\]

where \(\overline{\cdot}\) represents the homogeneous representation of a point (i.e., \(\overline{\mathbf{r}}=[\mathbf{r},1]^{T}\)), and \(\mathbf{T}_{n}^{b}(\theta)\) is the coordinate frame transformation obtained from the forward kinematics [40]. Having the reconstructed robot mesh and the predicted robot base-to-camera transformation, \(\mathbf{T}_{b}^{c}\), the PyTorch3D differentiable renderer [41] comes into play to produce a virtual-model-derived, or rendered, robot mask. By referencing techniques similar to those in [27], a differentiable silhouette renderer paired with a perspective camera is employed. The _SoftSilhouetteShader_ is specifically leveraged to compute the pixel values that form the robot mask. With the rendered robot mask, \(\mathbb{M}\), the image moments become computable as:

\[M_{ij}=\sum_{u}\sum_{v}u^{i}v^{j}\mathbb{M}(u,v) \tag{6}\]

Then, we derive the centroid, which is our observation for belief propagation, by:

\[\hat{\mathbf{m}}=\left[\frac{M_{10}}{M_{00}}\quad\frac{M_{01}}{M_{00}}\right]^{\top} \tag{7}\]

We employ PyTorch autograd [42] to track the gradient of each step and compute the observation matrix \(H\) by collecting the derivatives of the image moment \(\hat{\mathbf{m}}\) with respect to the robot states \(\mathbf{x}_{t|t-1}\). Finally, an Extended Kalman Filter (EKF) [34] is employed to update the belief of the robot states (lines 9-12 in Alg. 1), which ensures that our belief about the robot states is continually refined as more observations come in.

Fig. 2: Example of the bounding box prompt generated by the mask-to-box operation (top) and the corresponding robot mask generated using SAM (bottom).

### _Image Loss Refinement and Velocity Estimation_

While image moments have historically proven useful in object tracking [38, 43], their efficacy diminishes in the complex arena of robot state estimation. This is because they encapsulate only limited details of the robot mask. Consequently, a direct method that compares the estimated and reference robot masks provides an enhancement to state estimation accuracy. We predict the robot mask from the estimated robot states using the same differentiable rendering pipeline as described in Section III-C. To measure the difference between this prediction and the reference mask, we employ an image loss function, which sums the squared differences between the predicted mask \(\mathbb{M}^{pred}\) and the reference mask \(\mathbb{M}^{ref}\) across the image dimensions:

\[\mathcal{L}=\sum_{i=0}^{H-1}\sum_{j=0}^{W-1}\left(\mathbb{M}^{pred}(i,j)-\mathbb{M}^{ref}(i,j)\right)^{2}. \tag{8}\]

We refine the mean of the robot states by applying back-propagation on this image loss (line 17 in Alg. 1), bringing the estimation closer to the true state.
As a final step, in service of the next belief propagation timestep, we derive the velocity from the updated position: \[\mathbf{v}_{t}=\frac{\mathbf{b}_{t|t}-\mathbf{b}_{t-1|t-1}}{\Delta t} \tag{9}\] This velocity is used by the motion model in forthcoming iterations, as it feeds into the predictions of the robot's future states. ## IV Experiments and Results To comprehensively assess the efficacy of our proposed state estimation algorithm, we collected datasets of a snake robot operating in both structured and unstructured environments. These datasets facilitated both qualitative and quantitative evaluations of the state estimation method. The snake robot hardware is described in [39, 7] and is the evolutionary precursor to the NASA Exobiology Extant Life Surveyor (EELS) robot [10], which is anticipated to serve as a science research vehicle for earth science missions as well as extraterrestrial planetary exploration on Saturn's moon, Enceladus, or Jupiter's moon, Europa. **Snake-Lab Dataset**: We introduce the Snake-Lab dataset for evaluating the accuracy of the joint angle and robot pose estimation. This dataset was acquired in a lab setting using an Intel RealSense camera at a resolution of (1280, 720). The robot's joint angles were recorded using electromagnetic sensors and were synchronized with the captured images. Additionally, the robot's spatial position was determined using the depth capabilities of the camera. For evaluation metrics, we employ the Euclidean distance for position estimation and the \(L_{1}\) norm for joint angle estimation. **Snake-Outdoor Dataset**: To examine the robustness of our algorithm in less structured environments, we collected the Snake-Outdoor dataset. This dataset comprises three videos: the first two were recorded using a hand-held camera at a resolution of (1280, 720), while the third was captured via a drone camera, which has no direct connection to the snake robot system. Given the absence of ground truth for the robot's state in this setting, we adopt the Intersection-over-Union (IoU) metric: \[\text{IoU}=\frac{|\mathbb{M}^{ref}\cap\mathbb{M}^{pred}|}{|\mathbb{M}^{ref} \cup\mathbb{M}^{pred}|} \tag{10}\] to compare the ground-truth robot mask \(\mathbb{M}^{ref}\) with our algorithm's estimated mask \(\mathbb{M}^{pred}\) (see the sketch at the end of this subsection). ### _Implementation Details_ To train DeepLabV3+, we collected around 1500 images captured at a resolution of (1280, 720); the ground truth segmentation masks were generated using the Segment Anything Model [36]. We used the Adam optimizer [44] for gradient descent with 20 epochs and a batch size of 8. The initial learning rate was set to 0.0001 and was decayed by a factor of 0.1 at the 10th epoch. During the online estimation, we resize the raw image to a resolution of (640, 360). Both the observed robot mask and the rendered robot mask are processed at this resolution. For the refinement step, we set the learning rate to 0.005 and also used the Adam optimizer for gradient descent. All computational experiments were executed on a system equipped with an Intel Core i9-11900F processor and an NVIDIA GeForce RTX 4090.
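A minimal sketch of the IoU metric in Eq. (10) for binary masks (illustrative only):

```python
import numpy as np

def iou(mask_ref, mask_pred):
    """Intersection-over-Union (Eq. 10) between two binary robot masks."""
    ref, pred = mask_ref.astype(bool), mask_pred.astype(bool)
    union = np.logical_or(ref, pred).sum()
    return np.logical_and(ref, pred).sum() / union if union > 0 else 0.0
```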
To strike a balance between accuracy and processing speed, we perform 10 refinement iterations for each incoming image, ensuring good accuracy while sustaining an estimation speed of 1 FPS.

\begin{table} \begin{tabular}{l c c} \hline \hline & Position error (m) & Joint state error (rad) \\ \hline Static & 0.0278 & 0.0605 \\ Moving camera & 0.0647 & 0.0849 \\ Moving robot & 0.0587 & 0.1352 \\ \hline Overall & 0.0540 & 0.1125 \\ \hline \hline \end{tabular} \end{table} TABLE I: Average Position and State Estimation Error on the Snake-Lab Dataset

### _Experiment on Snake-Lab dataset_ We present the qualitative results on the Snake-Lab dataset in Figure 4, and the quantitative evaluation of our state estimation algorithm in Table I. We also plot the estimated joint trajectories together with the sensor readings in Figure 3. The results are segmented by scenario: static conditions, moving camera, and moving robot. Under static conditions, where both the camera and the robot remain stationary, both the joint angle error and the position error are the lowest, indicating that the algorithm performs very well in stable environments. Moving the camera or the robot slightly degrades the algorithm's accuracy, which can be attributed to the dynamic nature of the camera and robot movements introducing additional complexity into the state estimation. The overall average position error and joint angle error across all scenarios are 0.0540 m and 0.1125 rad, respectively. These results affirm the robustness of our state estimation algorithm, even under varying conditions. It is evident, however, that dynamic factors such as camera or robot movement introduce some challenges, leading to increased errors. ### _Experiment on Snake-Outdoor dataset_ Table II presents the quantitative evaluation of our state estimation algorithm on the Snake-Outdoor dataset. The results are organized by the number of refinement steps taken: 1, 5, and 10. The performance metric is the Intersection-over-Union (IoU) for each video, and the speed of the algorithm in frames per second (FPS) is also provided. From Table II, we see a clear trade-off between accuracy and speed. As the number of refinement steps increases, there is a noticeable improvement in the mean IoU, but the speed decreases. With 10 refinement steps, the algorithm operates at 1 FPS, which might be a limiting factor for real-time applications. However, the significant boost in accuracy may justify this trade-off in scenarios where precision is critical.

\begin{table} \begin{tabular}{l c c c} \hline \hline & \multicolumn{3}{c}{Number of refinement steps} \\ \cline{2-4} & 1 & 5 & 10 \\ \hline Video 1 (Mean IoU) & 0.4659 & 0.7632 & 0.8665 \\ Video 2 (Mean IoU) & 0.2456 & 0.3584 & 0.7690 \\ Video 3 (Mean IoU) & 0.3088 & 0.4394 & 0.8210 \\ \hline Speed (FPS) & 3.5 & 1.5 & 1 \\ \hline \hline \end{tabular} \end{table} TABLE II: Quantitative evaluation on the Snake-Outdoor dataset. We compute the IoU between the estimated robot mask and the ground-truth robot mask, and report the processing speed under different settings.

Fig. 4: Qualitative results on the Snake-Lab dataset. We derive the skeleton from the estimated robot pose and joint angles, and visualize it by projecting the skeleton onto the images.

Fig. 3: Plots of the estimated joint trajectories vs. the sensor readings for the Snake-Lab dataset in the moving-robot scenario. For each joint, we plot the pitch and yaw angles separately. Note that the snake robot uses magnetic encoders for the sensor readings, which are slightly noisy due to misalignment between the encoder and magnet caused by vibrations during the experiment.
We also present qualitative results in Fig. 5, showing the estimated skeleton and the predicted robot mask overlaid on the images. The estimated skeleton aligns with the robot's actual structure, providing a clear and intuitive picture of the algorithm's performance in real-world, outdoor settings. ## V Conclusion In this work, we presented a novel method for state estimation of snake robots using a single camera. The proposed approach combines differentiable rendering with the Kalman filter, fusing temporal information with a rendering-based optimization technique to improve the estimation process and enhance the method's adaptability in outdoor scenarios. The results demonstrate the efficacy of our approach on a snake robot, validating its performance in both structured and unstructured environments. We believe this technique opens up possibilities for expanded capabilities in ambulatory mobile robot deployment and navigation in complex environments, making it a promising solution for future mobile robot applications. For future work, an exciting avenue is the exploration of how our method can be adapted for collaborative robotics, where multiple robots work in tandem. This could involve state estimation in scenarios where robots share sensory data to navigate or perform tasks (e.g. drone-assisted routing in different landscapes).

Fig. 5: Qualitative results on the Snake-Outdoor dataset. We show the estimated skeleton and predicted robot mask overlaid on the images. Rows 1-2 correspond to video 1, rows 3-4 to video 2, and rows 5-6 to video 3. Notably, the skeleton and mask align precisely with the robot in the images.

## Acknowledgement We thank Professor Nikolay Atanasov and Jason Stanley from the Existential Robotics Laboratory at UCSD for their assistance with the drone experiments, and the NASA Jet Propulsion Laboratory for their continued mission guidance.
Robots navigating complex environments require accurate state estimation and localization to ensure robust and safe operation. For ambulatory mobile robots such as snake robots, traditional methods rely on on-board sensors or markers, which add complexity, cost, and points of failure. Introducing an external camera into the environment, on the other hand, is straightforward, and estimating the robot state from its images in a marker-less fashion is ideal, being both simple and cost-effective. The challenge in this setting is tracking the robot when no extrinsic calibration is available or when the camera position changes during the robot's motion, a scenario that itself poses difficult problems, such as reconstructing the robot's pose from a single image under noisy observations. In this paper, we address tracking an ambulatory robot from camera images.
2309.09301
RenderIH: A Large-scale Synthetic Dataset for 3D Interacting Hand Pose Estimation
The current interacting hand (IH) datasets are relatively simplistic in terms of background and texture, with hand joints being annotated by a machine annotator, which may result in inaccuracies, and the diversity of pose distribution is limited. However, the variability of background, pose distribution, and texture can greatly influence the generalization ability. Therefore, we present a large-scale synthetic dataset RenderIH for interacting hands with accurate and diverse pose annotations. The dataset contains 1M photo-realistic images with varied backgrounds, perspectives, and hand textures. To generate natural and diverse interacting poses, we propose a new pose optimization algorithm. Additionally, for better pose estimation accuracy, we introduce a transformer-based pose estimation network, TransHand, to leverage the correlation between interacting hands and verify the effectiveness of RenderIH in improving results. Our dataset is model-agnostic and can improve the accuracy of any hand pose estimation method more than other real or synthetic datasets. Experiments have shown that pretraining on our synthetic data can significantly decrease the error from 6.76mm to 5.79mm, and our TransHand surpasses contemporary methods. Our dataset and code are available at https://github.com/adwardlee/RenderIH.
Lijun Li, Linrui Tian, Xindi Zhang, Qi Wang, Bang Zhang, Mengyuan Liu, Chen Chen
2023-09-17T15:30:58
http://arxiv.org/abs/2309.09301v3
# RenderIH: A Large-scale Synthetic Dataset for 3D Interacting Hand Pose Estimation ###### Abstract The current interacting hand (IH) datasets are relatively simplistic in terms of background and texture, with hand joints annotated by a machine annotator, which may result in inaccuracies, and the diversity of the pose distribution is limited. However, the variability of background, pose distribution, and texture can greatly influence the generalization ability. Therefore, we present a large-scale synthetic dataset, RenderIH, for interacting hands with accurate and diverse pose annotations. The dataset contains 1M photo-realistic images with varied backgrounds, perspectives, and hand textures. To generate natural and diverse interacting poses, we propose a new pose optimization algorithm. Additionally, for better pose estimation accuracy, we introduce a transformer-based pose estimation network, TransHand, to leverage the correlation between interacting hands and verify the effectiveness of RenderIH in improving results. Our dataset is model-agnostic and can improve the accuracy of any hand pose estimation method more than other real or synthetic datasets. Experiments have shown that pretraining on our synthetic data can significantly decrease the error from 6.76mm to 5.79mm, and our TransHand surpasses contemporary methods. Our dataset and code are available at [https://github.com/adwardlee/RenderIH](https://github.com/adwardlee/RenderIH). ## 1 Introduction 3D interacting hand (IH) pose estimation from a single RGB image is a key task for human action understanding and has many applications, such as human-computer interaction, augmented and virtual reality, and sign language recognition. However, obtaining 3D interacting hand pose annotations from real images is very challenging and time-consuming due to the severe self-occlusion problem. Some previous works [12, 18] have collected real hand interaction data using a sophisticated multi-view camera system and made manual annotations, but the amount of data is limited. Synthetic 3D annotation data has become increasingly popular among researchers because of its easy acquisition and accurate annotation [27, 22, 3, 7, 15, 24, 41]. However, there remain two main challenges: the validity of the generated 3D hand poses and the diversity and realism of the generated images. Therefore, in this paper, we present a high-fidelity synthetic dataset of 3D hand interaction poses for precise monocular hand pose estimation. Firstly, ensuring the validity of the generated 3D interacting hand poses is a crucial challenge for a synthetic hand system. For example, the poses of Ego3d [22] are randomized, which means a significant portion of the data is not valid. To ensure effective hand interactions, the generated two-hand poses must be proximal to each other, which increases the risk of hand interpenetration. Therefore, we design an optimization process that considers the constraints of hand attraction and anti-penetration at the same time, to ensure the proximity of the two interacting hands and prevent the occurrence of hand penetration (Section 3.1).

Figure 1: **Randomly selected samples from the RenderIH dataset.** The rendered hands are realistic and varied, capturing a variety of poses, textures, backgrounds, and illuminations.

In addition, the plausibility of hand poses must also be considered.
Hence, we introduce anatomic pose constraints and apply adversarial learning to ensure that the generated hand poses adhere to anatomical constraints and realism. Benefiting from pose optimization, our generated dataset contains a rich set of validated two-hand interaction poses, as shown in Figure 1. Secondly, most existing 3D synthetic hand images lack diversity in terms of backgrounds, lighting, and texture conditions, which prevents them from capturing the complex distribution of real hand data [22, 3, 15]. Most existing datasets for hand gesture recognition, such as Ego3d [22], Obman [15], and MVHM [3], do not consider the quality and diversity of the images. For instance, Ego3d [22] uses the same texture as the MANO model [29], which is unrealistic and monotonous. In contrast, our rendering system introduces various textures, backgrounds, and lighting effects that can produce vivid and realistic synthetic hand images (see Section 3.2). By combining HDR backgrounds, dynamic lighting, and a ray-tracing renderer, we obtain 1M high-quality gesture images (see Figure 1). To assess the performance of our proposed dataset, we carried out comprehensive experiments on it. We demonstrate how much we can reduce the dependency on real data by using our synthetic dataset. Then we contrast our proposed RenderIH with other 3D hand datasets, such as H2O-3D [12] and Ego3d [22], by training a probing model on each of them and testing on a third-party dataset. Finally, we train a transformer-based network on a mixed dataset of RenderIH and InterHand2.6M (IH2.6M) and achieve state-of-the-art (SOTA) results on 3D interacting hand pose estimation. Our main contributions are as follows: * We propose an optimization method to generate valid and natural hand-interacting poses that are tightly coupled and avoid interpenetration. For image generation, we design a high-quality image synthesis system that combines rich textures, backgrounds, and lighting, which ensures the diversity and realism of the generated images. * Based on our data generation system, we construct a large-scale high-fidelity synthetic interacting hand dataset called **RenderIH**, which contains 1 million synthetic images and 100K interacting hand poses. To the best of our knowledge, this is the largest and highest-quality synthetic interacting-hand dataset so far. * We conduct extensive experiments to verify the effectiveness of our proposed dataset, RenderIH. The results show that, with the help of our synthetic dataset, using only 10% of the real data can achieve accuracy comparable to models trained on all the real hand data. We also propose a transformer-based network that leverages our dataset and achieves SOTA results. ## 2 Related work ### Realistic hand dataset Establishing a realistic hand dataset is a tedious and challenging procedure; most real data are collected by various sensors [26, 13, 11, 42, 40, 30, 20], including multiple cameras and depth sensors. The STB dataset [40] obtained 3D annotations of a single hand (SH) via 2D manual labels and depth data. Since manual annotations are time-consuming [26], some researchers [30, 26, 13, 42] utilized semi-automatic methods to make annotations. Moon et al. [26] captured hand interactions with hundreds of cameras. They manually annotated the 2D keypoints of both hands on a few images and utilized a machine detector to help annotate the rest of the data. Other researchers [11, 1, 31] proposed automatic annotation methods; Hampali et al.
[11] collected hand-object (HO) interactions and jointly optimized 2D key points on multiple RGB-D images to estimate 3D hand poses. Some researchers [8, 38, 9] obtain the 3D annotations of hands via special equipment.

\begin{table} \begin{tabular}{l|c c c c c c c} \hline \hline **Dataset** & **Type** & **Data size** & **MT** & **AP** & **background** & **illumination** & **Hand type** & **IH Size** \\ \hline NYU [31] & real & 243K & - & ✗ & lab & uniform & SH & - \\ STB [40] & real & 36K & - & ✗ & lab & uniform & SH & - \\ H2O-3D [12] & real & 76K & - & ✗ & lab & uniform & HO & - \\ H2O [38] & real & 571K & - & ✗ & indoor scenes & uniform & HO & - \\ MVHM [3] & synthetic & 320K & ✗ & ✗ & static scenes & uniform & SH & - \\ ObMan [15] & synthetic & 147K & ✓ & ✗ & static scenes & uniform & HO & - \\ DARTset [7] & synthetic & 800K & ✓ & ✗ & static scenes & manual & SH & - \\ \hline IH2.6M [26] & real & 2.6M & - & ✗ & lab & uniform & **IH** & 628K \\ Ego3d [22] & synthetic & 50K & ✗ & ✗ & static scenes & random & **IH** & 40K \\ \hline **RenderIH (Ours)** & synthetic & 1M & ✓ & ✓ & HDR scenes & **dynamic** & **IH** & **1M** \\ \hline \hline \end{tabular} \end{table} Table 1: **Comparison of related hand datasets.** "MT" is short for multi-textures and indicates whether the hand models in the dataset are assigned diverse textures, "AP" is short for anti-penetration, "Hand type" indicates which interaction type the dataset focuses on (SH: single hand, HO: hand-object, IH: hand-hand), and "IH Size" is the number of IH poses. "HDR" is short for High Dynamic Range. Static scenes refer to the use of randomly selected images as the background.

Ye et al. [38] captured hand poses via multiple joint trackers. Due to the limitations of the data collection scene, most realistic datasets are captured in simple scenarios, e.g., a lab [40, 30, 11] or a green screen [42, 26, 38, 1, 32]. Most realistic datasets focus on SH or HO interactions, and very few papers [26, 32] collect interacting hand data. ### Synthetic hand dataset To obtain precise annotations and increase a dataset's diversity, several papers [27, 22, 3, 7, 15, 24, 41] established synthetic hand datasets by applying multiple backgrounds [41] or different hand textures [7]. Most datasets [3, 7, 27, 41] focus on SH pose data. DARTset [7] introduced a shaped wrist and rendered hand images with different skins and accessories, but the dataset does not contain IH. To simulate HO interactions, Hasson et al. [15] utilized a physics engine [25] to generate object manipulation poses, but their rendered images are not photo-realistic. Although some datasets [22, 24] provide poses of both hands, the rendered images are not natural enough and lack diversity. The poses of Ego3d [22] were randomized, which leads to severe interpenetration between the hands and relatively unnatural poses. Based entirely on the pose annotations of IH2.6M [26], AJH [24] produced a synthetic interacting hand dataset, but only hand masks were created and other annotations were missing. We summarize some representative hand datasets and compare them to ours in Table 1. While most datasets focus on SH or HO interactions, they are to some extent deficient in handling mesh collisions, maintaining high-quality annotations, and providing pose diversity.
## 3 RenderIH dataset One of the main contributions of our paper is the interacting hand pose optimization method that can generate valid and natural poses. In our paper, **valid** poses are non-penetrating hand poses that conform to the anatomic constraints outlined in Table 2. **Natural** poses not only conform to the anatomy but also occur frequently in daily life. We uniformly combine the generated poses with a variety of hand textures, high dynamic range (HDR) backgrounds, and camera views. All collections are sampled independently to create images that are as diverse as possible. In Section 3.1, we introduce our new hand pose generation algorithm. Section 3.2 describes how the synthetic images are rendered after hand pose generation. In Section 3.3, we briefly introduce some statistics of our RenderIH dataset. ### Interacting hand pose optimization **Hand model**. Based on the widely used parametric hand model MANO [29], Yang et al. [37] proposed A-MANO, which assigns a twist-splay-bend Cartesian coordinate frame to each joint along the kinematic tree and fits the natural hand better. Therefore, we adopt A-MANO to make our optimization more biologically plausible. **Initial pose generation**. To produce massive valid and natural IH interaction poses, we derive raw poses from IH2.6M [26] and then augment the raw poses by assigning random rotation offsets to the hand joints. The augmented poses are shown in Figure 3; after augmentation, the rotation of the \(j_{th}\) finger joint can be expressed as: \[\{R_{ji}\in SO(3)\}_{i=1}^{I}=\{R_{j}R_{b}(\theta_{i}^{b})R_{s}(\theta_{i}^{s })\}_{i=1}^{I}, \tag{1}\] where \(I\) is the number of augmentations, \(R_{b/s}(\theta)\) denotes the rotation along the bend/splay axis, and the angle offsets satisfy \(\theta^{b}\in[-90^{\circ},90^{\circ}]\) and \(\theta^{s}\in[-30^{\circ},30^{\circ}]\). \(SO(3)\) is the group of 3D rotations. \(\theta^{s}=0\) when the joint is not a finger root joint. To avoid abnormal gestures, each augmented joint is restricted according to Table 2. As the augmented poses are totally random, most of them suffer from serious mesh penetration and unnatural gestures, so it is necessary to optimize the poses. **Anti-penetration**. Inspired by [17], we adapt the multi-person interpenetration loss to interacting hands and propose to divide the hand region into 16 parts. Let \(\Omega\) be the modified Signed Distance Field (SDF) [14] for each hand, defined on a voxel grid of dimensions \(N\times N\times N\): \[\Omega(x,y,z)=-\min(SDF(x,y,z),0), \tag{2}\] so that \(\Omega\) is positive within a hand and proportional to the distance from the surface, and simply zero outside. The penetration loss for a single hand is calculated as follows: \[L_{p}^{s}=\sum_{v\in\{V\}}\Omega_{\hat{s}}(v). \tag{3}\] \(V\) denotes the hand vertices, \(s\) is the side of the hand, and \(\hat{s}\) is the side of the other hand. Since the hand is highly articulated with a complex pose and shape, a basic hand mesh SDF is not accurate enough. We propose to divide the hand into 16 parts based on its joint positions and compute a separate \(\Omega\) function for each hand submesh, divided according to the hand subdivision in Figure 2. After applying this to each submesh, the penetration loss is defined as: \[L_{p}^{s}=\sum_{i=1}^{N}\sum_{j=1}^{N}(\sum_{v\in\{M_{sj}\}}\Omega_{\hat{s}i}( v)), \tag{4}\] where \(M_{sj}\) denotes the \(j^{th}\) submesh of the hand. The total loss of this part is \(L_{p}=L_{p}^{right}+L_{p}^{left}\). A detailed visual comparison between the basic SDF loss and our penetration loss is shown in the supplementary material (SM).

Figure 3: Visualization of the effect of different components in the optimization.

Figure 2: The distribution of anchors and the hand subdivision. Purple points denote the anchors.
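A minimal PyTorch sketch of the per-part penetration loss in Eqs. (2) and (4); `sample_sdf_other` is an assumed helper that samples the precomputed \(N^3\) SDF grid of a submesh of the other hand at query points (the actual implementation may differ):

```python
import torch

def omega(sdf_vals):
    """Modified SDF (Eq. 2): positive inside the hand, zero outside,
    assuming the convention SDF < 0 inside the surface."""
    return -torch.clamp(sdf_vals, max=0.0)

def penetration_loss_one_side(submesh_verts, sample_sdf_other):
    """Eq. (4) for one hand: accumulate Omega of each of the other hand's 16
    submesh SDFs, evaluated at the vertices of each of this hand's submeshes.
    submesh_verts: list of 16 (V_j, 3) tensors; sample_sdf_other(i, pts)
    returns the SDF of the other hand's submesh i at the points pts."""
    loss = submesh_verts[0].new_zeros(())
    for i in range(16):
        for j in range(16):
            loss = loss + omega(sample_sdf_other(i, submesh_verts[j])).sum()
    return loss
```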
**Interhand attraction**. When the interacting hands are in close contact, severe hand occlusion may occur, making annotation difficult; in addition, the available close-contact data are limited. To address this problem, we encourage the interacting hands to remain in tight contact. To create contact between the hands, simply pulling the closest vertices together would suffice; however, to reduce the time complexity of the optimization, we adopt anchors to guide the position and pose of both hands. As shown in Figure 2, to downsample the hand vertices into anchors, we traverse IH2.6M to assess the contact frequency of each vertex with the other hand. We select the vertices with the highest contact frequency as the initial anchors and proceed to sample the remaining vertices sequentially, skipping the 2-hop neighbors of already selected anchors before sampling the yet-to-be-selected ones. In this way we obtain 108 anchors. If anchor \(a_{j}^{l}\) on the left hand and anchor \(a_{i}^{r}\) on the right hand are the closest, they establish an anchor pair, and the loss of anchor pairs is defined as: \[L_{ij}^{A}=\frac{1}{2}k_{ij}\Delta{d_{ij}}^{2}, \tag{5}\] where \(\Delta{d_{ij}}=||a_{i}^{r}-a_{j}^{l}||_{2}\) and \(k_{ij}=0.5\cos(\frac{\pi}{s}\Delta{\bar{d}_{ij}})+0.5\), in which \(\Delta{\bar{d}_{ij}}\) is the initial distance between the anchors of a pair. This definition means that initially close anchors tend to keep in contact. The factor \(s\) is set to \(0.02\,\mathrm{m}\), and we set \(k_{ij}=0\) if \(\Delta{\bar{d}_{ij}}>s\). The anchor-pair connections and \(k_{ij}\) are rebuilt during the optimization to adapt to the dynamically changing IH poses. These constraints alone, however, cannot keep interacting poses with random joint angles valid, so we further introduce anatomic optimization.
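A minimal sketch of the anchor attraction term in Eq. (5) for already-paired anchors; the tensor shapes are assumptions:

```python
import math
import torch

def attraction_loss(anchors_r, anchors_l, d0, s=0.02):
    """Anchor attraction loss (Eq. 5). anchors_r / anchors_l are paired (K, 3)
    anchor positions; d0 holds the (K,) initial pair distances used for the
    spring stiffness k = 0.5*cos(pi/s * d0) + 0.5, zeroed when d0 > s."""
    d = torch.norm(anchors_r - anchors_l, dim=-1)    # current pair distances
    k = 0.5 * torch.cos(math.pi / s * d0) + 0.5
    k = torch.where(d0 > s, torch.zeros_like(k), k)  # k_ij = 0 if d0 > s
    return (0.5 * k * d ** 2).sum()
```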
**Anatomic Optimization.** The finger comprises several joints, namely the Carpometacarpal joint (CMC), the Metacarpophalangeal joint (MP), and the Interphalangeal joint (IP). According to the coordinate systems of A-MANO, each finger has three joints, which we denote as the root (CMC of the thumb, MP of the others), middle (MP of the thumb, Proximal IP of the others), and end joint (IP of the thumb, Distal IP of the others). Each of them theoretically has 3 DOF. We define the hand pose in Figure 2 as the T-pose, where all rotation angles are zero. The constraints are defined as follows: * **Available rotation directions.** The middle and end joints can only rotate by \(\theta_{i}^{b}\) around the B (Bend) axis, while the root can also rotate by \(\theta_{i}^{s}\) around the S (Splay) axis. \(\theta_{i}^{t}=0\) is always kept around the T (Twist) axis. * **Angle limitations.** According to hand kinematics [10, 19], the joint rotation limitations are presented in Table 2. The anatomic optimization objective for each hand is defined as: \[L_{a}=\sum_{i=1}^{15}\sum_{a\in\{b,s,t\}}(\beta(\theta_{i}^{a}))^{2}, \tag{6}\] where \(\beta(\theta_{i}^{a})=\max(\theta_{i}^{a}-\hat{\theta_{i}^{a}},0)+\min(\theta_{i}^{a}-\check{\theta_{i}^{a}},0)\) is the deviation of the rotation angle from its range, and \(\hat{\theta_{i}^{a}}\)/\(\check{\theta_{i}^{a}}\) are the max/min values of the range of \(\theta_{i}^{a}\).

\begin{table} \begin{tabular}{c|c c c} \hline finger\(\backslash\)joint & root (B,S) & middle (B) & end (B) \\ \hline thumb & \([-20,40],[-30,30]\) & \([-8,50]\) & \([-10,100]\) \\ index & \([-25,70],[-25,15]\) & \([-4,110]\) & \([-8,90]\) \\ middle & \([-25,80],[-15,15]\) & \([-7,100]\) & \([-8,90]\) \\ ring & \([-25,70],[-25,15]\) & \([-10,100]\) & \([-8,90]\) \\ pinky & \([-22,70],[-20,30]\) & \([-8,90]\) & \([-8,90]\) \\ \hline \end{tabular} \end{table} Table 2: **Joint rotation limitations.** The values are in degrees. 'B'/'S' denotes whether the joint can bend/splay.

**Natural discriminator.** After the anatomic optimization, the poses become valid. However, as shown in Figure 3(e), some optimized poses are still not natural enough. To obtain natural poses, we further employ a discriminator \(\mathcal{D}\), whose detailed structure is illustrated in Figure 4. The single-hand pose \(\Theta\) is given as input to the multi-layer discriminator, and the output layer predicts a value \(\in[0,1]\) representing the probability of the pose being natural. The objective for \(\mathcal{D}\) is: \[L_{\mathcal{D}}=\mathbb{E}_{\Theta\sim P_{R}}[(\mathcal{D}(\Theta)-1)^{2}]+ \mathbb{E}_{\Theta\sim P_{G}}[\mathcal{D}(\Theta)^{2}], \tag{7}\] where \(P_{R}\) represents hand poses from real datasets, such as IH2.6M [26] and Freihand [42], and \(P_{G}\) is a generated pose. The adversarial loss that is backpropagated to the pose optimization is defined as: \[L_{adv}=\mathbb{E}_{\Theta\sim P_{G}}[(\mathcal{D}(\Theta)-1)^{2}]. \tag{8}\] The discriminator is pre-trained before the optimization. We extract 63K natural single-hand poses from Freihand [42], DexYCB [2], and IH2.6M [26]; their "natural" probabilities \(p_{n}\) are labeled as 1. To get unnatural poses, we follow the method in "Initial pose generation" and randomly add offsets to the poses, calculating their probabilities according to the offsets (the higher the offset, the closer \(p_{n}\) is to 0). The qualitative and quantitative improvements brought by \(\mathcal{D}\) are shown in the SM. Since the standard of naturalness may vary from person to person, we also conducted a user study to confirm the discriminator's effect, reported in the SM. **Poses Optimization.** In the IH optimization, each hand has 15 joint rotations \(\Theta=\{R_{i}\in SO(3)\}_{i=1}^{15}\), a hand root rotation \(R_{r}\in SO(3)\), and a hand root translation \(T_{r}\in\mathbb{R}^{3}\). We take \(\psi=\{\Theta,R_{r},T_{r}\}\) as the optimization parameters, and the total IH loss is: \[\underset{\psi^{r},\psi^{l}}{\operatorname{argmin}}\Big(w_{1}\sum_{i=1}^{A_{r}}\sum_{j=1}^{A_{l}}L_{ij}^{A}+w_{2}L_{a}+w_{3}L_{adv}+w_{4}L_{p}\Big), \tag{9}\] where \(A_{r}\)/\(A_{l}\) are the anchor numbers of the right/left hand, \(L_{a}=L_{a}^{r}+L_{a}^{l}\), and \(w_{*}\) are weight hyperparameters.

Figure 4: The architecture of the discriminator.
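The least-squares adversarial objectives of Eqs. (7)-(8) can be sketched as follows (`D` stands for the multi-layer discriminator of Figure 4; a sketch, not the released code):

```python
import torch

def d_loss(D, theta_real, theta_gen):
    """Discriminator objective (Eq. 7): real poses are pushed toward 1,
    generated poses toward 0 (least-squares GAN style)."""
    return ((D(theta_real) - 1) ** 2).mean() + (D(theta_gen) ** 2).mean()

def adv_loss(D, theta_gen):
    """Adversarial term (Eq. 8), backpropagated to the pose parameters."""
    return ((D(theta_gen) - 1) ** 2).mean()
```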
### Rendering Our dataset offers several benefits, including high-resolution hand textures that create a more natural appearance. Additionally, we simulate natural lighting and environments to address the limited diversity of studio settings. Furthermore, our dataset covers a wide range of poses and camera positions, bridging the gap between real-world applications and synthetic data. **Texture.** To enhance the variety of skin textures, we provide a broad selection of hues, as illustrated in Figure 5. Color tones include white, light-skinned European, dark-skinned European, Mediterranean or olive, yellow, dark brown, and black; a total of 30 textures are available. In addition, random skin tone parameters can be superimposed on these base skin tones in the shaders to adjust brightness, contrast, and more. Apart from that, these textures also depict wrinkles, bones, veins, and hand hairs to reflect differences in gender, ethnicity, and age. **Lighting and background.** It is widely accepted that high-quality synthetic data should resemble real-world scenes as much as possible. For instance, the authors of [22] mixed their synthetic hand images with diverse real-world background photographs when creating IH synthetic data. However, simply pasting the rendered hands onto background images is unnatural due to differences in lighting conditions and light angles. Since creating a large number of varied synthetic 3D background models is time-consuming, we composite the synthetic hands with real-world panoramic scenery images. We collected 300 high-dynamic-range (HDR) photographs of realistic indoor and outdoor scenes with appropriate lighting for rendering purposes. They enable our hand models to blend seamlessly with diverse settings, resulting in highly photorealistic rendered scenes (see Figure 6). **Camera Settings.** We define a spherical camera arrangement that covers a wide range of viewpoints, enhancing the generalization of the model to different views. The center of the two-hand model is first computed and placed at the center of the world, and the camera track is placed around this center with the camera pointing at it. Figure 7 shows the layout of our simulation environment. For each pose, we define four 360-degree circular tracks, which can be divided evenly by the number of samples to define dense or sparse viewpoints. For sparse sampling, 10 viewpoints were selected for each track. **Render quality.** Our major objective is to improve the photorealism of the synthetic dataset. Therefore, we render the scene in Blender with the ray-tracing rendering engine Cycles. When creating the hand mesh, we use custom shader settings to adjust the base color, subsurface, and roughness to make the skin more realistic. The resolution of the images is 512\(\times\)334 pixels and the color depth is 8 bits.

Figure 5: The same hand with different hand textures.

Figure 6: The same hand under diverse illumination.

Figure 7: Different viewpoints from the camera track.

### Analysis of RenderIH dataset For a comparison of distribution diversity, we project the hand poses of IH2.6M and RenderIH into an embedding space using TSNE [34]. Figure 9 clearly shows that our data has a broader pose distribution than IH2.6M. Examples of synthetic images are depicted in Figure 1, and a rendering video can be found in the SM. More visualizations of the different optimization modules and further statistics can be found in the SM.
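As an illustration of the circular camera tracks in Section 3.2 (Figure 7), the following sketches sparse viewpoint sampling on one track; the track radius and elevation are assumed parameters of the rendering setup:

```python
import numpy as np

def circular_track_cameras(center, radius, elevation, n_views=10):
    """Sample camera positions on one 360-degree circular track around the
    two-hand center; each camera looks at the center point."""
    cams = []
    for k in range(n_views):
        phi = 2 * np.pi * k / n_views
        pos = center + np.array([radius * np.cos(phi),
                                 radius * np.sin(phi),
                                 elevation])
        look = center - pos
        cams.append((pos, look / np.linalg.norm(look)))  # position, view dir
    return cams
```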
## 4 TransHand We propose a transformer-based network, TransHand, for 3D interacting hand pose estimation and conduct extensive experiments with it. As transformer blocks are effective in modeling global interactions among mesh vertices and body joints [23, 35], we build our IH network on transformers. Our system contains two parts: the encoder and the decoder. Given an image of size 256\(\times\)256, the encoder outputs a global feature vector \(G_{F}\) and the intermediate feature maps \(\{F_{i},i=1,2,3\}\), where \(i\) indicates the feature level. After that, we map \(G_{F}\) to the left vertex feature \(L_{F}\) and the right vertex feature \(R_{F}\) using fully connected layers. Since the global feature does not contain fine-grained local details, we concatenate the different-level features \(F_{i}\) with the hand vertex features as input to the decoder blocks. As shown in Figure 8, the decoder consists of 3 identical blocks, each made up of two sub-modules; each sub-module is a typical transformer encoder composed of a multi-head attention module and an MLP layer. As there is usually mutual occlusion between interacting hands, it is natural to combine the feature of the other hand to improve the estimation precision. Inspired by Slowfast [6], we use a symmetric structure to incorporate the other hand's feature by addition, which is the lateral connection in the Correlation Encoder (CE) shown in Figure 8. Each block has three inputs: the left vertex feature, the right vertex feature, and the image feature. The blocks gradually upsample the coarse mesh to a refined mesh and finally to the original dimension with 778 vertices. **Loss Function.** For training, we apply an \(L_{1}\) loss to the 3D mesh vertices and hand joints, and an \(L_{1}\) loss to the 2D projected vertices and hand joints. \[L_{joint}=\sum_{s=0}^{1}\sum_{i=0}^{M-1}\sum_{d\in\{3D,2D\}} \|J_{s,i}^{d}-J_{s,i}^{d,GT}\|_{1}, \tag{10}\] \[L_{mesh}=\sum_{s=0}^{1}\sum_{i=0}^{N-1}\sum_{d\in\{3D,2D\}}\|V_ {s,i}^{d}-V_{s,i}^{d,GT}\|_{1}, \tag{11}\] where \(s\) represents the hand side, \(i\) the index of the joint or vertex, and \(d\) denotes whether the computation is in 3D or 2D. To guarantee the geometric continuity of the predicted vertices, a smoothness loss is applied, which regularizes the consistency of the normal direction between the predicted and the ground truth mesh: \[L_{smooth}=\sum_{s=0}^{1}\sum_{f=0}^{F-1}\sum_{j=0}^{2}\|e_{f,j,s} \cdot n_{f,s}^{GT}\|_{1}, \tag{12}\] where \(f\) is the face index of the hand mesh, \(j\) indexes the edges of face \(f\), and \(n^{GT}\) is the GT normal vector of this face.

Figure 8: Network architecture. We use the global features extracted by the encoder to predict the left-hand and right-hand features. After that, our model gradually regresses the hand vertices through 3 identical correlation encoder blocks by fusing multi-resolution image features with hand features. Each correlation encoder contains two transformer encoders and a lateral connection from the other hand's feature.

Figure 9: TSNE visualization of the IH pose distributions. Our data not only contains the raw poses of IH2.6M but also fills the vacancies via augmentation, resulting in a broader distribution.
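A minimal PyTorch sketch of the training losses in Eqs. (10)-(12); the reductions and tensor shapes are assumptions, not the released training code:

```python
import torch

def l1_loss(pred, gt):
    """L1 loss (Eqs. 10-11), applied to 3D and 2D joints/vertices of both hands."""
    return (pred - gt).abs().sum()

def smooth_loss(verts_pred, faces, normals_gt):
    """Normal-consistency smoothness loss (Eq. 12): each predicted face edge
    should be orthogonal to the corresponding ground-truth face normal."""
    tri = verts_pred[faces]                          # (F, 3, 3) face vertices
    edges = tri - tri.roll(shifts=1, dims=1)         # three edges e_{f,j}
    dots = (edges * normals_gt[:, None, :]).sum(-1)  # e_{f,j} . n_f^GT
    return dots.abs().sum()
```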
## 5 Experiments ### Experiment setup **Dataset**. IH2.6M [26] is the largest real dataset with interacting hands (IH), and most of our experiments are conducted on this dataset. As we focus only on IH, we selected only the IH data with both human and machine annotations. After discarding single-hand samples and invalid labels, we obtain 366K training samples and 261K testing samples. The Tzionas dataset [33] is a small IH dataset; we only use it to evaluate generalization ability with models trained on different datasets. H2O-3D [12] is a real dataset with 3D pose annotations for two hands and an object during interactions; it contains 60K samples. Ego3d [22] provides 50K synthetic images and corresponding labels of two hands, of which 40K samples are IH, with randomized poses. **Implementation details**. The input images are resized to \(256\times 256\) and fed to the TransHand encoder to generate the global feature and the image feature maps. ResNet50 [16] is selected as the encoder. For all experiments, the networks are implemented in PyTorch [28]. We train all models on IH images using the Adam optimizer. The initial learning rate is \(1e^{-4}\) and the batch size is 64. All experiments are performed on 1 NVIDIA Ampere A100 GPU. To demonstrate the usefulness of our RenderIH, we train three mainstream IH pose estimation methods on IH2.6M and on a combination of IH2.6M and RenderIH: InterNet\({}^{1}\) [26], DIGIT\({}^{1}\) [4], and the state-of-the-art method IntagHand\({}^{2}\) [21]. Footnote 1: Since InterNet and DIGIT were trained on the IH subset of IH2.6M v0.0, we train them on v1.0 to make fair comparisons. Footnote 2: All the training codes have been open-sourced by the authors. **Evaluation metrics**. To evaluate these methods, we report results on two standard metrics: the Mean Per Joint Position Error (MPJPE) and the Mean Per Joint Position Error after Procrustes Alignment (PAMPJPE), in millimeters (mm). Additionally, to ensure a fair evaluation with prior research [21, 39], we select the MCP joint of the middle finger as the root joint and also report SMPJPE, which scales the prediction to the ground truth bone length. To evaluate the accuracy of estimating the relative position between the left and right hand roots during interaction, we utilize the mean relative-root position error (MRRPE) [4] and hand-to-hand contact deviation (CDev) [5] metrics. More results with the wrist as root joint are presented in the SM for future comparison. ### Results and analysis **User study for naturalness**. Since the perception of "natural" may differ from human to human, we conduct experiments to confirm the discriminator's effect. We invited 20 persons with and without a technical computing background; their ages range from 20 to 60, and the proportion of male to female is approximately 2:1. Each of them was shown 120 pictures of IH poses (30 augmented poses, 30 optimized poses, 30 optimized without the discriminator, and 30 raw poses from IH2.6M) and asked to judge whether the shown poses are natural; we count the NR (natural rate) of each category. The results are presented in Table 3. The "Raw poses" are those from IH2.6M [26]; they are performed by humans and have a high NR, although some serious mesh penetration caused by annotation mistakes may make it hard for the testers to judge naturalness. The "Augmented poses" are augmented from the raw poses by assigning random rotation offsets to the hand joints; they follow the joint limitations but are random, some of them exhibit mesh penetration, and their NR is low. Optimizing the augmented poses without \(\mathcal{D}\) resolves the penetration and yields valid poses, but the poses are not natural enough. It is clear that \(\mathcal{D}\) improves the naturalness of the poses.

Figure 10: Qualitative results of our method on the IH2.6M test set.
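For reference, the PAMPJPE metric used in the evaluation above can be computed with a standard similarity-Procrustes alignment, sketched here (not the authors' evaluation script):

```python
import numpy as np

def pa_mpjpe(pred, gt):
    """MPJPE after similarity Procrustes alignment; pred, gt are (J, 3) arrays
    of root-relative joint positions, in consistent units."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    P, G = pred - mu_p, gt - mu_g
    U, S, Vt = np.linalg.svd(P.T @ G)
    if np.linalg.det(U @ Vt) < 0:     # avoid a reflection in the alignment
        Vt[-1] *= -1
        S[-1] *= -1
    R = U @ Vt                        # optimal rotation mapping P onto G
    scale = S.sum() / (P ** 2).sum()
    aligned = scale * P @ R + mu_g
    return np.linalg.norm(aligned - gt, axis=-1).mean()
```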
\begin{table} \begin{tabular}{c c c c} \hline \hline With \(\mathcal{D}\) & No \(\mathcal{D}\) & Raw poses & Augmented poses \\ \hline 81.25\% & 54.68\% & 90.82\% & 32.92\% \\ \hline \hline \end{tabular} \end{table} Table 3: User study on the natural rate. The higher the number, the more natural the poses.

**Effectiveness of the correlation encoder.** Table 4 shows the performance of the models with and without the CE. The baseline method fuses the left-hand and right-hand features with the image feature independently through a transformer encoder. The results indicate that the CE improves performance by fusing the correlation between the hands. Our model is used as the default model in the subsequent experiments. **Mixing synthetic with real images.** To demonstrate the usefulness of RenderIH, we test InterNet, DIGIT, IntagHand, and our TransHand on the IH2.6M test set when training with or without the full 1M images of the RenderIH dataset. As shown in Table 5, RenderIH helps to further reduce the estimation error; for example, the error is greatly reduced from 10.9mm to 9.72mm for the SOTA IntagHand method. These results prove that our RenderIH has great complementarity with real data. Moreover, when hand-hand occlusion is severe, training with our synthetic dataset handles those cases better than training on IH2.6M only, as shown in Figure 11. To quantify the impact of interaction and occlusion, we use the IoU between the left- and right-hand ground truth masks, following DIGIT [4]; a higher IoU implies more occlusion, and the half-length of the error bars corresponds to 0.5 times the MPJPE standard deviation. With minimal occlusion, the MPJPE is similar between the mixed-image model and IH2.6M only. As occlusion increases, the mixed-image model reduces the MPJPE more substantially than IH2.6M alone. This highlights the value of our RenderIH data. **Synthetic data size influence.** When training with various combinations of synthetic data and the IH2.6M training set, an obvious decline in the error is observed initially, followed by a gradual decrease after the incorporation of 900K synthetic images, as illustrated in Figure 12. The trend indicates that beyond a certain volume of synthetic data, the benefits of incorporating additional data become marginal. To balance the cost of training and accuracy, we select 1M as the optimal size for RenderIH. **Training strategy comparison.** The training strategy for synthetic and real data is studied in this section. As shown in Figure 13, both mixed training and pretraining on synthetic data lead to significantly higher accuracy; compared to dataset mixing, pretraining on the synthetic data followed by fine-tuning on real images yields better precision and is thus the more effective approach for reducing the error. **Real data size influence.** We study how the real data size affects the estimation precision in Figure 13. We use all the samples from RenderIH in this section.

Figure 11: Comparison of the MPJPE by degree of occlusion on the Tzionas dataset.
The IoU between the ground-truth left/right masks measures the degree of interaction; the left (yellow) and right (blue) hand masks provide interaction examples in each IoU range.

Figure 12: Results of training IH2.6M with different numbers of RenderIH images, in MPJPE (mm)\(\downarrow\).

\begin{table} \begin{tabular}{c|c} \hline \hline method\(\backslash\)metric & PAMPJPE/MPJPE/SMPJPE (mm)\(\downarrow\) \\ \hline Baseline & 7.32/11.12/10.82 \\ Baseline+CE & 6.76/10.6/9.63 \\ \hline \hline \end{tabular} \end{table} Table 4: Effect of the correlation encoder (CE) on the IH2.6M test set (PAMPJPE/MPJPE/SMPJPE (mm)\(\downarrow\)). The CE helps reduce the error by a clear margin.

\begin{table} \begin{tabular}{c|c c} \hline \hline method\(\backslash\)train set & IH2.6M & Mixed \\ \hline InterNet & 18.28 & 17.19 \\ DIGIT & 15.48 & 14.28 \\ IntagHand & 10.9 & 9.72 \\ \hline Ours & 10.6 & 10.06 \\ \hline \hline \end{tabular} \end{table} Table 5: Comparison between models trained on IH2.6M and on a mixture of RenderIH and IH2.6M, in MPJPE (mm)\(\downarrow\). The methods are reproduced using their official training code.

Figure 13: Comparison between training with RenderIH only, with part of IH2.6M only, with the combination of the two, and with pretraining on RenderIH followed by finetuning on IH2.6M.

For the real data, we sample subsets ranging from 3663 to 366358 samples, i.e., 1%, 5%, 10%, 30%, 50%, 70%, and 100% of the real data. Although training only on RenderIH performs poorly, the MPJPE can be greatly reduced from 27.73mm to 12.6mm by finetuning on only 1% of the real data. With finetuning on 10% of the real data, the MPJPE is almost the same as when training on the full real data. When finetuning on all the real data, the error is 0.96mm lower than when training only on all the real data. **Comparison with the H2O-3D dataset, the Ego3d dataset, and a RenderIH subset.** In Table 7 and Table 8, we compare the generalization ability of these datasets with the same number of 40K samples. The model pretrained on RenderIH reaches a lower error than the models pretrained on H2O-3D and Ego3d in Table 7, which proves that our artificial data is realistic and the knowledge transfers more easily. The model trained on RenderIH performs better possibly because every H2O-3D image contains an object that interferes with the two-hand interaction. When training TransHand on RenderIH and IH2.6M, the estimation error is the lowest on both the IH2.6M and Tzionas datasets, as shown in Table 8. In particular, the result on the Tzionas dataset shows that our varied pose distribution, backgrounds, and textures help improve generalization. **Comparison with SOTA methods.** As shown in Table 6, our TransHand outperforms the SOTA IntagHand method trained from its official code, even though their method involves multitask learning and their network comprises complex graph transformer modules; in comparison, our method is simpler yet highly effective. When pretraining on RenderIH and finetuning on the IH2.6M data, our method further reduces the MPJPE by about 1mm. Better hand-hand contact (CDev) and better relative root translation (MRRPE) can also be observed in this table. Moreover, Table 9 shows that training on our dataset in addition to IH2.6M leads to clearly lower errors on the Tzionas dataset compared with training on IH2.6M alone. Results computed with the wrist as root joint are shown in Section 3.3 of the SM. **Qualitative results**. Our qualitative results are shown in Figure 10. Our method generates high-quality IH results on IH2.6M images; more in-the-wild results can be found in the SM. ## 6 Conclusion In this paper, we propose a new large-scale synthetic dataset for 3D IH pose estimation. Various experiments are conducted to study the effectiveness of RenderIH.
With all the synthetic hand images and only 10% of the real hand images, we achieve precision comparable to the same method trained on all the real hand images. We hope that this dataset is a meaningful step towards developing 3D IH pose estimation models that do not depend on real data and are adaptable to various scenes.

\begin{table} \begin{tabular}{c|c c c c c} \hline \hline Method & PAMPJPE\(\downarrow\) & MPJPE\(\downarrow\) & SMPJPE\(\downarrow\) & MRRPE\(\downarrow\) & CDev\(\downarrow\) \\ \hline InterNet\({}^{*}\)[26] & 11.72 & 18.28 & 16.68 & - & - \\ DIGIT\({}^{*}\)[4] & 9.72 & 15.48 & 13.43 & - & - \\ InterShape [39] & - & - & 13.07 & - & - \\ HDR [24] & - & 13.12 & - & - & - \\ IntagHand [21] & 6.10 & 10.30 & 8.79 & 12.1 & 25.1 \\ IntagHand\({}^{*}\) & 7.16 & 10.90 & 10.47 & 13.6 & 29.6 \\ \hline Ours & 6.76 & 10.66 & 9.63 & 12.98 & 27.9 \\ Ours\({}^{\#}\) & 5.79 & 9.64 & 8.18 & 11.95 & 24.6 \\ \hline \hline \end{tabular} \end{table} Table 6: Comparison with SOTA methods on the IH2.6M test set (\(*\) means official code reproduction, \(\#\) means RenderIH pretraining).

\begin{table} \begin{tabular}{c|c c} \hline \hline train set\(\backslash\)test set & IH2.6M & Tzionas \\ \hline H2O-3D+IH2.6M & 11.05/9.91 & 12.03/12.02 \\ Ego3d+IH2.6M & 10.66/9.60 & 11.13/11.06 \\ RenderIH+IH2.6M & 10.58/9.52 & 10.63/10.56 \\ \hline \hline \end{tabular} \end{table} Table 7: Generalization ability comparison between H2O-3D, Ego3d, and RenderIH, in MPJPE/SMPJPE (mm)\(\downarrow\). The number of samples is 40K and fixed for each dataset.

\begin{table} \begin{tabular}{c|c c} \hline \hline \multicolumn{1}{c|}{train set\(\backslash\)test set} & IH2.6M & Tzionas \\ \hline H2O-3D+IH2.6M & 11.05/9.91 & 12.03/12.02 \\ Ego3d+IH2.6M & 10.66/9.60 & 11.13/11.06 \\ RenderIH+IH2.6M & 10.58/9.52 & 10.63/10.56 \\ \hline \hline \end{tabular} \end{table} Table 8: Training on the mixture of datasets with all IH2.6M data, in MPJPE/SMPJPE (mm)\(\downarrow\). The number of samples is 40K for each dataset.

\begin{table} \begin{tabular}{l|c} \hline \hline Metrics & MPJPE/MRRPE/CDev\(\downarrow\) \\ \hline Training set\(\backslash\)Test set & Tzionas \\ \hline RenderIH & 22.11/25.8/47.7 \\ IH2.6M & 11.38/11.1/19.9 \\ IH2.6M+RenderIH & 10.49/9.37/19.5 \\ \hline \hline \end{tabular} \end{table} Table 9: Comparison of training with or without our dataset, tested on the Tzionas dataset.

RenderIH: A large-scale synthetic dataset for 3D interacting hand pose estimation (_Supplementary Material_) This supplementary material contains additional information that could not be included in the main manuscript due to space limitations. We begin with more detailed information about the dataset. Following that, we briefly discuss the pose optimization details of our approach. We then present additional visualization results from our qualitative experiments. Finally, we discuss the broader impacts and limitations of our dataset. ## 1 More details on RenderIH RenderIH is composed of 1 million synthetic images obtained by varying the pose, camera view, and environment (texture, lighting, and background). After collecting the annotations from IH2.6M, we removed samples with similar poses, resulting in 3680 distinctive poses. For each distinctive pose, we augment \(I=30\) poses.
After augmentation and optimization, we filter out the IH poses that still have notable penetration or exceed the joint limits; the remaining data account for 93% of the total, and we produce approximately 100K natural, non-interpenetrating IH poses. We then apply 10 camera viewpoints to each pose, producing 1M synthetic images in total. For each image, we randomly pick from a collection of 300 HDR images to illuminate the hand and provide the background, together with a hand texture map. The rendering process took more than 200 hours using 4 NVIDIA A100 GPUs. As for the corresponding annotations, we provide pose and shape parameters, 3D joint coordinates, 2D joint coordinates, and camera intrinsic and extrinsic parameters. It is worth noting that the synthetic data labels can be freely extended based on the user's preferences, such as generating hand-part segmentation masks; the automatically generated annotations are free of noise and are more flexible than the traditional labels of real datasets. Some rendering examples illustrating our photo-realistic effects are provided in the **video demo**. ## 2 More details on pose optimization The anchor pairs attract the two hands, making the IH have more contact. As shown in Figure 15 b), to avoid abnormal anchor pairs, a pair can only be established when \(\bar{n_{i}^{a}}\cdot\bar{n_{j}^{a}}<0\), in which \(\bar{n^{a}}\) is the mesh face normal vector of the anchor. However, the IH attraction might have a negative influence when the parts overlap seriously; as shown in Figure 15 c), there can be conflicts between pairs, making the meshes hard to separate. The simple way to solve this problem is to separate the hands first so that better anchor pairs can be established. \[\underset{\psi^{r},\psi^{l}}{\operatorname{argmin}}\Big(w_{1}\sum_{i=1}^{A_{r}}\sum_{j=1}^{A_{l}}L_{ij}^{A}+w_{2}L_{a}+w_{3}L_{adv}+w_{4}L_{p}\Big), \tag{13}\] In our implementation, we optimize the loss function in Equation 13 (Equation 9 of the main paper) over 215 iterations. We assign a larger weight \(w_{4}\) to \(L_{p}\) and a smaller weight \(w_{1}\) to \(L^{A}\) at the beginning in order to separate the hands; \(w_{1}\) then increases while \(w_{4}\) decreases during the optimization, until the 165th iteration. The anchor pairs are rebuilt every 40 iterations to adapt to the dynamically changing IH. The learning rate is set to 0.01 and is reduced after 20 iterations without loss decay. The Adam solver is utilized for the optimization. ## 3 More visualization results ### Results for different optimization components **Visualization of the effect of different components.** We define multiple optimization loss functions to obtain valid and natural IH poses. As shown in Figure 16, the "Augmented Pose" is randomly augmented from the raw poses in IH2.6M, with the joint poses restricted according to Table 2 of the main paper. After optimization with the full set of constraints, we obtain natural, non-interpenetrating poses. Comparing Figure 16(b) and Figure 16(c), we see that adopting anchors for the IH attraction shows no significant difference from employing all vertices, while reducing the time complexity. Furthermore, as demonstrated in Figure 16(d), the natural discriminator \(\mathcal{D}\) makes the IH more natural; the **natural** poses are defined in the main paper, and they not only conform to the anatomy but also occur frequently in daily life. Additionally, as shown in Figure 16(e), the IH attraction enhances hand contact, which is hard to annotate in reality due to inter-occlusion.
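The weight schedule and periodic anchor-pair rebuilding described in Section 2 above might be organized as follows (a sketch with placeholder weight values; `losses` and `rebuild_pairs` are assumed callables, not the paper's code):

```python
import torch

def optimize_poses(psi_r, psi_l, losses, rebuild_pairs,
                   iters=215, rebuild_every=40, ramp_until=165):
    """Schedule sketch: start with a large penetration weight w4 and a small
    attraction weight w1 to separate the hands, then ramp w1 up / w4 down
    until iteration 165; rebuild anchor pairs every 40 iterations.
    losses(psi_r, psi_l) is assumed to return (L_A, L_a, L_adv, L_p)."""
    opt = torch.optim.Adam([psi_r, psi_l], lr=0.01)
    for it in range(iters):
        t = min(it / ramp_until, 1.0)   # ramp fraction in [0, 1]
        w1 = 0.1 + 0.9 * t              # attraction weight grows
        w4 = 1.0 - 0.9 * t              # penetration weight shrinks
        if it % rebuild_every == 0:
            rebuild_pairs(psi_r, psi_l)
        L_A, L_a, L_adv, L_p = losses(psi_r, psi_l)
        total = w1 * L_A + 1.0 * L_a + 1.0 * L_adv + w4 * L_p
        opt.zero_grad()
        total.backward()
        opt.step()
```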
### Qualitative results comparison **Comparison with IntagHand.** To better demonstrate the superiority of our data and method, we compare our results with the existing state-of-the-art method IntagHand [21] (their models are also trained on a combination of IH2.6M [26] and synthetic images). Some qualitative comparisons with IntagHand are shown in Figure 19. By directly projecting the 3D hand mesh onto the image, we can see that our result is closer to the pose in the raw image. Additionally, results for these images from various views are also presented (see Figure 17). In the first row of Figure 17, our result can be even better than the ground truth, where the middle, ring, and little fingers of the right hand are curved. To further compare the generalization ability, we compare with IntagHand on in-the-wild images (see Figure 18). The results show that our method clearly achieves less interpenetration of the two hands and more accurate finger interactions. **Impact of synthetic data.** When only RenderIH is used for training, the performance is worse than when only IH2.6M is used, in part because the background variation in Tzionas is limited. The trend can be seen in the qualitative results in Figure 20. However, as a synthetic dataset, the function of our dataset is to largely reduce the amount of real data needed for training rather than to replace real data entirely. ### Quantitative results with the wrist joint as root joint For convenient future comparison, we report our model's performance using the wrist joint as the root joint, following common practice. As shown in Table 10, the model trained on a mixture of RenderIH and IH2.6M demonstrates consistent improvements across all metrics compared to training on IH2.6M alone.

\begin{table} \begin{tabular}{l|c} \hline \hline Training set\(\backslash\)Metrics & PAMPJPE/MPJPE/SMPJPE/MRRPE\(\downarrow\) \\ \hline RenderIH & 13.50/47.73/49.42/32.08 \\ IH2.6M & 6.76/16.78/13.97/14.63 \\ IH2.6M+RenderIH & 5.79/15.78/12.16/14.15 \\ \hline \hline \end{tabular} \end{table} Table 10: Comparison of training with or without our dataset, tested on the IH2.6M dataset. The wrist joint is used as the root.

Figure 17: Qualitative comparison of our method and IntagHand [21] on InterHand2.6M under a variety of viewpoints and different levels of inter-hand occlusion. Red circles highlight the positions where our method generates better results. In the first row, our result can be even better than the ground truth, where the middle, ring, and little fingers of the right hand are curved.

## 4 Broader impacts and limitations **Broader impacts.** In this paper, we introduce a synthetic 3D hand dataset, RenderIH, with accurate and diverse poses. Since there are no other large-scale synthetic interacting hand datasets, RenderIH will be impactful for the community due to its unprecedented scale, diversity, and rendering quality. Moreover, the dataset can be used not only to improve generalization ability in real scenes but also for domain adaptation. **Limitations.** The hyperparameters of the pose optimization are chosen on the basis of experimental results, such as the factors \(k\) and \(s\) in the interhand attraction and the weights in the final optimization loss. In the future, we may set them as learnable parameters that can be automatically learned from the data.
Current interacting hand (IH) datasets are relatively simple in terms of background and texture, their hand joints are annotated by a machine annotator, which may introduce errors, and the diversity of their pose distributions is limited. However, the variability of backgrounds, pose distributions, and textures strongly affects generalization ability. Therefore, we present RenderIH, a large-scale synthetic dataset for interacting hands with accurate and diverse pose annotations. The dataset contains 1,000,000 photo-realistic images with varied backgrounds, viewpoints, and hand textures. To generate natural and diverse interacting poses, we propose a novel pose optimization algorithm. Additionally, for better pose estimation accuracy, we introduce a transformer-based pose estimation network, TransHand, which exploits the correlation between interacting hands, and we verify its effectiveness. RenderIH is model-
2309.06032
Explicit formula for the Gamma-convergence homogenized quadratic curvature energy in isotropic Cosserat shell models
We show how to explicitly compute the homogenized curvature energy appearing in the isotropic $\Gamma$-limit for flat and for curved initial configuration Cosserat shell models, when a parental three-dimensional minimization problem on $\Omega \subset \mathbb{R}^3$ for a Cosserat energy based on the second order dislocation density tensor $\alpha:=\overline{R} ^T {\rm Curl}\,\overline{R} \in \mathbb{R}^{3\times 3}$, $\overline{R}\in {\rm SO}(3)$ is used.
Maryam Mohammadi Saem, Emilian Bulgariu, Ionel-Dumitrel Ghiba, Patrizio Neff
2023-09-12T08:05:20
http://arxiv.org/abs/2309.06032v1
Explicit formula for the Gamma-convergence homogenized quadratic curvature energy in isotropic Cosserat shell models ###### Abstract We show how to explicitly compute the homogenized curvature energy appearing in the isotropic \(\Gamma\)-limit for flat and for curved initial configuration Cosserat shell models, when a parental three-dimensional minimization problem on \(\Omega\subset\mathbb{R}^{3}\) for a Cosserat energy based on the second order dislocation density tensor \(\alpha:=\overline{R}^{T}\mathrm{Curl}\,\overline{R}\in\mathbb{R}^{3\times 3}\), \(\overline{R}\in\mathrm{SO}(3)\) is used. ###### Contents

* 1 Introduction
* 2 Three dimensional geometrically nonlinear and physically linear Cosserat models
* 2.1 General notation
* 2.2 Geometrically nonlinear and physically linear Cosserat elastic 3D models
* 2.3 More on Cosserat-curvature strain measures
* 3 Homogenized curvature energy for the flat Cosserat-shell model via \(\Gamma\)-convergence
* 4 Homogenized curvature energy for the curved Cosserat-shell model via \(\Gamma\)-convergence
* 4.1 The calculation of the homogenized curvature energy
* 4.2 \(\Gamma\)-convergence result for the curved shell model
* 5 Conclusion

## 1 Introduction

The Cosserat theory introduced by the Cosserat brothers in 1909 [16, 14] represents a generalization of elasticity theory. While elasticity theory models each constituent particle of the body as a material point, i.e., it is able to model only the translation of each particle through the classical deformation \(\varphi\colon\Omega\subset\mathbb{R}^{3}\to\mathbb{R}^{3}\), the Cosserat theory also models the micro-rotation of each particle, attaching to each material point an independent triad of orthogonal directors, the microrotation \(\overline{R}\colon\Omega\subset\mathbb{R}^{3}\to\mathrm{SO}(3)\). Invariance of the energy under superposed rigid body motions (left-invariance under \(\mathrm{SO}(3)\)) allowed them to conclude the suitable form of the energy density as \(W=W(\overline{U},\mathfrak{K})\), where \(\overline{U}:=\overline{R}^{T}\mathrm{D}\varphi\) is the first Cosserat deformation tensor and \(\mathfrak{K}:=(\overline{R}^{T}\partial_{x_{1}}\overline{R},\overline{R}^{T}\partial_{x_{2}}\overline{R},\overline{R}^{T}\partial_{x_{3}}\overline{R})\) is the second Cosserat deformation tensor. The Cosserat brothers never considered specific forms of the elastic energy, and they never linearised their model to obtain the well-known linear Cosserat (micropolar) model [27]. In the present paper we only consider isotropic materials, i.e., the behaviour of the elastic material is modelled with the help of an energy which is additionally right-invariant under \(\mathrm{SO}(3)\). In addition, we will consider quadratic energies in suitable strains (a physically linear dependence of the stress tensor and of the couple-stress tensor on the strain measures), which allows an explicit and practical [31, 42] representation of the energy. In [41] we have provided a nonlinear membrane-like Cosserat shell model on a curved reference configuration, starting from a geometrically nonlinear, physically linear three-dimensional isotropic Cosserat model. Besides the change of metric, the obtained membrane-like Cosserat shell model [41] is still capable of capturing the transverse shear deformation and the Cosserat-curvature due to remaining Cosserat effects. The Cosserat-shell model presented in [41] for curved initial configurations generalizes the Cosserat-shell model constructed in [34] for flat initial configurations.
There are many different ways to mathematically model shells [29], e.g., the _derivation approach_ [33, 32, 21, 22, 25, 23, 24], the _intrinsic approach_ [1, 2, 28], the _asymptotic method_, and the _direct approach_ [26, 3, 10, 11, 12, 16, 19, 28, 40, 5, 9, 6, 7]. However, _Gamma-convergence_ methods are preferred in the mathematical community. When the Cosserat parental three-dimensional energy is considered, in the deduction of the Gamma-limit for the curved initial configuration we have to construct four homogenized energies, while only two appear in the expression of the Gamma-limit: the homogenized membrane energy and the homogenized curvature energy. In the deduction of the Gamma-limit in [41], we have explicitly stated the form of the homogenized membrane energy, while the explicit form of the homogenized curvature energy was only announced, and we have used only some implicit properties (its continuity, convexity, etc.). The same was done in the deduction of the Gamma-limit for a flat initial configuration in [34] (we notice that another form of the Cosserat-curvature energy was considered), and no explicit form of the homogenized curvature energy could be given. In [41] we have announced the form of the homogenized curvature energy without giving details about its deduction. Therefore, the main aim of this paper is to provide the solutions of all optimization problems needed to obtain an explicit Cosserat shell model for flat (Section 3) and curved (Section 4) initial configurations via the Gamma-limit. The second goal is to point out the advantages, at least from a computational point of view, of using the curvature strain tensor \(\alpha:=\overline{R}^{T}\mathrm{Curl}\,\overline{R}\) in the parental three-dimensional Cosserat-curvature energy, instead of other curvature strain tensors considered in the literature. We mention that, even if \(\alpha\) is controlled by \(\widehat{\mathfrak{K}}:=\left(\overline{R}^{T}\mathrm{D}(\overline{R}.e_{1}),\overline{R}^{T}\mathrm{D}(\overline{R}.e_{2}),\overline{R}^{T}\mathrm{D}(\overline{R}.e_{3})\right)\in\mathbb{R}^{3\times 9}\) used in [32, 34], an explicit expression via Gamma-convergence of the homogenization of the quadratic curvature energy in terms of the third order tensor \(\widehat{\mathfrak{K}}\) is missing in the literature, even for a flat initial configuration. In fact, it turned out that \(\widehat{\mathfrak{K}}\) is frame-indifferent but not itself isotropic, a fact which makes it unsuitable for use in an isotropic model. Besides these advantages of using the second order dislocation density tensor \(\alpha\), there is a one-to-one relation between \(\alpha\) and the so-called wryness tensor \(\Gamma:=\left(\mathrm{axl}(\overline{R}^{T}\,\partial_{x_{1}}\overline{R})\,|\,\mathrm{axl}(\overline{R}^{T}\,\partial_{x_{2}}\overline{R})\,|\,\mathrm{axl}(\overline{R}^{T}\,\partial_{x_{3}}\overline{R})\,\right)\in\mathbb{R}^{3\times 3}\) (the second order Cosserat deformation tensor [16], a Lagrangian strain measure for curvature-orientation change [17]). This property is not shared with \(\widehat{\mathfrak{K}}\) and \(\mathfrak{K}\). We show that considering \(\mathfrak{K}\) is equivalent to a particular choice of the constitutive coefficients in a model in which \(\alpha\) is used; therefore, the formulas determined for the homogenized quadratic curvature energy via Gamma-convergence are valid for parental isotropic three-dimensional energies which are quadratic in \(\mathfrak{K}\), too.
However, the general form of a quadratic isotropic energy has a more complicated expression in terms of \(\mathfrak{K}\) than in terms of \(\alpha\), see Subsection 2.3. Therefore, from a computational point of view it is more convenient to consider \(\alpha\) in an isotropic Cosserat model. Moreover, using [36] (see Subsection 2.3), we have that \(\alpha\) controls \(\mathfrak{K}\) in \(L^{2}(\Omega,\mathbb{R}^{3\times 3\times 3})\), which controls \(\widehat{\mathfrak{K}}\) in \(L^{2}(\Omega,\mathbb{R}^{3\times 3\times 3})\), which controls \(\alpha\) in \(L^{2}(\Omega,\mathbb{R}^{3\times 3})\). Therefore, a positive definite quadratic form in one of the three appropriate curvature tensors is energetically controlled by each of the three Cosserat-curvature tensors. This is why, when the problem of existence of solutions or other similar qualitative results are considered, the form of the Cosserat-curvature strain tensor used is irrelevant, in the sense that if such a result is obtained for a curvature energy quadratic in one Cosserat-curvature strain tensor, then it may be immediately extended to quadratic curvature energies in the other two Cosserat-curvature strain tensors considered in the present paper. However, the usage of the Cosserat-curvature strain tensor \(\alpha\) has the following main advantages:

* a quadratic isotropic energy in terms of the wryness tensor \(\Gamma\) is rewritten in a transparent and explicit form as a quadratic energy in terms of the dislocation density tensor \(\alpha\) (and vice versa), see Subsection 2.3;
* the expression of a quadratic isotropic energy in terms of the wryness tensor \(\Gamma\) is very simple and suitable for analytical computations, see Subsection 2.3;
* it admits the explicit analytical calculation of the homogenized quadratic curvature energies in the construction of the Cosserat shell model via the \(\Gamma\)-convergence method, see Sections 3 and 4.

## 2 Three dimensional geometrically nonlinear and physically linear Cosserat models

### General notation

Before continuing, let us introduce the notation we will use or have already used in Section 1 and in the abstract. We denote by \(\mathbb{R}^{m\times n}\), \(n,m\in\mathbb{N}\), the set of real \(m\times n\) second order tensors, written with capital letters. We adopt the usual abbreviations of Lie-group theory, i.e., \(\mathrm{GL}(n)=\{X\in\mathbb{R}^{n\times n}\mid\det(X)\neq 0\}\) the general linear group, \(\mathrm{SL}(n)=\{X\in\mathrm{GL}(n)\mid\det(X)=1\}\), \(\mathrm{O}(n)=\{X\in\mathrm{GL}(n)\mid X^{T}X=\mathbb{1}_{n}\}\), \(\mathrm{SO}(n)=\{X\in\mathrm{GL}(n)\,|\,X^{T}X=\mathbb{1}_{n},\det(X)=1\}\) with corresponding Lie-algebras \(\mathfrak{so}(n)=\{X\in\mathbb{R}^{n\times n}\mid X^{T}=-X\}\) of skew symmetric tensors and \(\mathfrak{sl}(n)=\{X\in\mathbb{R}^{n\times n}\mid\mathrm{tr}(X)=0\}\) of traceless tensors. Here, for \(a,b\in\mathbb{R}^{n}\) we let \(\big{<}a,b\big{>}_{\mathbb{R}^{n}}\) denote the scalar product on \(\mathbb{R}^{n}\) with associated (squared) vector norm \(\|a\|_{\mathbb{R}^{n}}^{2}=\big{<}a,a\big{>}_{\mathbb{R}^{n}}\). The standard Euclidean scalar product on \(\mathbb{R}^{n\times n}\) is given by \(\big{<}X,Y\big{>}_{\mathbb{R}^{n\times n}}=\mathrm{tr}(XY^{T})\), and thus the (squared) Frobenius tensor norm is \(\|X\|^{2}=\big{<}X,X\big{>}_{\mathbb{R}^{n\times n}}\). In the following we omit the indices \(\mathbb{R}^{n},\mathbb{R}^{n\times n}\).
The identity tensor on \(\mathbb{R}^{n\times n}\) will be denoted by \(\mathbb{1}_{n}\), so that \(\mathrm{tr}(X)=\big{<}X,\mathbb{1}_{n}\big{>}\). We let \(\mathrm{Sym}(n)\) and \(\mathrm{Sym}^{+}(n)\) denote the symmetric and positive definite symmetric tensors, respectively. For all \(X\in\mathbb{R}^{3\times 3}\) we set \(\mathrm{sym}\,X=\frac{1}{2}(X^{T}+X)\in\mathrm{Sym}(3)\), \(\mathrm{skew}\,X=\frac{1}{2}(X-X^{T})\in\mathfrak{so}(3)\) and the deviatoric part \(\mathrm{dev}\,X=X-\frac{1}{n}\,\mathrm{tr}(X)\,\mathbb{1}_{n}\in\mathfrak{sl}(n)\), and we have the orthogonal Cartan-decomposition of the Lie-algebra \(\mathfrak{gl}(3)=\{\mathfrak{sl}(3)\cap\mathrm{Sym}(3)\}\oplus\mathfrak{so}(3)\oplus\mathbb{R}\cdot\mathbb{1}_{3},\ X=\mathrm{dev}\,\mathrm{sym}\,X+\mathrm{skew}\,X+\frac{1}{3}\mathrm{tr}(X)\,\mathbb{1}_{3}\,.\) We use the canonical identification of \(\mathbb{R}^{3}\) with \(\mathfrak{so}(3)\), and, for \(A=\begin{pmatrix}0&-a_{3}&a_{2}\\ a_{3}&0&-a_{1}\\ -a_{2}&a_{1}&0\end{pmatrix}\in\mathfrak{so}(3)\) we consider the operators \(\mathrm{axl}\,:\,\mathfrak{so}(3)\to\mathbb{R}^{3}\) and \(\mathrm{anti}:\mathbb{R}^{3}\to\mathfrak{so}(3)\) defined through \(\mathrm{axl}\,A:=(a_{1},a_{2},a_{3})^{T}\), \(A.\,v=(\mathrm{axl}\,A)\times v\), \((\mathrm{anti}(v))_{ij}=-\epsilon_{ijk}\,v_{k}\ \ \forall\,v\in\mathbb{R}^{3}\), \((\mathrm{axl}\,A)_{k}=-\frac{1}{2}\,\epsilon_{ijk}A_{ij}=\frac{1}{2}\,\epsilon_{kij}A_{ji}\,,\)\(A_{ij}=-\epsilon_{ijk}\,(\mathrm{axl}\,A)_{k}=:\mathrm{anti}(\mathrm{axl}\,A)_{ij}\), where \(\epsilon_{ijk}\) is the totally antisymmetric third order permutation tensor. For \(X\in\mathrm{GL}(n)\), \(\mathrm{Adj}(X)\) denotes the tensor of transposed cofactors; the \((i,j)\) entry of the cofactor matrix is the \((i,j)\)-minor times the sign factor \((-1)^{i+j}\). Here, given \(z_{1},z_{2},z_{3}\in\mathbb{R}^{n\times k}\), the notation \((z_{1}\,|\,z_{2}\,|\,z_{3})\) means a matrix \(Z\in\mathbb{R}^{n\times 3k}\) obtained by taking \(z_{1},z_{2},z_{3}\) as block matrices.
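Since the operators \(\mathrm{axl}\) and \(\mathrm{anti}\) and the Cartan decomposition are used throughout, a small numerical sanity check may be helpful. The following Python/NumPy sketch (an illustration added here, not part of the original derivation) verifies \(\mathrm{axl}(\mathrm{anti}\,v)=v\), \(A.v=(\mathrm{axl}\,A)\times v\), \(\|\mathrm{anti}\,v\|^{2}=2\|v\|^{2}\), and the orthogonality of the decomposition:

```python
import numpy as np

def anti(v):
    # (anti v)_{ij} = -eps_{ijk} v_k, so (anti v) w = v x w
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def axl(A):
    # inverse of anti on skew-symmetric matrices
    return np.array([A[2, 1], A[0, 2], A[1, 0]])

rng = np.random.default_rng(0)
v, w, X = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal((3, 3))

A = anti(v)
assert np.allclose(axl(A), v)                 # axl(anti v) = v
assert np.allclose(A @ w, np.cross(v, w))     # A.w = (axl A) x w
assert np.isclose(np.sum(A**2), 2.0 * v @ v)  # ||anti v||^2 = 2 ||v||^2

# Cartan decomposition X = dev sym X + skew X + (tr X / 3) 1_3,
# orthogonal in the Frobenius scalar product <X, Y> = tr(X Y^T)
sym = 0.5 * (X + X.T)
skw = 0.5 * (X - X.T)
sph = np.trace(X) / 3.0 * np.eye(3)
devsym = sym - sph
assert np.allclose(devsym + skw + sph, X)
assert np.isclose(np.sum(X**2), np.sum(devsym**2) + np.sum(skw**2) + np.sum(sph**2))
```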
A third order tensor \(A=(A_{ijk})\in\mathbb{R}^{3\times 3\times 3}\) will be replaced with an equivalent object, by reordering its components in a \(\mathbb{R}^{3\times 9}\) matrix \(A\equiv(A_{1}\,|\,A_{2}\,|\,A_{3})\in\mathbb{R}^{3\times 9},\ A_{k}:=(A_{ijk})_{ij}=A.\,e_{k}\in\mathbb{R}^{3\times 3},\ k=1,2,3\), and we consider \(\mathrm{sym}A=\big{(}\mathrm{sym}\,A_{1}\,|\,\mathrm{sym}\,A_{2}\,|\,\mathrm{sym}\,A_{3}\big{)}\in\mathbb{R}^{3\times 9}\), \(\mathrm{skew}A=\big{(}\mathrm{skew}A_{1}\,|\,\mathrm{skew}A_{2}\,|\,\mathrm{skew}A_{3}\big{)}\in\mathbb{R}^{3\times 9}\), \(\mathrm{tr}(A)=\mathrm{tr}(A_{1})+\mathrm{tr}(A_{2})+\mathrm{tr}(A_{3}).\) Moreover, we define the products of a second order tensor \(B=(B_{ij})_{ij}\in\mathbb{R}^{3\times 3}\) and a third order tensor \(A=(A_{1}\,|\,A_{2}\,|\,A_{3})\in\mathbb{R}^{3\times 9}\) in a natural way as \(B\,A=(B\,A_{1}\,|\,B\,A_{2}\,|\,B\,A_{3})\in\mathbb{R}^{3\times 9}\), \(A\,B=\big{(}\sum_{k=1}^{3}A_{k}\,B_{k1}\,|\,\sum_{k=1}^{3}A_{k}\,B_{k2}\,|\,\sum_{k=1}^{3}A_{k}\,B_{k3}\big{)}\in\mathbb{R}^{3\times 9}.\) Let us remark that for \(B=(B_{ij})_{ij}\in\mathrm{GL}^{+}(3)\) with inverse \(B^{-1}=(B^{ij})_{ij}\) and \(A,C\in\mathbb{R}^{3\times 9}\) the following equivalences hold true: \(A\,B=C\ \Leftrightarrow\ \sum_{k=1}^{3}A_{k}\,B_{kl}=C_{l}\ \Leftrightarrow\ \sum_{l=1}^{3}(B^{lm}\sum_{k=1}^{3}A_{k}\,B_{kl})=\sum_{l=1}^{3}C_{l}\,B^{lm}\ \Leftrightarrow\ A=C\,B^{-1}.\) We define the norm of a third order tensor \(A=(A_{1}\,|\,A_{2}\,|\,A_{3})\in\mathbb{R}^{3\times 9}\) by \(\|A\|^{2}=\sum_{k=1}^{3}\|A_{k}\|^{2}.\) For \(A_{1},A_{2},A_{3}\in\mathfrak{so}(3)\) we define \(\mathrm{axl}\,A=(\mathrm{axl}\,A_{1}\,|\,\mathrm{axl}\,A_{2}\,|\,\mathrm{axl}\,A_{3})\in\mathbb{R}^{3\times 3}\), while for \(z=(z_{1}\,|\,z_{2}\,|\,z_{3})\in\mathbb{R}^{3\times 3}\) we define \(\mathrm{anti}\,z=(\mathrm{anti}\,z_{1}\,|\,\mathrm{anti}\,z_{2}\,|\,\mathrm{anti}\,z_{3})\in\mathbb{R}^{3\times 9}\). For a given matrix \(M\in\mathbb{R}^{2\times 2}\) we define the lifted quantity \(M^{\flat}=\begin{pmatrix}M_{11}&M_{12}&0\\ M_{21}&M_{22}&0\\ 0&0&0\end{pmatrix}\in\mathbb{R}^{3\times 3}\). Let \(\Omega\) be an open domain of \(\mathbb{R}^{3}\). The usual Lebesgue spaces of square integrable functions, vector or tensor fields on \(\Omega\) with values in \(\mathbb{R}\), \(\mathbb{R}^{3}\) or \(\mathbb{R}^{3\times 3}\), respectively, will be denoted by \(\mathrm{L}^{2}(\Omega)\). Moreover, we introduce the standard Sobolev spaces \(\mathrm{H}^{1}(\Omega)=\{u\in\mathrm{L}^{2}(\Omega)\,|\,\mathrm{D}\,u\in\mathrm{L}^{2}(\Omega)\}\), \(\mathrm{H}(\mathrm{curl};\Omega)=\{v\in\mathrm{L}^{2}(\Omega)\,|\,\mathrm{curl}\,v\in\mathrm{L}^{2}(\Omega)\}\) of functions \(u\) or vector fields \(v\), respectively. For vector fields \(u=(u_{1},u_{2},u_{3})^{T}\) with \(u_{i}\in\mathrm{H}^{1}(\Omega)\) and tensor fields \(P\) with rows \(P_{i}\in\mathrm{H}(\mathrm{curl};\Omega)\), \(i=1,2,3\), we set \(\mathrm{D}u:=(\mathrm{D}u_{1}\,|\,\mathrm{D}u_{2}\,|\,\mathrm{D}u_{3})^{T}\) and \(\mathrm{Curl}\,P:=(\mathrm{curl}\,P_{1}\,|\,\mathrm{curl}\,P_{2}\,|\,\mathrm{curl}\,P_{3})^{T}\); the corresponding Sobolev-spaces will be denoted by \(\mathrm{H}^{1}(\Omega)\) and \(\mathrm{H}^{1}(\mathrm{Curl};\Omega)\), respectively. We will use the notations \(\mathrm{D}_{\xi}\), \(\mathrm{D}_{x}\), \(\mathrm{Curl}_{\xi}\), \(\mathrm{Curl}_{x}\), etc., to indicate the variables with respect to which these quantities are computed.

### Geometrically nonlinear and physically linear Cosserat elastic 3D models

We consider an elastic material which in its reference configuration fills the three dimensional domain \(\Omega\subset\mathbb{R}^{3}\).
In the Cosserat theory, each point of the reference body is endowed with three independent orthogonal directors, i.e., with a matrix field \(\overline{R}:\Omega\to\mathrm{SO}(3)\) called the _microrotation_ tensor. Let us remark that while the tensor \(\mathrm{polar}(\mathrm{D}\varphi)\in\mathrm{SO}(3)\) of the polar decomposition of \(F:=\mathrm{D}\varphi=\mathrm{polar}(\mathrm{D}\varphi)\sqrt{(\mathrm{D}\varphi)^{T}\mathrm{D}\varphi}\) is not independent of \(\varphi\) [38, 8, 37], the tensor \(\overline{R}\) in the Cosserat theory is independent of \(\mathrm{D}\varphi\). In other words, in general, \(\overline{R}\neq\mathrm{polar}(\mathrm{D}\varphi)\). In geometrically nonlinear and physically linear Cosserat elastic 3D models, the deformation \(\varphi\) and the microrotation \(\overline{R}\) are the solutions of the following _nonlinear minimization problem_ on \(\Omega\): \[I(\varphi,F,\overline{R},\partial_{x_{i}}\overline{R})=\int_{\Omega}\left[W_{\mathrm{strain}}(F,\overline{R})+W_{\mathrm{Cosserat-curv}}(\overline{R},\partial_{x_{i}}\overline{R})\right]\,dV\quad\mapsto\min\quad\text{w.r.t.}\quad(\varphi,\overline{R}), \tag{2.1}\] where \(F=\mathrm{D}\varphi\) represents the deformation gradient, \(W_{\mathrm{strain}}(F,\overline{R})\) is the strain energy, \(W_{\mathrm{Cosserat-curv}}(\overline{R},\partial_{x_{i}}\overline{R})\) is the Cosserat curvature (bending) energy, and \(dV\) denotes the volume element in the \(\Omega\)-configuration. For simplicity of exposition we consider that external loadings are not present and that we have only Dirichlet-type boundary conditions for \(\varphi\). In this paper, the strain energy is considered to be a general isotropic quadratic energy (physically linear) in terms of the non-symmetric Biot-type stretch tensor \(\overline{U}:=\overline{R}^{T}F\in\mathbb{R}^{3\times 3}\) (the first Cosserat deformation tensor), i.e., \[W_{\mathrm{strain}}(F,\overline{R})=W_{\mathrm{mp}}(\overline{U}):=\mu\,\|\mathrm{dev}\,\mathrm{sym}(\overline{U}-\mathbb{1}_{3})\|^{2}+\mu_{\mathrm{c}}\,\|\mathrm{skew}(\overline{U}-\mathbb{1}_{3})\|^{2}+\frac{\kappa}{2}\,[\mathrm{tr}(\mathrm{sym}(\overline{U}-\mathbb{1}_{3}))]^{2}\,, \tag{2.2}\] while the Cosserat curvature (bending) energy \(W_{\mathrm{Cosserat-curv}}(\overline{R},\partial_{x_{i}}\overline{R})\) is considered to be isotropic in terms of \(\overline{R}\) and quadratic in one of the following curvature strain candidates \[\mathfrak{K}:= \,\overline{R}^{T}\mathrm{D}\overline{R}=\left(\overline{R}^{T}\partial_{x_{1}}\overline{R},\overline{R}^{T}\partial_{x_{2}}\overline{R},\overline{R}^{T}\partial_{x_{3}}\overline{R}\right)\in\mathbb{R}^{3\times 9},\] \[\widehat{\mathfrak{K}}:= \,\left(\overline{R}^{T}\mathrm{D}(\overline{R}.e_{1}),\overline{R}^{T}\mathrm{D}(\overline{R}.e_{2}),\overline{R}^{T}\mathrm{D}(\overline{R}.e_{3})\right)\in\mathbb{R}^{3\times 9}, \tag{2.3}\] \[\alpha:= \,\overline{R}^{T}\,\mathrm{Curl}\,\overline{R}\in\mathbb{R}^{3\times 3},\] \[\Gamma:= \,\left(\mathrm{axl}(\overline{R}^{T}\,\partial_{x_{1}}\overline{R})\,|\,\mathrm{axl}(\overline{R}^{T}\,\partial_{x_{2}}\overline{R})\,|\,\mathrm{axl}(\overline{R}^{T}\,\partial_{x_{3}}\overline{R})\,\right)\in\mathbb{R}^{3\times 3}\ \text{(the wryness tensor)}.\] The second order Cosserat deformation tensor \(\Gamma\) (the wryness tensor) has been considered as a Lagrangian strain measure for curvature-orientation change [17] since the introduction of the Cosserat model [16]; the second order dislocation density tensor \(\alpha\) is in a direct relation
to the wryness tensor via Nye's formulas and energetically controls \(\mathfrak{K}\) (see [36, 30]); the tensor \(\mathfrak{K}\) represents a first impulse choice, while \(\widehat{\mathfrak{K}}\) is an ad hoc choice which is not suitable for an isotropic quadratic curvature energy. Let us notice that all the mentioned curvature tensors are frame-indifferent by definition, i.e., they remain invariant under the change \(\overline{R}\to\overline{Q}\,\overline{R}\), with \(\overline{Q}\in\mathrm{SO}(3)\) constant. In addition, \(\Gamma\), \(\alpha\) and \(\mathfrak{K}\) are isotropic, a property which is not shared with \(\widehat{\mathfrak{K}}\). The suitable form of the isotropic Cosserat-curvature energy is discussed in Subsection 2.3. However, let us already announce that, from our point of view, the most suitable expression for analytical computations of the general isotropic energy quadratic in the derivatives of \(\overline{R}\) is \[W_{\mathrm{Cosserat-curv}}(\overline{R},\partial_{x_{i}}\overline{R})=W_{\mathrm{curv}}(\alpha):= \,\mu\,L_{\mathrm{c}}^{2}\left(b_{1}\,\|\mathrm{sym}\,\alpha\|^{2}+b_{2}\,\|\mathrm{skew}\,\alpha\|^{2}+\frac{b_{3}}{4}\,[\mathrm{tr}(\alpha)]^{2}\right)\] \[= \,\mu\,L_{\mathrm{c}}^{2}\left(b_{1}\,\|\mathrm{sym}\,\Gamma\|^{2}+b_{2}\,\|\mathrm{skew}\,\Gamma\|^{2}+b_{3}\,[\mathrm{tr}(\Gamma)]^{2}\right). \tag{2.4}\] The parameters \(\mu\) and \(\lambda\) are the elasticity _Lamé-type_ constants, \(\kappa=\frac{2\,\mu\,+3\,\lambda}{3}\) is the _infinitesimal bulk modulus_, \(\mu_{\mathrm{c}}>0\) is the _Cosserat couple modulus_ and \(L_{\mathrm{c}}>0\) is the _internal length_, which is responsible for _size effects_ in the sense that smaller samples are relatively stiffer than larger samples. If not stated otherwise, we assume here that \(\mu>0,\,\kappa>0,\,\mu_{c}>0\). The Cosserat couple modulus \(\mu_{c}\) controls the deviation of the microrotation \(\overline{R}\) from the continuum rotation \(\mathrm{polar}(\mathrm{D}\varphi)\) in the polar decomposition of \(\mathrm{D}\varphi=\mathrm{polar}(\mathrm{D}\varphi)\cdot\sqrt{\mathrm{D}\varphi^{T}\mathrm{D}\varphi}\). For \(\mu_{c}\to\infty\) the constraint \(\overline{R}=\mathrm{polar}(\mathrm{D}\varphi)\) is generated and the model turns into a Toupin couple stress model. We also assume that \(b_{1}>0,b_{2}>0\) and \(b_{3}>0\), which assures the _coercivity_ and _convexity_ of the curvature energy [34].
### More on Cosserat-curvature strain measures

#### 2.3.1 The curvature tensor \(\mathfrak{K}=\overline{R}^{T}\mathrm{D}\overline{R}\in\mathbb{R}^{3\times 9}\)

A first choice for a curvature strain tensor is the third order elastic Cosserat curvature tensor [14, 16, 13, 15] \[\mathfrak{K}:=\overline{R}^{T}\mathrm{D}\,\overline{R}=\overline{R}^{T}\,(\partial_{x_{1}}\overline{R}\,|\,\partial_{x_{2}}\overline{R}\,|\,\partial_{x_{3}}\overline{R})=(\overline{R}^{T}\,\partial_{x_{1}}\overline{R}\,|\,\overline{R}^{T}\,\partial_{x_{2}}\overline{R}\,|\,\overline{R}^{T}\,\partial_{x_{3}}\overline{R})\in\mathbb{R}^{3\times 9}, \tag{2.5}\] and the curvature energy given by \(\widetilde{W}_{\mathrm{curv}}(\mathfrak{K}):=a_{1}\,\|\mathfrak{K}\|^{2}\,.\) This one-parameter choice is motivated [20] by observing that \(\mathfrak{K}\equiv\mathrm{skew}\,\mathfrak{K}\), since \(\overline{R}^{T}\partial_{x_{i}}\overline{R}\in\mathfrak{so}(3)\), \(i=1,2,3\), and \[\mathrm{sym}\,\mathfrak{K} =\big{(}\mathrm{sym}(\overline{R}^{T}\partial_{x_{1}}\overline{R})\,|\,\mathrm{sym}(\overline{R}^{T}\partial_{x_{2}}\overline{R})\,|\,\mathrm{sym}(\overline{R}^{T}\partial_{x_{3}}\overline{R})\big{)}=(0_{3}\,|\,0_{3}\,|\,0_{3}),\] \[\mathrm{tr}\mathfrak{K} =\mathrm{tr}(\overline{R}^{T}\partial_{x_{1}}\overline{R})+\mathrm{tr}(\overline{R}^{T}\partial_{x_{2}}\overline{R})+\mathrm{tr}(\overline{R}^{T}\partial_{x_{3}}\overline{R})=0,\] \[\mathrm{skew}\,\mathfrak{K} =\big{(}\mathrm{skew}(\overline{R}^{T}\partial_{x_{1}}\overline{R})\,|\,\mathrm{skew}(\overline{R}^{T}\partial_{x_{2}}\overline{R})\,|\,\mathrm{skew}(\overline{R}^{T}\partial_{x_{3}}\overline{R})\big{)}. \tag{2.6}\] However, this is not the most general form of a quadratic isotropic energy in \(\mathfrak{K}\), as will be seen later. The third order tensor \(\mathfrak{K}=\left(\overline{R}^{T}\partial_{x_{1}}\overline{R}\,|\,\overline{R}^{T}\partial_{x_{2}}\overline{R}\,|\,\overline{R}^{T}\partial_{x_{3}}\overline{R}\right)\in\mathbb{R}^{3\times 9}\) is usually replaced by the wryness tensor \(\Gamma=\left(\mathrm{axl}(\overline{R}^{T}\partial_{x_{1}}\overline{R})\,|\,\mathrm{axl}(\overline{R}^{T}\,\partial_{x_{2}}\overline{R})\,|\,\mathrm{axl}(\overline{R}^{T}\partial_{x_{3}}\overline{R})\,\right)\), since we have the one-to-one relations \[\mathfrak{K}=\mathrm{anti}\,\Gamma,\qquad\qquad\Gamma=\mathrm{axl}\,\mathfrak{K}, \tag{2.7}\] due to the fact that \(\overline{R}^{T}\partial_{x_{i}}\overline{R}\in\mathfrak{so}(3)\), \(i=1,2,3\), which in indices read \[\mathfrak{K}_{ijk}=\overline{R}_{li}\frac{\partial\overline{R}_{lj}}{\partial x_{k}},\qquad\qquad\mathfrak{K}_{ijk}=-\epsilon_{ijl}\Gamma_{lk},\qquad\qquad\Gamma_{ik}=\frac{1}{2}\,\sum_{r,l=1}^{3}\epsilon_{ilr}\mathfrak{K}_{lrk}. \tag{2.8}\] For a detailed discussion on various strain measures of the non-linear micropolar continua we refer to [39].
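As a quick numerical illustration (added here, not part of the paper), the relations (2.7)–(2.8) can be verified for a concrete rotation field. The field below is an assumed example: it rotates about a fixed unit axis \(n\) by the angle \(f(x)=x_{1}+2x_{2}-x_{3}\), for which \(\Gamma=n\otimes\mathrm{D}f\):

```python
import numpy as np

def anti(v):
    return np.array([[0., -v[2], v[1]], [v[2], 0., -v[0]], [-v[1], v[0], 0.]])

def axl(A):
    return np.array([A[2, 1], A[0, 2], A[1, 0]])

def rot(x, n):                      # rotation about a fixed axis n by angle f(x)
    f = x[0] + 2.0 * x[1] - x[2]    # sample angle function, Df = (1, 2, -1)
    K = anti(n)
    return np.eye(3) + np.sin(f) * K + (1 - np.cos(f)) * K @ K  # Rodrigues

n = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
x0, h = np.array([0.3, -0.2, 0.5]), 1e-6
R0 = rot(x0, n)
K_cols = []                         # the blocks R^T d_{x_i} R of the tensor K
for i in range(3):
    dx = np.zeros(3); dx[i] = h
    dR = (rot(x0 + dx, n) - rot(x0 - dx, n)) / (2 * h)  # central difference
    K_cols.append(R0.T @ dR)
Gamma = np.column_stack([axl(Ki) for Ki in K_cols])     # wryness tensor
for i, Ki in enumerate(K_cols):
    assert np.allclose(Ki, -Ki.T, atol=1e-5)            # each block is in so(3)
    assert np.allclose(Ki, anti(Gamma[:, i]), atol=1e-5)  # relations (2.7)
# for this fixed-axis field, Gamma = n (Df)^T
assert np.allclose(Gamma, np.outer(n, [1.0, 2.0, -1.0]), atol=1e-5)
```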
**Proposition 2.1**.: _A general isotropic quadratic energy depending on \(\overline{R}^{T}\mathrm{D}\overline{R}\in\mathbb{R}^{3\times 9}\) has the form_ \[\widetilde{W}(\mathfrak{K})= \,b_{1}\,\|\mathrm{sym}\,\mathrm{axl}\,\mathfrak{K}\|^{2}+b_{2}\,\|\mathrm{skew}\,\mathrm{axl}\,\mathfrak{K}\|^{2}+b_{3}\,[\mathrm{tr}(\mathrm{axl}\,\mathfrak{K})]^{2}\] \[= \,b_{1}\,\|\mathrm{sym}\,\big{(}\,\mathrm{axl}(\mathfrak{K}.e_{1})\,|\,\,\mathrm{axl}(\mathfrak{K}.e_{2})\,|\,\,\mathrm{axl}(\mathfrak{K}.e_{3})\big{)}\|^{2} \tag{2.9}\] \[+b_{2}\,\|\mathrm{skew}\,\big{(}\,\mathrm{axl}(\mathfrak{K}.e_{1})\,|\,\,\mathrm{axl}(\mathfrak{K}.e_{2})\,|\,\,\mathrm{axl}(\mathfrak{K}.e_{3})\big{)}\|^{2}\] \[+b_{3}\,[\mathrm{tr}\big{(}\big{(}\,\mathrm{axl}(\mathfrak{K}.e_{1})\,|\,\,\mathrm{axl}(\mathfrak{K}.e_{2})\,|\,\,\mathrm{axl}(\mathfrak{K}.e_{3})\big{)}\big{)}]^{2}.\] Proof.: The proof is based on the result from [18] and on the identities (2.7). Indeed, a quadratic energy in \(\mathfrak{K}\) is a quadratic energy in \(\Gamma\). Due to the results presented in [18], a quadratic isotropic energy written in terms of \(\Gamma\) is given by \[W(\Gamma)= \,b_{1}\,\|\mathrm{sym}\,\Gamma\|^{2}+b_{2}\,\|\mathrm{skew}\,\Gamma\|^{2}+b_{3}\,[\mathrm{tr}(\Gamma)]^{2}. \tag{2.10}\] Using (2.7), the proof is complete. 

We can express the uni-constant isotropic curvature term as a positive definite quadratic form in terms of \(\Gamma\), i.e., \[\|\mathrm{D}\overline{R}\|_{\mathbb{R}^{3\times 3\times 3}}^{2} =\|\overline{R}^{T}\mathrm{D}\overline{R}\,\|_{\mathbb{R}^{3\times 3\times 3}}^{2}=\|\mathfrak{K}\|_{\mathbb{R}^{3\times 3\times 3}}^{2}=\|\overline{R}^{T}\partial_{x_{1}}\overline{R}\|_{\mathbb{R}^{3\times 3}}^{2}+\|\overline{R}^{T}\partial_{x_{2}}\overline{R}\|_{\mathbb{R}^{3\times 3}}^{2}+\|\overline{R}^{T}\partial_{x_{3}}\overline{R}\|_{\mathbb{R}^{3\times 3}}^{2}\] \[=2\,\|\,\mathrm{axl}(\overline{R}^{T}\partial_{x_{1}}\overline{R})\|_{\mathbb{R}^{3}}^{2}+2\,\|\,\mathrm{axl}(\overline{R}^{T}\partial_{x_{2}}\overline{R})\|_{\mathbb{R}^{3}}^{2}+2\,\|\,\mathrm{axl}(\overline{R}^{T}\partial_{x_{3}}\overline{R})\|_{\mathbb{R}^{3}}^{2}=2\,\|\Gamma\|_{\mathbb{R}^{3\times 3}}^{2}. \tag{2.11}\] Therefore, a general positive definite quadratic isotropic curvature energy (2.9) in \(\Gamma\) is a positive definite quadratic form in terms of \(\|\overline{R}^{T}\mathrm{D}\overline{R}\|_{\mathbb{R}^{3\times 3\times 3}}^{2}\), and vice versa. Thus, working with a quadratic isotropic positive definite energy in terms of \(\overline{R}^{T}\mathrm{D}\overline{R}\) is equivalent to working with a quadratic positive definite isotropic energy in terms of \(\Gamma\). Since the expression of an isotropic curvature energy has a simpler form in terms of \(\Gamma\), in order to keep the calculations as simple as possible, we prefer to work with \(\Gamma\).

#### 2.3.2 The curvature tensor \(\alpha=\overline{R}^{T}\mathrm{Curl}\,\overline{R}\in\mathbb{R}^{3\times 3}\)

Another choice as curvature strain is the _second order dislocation density tensor_ \(\alpha\). Compared with \(\overline{R}^{T}\mathrm{D}\,\overline{R}\), it simplifies the representation considerably by allowing the use of the orthogonal decomposition \[\overline{R}^{T}\,\mathrm{Curl}\,\overline{R}=\alpha=\mathrm{dev}\,\mathrm{sym}\,\alpha+\mathrm{skew}\,\alpha+\frac{1}{3}\,\mathrm{tr}(\alpha)\mathbb{1}_{3}.
\tag{2.12}\] Moreover, it yields an equivalent control of spatial derivatives of rotations [36] and allows us to write the curvature energy in a fictitious Cartesian configuration in terms of the wryness tensor [36, 17] \(\Gamma\in\mathbb{R}^{3\times 3}\), since (see [36]) the following close relationship between the _wryness tensor_ and the _dislocation density tensor_ holds: \[\alpha=-\Gamma^{T}+\mathrm{tr}(\Gamma)\,\mathbb{1}_{3},\qquad\text{or equivalently,}\qquad\Gamma=-\alpha^{T}+\frac{1}{2}\mathrm{tr}(\alpha)\,\mathbb{1}_{3}. \tag{2.13}\] Hence, \[\mathrm{sym}\,\Gamma= -\mathrm{sym}\,\alpha+\frac{1}{2}\mathrm{tr}(\alpha)\,\mathbb{1}_{3},\qquad\mathrm{dev}\,\mathrm{sym}\,\Gamma=-\mathrm{dev}\,\mathrm{sym}\,\alpha,\] \[\mathrm{skew}\,\Gamma= -\mathrm{skew}(\alpha^{T})=\mathrm{skew}\,\alpha,\qquad\qquad\mathrm{tr}(\Gamma)=-\mathrm{tr}(\alpha)+\frac{3}{2}\,\mathrm{tr}(\alpha)=\frac{1}{2}\mathrm{tr}(\alpha) \tag{2.14}\] and \[\mathrm{sym}\,\alpha\,=\,-\mathrm{sym}\,\Gamma+\mathrm{tr}(\Gamma)\,\mathbb{1}_{3},\quad\mathrm{dev}\,\mathrm{sym}\,\alpha\,=\,-\mathrm{dev}\,\mathrm{sym}\,\Gamma,\quad\mathrm{skew}\,\alpha\,=\,\mathrm{skew}\,\Gamma,\quad\mathrm{tr}(\alpha)\,=\,2\,\mathrm{tr}(\Gamma). \tag{2.15}\] In addition, from [18] we have

**Proposition 2.2**.: _A general quadratic isotropic energy depending on \(\alpha\) has the form_ \[W_{\mathrm{curv}}(\alpha)= \,b_{1}\,\|\mathrm{sym}\,\alpha\|^{2}+b_{2}\,\|\mathrm{skew}\,\alpha\|^{2}+\frac{b_{3}}{4}\,[\mathrm{tr}(\alpha)]^{2}. \tag{2.16}\] Proof.: We use again that a quadratic energy in \(\alpha\) is a quadratic energy in \(\Gamma\), i.e., due to [18], it is given by (2.10). The proof is complete after using Nye's formulas (2.13). 

Since a quadratic isotropic positive definite energy in terms of \(\overline{R}^{T}\mathrm{D}\overline{R}\) is equivalent to a quadratic positive definite isotropic energy in terms of \(\Gamma\), considering \(\alpha\) is equivalent to considering \(\mathfrak{K}\), as long as a quadratic isotropic energy is used. As we will see in the present paper, a quadratic curvature energy in terms of \(\alpha\) is suitable for explicit calculations of the homogenized curvature energy for shell models via the \(\Gamma\)-convergence method.

#### 2.3.3 The curvature (in fact: bending) tensor \(\widehat{\mathfrak{K}}=\big{(}\overline{R}^{T}\mathrm{D}(\overline{R}.e_{1})\,|\,\overline{R}^{T}\mathrm{D}(\overline{R}.e_{2})\,|\,\overline{R}^{T}\mathrm{D}(\overline{R}.e_{3})\big{)}\in\mathbb{R}^{3\times 9}\)

The curvature tensor \(\widehat{\mathfrak{K}}=\big{(}\overline{R}^{T}\mathrm{D}(\overline{R}.e_{1})\,|\,\overline{R}^{T}\mathrm{D}(\overline{R}.e_{2})\,|\,\overline{R}^{T}\mathrm{D}(\overline{R}.e_{3})\big{)}\in\mathbb{R}^{3\times 9}\) is motivated by the flat Cosserat shell model [33, 32]. Indeed, in this setting, the general bending energy term arising from an engineering ansatz through the thickness of the shell appears as \[\frac{h^{3}}{12}\left(\mu\,\|\,\mathrm{sym}(\overline{R}^{T}\mathrm{D}(\overline{R}.e_{3}))\|^{2}+\mu_{\mathrm{c}}\,\|\mathrm{skew}(\overline{R}^{T}\mathrm{D}(\overline{R}.e_{3}))\|^{2}+\,\frac{\lambda\,\mu}{\lambda+2\mu}\,\big{[}\mathrm{tr}(\overline{R}^{T}\mathrm{D}(\overline{R}.e_{3}))\big{]}^{2}\right).
\tag{2.17}\] Motivated by this, in earlier papers [32, 33, 35, 34], as a generalization of \(\overline{R}^{T}\mathrm{D}(\overline{R}.e_{3})\), the third order elastic Cosserat curvature tensor is considered in the form \[\widehat{\mathfrak{K}}=\big{(}\overline{R}^{T}\mathrm{D}(\overline{R}.e_{1})\,|\,\overline{R}^{T}\mathrm{D}(\overline{R}.e_{2})\,|\,\overline{R}^{T}\mathrm{D}(\overline{R}.e_{3})\big{)}=\overline{R}^{T}\big{(}\mathrm{D}(\overline{R}.e_{1}),\mathrm{D}(\overline{R}.e_{2}),\mathrm{D}(\overline{R}.e_{3})\big{)}\in\mathbb{R}^{3\times 9}, \tag{2.18}\] treating the three directions \(e_{1},e_{2},e_{3}\) equally, and the curvature energy is taken to be \[\widehat{W}_{\text{curv}}(\widehat{\mathfrak{K}})=a_{1}\|\text{sym}\,\widehat{\mathfrak{K}}\|^{2}+a_{2}\|\,\text{skew}\,\widehat{\mathfrak{K}}\|^{2}+a_{3}[\text{tr}(\widehat{\mathfrak{K}})]^{2}. \tag{2.19}\] Here \(\widehat{\mathfrak{K}}.e_{i}=\overline{R}^{T}\mathrm{D}(\overline{R}.e_{i})\) and \[\|\text{sym}\,\widehat{\mathfrak{K}}\|^{2} =\sum_{i=1}^{3}\|\text{sym}\,\widehat{\mathfrak{K}}.e_{i}\|^{2}=\sum_{i=1}^{3}\|\text{sym}\,(\overline{R}^{T}\text{D}(\overline{R}.e_{i}))\|^{2},\] \[\|\text{skew}\,\widehat{\mathfrak{K}}\|^{2} =\sum_{i=1}^{3}\|\text{skew}\,\widehat{\mathfrak{K}}.e_{i}\|^{2}=\sum_{i=1}^{3}\|\text{skew}\,(\overline{R}^{T}\text{D}(\overline{R}.e_{i}))\|^{2}, \tag{2.20}\] \[[\text{tr}(\widehat{\mathfrak{K}})]^{2} =\sum_{i=1}^{3}[\text{tr}(\widehat{\mathfrak{K}}.e_{i})]^{2}=\sum_{i=1}^{3}[\text{tr}(\overline{R}^{T}\text{D}(\overline{R}.e_{i}))]^{2}.\] However, this curvature energy now has three abstract orthogonal preferred directions, which makes it only cubic and not isotropic, as we will see. There does not exist an analysis showing that this is the most general form of an isotropic energy depending on \(\widehat{\mathfrak{K}}\). Actually, as we will see in the following, for general positive values of the coefficients \((a_{1},a_{2},a_{3})\), the energies of the form (2.19) are anisotropic. We simplify the discussion by considering only the energy \(\|\overline{R}^{T}\mathrm{D}(\overline{R}.e_{3})\|^{2}\). After the transformation \(\overline{R}\to\overline{R}\,\overline{Q}\), with \(\overline{Q}=(e_{1}\,|\,e_{3}\,|\,-e_{2})\in\mathrm{SO}(3)\) constant, we have \[\|\overline{Q}^{T}\overline{R}^{T}\text{D}(\overline{R}\,\overline{Q}.e_{3})\|^{2}=\|\overline{R}^{T}\text{D}(\overline{R}\,\overline{Q}.e_{3})\|^{2}=\|\overline{R}^{T}\text{D}(\overline{R}.e_{2})\|^{2}\neq\|\overline{R}^{T}\text{D}(\overline{R}.e_{3})\|^{2}. \tag{2.21}\] As regards the direct relation between \(\|\widehat{\mathfrak{K}}\|^{2}=\sum_{i=1}^{3}\left\|\overline{R}^{T}\text{D}(\overline{R}.e_{i})\right\|^{2}\) and \(\|\alpha\|^{2}\), we have, using (2.11) and (2.15), \[\sum_{i=1}^{3}\left\|\overline{R}^{T}\text{D}(\overline{R}.e_{i})\right\|_{\mathbb{R}^{3\times 3}}^{2} =\sum_{i=1}^{3}\left\|\text{D}(\overline{R}.e_{i})\right\|_{\mathbb{R}^{3\times 3}}^{2}=\left\|\text{D}\overline{R}\right\|_{\mathbb{R}^{3\times 3\times 3}}^{2}\] \[=2\cdot\|\text{dev}\,\text{sym}\,\overline{R}^{T}\text{Curl}\,\overline{R}\|_{\mathbb{R}^{3\times 3}}^{2}+2\cdot\|\,\text{skew}\,\overline{R}^{T}\text{Curl}\,\overline{R}\|_{\mathbb{R}^{3\times 3}}^{2}+\frac{1}{6}\cdot[\text{tr}(\overline{R}^{T}\text{Curl}\,\overline{R})]^{2} \tag{2.22}\] \[\geq c_{+}\big{\|}\text{Curl}\,\overline{R}\,\big{\|}_{\mathbb{R}^{3\times 3}}^{2},\] where \(c_{+}>0\) is a constant.
Since a coercive curvature energy in \(\widehat{\mathfrak{K}}\) is completely controlled by \(\sum_{i=1}^{3}\left\|\overline{R}^{T}\mathrm{D}(\overline{R}.e_{i})\right\|_{\mathbb{R}^{3\times 3}}^{2}\), a positive definite quadratic isotropic energy in terms of \(\alpha=\overline{R}^{T}\mathrm{Curl}\,\overline{R}\) (equivalently, in terms of the wryness tensor \(\Gamma\)) is a positive definite quadratic form in terms of \(\|\overline{R}^{T}\mathrm{D}\overline{R}\|_{\mathbb{R}^{3\times 3\times 3}}^{2}\), and vice versa. Hence, a quadratic positive definite energy in terms of \(\widehat{\mathfrak{K}}\) is energetically equivalent to a quadratic positive definite energy in terms of \(\alpha\) (and \(\Gamma\)). Let us remark that both \(\left(\partial_{x_{1}}\overline{R}\,|\,\partial_{x_{2}}\overline{R}\,|\,\partial_{x_{3}}\overline{R}\right)\in\mathbb{R}^{3\times 9}\) and \(\left(\mathrm{D}(\overline{R}.e_{1}),\mathrm{D}(\overline{R}.e_{2}),\mathrm{D}(\overline{R}.e_{3})\right)\) contain the same terms \(\frac{\partial\overline{R}_{ij}}{\partial x_{k}}\), \(i,j,k=1,2,3\), but differently ordered. By multiplication of \(\left(\partial_{x_{1}}\overline{R}\,|\,\partial_{x_{2}}\overline{R}\,|\,\partial_{x_{3}}\overline{R}\right)\in\mathbb{R}^{3\times 9}\) and \(\left(\mathrm{D}(\overline{R}.e_{1})\,|\,\mathrm{D}(\overline{R}.e_{2})\,|\,\mathrm{D}(\overline{R}.e_{3})\right)\) with \(\overline{R}^{T}\) we obtain \(\mathfrak{K}\) and \(\widehat{\mathfrak{K}}\), respectively, i.e., \(\mathfrak{K}_{ijk}=\overline{R}_{li}\frac{\partial\overline{R}_{lj}}{\partial x_{k}},\ \widehat{\mathfrak{K}}_{ijk}=\overline{R}_{li}\frac{\partial\overline{R}_{lk}}{\partial x_{j}},\ i,j,k=1,2,3\), and \(\mathfrak{K}_{ijk}=\widehat{\mathfrak{K}}_{ikj},\ i,j,k=1,2,3.\) We have the following relation between \(\widehat{\mathfrak{K}}\) and \(\Gamma\): \[\widehat{\mathfrak{K}}_{ijk}=\mathfrak{K}_{ikj}=-\epsilon_{ikl}\Gamma_{lj},\qquad\Gamma_{ik}=\frac{1}{2}\,\sum_{r,l=1}^{3}\epsilon_{ilr}\mathfrak{K}_{lrk}=\frac{1}{2}\,\sum_{r,l=1}^{3}\epsilon_{ilr}\widehat{\mathfrak{K}}_{lkr}. \tag{2.23}\] Let us introduce the operator \(\mathcal{A}:\mathbb{R}^{3\times 9}\to\mathbb{R}^{3\times 9}\) by \((\mathcal{A}.\widehat{\mathfrak{K}})_{ijk}=\widehat{\mathfrak{K}}_{ikj}\). 

**Proposition 2.3**.: _A general isotropic energy depending on \(\widehat{\mathfrak{K}}=\left(\overline{R}^{T}\mathrm{D}(\overline{R}.e_{1})\,|\,\overline{R}^{T}\mathrm{D}(\overline{R}.e_{2})\,|\,\overline{R}^{T}\mathrm{D}(\overline{R}.e_{3})\right)\in\mathbb{R}^{3\times 9}\) has the form_ \[\widehat{W}(\widehat{\mathfrak{K}})= b_{1}\,\|\text{sym}\,\,\text{axl}(\mathcal{A}.\widehat{\mathfrak{K}})\|^{2}+b_{2}\,\|\text{skew}\,\,\text{axl}(\mathcal{A}.\widehat{\mathfrak{K}})\|^{2}+b_{3}\,[\text{tr}(\text{axl}(\mathcal{A}.\widehat{\mathfrak{K}}))]^{2}. \tag{2.24}\] Proof.: Using Proposition 2.1, we have that a quadratic isotropic energy in \(\widehat{\mathfrak{K}}\) is given by \[\widehat{W}(\widehat{\mathfrak{K}})= \,b_{1}\,\|\mathrm{sym}\,\,\mathrm{axl}\,\mathfrak{K}\|^{2}+b_{2}\,\|\mathrm{skew}\,\,\mathrm{axl}\,\mathfrak{K}\|^{2}+b_{3}\,[\mathrm{tr}(\mathrm{axl}\,\mathfrak{K})]^{2}. \tag{2.25}\] Since \(\mathcal{A}.\widehat{\mathfrak{K}}=\mathfrak{K}\), the proof is complete.
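Nye's formulas (2.13) and their consequences (2.14)–(2.15) are purely linear-algebraic and easy to check directly. The following NumPy snippet (an illustration added here, not part of the paper) verifies them for a random \(\Gamma\):

```python
import numpy as np

rng = np.random.default_rng(1)
Gamma = rng.standard_normal((3, 3))

# Nye's formulas (2.13): alpha = -Gamma^T + tr(Gamma) 1_3 and its inverse
alpha = -Gamma.T + np.trace(Gamma) * np.eye(3)
Gamma_back = -alpha.T + 0.5 * np.trace(alpha) * np.eye(3)
assert np.allclose(Gamma_back, Gamma)

# consequences (2.14)-(2.15)
sym = lambda X: 0.5 * (X + X.T)
skew = lambda X: 0.5 * (X - X.T)
dev = lambda X: X - np.trace(X) / 3.0 * np.eye(3)
assert np.allclose(dev(sym(alpha)), -dev(sym(Gamma)))   # dev sym alpha = -dev sym Gamma
assert np.allclose(skew(alpha), skew(Gamma))            # skew alpha = skew Gamma
assert np.isclose(np.trace(alpha), 2.0 * np.trace(Gamma))  # tr(alpha) = 2 tr(Gamma)
```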
Let us remark that, compared to (2.17), a general isotropic energy depending on \(\widehat{\mathfrak{K}}\), i.e., (2.24), is different, since it involves the summation of different products between the elements of \(\overline{R}^{T}\) and \(\mathrm{D}\overline{R}\), due to the action of the axial operator together with the operator \(\mathcal{A}\). From (2.17) one could obtain an isotropic energy by considering \[\int_{\widetilde{Q}\in\mathrm{SO}(3)}\frac{h^{3}}{12}\sum_{i=1}^{3}\left(\mu\,\|\,\mathrm{sym}(\overline{R}^{T}\mathrm{D}(\overline{R}\,\widetilde{Q}.e_{i}))\|^{2}+\mu_{\mathrm{c}}\,\|\mathrm{skew}(\overline{R}^{T}\mathrm{D}(\overline{R}\,\widetilde{Q}.e_{i}))\|^{2}+\,\frac{\lambda\,\mu}{\lambda+2\mu}\,\big{[}\mathrm{tr}(\overline{R}^{T}\mathrm{D}(\overline{R}\,\widetilde{Q}.e_{i}))\big{]}^{2}\right)d\widetilde{Q}, \tag{2.26}\] i.e., by averaging over all directions.

## 3 Homogenized curvature energy for the flat Cosserat-shell model via \(\Gamma\)-convergence

Let us consider an elastic material which in its reference configuration fills the three dimensional _flat shell-like thin_ domain \(\Omega_{h}=\omega\times\big{[}-\frac{h}{2},\frac{h}{2}\big{]}\), with \(\omega\subset\mathbb{R}^{2}\) a bounded domain with Lipschitz boundary \(\partial\omega\). The scalar \(0<h\ll 1\) is called the _thickness_ of the shell. Due to the discussion in Subsection 2.3, in this paper we consider the Cosserat-curvature energy in terms of the wryness tensor \(\Gamma\) in the form \[\widetilde{W}_{\mathrm{curv}}(\Gamma)\,=\,\mu\,L_{\mathrm{c}}^{2}\left(b_{1}\,\|\mathrm{sym}\,\Gamma\|^{2}+b_{2}\,\|\mathrm{skew}\,\Gamma\|^{2}+\,b_{3}\,[\mathrm{tr}(\Gamma)]^{2}\right)\,. \tag{3.1}\] In order to apply the methods of \(\Gamma\)-convergence for constructing the variational problem on \(\omega\) of the flat Cosserat-shell model, the first step is to transform our problem from \(\Omega_{h}\) to a _domain with fixed thickness_ \(\Omega_{1}=\omega\times[-\frac{1}{2},\frac{1}{2}]\subset\mathbb{R}^{3},\;\omega\subset\mathbb{R}^{2}\). To this end, we first scale the (dependent and independent) variables. In all our computations the mark \(\cdot^{\natural}\) indicates the nonlinear scaling and the mark \(\cdot_{h}\) indicates that the assigned quantity depends on the thickness \(h\). In a first step we apply the nonlinear scaling to the deformation. For \(\Omega_{1}=\omega\times\Big{[}-\frac{1}{2},\frac{1}{2}\Big{]}\subset\mathbb{R}^{3}\), \(\omega\subset\mathbb{R}^{2}\), we define the scaling transformations \[\zeta\colon\;\eta\in\Omega_{1}\mapsto\mathbb{R}^{3}\,,\qquad\zeta(\eta_{1},\eta_{2},\eta_{3}):=(\eta_{1},\eta_{2},h\,\eta_{3})\,,\quad\zeta^{-1}\colon\;x\in\Omega_{h}\mapsto\mathbb{R}^{3}\,,\qquad\zeta^{-1}(x_{1},x_{2},x_{3}):=(x_{1},x_{2},\frac{x_{3}}{h})\,,\] with \(\zeta(\Omega_{1})=\Omega_{h}\).
By using the above scaling transformations we obtain the formula for the transformed deformation \(\varphi\) as \[\varphi(x_{1},x_{2},x_{3}) =\varphi^{\natural}(\zeta^{-1}(x_{1},x_{2},x_{3}))\quad\forall x\in\Omega_{h}\,;\qquad\varphi^{\natural}(\eta)=\varphi(\zeta(\eta))\quad\forall\eta\in\Omega_{1}\,,\] \[\mathrm{D}_{x}\varphi(x_{1},x_{2},x_{3}) =\begin{pmatrix}\partial_{\eta_{1}}\varphi_{1}^{\natural}(\eta)\,\,\partial_{\eta_{2}}\varphi_{1}^{\natural}(\eta)\,\,\frac{1}{h}\partial_{\eta_{3}}\varphi_{1}^{\natural}(\eta)\\ \partial_{\eta_{1}}\varphi_{2}^{\natural}(\eta)\,\,\partial_{\eta_{2}}\varphi_{2}^{\natural}(\eta)\,\,\frac{1}{h}\partial_{\eta_{3}}\varphi_{2}^{\natural}(\eta)\\ \partial_{\eta_{1}}\varphi_{3}^{\natural}(\eta)\,\,\partial_{\eta_{2}}\varphi_{3}^{\natural}(\eta)\,\,\frac{1}{h}\partial_{\eta_{3}}\varphi_{3}^{\natural}(\eta)\end{pmatrix}=\mathrm{D}^{h}_{\eta}\varphi^{\natural}(\eta)=:F_{h}^{\natural}\,. \tag{3.2}\] Now we apply the same process to the microrotation tensor \(\overline{R}_{h}^{\natural}\colon\Omega_{1}\to\mathrm{SO}(3)\): \[\overline{R}(x_{1},x_{2},x_{3})=\overline{R}_{h}^{\natural}(\zeta^{-1}(x_{1},x_{2},x_{3}))\qquad\forall x\in\Omega_{h}\,;\,\,\,\overline{R}_{h}^{\natural}(\eta)=\overline{R}(\zeta(\eta))\,,\quad\forall\eta\in\Omega_{1}\,. \tag{3.3}\] With this, the non-symmetric stretch tensor expressed in a point of \(\Omega_{1}\) is given by \[\overline{U}_{e,h}^{\natural}=\overline{R}_{h}^{\natural,T}F_{h}^{\natural}=\overline{R}_{h}^{\natural,T}\mathrm{D}^{h}_{\eta}\varphi^{\natural}(\eta)\,, \tag{3.4}\] and \[\Gamma^{\natural}_{e,h}=\Big{(}\text{axl}(\overline{R}^{\natural,T}_{h}\,\partial_{\eta_{1}}\overline{R}^{\natural}_{h})\,|\,\text{axl}(\overline{R}^{\natural,T}_{h}\,\partial_{\eta_{2}}\overline{R}^{\natural}_{h})\,|\,\frac{1}{h}\text{axl}(\overline{R}^{\natural,T}_{h}\,\partial_{\eta_{3}}\overline{R}^{\natural}_{h})\,\Big{)}. \tag{3.5}\] The next step, in order to apply the \(\Gamma\)-convergence technique, is to transform the minimization problem onto the _fixed domain_ \(\Omega_{1}\), which is independent of the thickness \(h\). According to the results from the previous subsection, the original three-dimensional variational problem (2.1) is equivalent to the following minimization problem on \(\Omega_{1}\): \[I^{\natural}_{h}(\varphi^{\natural},\text{D}^{h}_{\eta}\varphi^{\natural},\overline{R}^{\natural}_{h},\Gamma^{\natural}_{e,h})=\int_{\Omega_{1}}\;h\,\left[W_{\text{mp}}(\overline{U}^{\natural}_{e,h})+\widetilde{W}_{\text{curv}}(\Gamma^{\natural}_{e,h})\right]\,dV_{\eta}\quad\mapsto\quad\min\;\text{w.r.t.}\;(\varphi^{\natural},\overline{R}^{\natural}_{h})\,, \tag{3.6}\] where \[W_{\text{mp}}(\overline{U}^{\natural}_{e,h}) = \mu\,\|\text{sym}(\overline{U}^{\natural}_{e,h}-\mathbb{1}_{3})\|^{2}+\mu_{c}\,\|\,\text{skew}(\overline{U}^{\natural}_{e,h}-\mathbb{1}_{3})\|^{2}+\frac{\lambda}{2}[\text{tr}(\text{sym}(\overline{U}^{\natural}_{e,h}-\mathbb{1}_{3}))]^{2}\,,\] \[\widetilde{W}_{\text{curv}}(\Gamma^{\natural}_{e,h}) = \mu\,L^{2}_{c}\,\Big{(}a_{1}\,\|\text{dev}\,\text{sym}\,\Gamma^{\natural}_{e,h}\|^{2}+a_{2}\,\|\text{skew}\,\Gamma^{\natural}_{e,h}\|^{2}+\,a_{3}\,[\text{tr}(\Gamma^{\natural}_{e,h})]^{2}\Big{)} \tag{3.7}\] \[= \mu\,L^{2}_{c}\,\Big{(}b_{1}\,\|\text{sym}\,\Gamma^{\natural}_{e,h}\|^{2}+b_{2}\,\|\text{skew}\,\,\Gamma^{\natural}_{e,h}\|^{2}+\,b_{3}\,[\text{tr}(\Gamma^{\natural}_{e,h})]^{2}\Big{)}\;,\] where \(a_{1}=b_{1}\), \(a_{2}=b_{2}\) and \(a_{3}=b_{3}+\frac{b_{1}}{3}\).
In the article [34] one aim of the authors was to find the \(\Gamma\)-limit of the family of functionals related to \[\mathcal{I}^{\natural}_{h}(\varphi^{\natural},\text{D}^{h}_{\eta}\varphi^{\natural},\overline{R}^{\natural}_{h},\Gamma^{\natural}_{h})=\begin{cases}\frac{1}{h}\,I^{\natural}_{h}(\varphi^{\natural},\text{D}^{h}_{\eta}\varphi^{\natural},\overline{R}^{\natural}_{h},\Gamma^{\natural}_{h})&\quad\text{if }\;(\varphi^{\natural},\overline{R}^{\natural}_{h})\in\mathcal{S}^{\prime},\\ +\infty&\quad\text{else in }X,\end{cases} \tag{3.8}\] where \[X :=\{(\varphi^{\natural},\overline{R}^{\natural}_{h})\in\text{L}^{2}(\Omega_{1},\mathbb{R}^{3})\times\text{L}^{2}(\Omega_{1},\text{SO}(3))\}\,, \tag{3.9}\] \[\mathcal{S}^{\prime} :=\{(\varphi,\overline{R}_{h})\in\text{H}^{1}(\Omega_{1},\mathbb{R}^{3})\times\text{H}^{1}(\Omega_{1},\text{SO}(3))\,\big{|}\;\varphi|_{\partial\Omega_{1}}(\eta)=\varphi^{\natural}_{d}(\eta)\}\,.\] That means, the aim is to obtain an energy functional expressed only in terms of the weak limit of a subsequence of \((\varphi^{\natural}_{h_{j}},\overline{R}^{\natural}_{h_{j}})\in X\), when \(h_{j}\) goes to zero. In other words, as we will see, to construct an energy functional depending only on quantities defined on the planar midsurface \(\omega\). However, in [34] the authors have considered a different Cosserat-curvature energy, based on the Cosserat-curvature tensor \(\widehat{\mathfrak{K}}=(\overline{R}^{T}\text{D}(\overline{R}.e_{1}),\overline{R}^{T}\text{D}(\overline{R}.e_{2}),\overline{R}^{T}\text{D}(\overline{R}.e_{3}))\in\mathbb{R}^{3\times 3\times 3}\), which in the simplest form reads \[\widehat{W}_{\text{curv}}(\widehat{\mathfrak{K}})=\mu\frac{L^{2}_{c}}{12}\Big{(}\alpha_{1}\|\text{sym}\,\widehat{\mathfrak{K}}\|^{2}+\alpha_{2}\|\,\text{skew}\,\widehat{\mathfrak{K}}\|^{2}+\alpha_{3}[\text{tr}(\widehat{\mathfrak{K}})]^{2}\Big{)}\,, \tag{3.10}\] and no explicit form of the homogenized curvature energy has been computed (perhaps it is even not possible to compute it for a curved initial configuration). In fact, \(\widehat{\mathfrak{K}}\) is not isotropic and has to be avoided in an isotropic model, as seen above. In order to construct the \(\Gamma\)-limit there is the need to solve two auxiliary optimization problems, i.e.,

* O1: the optimization problem which for each pair \((m,\overline{R}_{0})\), where \(m:\omega\to\mathbb{R}^{3}\), \(\overline{R}_{0}:\omega\to\text{SO}(3)\), defines the homogenized membrane energy \[W^{\text{hom,plate}}_{\text{mp}}(\mathcal{E}^{\text{plate}}_{m,\overline{R}_{0}}):=\inf_{\widetilde{d}\in\mathbb{R}^{3}}\,W_{\text{mp}}\Big{(}\overline{R}^{T}_{0}(\text{D}m|\widetilde{d})\Big{)}=\inf_{\widetilde{d}\in\mathbb{R}^{3}}\,W_{\text{mp}}\Big{(}\mathcal{E}^{\text{plate}}_{m,\overline{R}_{0}}-(0|0|\widetilde{d})\Big{)},\] (3.11) where \(\mathcal{E}^{\text{plate}}_{m,\overline{R}_{0}}:=\overline{R}^{T}_{0}(\text{D}m|0)-\mathbb{1}^{\flat}_{2}\) denotes the _elastic strain tensor_ for the flat Cosserat-shell model.
* O2: the optimization problem which for each \(\overline{R}_{0}:\omega\to\mathrm{SO}(3)\) defines the homogenized curvature energy \[\widetilde{W}^{\mathrm{hom,plate}}_{\mathrm{curv}}(\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}}):=\widetilde{W}_{\mathrm{curv}}\Big{(}\mathrm{axl}(\overline{R}_{0}^{T}\,\partial_{\eta_{1}}\overline{R}_{0})\,|\,\mathrm{axl}(\overline{R}_{0}^{T}\,\partial_{\eta_{2}}\overline{R}_{0})\,|\,\,\mathrm{axl}\,(A^{*})\,\Big{)} \tag{3.12}\] \[=\inf_{A\in\mathfrak{so}(3)}\widetilde{W}_{\mathrm{curv}}\Big{(}\mathrm{axl}(\overline{R}_{0}^{T}\,\partial_{\eta_{1}}\overline{R}_{0})\,|\,\mathrm{axl}(\overline{R}_{0}^{T}\,\partial_{\eta_{2}}\overline{R}_{0})\,|\,\,\mathrm{axl}\,(A)\,\Big{)},\] where \(\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}}:=\Big{(}\mathrm{axl}(\overline{R}_{0}^{T}\,\partial_{x_{1}}\overline{R}_{0})\,|\,\mathrm{axl}(\overline{R}_{0}^{T}\,\partial_{x_{2}}\overline{R}_{0})\,|0\Big{)}\not\in\mathrm{Sym}(3)\) denotes the elastic bending-curvature tensor for the flat Cosserat-shell model and \(A^{*}\in\mathfrak{so}(3)\) denotes the minimizer in (3.12).

The first optimization problem O1 was solved in [34], giving \[W^{\mathrm{hom,plate}}_{\mathrm{mp}}(\mathcal{E}^{\mathrm{plate}}_{m,\overline{R}_{0}})=W_{\mathrm{shell}}\big{(}[\mathcal{E}^{\mathrm{plate}}_{m,\overline{R}_{0}}]^{\parallel}\big{)}+\frac{2\,\mu\,\,\mu_{\mathrm{c}}}{\mu_{c}+\mu}\|[\mathcal{E}^{\mathrm{plate}}_{m,\overline{R}_{0}}]^{\perp}\|^{2},\] with the orthogonal decomposition in the tangential plane and in the normal direction.

Footnote 2: Here, for vectors \(\xi,\eta\in\mathbb{R}^{n}\), we have considered the tensor product \((\xi\otimes\eta)_{ij}=\xi_{i}\,\eta_{j}\). Let us denote by \(\overline{R}_{i}\) the columns of the matrix \(\overline{R}\), i.e., \(\overline{R}=(\overline{R}_{1}\,|\,\overline{R}_{2}\,|\,\overline{R}_{3})\), \(\overline{R}_{i}=\overline{R}\,e_{i}\). Since \((\mathbb{1}_{3}-e_{3}\otimes e_{3})\overline{R}^{T}=(\overline{R}_{1}\,|\,\overline{R}_{2}\,|\,0)^{T}\), it follows that \([\mathcal{E}^{\mathrm{plate}}_{m,\overline{R}_{0}}]^{\parallel}=(\overline{R}_{1}\,|\,\overline{R}_{2}\,|\,0)^{T}(\mathrm{D}m|0)-\mathbb{1}_{2}^{\flat}=((\overline{R}_{1}\,|\,\overline{R}_{2})^{T}\,\mathrm{D}m)^{\flat}-\mathbb{1}_{2}^{\flat}\), while \[[\mathcal{E}^{\mathrm{plate}}_{m,\overline{R}_{0}}]^{\perp}=(0\,|\,0\,|\,\overline{R}_{3})^{T}(\mathrm{D}m|0)=\begin{pmatrix}0&0&0\\ 0&0&0\\ \langle\overline{R}_{3},\partial_{x_{1}}m\rangle&\langle\overline{R}_{3},\partial_{x_{2}}m\rangle&0\end{pmatrix}\,, \tag{3.13}\] and \[W_{\mathrm{shell}}\big{(}[\mathcal{E}^{\mathrm{plate}}_{m,\overline{R}_{0}}]^{\parallel}\big{)}=\,\mu\,\|\mathrm{sym}\,\,[\mathcal{E}^{\mathrm{plate}}_{m,\overline{R}_{0}}]^{\parallel}\|^{2}+\mu_{\mathrm{c}}\,\|\mathrm{skew}\,\,[\mathcal{E}^{\mathrm{plate}}_{m,\overline{R}_{0}}]^{\parallel}\|^{2}+\,\frac{\lambda\,\mu}{\lambda+2\,\mu}\,\left[\mathrm{tr}([\mathcal{E}^{\mathrm{plate}}_{m,\overline{R}_{0}}]^{\parallel})\right]^{2}.
\tag{3.14}\] As regards the second optimization problem O2, in [34] the authors had to solve a similar problem, but corresponding to the curvature energy given by (3.10), i.e., the dimensionally reduced homogenized curvature energy is defined through \[W^{\mathrm{hom,\,plate}}_{\mathrm{curv}}(\mathcal{A})=\inf_{u,v,w\in\mathbb{R}^{3}}\widehat{W}_{\mathrm{curv}}\Big{(}(\mathcal{A}e_{1}|u),(\mathcal{A}e_{2}|v),(\mathcal{A}e_{3}|w)\Big{)}\,, \tag{3.15}\] where \(\mathcal{A}:=(\overline{R}_{0}^{T}(\partial_{x_{1}}(\overline{R}_{0}e_{1})|\partial_{x_{2}}(\overline{R}_{0}e_{1})),\overline{R}_{0}^{T}(\partial_{x_{1}}(\overline{R}_{0}e_{2})|\partial_{x_{2}}(\overline{R}_{0}e_{2})),\overline{R}_{0}^{T}(\partial_{x_{1}}(\overline{R}_{0}e_{3})|\partial_{x_{2}}(\overline{R}_{0}e_{3})))\). In this representation, calculating the homogenized energy looks more difficult, and it was not explicitly done. In this section we show that, considering the curvature energy depending on the Cosserat-curvature tensor \(\alpha\) (equivalently, on the three-dimensional wryness tensor \(\Gamma\)), the calculation of the homogenized curvature energy (i.e., the solution of O2) is easier and analytically achievable.

**Theorem 3.1**.: _The homogenized curvature energy for the flat Cosserat-shell model is given by_ \[W^{\mathrm{hom,plate}}_{\mathrm{curv}}(\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}})=\mu L_{\mathrm{c}}^{2}\Big{(}b_{1}\|\mathrm{sym}[\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}}]^{\parallel}\|^{2}+b_{2}\|\,\mathrm{skew}[\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}}]^{\parallel}\|^{2}+\frac{b_{1}b_{3}}{(b_{1}+b_{3})}[\mathrm{tr}([\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}}]^{\parallel})]^{2}+\frac{2\,b_{1}b_{2}}{b_{1}+b_{2}}\|[\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}}]^{\perp}\|^{2}\Big{)},\] _with the orthogonal decomposition in the tangential plane and in the normal direction_ \[\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}}=[\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}}]^{\parallel}+[\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}}]^{\perp},\qquad[\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}}]^{\parallel}:=(\mathbb{1}_{3}-e_{3}\otimes e_{3})\,\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}},\qquad[\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}}]^{\perp}:=e_{3}\otimes e_{3}\,\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}}\,. \tag{3.16}\] Proof.: Let us define \(\Gamma_{0}=(\Gamma_{1}^{0}\,|\,\Gamma_{2}^{0}\,|\,\Gamma_{3}^{0}):=\Big{(}\mathrm{axl}(\overline{R}_{0}^{T}\,\partial_{\eta_{1}}\overline{R}_{0})\,|\,\mathrm{axl}(\overline{R}_{0}^{T}\,\partial_{\eta_{2}}\overline{R}_{0})\,|\,\,\mathrm{axl}\,(A)\,\Big{)}\). Then the homogenized curvature energy turns out to be \[W^{\mathrm{hom}}_{\mathrm{curv}}((\Gamma_{1}^{0}\,|\,\Gamma_{2}^{0}))=\widetilde{W}_{\mathrm{curv}}(\Gamma_{1}^{0}\,|\,\Gamma_{2}^{0}\,|\,c^{*})=\inf_{c\in\mathbb{R}^{3}}\widetilde{W}_{\mathrm{curv}}((\Gamma_{1}^{0}\,|\,\Gamma_{2}^{0}\,|\,c))\,.
\tag{3.17}\] Using the relation (3.1), we compute the symmetric part, the skew-symmetric part and the trace: \[\mathrm{sym}\,\Gamma^{0}=\begin{pmatrix}\Gamma^{0}_{11}&\frac{\Gamma^{0}_{12}+\Gamma^{0}_{21}}{2}&\frac{c_{1}+\Gamma^{0}_{31}}{2}\\ \frac{\Gamma^{0}_{21}+\Gamma^{0}_{12}}{2}&\Gamma^{0}_{22}&\frac{c_{2}+\Gamma^{0}_{32}}{2}\\ \frac{\Gamma^{0}_{31}+c_{1}}{2}&\frac{\Gamma^{0}_{32}+c_{2}}{2}&c_{3}\end{pmatrix}\,,\qquad\mathrm{skew}\,\Gamma^{0}=\begin{pmatrix}0&\frac{\Gamma^{0}_{12}-\Gamma^{0}_{21}}{2}&\frac{c_{1}-\Gamma^{0}_{31}}{2}\\ \frac{\Gamma^{0}_{21}-\Gamma^{0}_{12}}{2}&0&\frac{c_{2}-\Gamma^{0}_{32}}{2}\\ \frac{\Gamma^{0}_{31}-c_{1}}{2}&\frac{\Gamma^{0}_{32}-c_{2}}{2}&0\end{pmatrix}\,, \tag{3.19}\] and \(\mathrm{tr}(\Gamma_{0})=\Gamma^{0}_{11}+\Gamma^{0}_{22}+c_{3}\,.\) We have \[W_{\mathrm{curv}}(\Gamma_{0}) =\mu L^{2}_{c}\Big{(}b_{1}\big{(}(\Gamma^{0}_{11})^{2}+\frac{1}{2}(\Gamma^{0}_{12}+\Gamma^{0}_{21})^{2}+\frac{1}{2}(c_{1}+\Gamma^{0}_{31})^{2}+(\Gamma^{0}_{22})^{2}+\frac{1}{2}(c_{2}+\Gamma^{0}_{32})^{2}+c_{3}^{2}\big{)} \tag{3.20}\] \[\qquad\qquad+b_{2}\big{(}\frac{1}{2}(\Gamma^{0}_{12}-\Gamma^{0}_{21})^{2}+\frac{1}{2}(c_{1}-\Gamma^{0}_{31})^{2}+\frac{1}{2}(c_{2}-\Gamma^{0}_{32})^{2}\big{)}+b_{3}(\Gamma^{0}_{11}+\Gamma^{0}_{22}+c_{3})^{2}\Big{)}\,.\] But this is an easy optimization problem in \(\mathbb{R}^{3}\). Indeed, the stationary points are given by \[0 =\frac{\partial W_{\mathrm{curv}}(\Gamma_{0})}{\partial c_{1}}=b_{1}(c_{1}+\Gamma^{0}_{31})+b_{2}(c_{1}-\Gamma^{0}_{31})=(b_{1}+b_{2})c_{1}+(b_{1}-b_{2})\Gamma^{0}_{31}\quad\Rightarrow\quad c_{1}=\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\Gamma^{0}_{31}\,,\] \[0 =\frac{\partial W_{\mathrm{curv}}(\Gamma_{0})}{\partial c_{2}}=b_{1}(c_{2}+\Gamma^{0}_{32})+b_{2}(c_{2}-\Gamma^{0}_{32})=(b_{1}+b_{2})c_{2}+(b_{1}-b_{2})\Gamma^{0}_{32}\quad\Rightarrow\quad c_{2}=\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\Gamma^{0}_{32}\,, \tag{3.21}\] \[0 =\frac{\partial W_{\mathrm{curv}}(\Gamma_{0})}{\partial c_{3}}=b_{1}c_{3}+b_{3}(\Gamma^{0}_{11}+\Gamma^{0}_{22}+c_{3})\quad\Rightarrow\quad c_{3}=\frac{-b_{3}}{b_{1}+b_{3}}(\Gamma^{0}_{11}+\Gamma^{0}_{22})\,,\] and, since the matrix defining the quadratic function in \(c_{1},c_{2},c_{3}\) is positive definite, this stationary point is the minimizer, too.
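Before inserting the minimizers back, we note that (3.21), and the closed form obtained below, can be reproduced symbolically. The following SymPy sketch is an illustration added here, not part of the original proof; the overall factor \(\mu L_{\mathrm{c}}^{2}\) is dropped:

```python
import sympy as sp

b1, b2, b3 = sp.symbols('b1 b2 b3', positive=True)
g = {(i, j): sp.Symbol(f'G{i+1}{j+1}') for i in range(3) for j in range(2)}
c = sp.Matrix(sp.symbols('c1 c2 c3'))
G = sp.Matrix(3, 3, lambda i, j: c[i] if j == 2 else g[(i, j)])

frob2 = lambda M: (M.T * M).trace()              # squared Frobenius norm
W = sp.expand(b1 * frob2((G + G.T) / 2) + b2 * frob2((G - G.T) / 2)
              + b3 * G.trace() ** 2)             # energy (3.1) without mu*L_c^2

sol = sp.solve([sp.diff(W, ci) for ci in c], list(c), dict=True)[0]
# stationary point (3.21)
assert sp.simplify(sol[c[0]] - (b2 - b1) / (b1 + b2) * g[(2, 0)]) == 0
assert sp.simplify(sol[c[1]] - (b2 - b1) / (b1 + b2) * g[(2, 1)]) == 0
assert sp.simplify(sol[c[2]] + b3 / (b1 + b3) * (g[(0, 0)] + g[(1, 1)])) == 0

# substituting back yields the homogenized energy of Theorem 3.1
Gsq = sp.Matrix(2, 2, lambda i, j: g[(i, j)])    # in-plane block Gamma_square
closed = (b1 * frob2((Gsq + Gsq.T) / 2) + b2 * frob2((Gsq - Gsq.T) / 2)
          + b1 * b3 / (b1 + b3) * Gsq.trace() ** 2
          + 2 * b1 * b2 / (b1 + b2) * (g[(2, 0)] ** 2 + g[(2, 1)] ** 2))
assert sp.simplify(W.subs(sol) - closed) == 0
```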
By inserting the minimizers into \(W_{\mathrm{curv}}\) we find \(W_{\mathrm{curv}}^{\mathrm{hom,\;plate}}\) given by \[W_{\mathrm{curv}}^{\mathrm{hom,\;plate}}(\Gamma) =\mu L^{2}_{c}\Big{(}b_{1}\big{(}(\Gamma^{0}_{11})^{2}+(\Gamma^{0}_{22})^{2}+\big{(}\tfrac{-b_{3}}{b_{1}+b_{3}}(\Gamma^{0}_{11}+\Gamma^{0}_{22})\big{)}^{2}+\frac{1}{2}(\Gamma^{0}_{21}+\Gamma^{0}_{12})^{2}+\frac{1}{2}\big{(}\tfrac{b_{2}-b_{1}}{b_{1}+b_{2}}\Gamma^{0}_{31}+\Gamma^{0}_{31}\big{)}^{2}\] \[\qquad\qquad+\frac{1}{2}\big{(}\tfrac{b_{2}-b_{1}}{b_{1}+b_{2}}\Gamma^{0}_{32}+\Gamma^{0}_{32}\big{)}^{2}\big{)}+b_{2}\big{(}\frac{1}{2}(\Gamma^{0}_{12}-\Gamma^{0}_{21})^{2}+\frac{1}{2}\big{(}\tfrac{b_{2}-b_{1}}{b_{1}+b_{2}}\Gamma^{0}_{31}-\Gamma^{0}_{31}\big{)}^{2}\] \[\qquad\qquad+\frac{1}{2}\big{(}\tfrac{b_{2}-b_{1}}{b_{1}+b_{2}}\Gamma^{0}_{32}-\Gamma^{0}_{32}\big{)}^{2}\big{)}+b_{3}\big{(}(\Gamma^{0}_{11}+\Gamma^{0}_{22})-\tfrac{b_{3}}{b_{1}+b_{3}}(\Gamma^{0}_{11}+\Gamma^{0}_{22})\big{)}^{2}\Big{)}\] \[=\mu L^{2}_{c}\Big{(}b_{1}\big{(}(\Gamma^{0}_{11})^{2}+(\Gamma^{0}_{22})^{2}+\frac{b_{3}^{2}}{(b_{1}+b_{3})^{2}}(\Gamma^{0}_{11}+\Gamma^{0}_{22})^{2}+\frac{1}{2}(\Gamma^{0}_{21}+\Gamma^{0}_{12})^{2}+2\frac{b_{2}^{2}}{(b_{1}+b_{2})^{2}}(\Gamma^{0}_{31})^{2}\] \[\qquad\qquad+2\frac{b_{2}^{2}}{(b_{1}+b_{2})^{2}}(\Gamma^{0}_{32})^{2}\big{)}+b_{2}\big{(}\frac{1}{2}(\Gamma^{0}_{12}-\Gamma^{0}_{21})^{2}+2\frac{b_{1}^{2}}{(b_{1}+b_{2})^{2}}(\Gamma^{0}_{31})^{2}+2\frac{b_{1}^{2}}{(b_{1}+b_{2})^{2}}(\Gamma^{0}_{32})^{2}\big{)}\] \[\qquad\qquad+b_{3}\frac{b_{1}^{2}}{(b_{1}+b_{3})^{2}}(\Gamma^{0}_{11}+\Gamma^{0}_{22})^{2}\Big{)} \tag{3.22}\] \[=\mu L^{2}_{c}\Big{(}b_{1}\big{(}(\Gamma^{0}_{11})^{2}+(\Gamma^{0}_{22})^{2}\big{)}+\frac{b_{1}b_{3}}{(b_{1}+b_{3})}(\Gamma^{0}_{11}+\Gamma^{0}_{22})^{2}+\frac{b_{1}}{2}(\Gamma^{0}_{21}+\Gamma^{0}_{12})^{2}\] \[\qquad\qquad+2\frac{b_{1}b_{2}}{(b_{1}+b_{2})}(\Gamma^{0}_{31})^{2}+2\frac{b_{1}b_{2}}{(b_{1}+b_{2})}(\Gamma^{0}_{32})^{2}+\frac{b_{2}}{2}(\Gamma^{0}_{21}-\Gamma^{0}_{12})^{2}\Big{)}\] \[=\mu L^{2}_{c}\Big{(}b_{1}\|\mathrm{sym}\,\Gamma_{\square}\|^{2}+b_{2}\|\,\mathrm{skew}\,\Gamma_{\square}\|^{2}+\frac{b_{1}b_{3}}{(b_{1}+b_{3})}[\mathrm{tr}(\Gamma_{\square})]^{2}+\frac{2b_{1}b_{2}}{(b_{1}+b_{2})}\big{(}(\Gamma^{0}_{31})^{2}+(\Gamma^{0}_{32})^{2}\big{)}\Big{)}\,,\] where \(\Gamma_{\square}=\begin{pmatrix}\Gamma^{0}_{11}&\Gamma^{0}_{12}\\ \Gamma^{0}_{21}&\Gamma^{0}_{22}\end{pmatrix}\). Therefore, the homogenized curvature energy for the flat Cosserat-shell model is \[W_{\mathrm{curv}}^{\mathrm{hom,\;plate}}(\Gamma)=\mu L^{2}_{c}\Big{(}b_{1}\|\mathrm{sym}\,\Gamma_{\square}\|^{2}+b_{2}\|\,\mathrm{skew}\,\Gamma_{\square}\|^{2}+\frac{b_{1}b_{3}}{(b_{1}+b_{3})}[\mathrm{tr}(\Gamma_{\square})]^{2}+\frac{2b_{1}b_{2}}{(b_{1}+b_{2})}\big{(}(\Gamma^{0}_{31})^{2}+(\Gamma^{0}_{32})^{2}\big{)}\Big{)}\,. \tag{3.23}\] Since we now have the explicit form of both homogenized energies (membrane and curvature), we are ready to indicate the exact form of the \(\Gamma\)-limit of the sequence of functionals \(\mathcal{I}_{h_{j}}\colon X\to\overline{\mathbb{R}}\) and to provide the following theorem, see [41].

**Theorem 3.2**.: _Assume the boundary data satisfy the conditions_ \[\varphi_{d}^{\natural}=\varphi_{d}\big{|}_{\partial\Omega_{1}}\ \text{(in the sense of traces) for }\ \varphi_{d}\in\mathrm{H}^{1}(\Omega_{1};\mathbb{R}^{3}),\qquad\Gamma_{1}\subset\partial\omega, \tag{3.24}\] _and let the constitutive parameters satisfy_ \[\mu\,>0,\qquad\quad\kappa>0,\qquad\quad\mu_{\mathrm{c}}>0,\qquad\quad a_{1}>0,\qquad a_{2}>0,\qquad\quad a_{3}>0\,.
\tag{3.25}\] _Then, for any sequence \((\varphi_{h_{j}}^{\natural},\overline{R}_{h_{j}}^{\natural})\in X\) such that \((\varphi_{h_{j}}^{\natural},\overline{R}_{h_{j}}^{\natural})\to(\varphi_{0},\overline{R}_{0})\) as \(h_{j}\to 0\), the sequence of functionals \(\mathcal{I}_{h_{j}}\colon X\to\overline{\mathbb{R}}\) from (3.8) \(\Gamma\)-converges to the limit energy functional \(\mathcal{I}_{0}\colon X\to\overline{\mathbb{R}}\) defined by_ \[\mathcal{I}_{0}(m,\overline{R}_{0})=\begin{cases}\int_{\omega}[W_{\mathrm{mp}}^{\mathrm{hom,plate}}(\mathcal{E}_{m,\overline{R}_{0}}^{\mathrm{plate}})+\widetilde{W}_{\mathrm{curv}}^{\mathrm{hom,plate}}(\mathcal{K}_{\overline{R}_{0}}^{\mathrm{plate}})]\;d\omega&\text{if}\quad(m,\overline{R}_{0})\in\mathcal{S}_{\omega}^{\prime}\,,\\ +\infty&\text{else in }X,\end{cases} \tag{3.26}\] _where_ \[\begin{split}m(x_{1},x_{2})&:=\varphi_{0}(x_{1},x_{2})=\lim_{h_{j}\to 0}\varphi_{h_{j}}^{\natural}(x_{1},x_{2},\tfrac{1}{h_{j}}x_{3}),\qquad\overline{R}_{0}(x_{1},x_{2})=\lim_{h_{j}\to 0}\overline{R}_{h_{j}}^{\natural}(x_{1},x_{2},\tfrac{1}{h_{j}}x_{3}),\\ \mathcal{E}_{m,\overline{R}_{0}}^{\mathrm{plate}}&=\overline{R}_{0}^{T}(\mathrm{D}m|0)-\mathbb{1}_{2}^{\flat}\,,\qquad\mathcal{K}_{\overline{R}_{0}}^{\mathrm{plate}}=\Big(\mathrm{axl}(\overline{R}_{0}^{T}\,\partial_{x_{1}}\overline{R}_{0})\,|\,\mathrm{axl}(\overline{R}_{0}^{T}\,\partial_{x_{2}}\overline{R}_{0})\,|\,0\Big)\not\in\mathrm{Sym}(3)\,,\end{split}\] _and_ \[\begin{split}W_{\mathrm{mp}}^{\mathrm{hom,plate}}(\mathcal{E}_{m,\overline{R}_{0}}^{\mathrm{plate}})&=\mu\,\|\mathrm{sym}\,[\mathcal{E}_{m,\overline{R}_{0}}^{\mathrm{plate}}]^{\parallel}\|^{2}+\mu_{\mathrm{c}}\,\|\mathrm{skew}\,[\mathcal{E}_{m,\overline{R}_{0}}^{\mathrm{plate}}]^{\parallel}\|^{2}+\frac{\lambda\,\mu}{\lambda+2\,\mu}\,\big[\mathrm{tr}([\mathcal{E}_{m,\overline{R}_{0}}^{\mathrm{plate}}]^{\parallel})\big]^{2}+\frac{2\,\mu\,\mu_{\mathrm{c}}}{\mu_{\mathrm{c}}+\mu}\|[\mathcal{E}_{m,\overline{R}_{0}}^{\mathrm{plate}}]^{T}n_{0}\|^{2}\\&=W_{\mathrm{shell}}\big([\mathcal{E}_{m,\overline{R}_{0}}^{\mathrm{plate}}]^{\parallel}\big)+\frac{2\,\mu\,\mu_{\mathrm{c}}}{\mu_{\mathrm{c}}+\mu}\|[\mathcal{E}_{m,\overline{R}_{0}}^{\mathrm{plate}}]^{\perp}\|^{2},\end{split}\tag{3.27}\] \[\begin{split}\widetilde{W}_{\mathrm{curv}}^{\mathrm{hom,plate}}(\mathcal{K}_{\overline{R}_{0}}^{\mathrm{plate}})&=\inf_{A\in\mathfrak{so}(3)}\widetilde{W}_{\mathrm{curv}}\Big(\big(\mathrm{axl}(\overline{R}_{0}^{T}\,\partial_{\eta_{1}}\overline{R}_{0})\,|\,\mathrm{axl}(\overline{R}_{0}^{T}\,\partial_{\eta_{2}}\overline{R}_{0})\,|\,\mathrm{axl}(A)\big)[(\mathrm{D}_{x}\Theta)^{\natural}(0)]^{-1}\Big)\\&=\mu L_{c}^{2}\Big(b_{1}\|\mathrm{sym}\,[\mathcal{K}_{\overline{R}_{0}}^{\mathrm{plate}}]^{\parallel}\|^{2}+b_{2}\|\mathrm{skew}\,[\mathcal{K}_{\overline{R}_{0}}^{\mathrm{plate}}]^{\parallel}\|^{2}+\frac{b_{1}b_{3}}{b_{1}+b_{3}}\mathrm{tr}([\mathcal{K}_{\overline{R}_{0}}^{\mathrm{plate}}]^{\parallel})^{2}+\frac{2\,b_{1}b_{2}}{b_{1}+b_{2}}\|[\mathcal{K}_{\overline{R}_{0}}^{\mathrm{plate}}]^{\perp}\|^{2}\Big)\,.\end{split}\] ## 4 Homogenized curvature energy for the curved Cosserat-shell model via \(\Gamma\)-convergence In this section we consider the case of a curved Cosserat-shell model and give the explicit form and the detailed calculation of the homogenized curvature energy. In comparison to the flat Cosserat-shell model, the calculations are more complicated.
Hence, let us consider an elastic material which in its reference configuration fills the three-dimensional _shell-like thin_ domain \(\Omega_{\xi}\subset\mathbb{R}^{3}\), i.e., we assume that there exists a \(C^{1}\)-diffeomorphism \(\Theta\colon\mathbb{R}^{3}\to\mathbb{R}^{3}\) with \(\Theta(x_{1},x_{2},x_{3}):=(\xi_{1},\xi_{2},\xi_{3})\) such that \(\Theta(\Omega_{h})=\Omega_{\xi}\) and \(\omega_{\xi}=\Theta(\omega\times\{0\})\), where \(\Omega_{h}=\omega\times\big[-\frac{h}{2},\frac{h}{2}\big]\subset\mathbb{R}^{3}\), with \(\omega\subset\mathbb{R}^{2}\) a bounded domain with Lipschitz boundary \(\partial\omega\). The scalar \(0<h\ll 1\) is called the _thickness_ of the shell, while the domain \(\Omega_{h}\) is called the _fictitious Cartesian configuration_ of the body. In fact, in this paper, we consider the following diffeomorphism \(\Theta\colon\mathbb{R}^{3}\to\mathbb{R}^{3}\), which describes the curved surface of the shell, \[\Theta(x_{1},x_{2},x_{3})=y_{0}(x_{1},x_{2})+x_{3}\,n_{0}(x_{1},x_{2})\,, \tag{4.1}\] where \(y_{0}\colon\omega\to\mathbb{R}^{3}\) is a \(C^{2}(\omega)\)-function and \(n_{0}=\frac{\partial_{x_{1}}y_{0}\times\partial_{x_{2}}y_{0}}{\|\partial_{x_{1}}y_{0}\times\partial_{x_{2}}y_{0}\|}\) is the unit normal vector on \(\omega_{\xi}\). Note that \[\mathrm{D}_{x}\Theta(x_{3})\,=\,(\mathrm{D}y_{0}|n_{0})+x_{3}(\mathrm{D}n_{0}|0)\ \ \forall\,x_{3}\in\left(-\frac{h}{2},\frac{h}{2}\right),\quad\mathrm{D}_{x}\Theta(0)\,=\,(\mathrm{D}y_{0}|\,n_{0}),\quad[\mathrm{D}_{x}\Theta(0)]^{-T}\,e_{3}\,=n_{0}, \tag{4.2}\] and \(\det\mathrm{D}_{x}\Theta(0)=\det(\mathrm{D}y_{0}|n_{0})=\sqrt{\det[(\mathrm{D}y_{0})^{T}\mathrm{D}y_{0}]}\) represents the surface element. We also have the polar decomposition \(\mathrm{D}_{x}\Theta(0)=Q_{0}\,U_{0}\), where \[Q_{0}=\mathrm{polar}(\mathrm{D}_{x}\Theta(0))=\mathrm{polar}([\mathrm{D}_{x}\Theta(0)]^{-T})\in\mathrm{SO}(3)\quad\text{and}\quad U_{0}\in\mathrm{Sym}^{+}(3)\,. \tag{4.3}\] The first step in our shell model is to transform the problem to a variational problem defined on the fictitious flat configuration \(\Omega_{h}=\omega\times\big[-\frac{h}{2},\frac{h}{2}\big]\). The next step, in order to apply the \(\Gamma\)-convergence technique, is to transform the minimization problem onto the _fixed domain_ \(\Omega_{1}\), which is independent of the thickness \(h\).
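As a concrete illustration of the diffeomorphism (4.1) (our example, not taken from [41]), consider the unit cylinder parametrized by \(y_{0}(x_{1},x_{2})=(\cos x_{1},\sin x_{1},x_{2})^{T}\). Then
\[n_{0}=\frac{\partial_{x_{1}}y_{0}\times\partial_{x_{2}}y_{0}}{\|\partial_{x_{1}}y_{0}\times\partial_{x_{2}}y_{0}\|}=\begin{pmatrix}\cos x_{1}\\ \sin x_{1}\\ 0\end{pmatrix},\qquad\Theta(x_{1},x_{2},x_{3})=\begin{pmatrix}(1+x_{3})\cos x_{1}\\ (1+x_{3})\sin x_{1}\\ x_{2}\end{pmatrix},\]
\[\mathrm{D}_{x}\Theta(0)=(\mathrm{D}y_{0}|n_{0})=\begin{pmatrix}-\sin x_{1}&0&\cos x_{1}\\ \cos x_{1}&0&\sin x_{1}\\ 0&1&0\end{pmatrix},\qquad\det\mathrm{D}_{x}\Theta(0)=\sqrt{\det[(\mathrm{D}y_{0})^{T}\mathrm{D}y_{0}]}=1\,,\]
so the surface element equals one, as expected for an isometric parametrization.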
These two steps were done in [41], the three-dimensional problem (2.1) (corresponding to the Cosserat-curvature tensor \(\alpha\)) being equivalent to the following minimization problem on \(\Omega_{1}\): \[\begin{split}I_{h}^{\natural}(\varphi^{\natural},\mathrm{D}_{\eta}^{h}\varphi^{\natural},\overline{Q}_{e,h}^{\natural},\Gamma_{e,h}^{\natural})&=\int_{\Omega_{1}}\Big(W_{\mathrm{mp}}(U_{e,h}^{\natural})+\widetilde{W}_{\mathrm{curv}}(\Gamma_{e,h}^{\natural})\Big)\det(\mathrm{D}_{\eta}\zeta(\eta))\det((\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3}))\;dV_{\eta}\\&=\int_{\Omega_{1}}\;h\;\Big[\Big(W_{\mathrm{mp}}(U_{e,h}^{\natural})+\widetilde{W}_{\mathrm{curv}}(\Gamma_{e,h}^{\natural})\Big)\det((\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3}))\Big]\;dV_{\eta}\mapsto\min\;\mathrm{w.r.t.}\;(\varphi^{\natural},\overline{Q}_{e,h}^{\natural})\,,\end{split} \tag{4.4}\] where \[\begin{split}W_{\mathrm{mp}}(U_{e,h}^{\natural})&=\;\mu\,\|\mathrm{sym}(U_{e,h}^{\natural}-\mathbb{1}_{3})\|^{2}+\mu_{c}\,\|\mathrm{skew}(U_{e,h}^{\natural}-\mathbb{1}_{3})\|^{2}+\frac{\lambda}{2}[\mathrm{tr}(\mathrm{sym}(U_{e,h}^{\natural}-\mathbb{1}_{3}))]^{2}\,,\\ \widetilde{W}_{\mathrm{curv}}(\Gamma_{e,h}^{\natural})&=\;\mu\,L_{c}^{2}\,\Big(b_{1}\,\|\mathrm{sym}\,\Gamma_{e,h}^{\natural}\|^{2}+b_{2}\,\|\mathrm{skew}\,\Gamma_{e,h}^{\natural}\|^{2}+b_{3}\,[\mathrm{tr}(\Gamma_{e,h}^{\natural})]^{2}\Big)\;,\\ U_{e,h}^{\natural}&=\overline{Q}_{e,h}^{\natural,T}F_{h}^{\natural}[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}=\overline{Q}_{e,h}^{\natural,T}\mathrm{D}_{\eta}^{h}\varphi^{\natural}(\eta)[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\,,\\ \Gamma_{e,h}^{\natural}&=\Big(\mathrm{axl}(\overline{Q}_{e,h}^{\natural,T}\partial_{\eta_{1}}\overline{Q}_{e,h}^{\natural})\,|\,\mathrm{axl}(\overline{Q}_{e,h}^{\natural,T}\partial_{\eta_{2}}\overline{Q}_{e,h}^{\natural})\,|\,\frac{1}{h}\mathrm{axl}(\overline{Q}_{e,h}^{\natural,T}\partial_{\eta_{3}}\overline{Q}_{e,h}^{\natural})\,\Big)[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1},\end{split}\tag{4.5}\] with \((\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})\) the nonlinear scaling (see (3.2)) of \(\mathrm{D}_{x}\Theta\), \(F_{h}^{\natural}=\mathrm{D}_{\eta}^{h}\varphi^{\natural}\) the nonlinear scaling of the gradient of the mapping \(\varphi\colon\Omega_{h}\to\Omega_{c}\,,\;\varphi(x_{1},x_{2},x_{3})=\varphi_{\xi}(\Theta(x_{1},x_{2},x_{3}))\), and \(\overline{Q}_{e,h}^{\natural}\) the nonlinear scaling of the _elastic microrotation_ \(\overline{Q}_{e}\colon\Omega_{h}\to\mathrm{SO}(3)\) defined by \(\overline{Q}_{e}(x_{1},x_{2},x_{3}):=\overline{R}_{\xi}(\Theta(x_{1},x_{2},x_{3}))\,.\) Since for \(\eta_{3}=0\) the values of \(\mathrm{D}_{x}\Theta\), \(Q_{0}\), \(U_{0}\) expressed in terms of \((\eta_{1},\eta_{2},0)\) and \((x_{1},x_{2},0)\) coincide, we will omit the sign \(\natural\) and we will understand from the context the variables under discussion, i.e., \[\begin{split}(\mathrm{D}_{x}\Theta)(0)&:=(\mathrm{D}y_{0}\,|n_{0})=(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{1},\eta_{2},0)\equiv(\mathrm{D}_{x}\Theta)(x_{1},x_{2},0),\\ Q_{0}(0)&:=Q_{0}^{\natural}(\eta_{1},\eta_{2},0)\equiv Q_{0}(x_{1},x_{2},0),\qquad\qquad U_{0}(0):=U_{0}^{\natural}(\eta_{1},\eta_{2},0)\equiv U_{0}(x_{1},x_{2},0).\end{split}\tag{4.6}\] In order to construct the \(\Gamma\)-limit of the
rescaled energies \[\mathcal{I}_{h}^{\natural}(\varphi^{\natural},\mathrm{D}_{\eta}^{h}\varphi^{\natural},\overline{Q}_{e,h}^{\natural},\Gamma_{e,h}^{\natural})=\begin{cases}\frac{1}{h}\,I_{h}^{\natural}(\varphi^{\natural},\mathrm{D}_{\eta}^{h}\varphi^{\natural},\overline{Q}_{e,h}^{\natural},\Gamma_{e,h}^{\natural})&\text{if }\;(\varphi^{\natural},\overline{Q}_{e,h}^{\natural})\in\mathcal{S}^{\prime},\\ +\infty&\text{else in }X,\end{cases} \tag{4.7}\] for the curved Cosserat-shell model we have to solve the following **four (not only two, as for flat Cosserat-shell models)** auxiliary optimization problems. O1: For each \(\varphi^{\natural}:\Omega_{1}\to\mathbb{R}^{3}\) and \(\overline{Q}_{e,h}^{\natural}:\Omega_{1}\to\mathrm{SO}(3)\) we determine a vector \(d^{*}\in\mathbb{R}^{3}\) through \[W_{\mathrm{mp}}^{\mathrm{hom},\natural}(\mathcal{E}_{\varphi^{\natural},\overline{Q}_{e,h}^{\natural}}):=W_{\mathrm{mp}}\Big(\overline{Q}_{e,h}^{\natural,T}(\mathrm{D}_{(\eta_{1},\eta_{2})}\varphi^{\natural}|d^{*})[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\Big)=\inf_{c\in\mathbb{R}^{3}}W_{\mathrm{mp}}\Big(\overline{Q}_{e,h}^{\natural,T}(\mathrm{D}_{(\eta_{1},\eta_{2})}\varphi^{\natural}|c)[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\Big), \tag{4.8}\] where \(\mathcal{E}_{\varphi^{\natural},\overline{Q}_{e,h}^{\natural}}:=(\overline{Q}_{e,h}^{\natural,T}\mathrm{D}_{(\eta_{1},\eta_{2})}\varphi^{\natural}-(\mathrm{D}y_{0})^{\natural}|0)[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\) represents the non-fully reduced elastic shell strain tensor (here, "non-fully" means that the introduced quantities still depend on \(\eta_{3}\) and \(h\), because \(\mathrm{D}_{(\eta_{1},\eta_{2})}\varphi^{\natural}\) still depends on \(\eta_{3}\) and \(\overline{Q}_{e,h}^{\natural,T}\) depends on \(h\)). O2: For each pair \((m,\overline{Q}_{e,0})\), where \(m:\omega\to\mathbb{R}^{3}\), \(\overline{Q}_{e,0}:\omega\to\mathrm{SO}(3)\), we determine the vector \(\vec{d}^{*}\in\mathbb{R}^{3}\) through \[W_{\mathrm{mp}}^{\mathrm{hom}}(\mathcal{E}_{m,\overline{Q}_{e,0}}):=W_{\mathrm{mp}}\Big(\overline{Q}_{e,0}^{T}(\mathrm{D}m|\vec{d}^{*})[(\mathrm{D}_{x}\Theta)(0)]^{-1}\Big)=\inf_{\widetilde{c}\in\mathbb{R}^{3}}W_{\mathrm{mp}}\Big(\overline{Q}_{e,0}^{T}(\mathrm{D}m|\widetilde{c})[(\mathrm{D}_{x}\Theta)(0)]^{-1}\Big)=\inf_{\widetilde{c}\in\mathbb{R}^{3}}W_{\mathrm{mp}}\Big(\mathcal{E}_{m,\overline{Q}_{e,0}}-(0|0|\widetilde{c})[(\mathrm{D}_{x}\Theta)(0)]^{-1}\Big),\tag{4.9}\] where \(\mathcal{E}_{m,\overline{Q}_{e,0}}:=(\overline{Q}_{e,0}^{T}\mathrm{D}m-\mathrm{D}y_{0}|0)[\mathrm{D}_{x}\Theta(0)]^{-1}\) represents the _elastic shell strain tensor_. O3: For each \(\overline{Q}_{e,h}^{\natural}:\Omega_{1}\to\mathrm{SO}(3)\) we determine the skew-symmetric matrix \(A^{*}\in\mathfrak{so}(3)\), i.e.
its axial vector \(\mathrm{axl}\,A^{*}\in\mathbb{R}^{3}\), through \[\begin{split}\widetilde{W}_{\mathrm{curv}}^{\mathrm{hom},\natural}(\mathcal{K}_{\overline{Q}_{e,h}^{\natural}})&:=\widetilde{W}_{\mathrm{curv}}\Big(\big(\mathrm{axl}(\overline{Q}_{e,h}^{\natural,T}\partial_{\eta_{1}}\overline{Q}_{e,h}^{\natural})\,|\,\mathrm{axl}(\overline{Q}_{e,h}^{\natural,T}\partial_{\eta_{2}}\overline{Q}_{e,h}^{\natural})\,|\,\mathrm{axl}\,(A^{*})\,\big)[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\Big)\\&=\inf_{A\in\mathfrak{so}(3)}\widetilde{W}_{\mathrm{curv}}\Big(\big(\mathrm{axl}(\overline{Q}_{e,h}^{\natural,T}\partial_{\eta_{1}}\overline{Q}_{e,h}^{\natural})\,|\,\mathrm{axl}(\overline{Q}_{e,h}^{\natural,T}\partial_{\eta_{2}}\overline{Q}_{e,h}^{\natural})\,|\,\mathrm{axl}\,(A)\,\big)[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\Big),\end{split}\tag{4.10}\] where \(\mathcal{K}_{\overline{Q}_{e,h}^{\natural}}:=\,\Big(\mathrm{axl}(\overline{Q}_{e,h}^{\natural,T}\partial_{\eta_{1}}\overline{Q}_{e,h}^{\natural})\,|\,\mathrm{axl}(\overline{Q}_{e,h}^{\natural,T}\partial_{\eta_{2}}\overline{Q}_{e,h}^{\natural})\,|0\Big)[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\) represents a non-fully reduced elastic shell bending-curvature tensor, in the sense that it still depends on \(\eta_{3}\) and \(h\), since \(\overline{Q}_{e,h}^{\natural}=\overline{Q}_{e,h}^{\natural}(\eta_{1},\eta_{2},\eta_{3})\). Therefore, \(\widetilde{W}_{\mathrm{curv}}^{\mathrm{hom},\natural}(\mathcal{K}_{\overline{Q}_{e,h}^{\natural}})\) given by the above definition still depends on \(\eta_{3}\) and \(h\). O4: For each \(\overline{Q}_{e,0}:\omega\to\mathrm{SO}(3)\) we determine the skew-symmetric matrix \(A^{*}\in\mathfrak{so}(3)\), i.e. its axial vector \(\mathrm{axl}\,A^{*}\in\mathbb{R}^{3}\), through \[\begin{split}\widetilde{W}_{\mathrm{curv}}^{\mathrm{hom}}(\mathcal{K}_{\overline{Q}_{e,0}})&:=\widetilde{W}_{\mathrm{curv}}\Big(\big(\mathrm{axl}(\overline{Q}_{e,0}^{T}\,\partial_{\eta_{1}}\overline{Q}_{e,0})\,|\,\mathrm{axl}(\overline{Q}_{e,0}^{T}\,\partial_{\eta_{2}}\overline{Q}_{e,0})\,|\,\mathrm{axl}\,(A^{*})\,\big)[(\mathrm{D}_{x}\Theta)^{\natural}(0)]^{-1}\Big)\\&=\inf_{A\in\mathfrak{so}(3)}\widetilde{W}_{\mathrm{curv}}\Big(\big(\mathrm{axl}(\overline{Q}_{e,0}^{T}\,\partial_{\eta_{1}}\overline{Q}_{e,0})\,|\,\mathrm{axl}(\overline{Q}_{e,0}^{T}\,\partial_{\eta_{2}}\overline{Q}_{e,0})\,|\,\mathrm{axl}\,(A)\,\big)[(\mathrm{D}_{x}\Theta)^{\natural}(0)]^{-1}\Big)\,,\end{split}\tag{4.11}\] where \(\mathcal{K}_{\overline{Q}_{e,0}}:=\,\Big(\mathrm{axl}(\overline{Q}_{e,0}^{T}\,\partial_{x_{1}}\overline{Q}_{e,0})\,|\,\mathrm{axl}(\overline{Q}_{e,0}^{T}\,\partial_{x_{2}}\overline{Q}_{e,0})\,|0\Big)[\mathrm{D}_{x}\Theta(0)\,]^{-1}\not\in\mathrm{Sym}(3)\) represents the _elastic shell bending-curvature tensor_. Let us remark that, having the solutions of the optimization problems O1 and O3, the solutions of the optimization problems O2 and O4, respectively, follow immediately. However, we cannot skip the solution of the optimization problems O1 and O3 and use only the solutions of O2 and O4, since the knowledge of \(W_{\mathrm{mp}}^{\mathrm{hom},\natural}\) and \(\widetilde{W}_{\mathrm{curv}}^{\mathrm{hom},\natural}\) is important in the proof of the \(\Gamma\)-convergence result. This is the first major difference between \(\Gamma\)-convergence for curved initial configurations and for flat initial configurations.
The solutions of the first two optimization problems and the complete calculations were given in [41], while the analytical calculations for the last two optimization problems were left open until now. For the completeness of the exposition we recall the following result. **Theorem 4.1**.: _[41]_ _The solution of the optimization problem O2 is_ \[\vec{d}^{*}=\Big(1-\frac{\lambda}{2\,\mu+\lambda}\langle\mathcal{E}_{m,\overline{Q}_{e,0}},\mathbb{1}_{3}\rangle\Big)\overline{Q}_{e,0}n_{0}+\frac{\mu_{c}-\mu}{\mu_{c}+\mu}\ \overline{Q}_{e,0}\mathcal{E}_{m,\overline{Q}_{e,0}}^{T}n_{0}\,, \tag{4.12}\] _and_ \[\begin{split}W_{\mathrm{mp}}^{\mathrm{hom}}(\mathcal{E}_{m,\overline{Q}_{e,0}})&=\,\mu\,\|\mathrm{sym}\ \mathcal{E}^{\parallel}_{m,\overline{Q}_{e,0}}\|^{2}+\mu_{c}\,\|\mathrm{skew}\ \mathcal{E}^{\parallel}_{m,\overline{Q}_{e,0}}\|^{2}+\,\frac{\lambda\,\mu}{\lambda+2\,\mu}\,\big[\mathrm{tr}(\mathcal{E}^{\parallel}_{m,\overline{Q}_{e,0}})\big]^{2}+\frac{2\,\mu\ \mu_{c}}{\mu_{c}+\mu}\|\mathcal{E}_{m,\overline{Q}_{e,0}}^{T}n_{0}\|^{2}\\&=W_{\mathrm{shell}}\big(\mathcal{E}^{\parallel}_{m,\overline{Q}_{e,0}}\big)+\frac{2\,\mu\ \mu_{c}}{\mu_{c}+\mu}\|\mathcal{E}^{\perp}_{m,\overline{Q}_{e,0}}\|^{2},\end{split} \tag{4.13}\] _where \(W_{\rm shell}\big(\mathcal{E}^{\parallel}_{m,\overline{Q}_{e,0}}\big)=\ \mu\,\|{\rm sym}\ \mathcal{E}^{\parallel}_{m,\overline{Q}_{e,0}}\|^{2}+\mu_{c}\,\|{\rm skew}\ \mathcal{E}^{\parallel}_{m,\overline{Q}_{e,0}}\|^{2}+\ \frac{\lambda\,\mu}{\lambda+2\,\mu}\,\big[{\rm tr}(\mathcal{E}^{\parallel}_{m,\overline{Q}_{e,0}})\big]^{2}\), with the orthogonal decomposition in the tangential plane and in the normal direction_ \[\mathcal{E}_{m,\overline{Q}_{e,0}}=\mathcal{E}^{\parallel}_{m,\overline{Q}_{e,0}}+\mathcal{E}^{\perp}_{m,\overline{Q}_{e,0}},\qquad\mathcal{E}^{\parallel}_{m,\overline{Q}_{e,0}}:={\rm A}_{y_{0}}\,\mathcal{E}_{m,\overline{Q}_{e,0}},\qquad\mathcal{E}^{\perp}_{m,\overline{Q}_{e,0}}:=(\mathbb{1}_{3}-{\rm A}_{y_{0}})\,\mathcal{E}_{m,\overline{Q}_{e,0}}, \tag{4.14}\] _and \({\rm A}_{y_{0}}:=({\rm D}y_{0}|0)\,[\mathrm{D}_{x}\Theta(0)\,]^{-1}\in\mathbb{R}^{3\times 3}\)._ In the remainder of this section we provide the explicit solutions of the optimization problems O3 and O4. We remark that, while in the case of a flat initial configuration the solution of O4 is very easy to find, in the case of a curved initial configuration the calculations are more difficult. Besides this, for curved initial configurations there is a need to solve the optimization problem O3, too. Notice that for flat initial configurations the optimization problems O3 and O4 coincide. ### The calculation of the homogenized curvature energy We have the following isotropic curvature energy formula for a curved configuration \[\widetilde{W}_{\rm curv}(\Gamma^{\natural}_{e,h})=\mu L_{c}^{2}\Big(b_{1}\|{\rm sym}\,\Gamma^{\natural}_{e,h}\|^{2}+b_{2}\,\|{\rm skew}\,\Gamma^{\natural}_{e,h}\|^{2}+b_{3}{\rm tr}(\Gamma^{\natural}_{e,h})^{2}\Big)\,.
\tag{4.15}\] **Theorem 4.2**.: _The solution of the optimization problem O3 given by (4.10) is_ \[c^{*}=\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\,\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}-\frac{b_{3}}{b_{1}+b_{3}}\,\mathrm{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})\,n_{0} \tag{4.16}\] _and the corresponding homogenized curvature energy is_ \[W^{\mathrm{hom}}_{\mathrm{curv}}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})=\mu L_{c}^{2}\Big(b_{1}\|\mathrm{sym}\,\mathcal{K}^{\parallel}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+b_{2}\|\mathrm{skew}\,\mathcal{K}^{\parallel}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+\frac{b_{1}b_{3}}{b_{1}+b_{3}}\mathrm{tr}(\mathcal{K}^{\parallel}_{\overline{Q}^{\natural}_{e,h}})^{2}+\frac{2b_{1}b_{2}}{b_{1}+b_{2}}\|\mathcal{K}^{\perp}_{\overline{Q}^{\natural}_{e,h}}\|^{2}\Big)\,,\] _where \(\mathcal{K}^{\parallel}_{\overline{Q}^{\natural}_{e,h}}\) and \(\mathcal{K}^{\perp}_{\overline{Q}^{\natural}_{e,h}}\) represent the orthogonal decomposition of the non-fully reduced elastic shell bending-curvature tensor \(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\) in the tangential plane and in the normal direction, respectively._ Proof.: We need to find \[\widetilde{W}^{\rm hom,\natural}_{\rm curv}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})=\widetilde{W}_{\rm curv}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}+(0|0|c^{*})[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1})=\inf_{c\in\mathbb{R}^{3}}\widetilde{W}_{\rm curv}(\underbrace{\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}+(0|0|c)[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}}_{=:\,\mathcal{K}^{c}_{\overline{Q}^{\natural}_{e,h}}})\,. \tag{4.17}\] The Euler-Lagrange equations follow from variations with respect to arbitrary increments \(\delta c\in\mathbb{R}^{3}\): \[\begin{split}\langle D\widetilde{W}_{\rm curv}(\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}}),(0|0|\delta c)[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\rangle=0&\Leftrightarrow\quad\langle[D\widetilde{W}_{\rm curv}(\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}})]\,[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-T},\,\delta c\otimes e_{3}\rangle=0\\&\Leftrightarrow\quad\langle[D\widetilde{W}_{\rm curv}(\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}})]\,[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-T}\,e_{3},\,\delta c\rangle=0\\&\Leftrightarrow\quad\langle[D\widetilde{W}_{\rm curv}(\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}})]\,n_{0},\,\delta c\rangle=0\quad\forall\,\delta c\in\mathbb{R}^{3}.\end{split}\tag{4.18}\] Therefore, if \(c^{*}\) is a minimizer then \[[D\widetilde{W}_{\rm curv}(\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}})]\,n_{0}=0\quad\Leftrightarrow\quad\mu L_{c}^{2}\Big(2b_{1}\,{\rm sym}(\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}})+2\,b_{2}\,{\rm skew}(\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}})+2b_{3}\,{\rm tr}(\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}})\,\mathbb{1}_{3}\Big)n_{0}=0\,.
\tag{4.19}\] Since \(\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}}=\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}+(0|0|c^{*})[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\,,\) we have \[\begin{split}2\,{\rm sym}\big(\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}}\big)n_{0}&=2\Big({\rm sym}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})+{\rm sym}\big((0|0|c^{*})[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\big)\Big)n_{0}\\&=\Big({\rm axl}(\overline{Q}^{\natural,T}_{e,h}\,\partial_{\eta_{1}}\overline{Q}^{\natural}_{e,h})\,|\,{\rm axl}(\overline{Q}^{\natural,T}_{e,h}\,\partial_{\eta_{2}}\overline{Q}^{\natural}_{e,h})\,|\,0\Big)\underbrace{[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\,n_{0}}_{=\,e_{3}}+\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\\&\qquad+(0|0|c^{*})[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}n_{0}+\big((0|0|c^{*})[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\big)^{T}n_{0}\\&=\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}+c^{*}+\big((0|0|c^{*})[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\big)^{T}n_{0}\,.\end{split}\tag{4.20}\] Similar calculations show that \[2\,{\rm skew}\big(\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}}\big)n_{0}=-\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}+c^{*}-\big((0|0|c^{*})[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\big)^{T}n_{0}\,,\tag{4.21}\] while the trace term is calculated to be \[2\,{\rm tr}(\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}})\,n_{0}=2\,{\rm tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})\,n_{0}+2\,\langle(0|0|c^{*})[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1},\mathbb{1}_{3}\rangle\,n_{0}=2\,{\rm tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})\,n_{0}+2\,\langle c^{*},\underbrace{[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-T}e_{3}}_{=\,n_{0}}\rangle\,n_{0}=2\,{\rm tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})\,n_{0}+2\,(n_{0}\otimes n_{0})\,c^{*}\,.\tag{4.22}\] By using (4.19), we obtain \[\begin{split}&b_{1}\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}+b_{1}c^{*}+b_{1}\big((0|0|c^{*})[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\big)^{T}n_{0}-b_{2}\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}+b_{2}c^{*}\\&\quad-b_{2}\big((0|0|c^{*})[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\big)^{T}n_{0}+2b_{3}{\rm tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})\,n_{0}+2b_{3}\,(n_{0}\otimes n_{0})\,c^{*}=0\,.\end{split}\tag{4.23}\] Gathering similar terms gives us \[(b_{1}-b_{2})\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}+(b_{1}+b_{2})c^{*}+(b_{1}-b_{2})\big((0|0|c^{*})[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\big)^{T}n_{0}+2b_{3}{\rm tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})\,n_{0}+2b_{3}\,(n_{0}\otimes n_{0})\,c^{*}=0\,.
\tag{4.24}\] We have \[\big((0|0|c^{*})[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\big)^{T}n_{0}=[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-T}(0|0|c^{*})^{T}n_{0}=\langle c^{*},n_{0}\rangle\,[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-T}e_{3}=\langle c^{*},n_{0}\rangle\,n_{0}=(n_{0}\otimes n_{0})\,c^{*}\,,\tag{4.25}\] and by using the decomposition [5; 6; 7] \(\mathbb{1}_{3}\,c^{*}=\mathrm{A}_{y_{0}}\,c^{*}+(n_{0}\otimes n_{0})\,c^{*}\,,\) we obtain \[(b_{1}-b_{2})\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}+(b_{1}+b_{2})\big(\mathrm{A}_{y_{0}}\,c^{*}+(n_{0}\otimes n_{0})\,c^{*}\big)+(b_{1}-b_{2})(n_{0}\otimes n_{0})\,c^{*}+2b_{3}{\rm tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})\,n_{0}+2b_{3}\,(n_{0}\otimes n_{0})\,c^{*}=0\,,\tag{4.26}\] and \[[(b_{1}+b_{2})\mathrm{A}_{y_{0}}+2(b_{1}+b_{3})\,n_{0}\otimes n_{0}]\,c^{*}=-(b_{1}-b_{2})\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}-2b_{3}{\rm tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})\,n_{0}\,.\tag{4.27}\] Since \(\mathrm{A}_{y_{0}}\) is orthogonal to \(n_{0}\otimes n_{0}\) and \(\mathrm{A}_{y_{0}}^{2}=\mathrm{A}_{y_{0}}\), \[\bigg[\frac{1}{b_{1}+b_{2}}\mathrm{A}_{y_{0}}+\frac{1}{2(b_{1}+b_{3})}\,n_{0}\otimes n_{0}\bigg]\,[(b_{1}+b_{2})\mathrm{A}_{y_{0}}+2(b_{1}+b_{3})\,n_{0}\otimes n_{0}]=\mathbb{1}_{3}\tag{4.28}\] (see [6]), we have \[[(b_{1}+b_{2})\mathrm{A}_{y_{0}}+2(b_{1}+b_{3})\,n_{0}\otimes n_{0}]^{-1}=\frac{1}{b_{1}+b_{2}}\mathrm{A}_{y_{0}}+\frac{1}{2(b_{1}+b_{3})}\,n_{0}\otimes n_{0}\tag{4.29}\] and we find \[c^{*}=(b_{2}-b_{1})\Big[\frac{1}{b_{1}+b_{2}}\mathrm{A}_{y_{0}}+\frac{1}{2(b_{1}+b_{3})}\,n_{0}\otimes n_{0}\Big]\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}-2b_{3}{\rm tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})\Big[\frac{1}{b_{1}+b_{2}}\mathrm{A}_{y_{0}}+\frac{1}{2(b_{1}+b_{3})}\,n_{0}\otimes n_{0}\Big]n_{0}\,.\] Because \[\mathrm{A}_{y_{0}}\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}=\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}-(n_{0}\otimes n_{0})\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}=\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}-n_{0}\,\langle\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}n_{0},n_{0}\rangle=\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}\,,\tag{4.30}\] where we used \(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}n_{0}=\big({\rm axl}(\overline{Q}^{\natural,T}_{e,h}\,\partial_{\eta_{1}}\overline{Q}^{\natural}_{e,h})\,|\,{\rm axl}(\overline{Q}^{\natural,T}_{e,h}\,\partial_{\eta_{2}}\overline{Q}^{\natural}_{e,h})\,|\,0\big)\underbrace{[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}n_{0}}_{=\,e_{3}}=0\), together with \(\mathrm{A}_{y_{0}}n_{0}=0\) and \((n_{0}\otimes n_{0})\,n_{0}=n_{0}\), we obtain the unique minimizer \[c^{*}=\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\,\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}-\frac{b_{3}}{b_{1}+b_{3}}\,{\rm tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})\,n_{0}\,.\tag{4.31}\] Next, we insert the minimizer \(c^{*}\) from (4.31) into (4.17).
We have \[\|\mathrm{sym}\,\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}}\|^{2}=\|\mathrm{sym}\,\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+\|\mathrm{sym}\big((0|0|c^{*})[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\big)\|^{2}+2\,\Big\langle\mathrm{sym}\,\mathcal{K}_{\overline{Q}^{\natural}_{e,h}},\,\mathrm{sym}\big((0|0|c^{*})[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\big)\Big\rangle\,,\tag{4.32}\] where, by (4.31), \[(0|0|c^{*})[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}=\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\,\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\otimes n_{0}-\frac{b_{3}}{b_{1}+b_{3}}\,\mathrm{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})\,n_{0}\otimes n_{0}\,.\] For the second term in (4.32) we compute \[\begin{split}\|\mathrm{sym}&\Big(\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\,\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\otimes n_{0}-\frac{b_{3}}{b_{1}+b_{3}}\,\mathrm{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})\,n_{0}\otimes n_{0}\Big)\|^{2}\\&=\frac{(b_{2}-b_{1})^{2}}{(b_{1}+b_{2})^{2}}\,\|\mathrm{sym}(\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\otimes n_{0})\|^{2}+\frac{b_{3}^{2}}{(b_{1}+b_{3})^{2}}\,\mathrm{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})^{2}\,\|n_{0}\otimes n_{0}\|^{2}\\&\qquad-2\,\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\,\frac{b_{3}}{b_{1}+b_{3}}\,\mathrm{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})\,\big\langle\mathrm{sym}(\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\otimes n_{0}),\,n_{0}\otimes n_{0}\big\rangle\\&=\frac{(b_{2}-b_{1})^{2}}{2(b_{1}+b_{2})^{2}}\,\|\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\|^{2}+\frac{b_{3}^{2}}{(b_{1}+b_{3})^{2}}\,\mathrm{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})^{2}\,,\end{split}\tag{4.33}\] since \(\|n_{0}\otimes n_{0}\|^{2}=1\), \(\|\mathrm{sym}(\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\otimes n_{0})\|^{2}=\frac{1}{2}\|\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\|^{2}\) by (4.36) below, and \(\big\langle\mathrm{sym}(\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\otimes n_{0}),\,n_{0}\otimes n_{0}\big\rangle=\langle\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}},\,n_{0}\otimes n_{0}\rangle=0\) by (4.34) below; the mixed term in (4.32) is evaluated in (4.37) below.
Note that \[\langle\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}},\,n_{0}\otimes n_{0}\rangle=\mathrm{tr}\big(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\,(n_{0}\otimes n_{0})\big)=\langle\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\,n_{0},n_{0}\rangle=0\,,\tag{4.34}\] since \(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}=0\), as was already observed in (4.30). In what follows we also use the _Weingarten map (or shape operator)_ defined by \(\mathrm{L}_{y_{0}}=\mathrm{I}_{y_{0}}^{-1}\Pi_{y_{0}}\in\mathbb{R}^{2\times 2}\), where \(\mathrm{I}_{y_{0}}:=[\mathrm{D}y_{0}]^{T}\mathrm{D}y_{0}\in\mathbb{R}^{2\times 2}\) and \(\Pi_{y_{0}}:=\,-[\mathrm{D}y_{0}]^{T}\mathrm{D}n_{0}\in\mathbb{R}^{2\times 2}\) are the matrix representations of the _first fundamental form (metric)_ and the _second fundamental form_ of the surface, respectively.
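For the unit cylinder \(y_{0}(x_{1},x_{2})=(\cos x_{1},\sin x_{1},x_{2})^{T}\) used as an illustration above (again our example), these objects are
\[\mathrm{I}_{y_{0}}=[\mathrm{D}y_{0}]^{T}\mathrm{D}y_{0}=\mathbb{1}_{2},\qquad\Pi_{y_{0}}=-[\mathrm{D}y_{0}]^{T}\mathrm{D}n_{0}=\begin{pmatrix}-1&0\\ 0&0\end{pmatrix},\qquad\mathrm{L}_{y_{0}}=\mathrm{I}_{y_{0}}^{-1}\Pi_{y_{0}}=\begin{pmatrix}-1&0\\ 0&0\end{pmatrix},\]
so with this sign convention (outward normal) the circular direction has principal curvature \(-1\) while the axial direction is flat, and \(\mathbb{1}_{2}-x_{3}\mathrm{L}_{y_{0}}=\mathrm{diag}(1+x_{3},1)\), consistent with \(\Theta(x_{1},x_{2},x_{3})=((1+x_{3})\cos x_{1},(1+x_{3})\sin x_{1},x_{2})^{T}\).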
We also observe that \[\begin{split}n_{0}\otimes n_{0}\,[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-T}&=(0|0|n_{0})[(\mathrm{D}_{x}\Theta)(0)]^{-1}[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-T}\\&=(0|0|n_{0})[(\mathrm{D}_{x}\Theta)(0)]^{-1}[(\mathrm{D}_{x}\Theta)(0)]^{-T}\begin{pmatrix}(\mathbb{1}_{2}-x_{3}\mathrm{L}_{y_{0}})^{-T}&0\\ 0&1\end{pmatrix}\\&=(0|0|n_{0})\begin{pmatrix}\mathrm{I}_{y_{0}}^{-1}&0\\ 0&1\end{pmatrix}\begin{pmatrix}(\mathbb{1}_{2}-x_{3}\mathrm{L}_{y_{0}})^{-T}&0\\ 0&1\end{pmatrix}=(0|0|n_{0})\begin{pmatrix}\ast&\ast&0\\ \ast&\ast&0\\ 0&0&1\end{pmatrix}=(0|0|n_{0})\,.\end{split}\tag{4.35}\] For every pair of vectors \(\widehat{u},v\in\mathbb{R}^{3}\) we have \[\langle\widehat{u}\otimes n_{0},v\otimes n_{0}\rangle=\langle(v\otimes n_{0})^{T}(\widehat{u}\otimes n_{0}),\mathbb{1}_{3}\rangle=\langle(n_{0}\otimes v)(\widehat{u}\otimes n_{0}),\mathbb{1}_{3}\rangle=\langle v,\widehat{u}\rangle\,\langle n_{0}\otimes n_{0},\mathbb{1}_{3}\rangle=\langle v,\widehat{u}\rangle\cdot\underbrace{\langle n_{0},n_{0}\rangle}_{=1}=\langle v,\widehat{u}\rangle\,,\] and, since \(n_{0}\otimes n_{0}=(0|0|n_{0})[(\mathrm{D}_{x}\Theta)(0)]^{-1}\), we deduce \[\langle\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\otimes n_{0},\,\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\otimes n_{0}\rangle=\langle\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0},\,\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\rangle=\|\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\|^{2}\,. \tag{4.36}\] On the other hand, \[2\,\Big\langle\mathrm{sym}\,\mathcal{K}_{\overline{Q}^{\natural}_{e,h}},\,\mathrm{sym}\Big(\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\,\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\otimes n_{0}-\frac{b_{3}}{b_{1}+b_{3}}\,\mathrm{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})\,n_{0}\otimes n_{0}\Big)\Big\rangle=\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\,\|\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\|^{2}\,. \tag{4.37}\] Therefore \[\|\mathrm{sym}\,\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}}\|^{2}=\|\mathrm{sym}\,\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+\frac{(b_{1}-b_{2})^{2}}{2(b_{1}+b_{2})^{2}}\|\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\|^{2}+\frac{b_{3}^{2}}{(b_{1}+b_{3})^{2}}\mathrm{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})^{2}+\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\|\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\|^{2}\,.\tag{4.38}\] Now we continue the calculations for the skew-symmetric part, \[\|\mathrm{skew}\,\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}}\|^{2}=\|\mathrm{skew}\,\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+\|\mathrm{skew}((0|0|c^{*})[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1})\|^{2}+2\,\langle\mathrm{skew}\,\mathcal{K}_{\overline{Q}^{\natural}_{e,h}},\,\mathrm{skew}((0|0|c^{*})[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1})\rangle\,.\tag{4.39}\] In a similar manner, we calculate the terms separately.
Since \(n_{0}\otimes n_{0}\) is symmetric, we obtain \[\|\mathrm{skew}((0|0|c^{*})[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1})\|^{2}=\|\mathrm{skew}\Big(\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\,\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\otimes n_{0}-\frac{b_{3}}{b_{1}+b_{3}}\,\mathrm{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})\,n_{0}\otimes n_{0}\Big)\|^{2}=\frac{(b_{1}-b_{2})^{2}}{(b_{1}+b_{2})^{2}}\,\|\mathrm{skew}(\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\otimes n_{0})\|^{2}.\tag{4.40}\] Using that \((n_{0}\otimes n_{0})^{2}=(n_{0}\otimes n_{0})\) we deduce \[\begin{split}\|\mathrm{skew}(\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\otimes n_{0})\|^{2}&=\frac{1}{4}\left\langle\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\otimes n_{0},\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\otimes n_{0}\right\rangle-\frac{1}{4}\left\langle\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\otimes n_{0},n_{0}\otimes n_{0}\,\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\right\rangle\\&\quad-\frac{1}{4}\left\langle n_{0}\otimes n_{0}\,\mathcal{K}_{\overline{Q}^{\natural}_{e,h}},\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\otimes n_{0}\right\rangle+\frac{1}{4}\left\langle n_{0}\otimes n_{0}\,\mathcal{K}_{\overline{Q}^{\natural}_{e,h}},n_{0}\otimes n_{0}\,\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\right\rangle=\frac{1}{2}\|\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\|^{2}\,.\end{split}\tag{4.41}\] We have as well \[2\,\langle\mathrm{skew}\,\mathcal{K}_{\overline{Q}^{\natural}_{e,h}},\,\mathrm{skew}((0|0|c^{*})[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1})\rangle=2\,\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\,\langle\mathrm{skew}\,\mathcal{K}_{\overline{Q}^{\natural}_{e,h}},\,\mathrm{skew}(\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\otimes n_{0})\rangle=-\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\,\|\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}\|^{2}\,,\tag{4.42}\] and we obtain \[\|\mathrm{skew}\,\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}}\|^{2}=\|\mathrm{skew}\,\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+\frac{(b_{2}-b_{1})^{2}}{2(b_{1}+b_{2})^{2}}\|\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\|^{2}-\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\|\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}\|^{2}.\tag{4.43}\] Finally, we compute the trace term.
We have \[\big[\mathrm{tr}(\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}})\big]^{2}=\Big(\mathrm{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})+\mathrm{tr}\big((0|0|c^{*})[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\big)\Big)^{2}=\Big(\mathrm{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})+\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\underbrace{\langle\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\otimes n_{0},\mathbb{1}_{3}\rangle}_{=0}-\frac{b_{3}}{b_{1}+b_{3}}\mathrm{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})\underbrace{\langle n_{0}\otimes n_{0},\mathbb{1}_{3}\rangle}_{\langle n_{0},n_{0}\rangle=1}\Big)^{2}=\frac{b_{1}^{2}}{(b_{1}+b_{3})^{2}}\,\mathrm{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})^{2}.\tag{4.44}\] Now we insert the above calculations in \(\widetilde{W}_{\mathrm{curv}}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}+(0|0|c^{*})[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1})\), and obtain \[\begin{split}W^{\mathrm{hom}}_{\mathrm{curv}}=\mu L^{2}_{c}&\Big(b_{1}\big(\|\mathrm{sym}\,\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+\frac{(b_{1}-b_{2})^{2}}{2(b_{1}+b_{2})^{2}}\|\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}\|^{2}+\frac{b_{3}^{2}}{(b_{1}+b_{3})^{2}}\mathrm{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})^{2}+\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\|\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}\|^{2}\big)\\&\quad+b_{2}\big(\|\mathrm{skew}\,\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+\frac{(b_{2}-b_{1})^{2}}{2(b_{1}+b_{2})^{2}}\|\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\|^{2}-\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\|\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}\|^{2}\big)+b_{3}\frac{b_{1}^{2}}{(b_{1}+b_{3})^{2}}\mathrm{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})^{2}\Big)\,,\end{split}\tag{4.45}\] which reduces to \[W^{\mathrm{hom}}_{\mathrm{curv}}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})=\mu L^{2}_{c}\Big(b_{1}\|\mathrm{sym}\,\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+b_{2}\|\mathrm{skew}\,\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\|^{2}-\frac{(b_{1}-b_{2})^{2}}{2(b_{1}+b_{2})}\|\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}\|^{2}+\frac{b_{1}b_{3}}{b_{1}+b_{3}}\mathrm{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})^{2}\Big)\,.\tag{4.46}\] One may apply the orthogonal decomposition of a matrix \(X\), \[X=X^{\parallel}+X^{\perp},\qquad\qquad X^{\parallel}:=\mathrm{A}_{y_{0}}\,X,\qquad\qquad X^{\perp}:=(\mathbb{1}_{3}-\mathrm{A}_{y_{0}})\,X, \tag{4.47}\] to the matrix \(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\), where \(\mathrm{A}_{y_{0}}=(\mathrm{D}y_{0}|0)[\mathrm{D}_{x}\Theta(0)]^{-1}\).
After inserting the decomposition into the homogenized curvature energy, we get \[\begin{split}W_{\rm curv}^{\rm hom}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})&=\mu L_{c}^{2}\Big(b_{1}\|{\rm sym}\,\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+b_{2}\|{\rm skew}\,\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\|^{2}-\frac{(b_{1}-b_{2})^{2}}{2(b_{1}+b_{2})}\|\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}^{T}n_{0}\|^{2}+\frac{b_{1}b_{3}}{b_{1}+b_{3}}{\rm tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})^{2}\Big)\\&=\mu L_{c}^{2}\Big(b_{1}\|{\rm sym}\,\mathcal{K}^{\parallel}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+b_{2}\|{\rm skew}\,\mathcal{K}^{\parallel}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+\Big(\frac{b_{1}+b_{2}}{2}-\frac{(b_{1}-b_{2})^{2}}{2(b_{1}+b_{2})}\Big)\|\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}^{T}n_{0}\|^{2}+\frac{b_{1}b_{3}}{b_{1}+b_{3}}{\rm tr}(\mathcal{K}^{\parallel}_{\overline{Q}^{\natural}_{e,h}})^{2}\Big)\\&=\mu L_{c}^{2}\Big(b_{1}\|{\rm sym}\,\mathcal{K}^{\parallel}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+b_{2}\|{\rm skew}\,\mathcal{K}^{\parallel}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+\frac{b_{1}b_{3}}{b_{1}+b_{3}}{\rm tr}(\mathcal{K}^{\parallel}_{\overline{Q}^{\natural}_{e,h}})^{2}+\frac{2b_{1}b_{2}}{b_{1}+b_{2}}\|\mathcal{K}^{\perp}_{\overline{Q}^{\natural}_{e,h}}\|^{2}\Big)\,,\end{split}\tag{4.48}\] where we used \(\|{\rm sym}\,\mathcal{K}\|^{2}=\|{\rm sym}\,\mathcal{K}^{\parallel}\|^{2}+\frac{1}{2}\|\mathcal{K}^{T}n_{0}\|^{2}\), \(\|{\rm skew}\,\mathcal{K}\|^{2}=\|{\rm skew}\,\mathcal{K}^{\parallel}\|^{2}+\frac{1}{2}\|\mathcal{K}^{T}n_{0}\|^{2}\), \({\rm tr}(\mathcal{K})={\rm tr}(\mathcal{K}^{\parallel})\), \(\|\mathcal{K}^{\perp}\|^{2}=\|\mathcal{K}^{T}n_{0}\|^{2}\) and \(\frac{b_{1}+b_{2}}{2}-\frac{(b_{1}-b_{2})^{2}}{2(b_{1}+b_{2})}=\frac{2b_{1}b_{2}}{b_{1}+b_{2}}\). \(\blacksquare\) As regards the homogenized curvature energy given by the optimization problem O4, simplified calculations along the same lines as for the optimization problem O3 lead us to **Theorem 4.3**.: _The solution of the optimization problem O4 given by (4.11) is_ \[c^{*}=\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\,\mathcal{K}_{\overline{Q}_{e,0}}^{T}n_{0}-\frac{b_{3}}{b_{1}+b_{3}}\,{\rm tr}(\mathcal{K}_{\overline{Q}_{e,0}})\,n_{0} \tag{4.49}\] _and the corresponding homogenized curvature energy is_ \[W_{\rm curv}^{\rm hom}(\mathcal{K}_{\overline{Q}_{e,0}})=\mu L_{c}^{2}\Big(b_{1}\|{\rm sym}\,\mathcal{K}_{\overline{Q}_{e,0}}^{\parallel}\|^{2}+b_{2}\|{\rm skew}\,\mathcal{K}_{\overline{Q}_{e,0}}^{\parallel}\|^{2}+\frac{b_{1}b_{3}}{b_{1}+b_{3}}{\rm tr}(\mathcal{K}_{\overline{Q}_{e,0}}^{\parallel})^{2}+\frac{2b_{1}b_{2}}{b_{1}+b_{2}}\|\mathcal{K}_{\overline{Q}_{e,0}}^{\perp}\|^{2}\Big)\,, \tag{4.50}\] _where \(\mathcal{K}_{\overline{Q}_{e,0}}^{\parallel}\) and \(\mathcal{K}_{\overline{Q}_{e,0}}^{\perp}\) represent the orthogonal decomposition of the fully reduced elastic shell bending-curvature tensor \(\mathcal{K}_{\overline{Q}_{e,0}}\) in the tangential plane and in the normal direction, respectively._ ### \(\Gamma\)-convergence result for the curved shell model Together with the calculations provided in [41], we obtain, for the first time in the literature, the explicit form of the Cosserat-shell model obtained via the \(\Gamma\)-convergence method, given by the following theorem. **Theorem 4.4**.: _Assume that the initial configuration of the curved shell is defined by a continuous injective mapping \(\,y_{0}:\omega\subset\mathbb{R}^{2}\to\mathbb{R}^{3}\) which admits an extension to \(\overline{\omega}\) into \(C^{2}(\overline{\omega};\mathbb{R}^{3})\) such that for_ \[\Theta(x_{1},x_{2},x_{3})=y_{0}(x_{1},x_{2})+x_{3}\,n_{0}(x_{1},x_{2})\] _we have \(\det[\mathrm{D}_{x}\Theta(0)]\geq\,a_{0}>0\) on \(\overline{\omega}\), where \(a_{0}\) is a constant, and assume that the boundary data satisfy the conditions_ \[\varphi_{d}^{\natural}=\varphi_{d}\big{|}_{\Gamma_{1}}\text{ (in the sense of traces) for }\ \varphi_{d}\in\mathrm{H}^{1}(\Omega_{1};\mathbb{R}^{3}). \tag{4.51}\] _Let the constitutive parameters satisfy_ \[\mu\,>0,\qquad\quad\kappa>0,\qquad\quad\mu_{\rm c}>0,\qquad\quad a_{1}>0,\qquad\quad a_{2}>0,\qquad\quad a_{3}>0\,.
\tag{4.52}\] _Then, for any sequence \((\varphi_{h_{j}}^{\natural},\overline{Q}_{e,h_{j}}^{\natural})\in X\) such that \((\varphi_{h_{j}}^{\natural},\overline{Q}_{e,h_{j}}^{\natural})\to(\varphi_{0},\overline{Q}_{e,0})\) as \(h_{j}\to 0\), the sequence of functionals \(\mathcal{I}_{h_{j}}^{\natural}\colon X\to\overline{\mathbb{R}}\) from (4.7) \(\Gamma\)-converges to the limit energy functional \(\mathcal{I}_{0}\colon X\to\overline{\mathbb{R}}\) defined by_ \[\mathcal{I}_{0}(m,\overline{Q}_{e,0})=\begin{cases}\int_{\omega}[W_{\rm mp}^{\rm hom}(\mathcal{E}_{m,\overline{Q}_{e,0}})+\widetilde{W}_{\rm curv}^{\rm hom}(\mathcal{K}_{\overline{Q}_{e,0}})]\det(\mathrm{D}y_{0}|n_{0})\;d\omega&\text{if}\quad(m,\overline{Q}_{e,0})\in\mathcal{S}_{\omega}^{\prime}\,,\\ +\infty&\text{else in }X,\end{cases} \tag{4.53}\] _where_ \[\begin{split}m(x_{1},x_{2})&:=\varphi_{0}(x_{1},x_{2})=\lim_{h_{j}\to 0}\varphi_{h_{j}}^{\natural}(x_{1},x_{2},\tfrac{1}{h_{j}}x_{3}),\qquad\overline{Q}_{e,0}(x_{1},x_{2})=\lim_{h_{j}\to 0}\overline{Q}_{e,h_{j}}^{\natural}(x_{1},x_{2},\tfrac{1}{h_{j}}x_{3}),\\ \mathcal{E}_{m,\overline{Q}_{e,0}}&=(\overline{Q}_{e,0}^{T}\mathrm{D}m-\mathrm{D}y_{0}|0)[\mathrm{D}_{x}\Theta(0)]^{-1},\\ \mathcal{K}_{\overline{Q}_{e,0}}&=\Big(\mathrm{axl}(\overline{Q}_{e,0}^{T}\,\partial_{x_{1}}\overline{Q}_{e,0})\,|\,\mathrm{axl}(\overline{Q}_{e,0}^{T}\,\partial_{x_{2}}\overline{Q}_{e,0})\,|0\Big)[\mathrm{D}_{x}\Theta(0)\,]^{-1}\not\in\mathrm{Sym}(3)\,,\end{split}\tag{4.54}\] _and_ \[\begin{split}W_{\rm mp}^{\rm hom}(\mathcal{E}_{m,\overline{Q}_{e,0}})&=\,\mu\,\|{\rm sym}\ \mathcal{E}^{\parallel}_{m,\overline{Q}_{e,0}}\|^{2}+\mu_{c}\,\|{\rm skew}\ \mathcal{E}^{\parallel}_{m,\overline{Q}_{e,0}}\|^{2}+\,\frac{\lambda\,\mu}{\lambda+2\,\mu}\,\big[{\rm tr}(\mathcal{E}^{\parallel}_{m,\overline{Q}_{e,0}})\big]^{2}+\frac{2\,\mu\ \mu_{c}}{\mu_{c}+\mu}\|\mathcal{E}_{m,\overline{Q}_{e,0}}^{T}n_{0}\|^{2}\\&=W_{\rm shell}\big(\mathcal{E}^{\parallel}_{m,\overline{Q}_{e,0}}\big)+\frac{2\,\mu\ \mu_{c}}{\mu_{c}+\mu}\|\mathcal{E}^{\perp}_{m,\overline{Q}_{e,0}}\|^{2},\\ \widetilde{W}_{\rm curv}^{\rm hom}(\mathcal{K}_{\overline{Q}_{e,0}})&=\inf_{A\in\mathfrak{so}(3)}\widetilde{W}_{\rm curv}\Big(\big(\mathrm{axl}(\overline{Q}_{e,0}^{T}\,\partial_{\eta_{1}}\overline{Q}_{e,0})\,|\,\mathrm{axl}(\overline{Q}_{e,0}^{T}\,\partial_{\eta_{2}}\overline{Q}_{e,0})\,|\,\mathrm{axl}(A)\,\big)[(\mathrm{D}_{x}\Theta)^{\natural}(0)]^{-1}\Big)\\&=\mu L_{c}^{2}\Big(b_{1}\|{\rm sym}\,\mathcal{K}^{\parallel}_{\overline{Q}_{e,0}}\|^{2}+b_{2}\|{\rm skew}\,\mathcal{K}^{\parallel}_{\overline{Q}_{e,0}}\|^{2}+\frac{b_{1}b_{3}}{b_{1}+b_{3}}{\rm tr}(\mathcal{K}^{\parallel}_{\overline{Q}_{e,0}})^{2}+\frac{2\,b_{1}b_{2}}{b_{1}+b_{2}}\|\mathcal{K}^{\perp}_{\overline{Q}_{e,0}}\|^{2}\Big)\,.\end{split}\tag{4.55}\] Proof.: The proof is completely similar to the proof provided in [41], where only some implicit properties of the homogenized curvature energy were used and not its explicit form. ## 5 Conclusion The present paper gives the explicit calculation of the homogenized curvature energy. This explicit form was not strictly necessary in order to prove the \(\Gamma\)-convergence result: some qualitative properties of \(\widetilde{W}_{\rm curv}^{\rm hom,\natural}(\mathcal{K}_{\overline{Q}_{e,h}^{\natural}})\) and \(\widetilde{W}_{\rm curv}^{\rm hom}(\mathcal{K}_{\overline{Q}_{e,0}})\) are sufficient for that proof. However, the final \(\Gamma\)-limit model has to be written in an explicit form, and all the explicit calculations are provided in this paper.
A comparison between (3.23) and (5.2) shows that the homogenized flat curvature energy can be obtained from the curved one, and that Theorem 3.2 may be seen as a corollary of Theorem 4.4. Indeed, let us assume that in the homogenized energy which we obtained in (4.48) we have \(\mathrm{D}_{x}\Theta=\mathbb{1}_{3}\), \(\mathrm{D}y_{0}=(e_{1}|e_{2})\) and \(n_{0}=[\mathrm{D}_{x}\Theta(0)]\,e_{3}=e_{3}\), which corresponds to the flat shell case. Then \(\overline{Q}_{e,0}=\overline{R}_{0}\) and \[\mathcal{K}_{\overline{Q}_{e,0}}=\begin{pmatrix}\Gamma_{11}&\Gamma_{12}&0\\ \Gamma_{21}&\Gamma_{22}&0\\ \Gamma_{31}&\Gamma_{32}&0\end{pmatrix}[\mathrm{D}_{x}\Theta(0)]^{-1}=\mathcal{K}_{\overline{R}_{0}}^{\rm plate},\qquad\text{with}\qquad\Gamma_{\square}=\begin{pmatrix}\Gamma_{11}&\Gamma_{12}\\ \Gamma_{21}&\Gamma_{22}\end{pmatrix}, \tag{5.1}\] and we have \[W_{\rm curv}^{\rm hom}(\Gamma)=\mu L_{c}^{2}\Big(b_{1}\|{\rm sym}\,\Gamma_{\square}\|^{2}+b_{2}\|{\rm skew}\,\Gamma_{\square}\|^{2}+\frac{b_{1}b_{3}}{b_{1}+b_{3}}{\rm tr}(\Gamma_{\square})^{2}+\frac{2b_{1}b_{2}}{b_{1}+b_{2}}\left\|\begin{pmatrix}\Gamma_{31}\\ \Gamma_{32}\end{pmatrix}\right\|^{2}\Big)\,, \tag{5.2}\] and we rediscover the homogenized curvature energy for an initially flat configuration from Theorem 3.2. In conclusion, the present paper completes the calculations of the membrane-like model constructed via \(\Gamma\)-convergence for flat and curved initial configurations of the shell, giving for the first time in the literature the explicit form of the \(\Gamma\)-limit in both situations. In [6], by using a method which extends the reduction procedure from classical elasticity to the case of Cosserat shells, Birsan has obtained a Cosserat-shell model by considering a general ansatz. For the particular case of a quadratic ansatz for the deformation map, and after skipping higher-order terms, the membrane term of order \(O(h)\) from Birsan's model [6] coincides with the homogenized membrane energy determined by us in [41], i.e., in both models the harmonic mean \(\frac{2\mu\,\mu_{c}}{\mu+\mu_{c}}\) of \(\mu\) and \(\mu_{c}\) is present. We note that in the model constructed in [21] the algebraic mean of \(\mu\) and \(\mu_{c}\) plays the role of the harmonic mean from the model given in [6] and from the \(\Gamma\)-convergence model in [41]. However, a comparison between the curvature energy obtained in the current paper as part of the \(\Gamma\)-limit and the curvature energy obtained using other methods [21; 6] shows that the weights of the energy term \(\|\mathcal{K}^{\perp}_{\overline{Q}_{e,0}}\|^{2}\) are different, as follows (a numerical sanity check of the harmonic-mean weight is sketched after this list): * derivation approach [21], as well as the model given in [6]: the algebraic mean of \(b_{1}\) and \(b_{2}\), i.e., \(\frac{b_{1}+b_{2}}{2}\,\); * \(\Gamma\)-convergence: the harmonic mean of \(b_{1}\) and \(b_{2}\), i.e., \(\frac{2\,b_{1}b_{2}}{b_{1}+b_{2}}\,\).
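Since \(\frac{2b_{1}b_{2}}{b_{1}+b_{2}}\leq\frac{b_{1}+b_{2}}{2}\), with equality only for \(b_{1}=b_{2}\), the \(\Gamma\)-limit penalizes the transversal part of the bending-curvature tensor more weakly than the derivation approach. The closed-form expressions (4.49)-(4.50) can also be checked numerically. The following sketch is our addition (not code from [41]); it uses NumPy/SciPy with random data, assumes all parameters positive, and compares the direct minimization in (4.11) with the homogenized formula (4.50).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
mu, Lc = 1.0, 1.0
b1, b2, b3 = 0.7, 1.3, 0.4           # arbitrary positive curvature moduli

Dy0 = rng.normal(size=(3, 2))        # random tangent map Dy_0
n0 = np.cross(Dy0[:, 0], Dy0[:, 1])
n0 /= np.linalg.norm(n0)             # unit normal n_0
DTheta0 = np.column_stack([Dy0, n0]) # D_x Theta(0) = (Dy_0 | n_0)
DTheta0_inv = np.linalg.inv(DTheta0)

k1, k2 = rng.normal(size=3), rng.normal(size=3)
K = np.column_stack([k1, k2, np.zeros(3)]) @ DTheta0_inv  # bending-curvature tensor

def W_curv(G):
    sym, skew = 0.5 * (G + G.T), 0.5 * (G - G.T)
    return mu * Lc**2 * (b1 * np.sum(sym**2) + b2 * np.sum(skew**2)
                         + b3 * np.trace(G)**2)

def W_of_c(c):  # energy of K + (0|0|c) [D_x Theta(0)]^{-1}
    return W_curv(K + np.column_stack([np.zeros(3), np.zeros(3), c]) @ DTheta0_inv)

res = minimize(W_of_c, np.zeros(3))  # direct minimization, problem O4

# closed-form minimizer (4.49) and homogenized energy (4.50)
c_star = (b2 - b1) / (b1 + b2) * K.T @ n0 - b3 / (b1 + b3) * np.trace(K) * n0
A = np.column_stack([Dy0, np.zeros(3)]) @ DTheta0_inv     # A_{y_0}
Kpar, Kperp = A @ K, (np.eye(3) - A) @ K
sym, skew = 0.5 * (Kpar + Kpar.T), 0.5 * (Kpar - Kpar.T)
W_hom = mu * Lc**2 * (b1 * np.sum(sym**2) + b2 * np.sum(skew**2)
                      + b1 * b3 / (b1 + b3) * np.trace(Kpar)**2
                      + 2 * b1 * b2 / (b1 + b2) * np.sum(Kperp**2))

print(res.fun, W_hom)                # agree to optimizer tolerance
print(res.x, c_star)                 # minimizer matches (4.49)
```

Both printed energies agree to optimizer tolerance, and the minimizer returned by `scipy` matches the closed-form \(c^{*}\); running the same script with \(\mathrm{D}y_{0}=(e_{1}|e_{2})\) reproduces the flat formula (5.2).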
```
We show how to explicitly compute the homogenized curvature energy appearing in the isotropic $\Gamma$-limit for flat and for curved initial configuration Cosserat shell models, when a parental three-dimensional minimization problem on $\Omega \subset \mathbb{R}^3$ for a Cosserat energy based on the second-order dislocation density tensor $\alpha:=\overline{R}^T {\rm Curl}\,\overline{R} \in \mathbb{R}^{3\times 3}$, $\overline{R}\in {\rm SO}(3)$ is used.
```
2309.10888
Effect of interatomic repulsion on Majorana zero modes in a coupled quantum-dot-superconducting-nanowire hybrid system
We study the low-energy eigenstates of a topological superconductor wire modeled by a Kitaev chain, which is connected at one of its ends to a quantum dot through nearest-neighbor (NN) hopping and NN Coulomb repulsion. Using an unrestricted Hartree-Fock approximation to decouple the Coulomb term, we obtain that the quality of the Majorana end states is seriously affected by this term only when the dependence of the low-lying energies with the energy of the quantum dot shows a "diamond" shape, characteristic of short wires. We discuss limitations of the simplest effective models to describe the physics. We expect the same behavior in more realistic models for topological superconducting wires.
R. Kenyi Takagui Perez, A. A. Aligia
2023-09-19T19:29:12
http://arxiv.org/abs/2309.10888v3
# Effect of interatomic repulsion on Majorana zero modes in a coupled quantum-dot-superconducting-nanowire hybrid system ###### Abstract We study the low-energy eigenstates of a topological superconductor wire modeled by a Kitaev chain coupled at one of its ends to a quantum dot by nearest-neighbor (NN) hopping and NN Coulomb repulsion. Using an unrestricted Hartree-Fock approximation to decouple the Coulomb term, we obtain that the quality of the Majorana end states is seriously affected by this term only when the dependence of the low-lying energies with the energy of the quantum dot shows a "diamond" shape, characteristic of short wires. ## I Introduction In recent years, topological superconducting wires have been a field of intense research in condensed matter physics, both because of the interesting basic physics involved [1] and because of possible applications in decoherence-free quantum computing based on the Majorana zero modes (MZMs) at their ends. [2; 3; 4; 5; 6; 7] The simplest model that presents MZMs at the ends is the Kitaev chain for p-wave superconductors [8]. Lutchyn _et al._[9] and Oreg _et al._[10] proposed a model for topological superconducting wires that includes spin-orbit coupling (SOC), proximity-induced s-wave superconductivity, and an applied magnetic field perpendicular to the direction of the SOC. The phase diagram of the lattice version of this model has been calculated recently [11]. For reasonable parameters the model has a topological phase with MZMs localized at its ends, as the Kitaev chain. MZMs in similar wires were found experimentally [12; 13; 14; 15]. A difficulty of these experiments is to identify unambiguously that the zero modes are of topological origin, which implies that they remain at zero energy and localized at the end of the nanowire if small perturbations are applied to the system. Using the model mentioned above for s-wave topological superconducting wires, Prada _et al._ proposed that a quantum dot (QD) at the end of the nanowire may be used as a powerful spectroscopic tool to quantify the degree of Majorana nonlocality through a local transport measurement [16]. This proposal has been confirmed experimentally [17]. A similar procedure has also been proposed for the Kitaev spinless model [18], and further theoretical studies have been made recently for the spinful model [19] and a minimal Kitaev chain [20]. In general, the energy of the dot level is varied by changing the gate potential, and the low-energy levels detected by the conductance show either a crossing ("bowtie" shape like in Fig. 4) or a "diamond" pattern (like in Fig. 6 top) [16; 20; 22]. Compared to the large number of theoretical works studying non-interacting superconducting wires, studies of the effects of interactions are rare [21]. Recently, Ricco _et al._ pointed out that the Coulomb repulsion between the electrons of the dot and the nanowire might lead to a spoiling of the quality of the MZMs due to an effective increase of the coupling between the MZMs localized at the left and at the right of the nanowire [22]. They considered a spinless model consisting of a Kitaev chain with a QD at its left end. There is hopping between the QD and the chain and also an interaction \[H_{V}=Vn_{d}n_{w}, \tag{1}\] where \(n_{d}\) is the number of electrons in the dot and \(n_{w}\) is the total number of electrons in the superconducting wire.
The authors replaced this operator by the parity operator at low energies \(n_{w}\sim i\gamma_{L}\gamma_{R}+1/2\) (neglecting the excited states), where \(\gamma_{\nu}\) is the Majorana at the end \(\nu\) (left or right) of the wire _at a given chemical potential_. Neglecting the states at higher energy, the authors solve exactly the effective low-energy model and show that \(H_{V}\) contributes to the displacement of the MZMs from zero energy, spoiling the Majorana quality and the topological properties. Usually the low-energy effective Hamiltonian is a very good approximation of the full one. For example, quantitative agreement has been found between both descriptions for the Josephson current between two topological superconducting wires [23]. However, a simple argument suggests that this might not be the case for the interaction given by Eq. (1). A simple mean-field decoupling of this term gives \[Vn_{d}n_{w}\simeq V\left(\langle n_{d}\rangle n_{w}+n_{d}\langle n_{w}\rangle-\langle n_{d}\rangle\langle n_{w}\rangle\right). \tag{2}\] The first term on the right-hand side is a correction to the chemical potential, and the second is a correction to the on-site energy of the dot. We obtain that even in the presence of hopping between the dot and the wire, the MZMs remain under these changes. In other words, the states described by \(\gamma_{\nu}\) _change their form_ and accommodate to the new situation, as might be expected from the robustness of end states of topological character. This change is not captured by the low-energy effective Hamiltonian. In this work we calculate the low-energy spectrum of a Kitaev chain whose leftmost site is coupled to a QD state by hopping and Coulomb repulsion. The latter is treated in the unrestricted Hartree-Fock approximation. In Section II we describe the model and the approximation. In Section III we show the numerical results. We summarize the results in Section IV. ## II Model and approximation The Hamiltonian of the Kitaev chain interacting with a QD is \[\begin{split}H&=\sum_{j=1}^{N-1}(-tc_{j+1}^{\dagger}c_{j}+\Delta c_{j+1}^{\dagger}c_{j}^{\dagger}+\text{H.c.})-\mu\sum_{j=1}^{N}c_{j}^{\dagger}c_{j}\\&\quad+\epsilon_{d}d^{\dagger}d-t^{\prime}(d^{\dagger}c_{1}+\text{H.c.})\\&\quad+V\left(n_{d}-\frac{1}{2}\right)\left(n_{1}-\frac{1}{2}\right),\end{split}\tag{3}\] where \(n_{d}=d^{\dagger}d\) and \(n_{1}=c_{1}^{\dagger}c_{1}\). The first two terms of Eq. (3) describe the Kitaev chain with hopping \(t\), p-wave superconducting order parameter \(\Delta\) and chemical potential \(\mu\). The third term describes the QD with on-site energy \(\epsilon_{d}\). The fourth term is the hopping between the QD and the Kitaev chain, and the last term is the Coulomb repulsion between the electrons in the QD and the ones at the leftmost site of the chain.
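To make the quadratic (\(V=0\)) part of Eq. (3) concrete, it can be written as a Bogoliubov-de Gennes (BdG) matrix and diagonalized numerically. The sketch below is our illustration (not code from this paper); site 0 is the dot, sites 1 to \(N\) form the chain, and the default parameter values are those used in Sec. III.

```python
import numpy as np

def bdg_spectrum(N=50, t=1.0, delta=0.2, mu=0.0, eps_d=0.0, tp=0.2):
    """BdG spectrum of Eq. (3) at V=0; site 0 is the dot, sites 1..N the chain."""
    n = N + 1
    h = np.zeros((n, n))                   # normal (hopping + on-site) block
    h[0, 0] = eps_d                        # dot level
    for j in range(1, n):
        h[j, j] = -mu                      # chemical potential of the chain
    h[0, 1] = h[1, 0] = -tp                # dot-chain hopping t'
    for j in range(1, n - 1):
        h[j, j + 1] = h[j + 1, j] = -t
    D = np.zeros((n, n))                   # antisymmetric pairing block
    for j in range(1, n - 1):
        D[j + 1, j], D[j, j + 1] = delta, -delta
    H = np.block([[h, D], [-D, -h]])       # real BdG Hamiltonian in the (c, c^+) basis
    return np.sort(np.linalg.eigvalsh(H))  # eigenvalues come in +/- pairs

E = bdg_spectrum()
print(E[np.abs(E).argsort()[:4]])          # levels closest to zero: the split MZMs
```

At \(V\neq 0\), within the approximation of Eq. (4) below, the same matrix structure survives, with a renormalized dot energy, site-1 potential, hopping, and an induced dot-chain pairing.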
We treat this term in the unrestricted Hartree-Fock approximation: \[n_{d}n_{1} \simeq \left\langle n_{d}\right\rangle n_{1}+n_{d}\left\langle n_{1} \right\rangle-\left\langle n_{d}\right\rangle\left\langle n_{1}\right\rangle \tag{4}\] \[-\left\langle d^{\dagger}c_{1}\right\rangle c_{1}^{\dagger}d-d^{ \dagger}c_{1}\left\langle c_{1}^{\dagger}d\right\rangle+\left\langle d^{ \dagger}c_{1}\right\rangle\left\langle c_{1}^{\dagger}d\right\rangle\] \[+\left\langle d^{\dagger}c_{1}^{\dagger}\right\rangle c_{1}d+d^{ \dagger}c_{1}^{\dagger}\left\langle c_{1}d\right\rangle-\left\langle d^{ \dagger}c_{1}^{\dagger}\right\rangle\left\langle c_{1}d\right\rangle.\] We note that our model is different from that of Ricco _et al._[22], because they considered a repulsion with all the sites of the wire with the same intensity. We believe that our model is more realistic. Another difference is that they treated the repulsion exactly in an effective model within a low-energy subspace. We include all states but treat the repulsion using the approximation of Eq. (4). ## III Results We take \(t=1\) as the unit of energy and choose \(\Delta=0.2\). For later comparison, we discuss first the isolated Kitaev chain (without the quantum dot) for two different lengths of the chain. The resulting energies are shown in Fig. 1 as a function of the chemical potential \(\mu\). The spectrum is symmetric under a change of sign of \(\mu\), and therefore only positive values of \(\mu\) are displayed in the figure. As is known, the system is topological for \(|\mu|<2t\). In this region, there are two low-energy states at energies near zero. For the infinite chain, these states correspond to the left and right MZMs \(\gamma_{L}\) and \(\gamma_{R}\) localized at the ends of the chain. In a finite chain, these modes are mixed by an effective term \(\lambda i\gamma_{L}\gamma_{R}\) and the energies are split into \(\pm\lambda\). As expected, \(\lambda\) decays exponentially with increasing system size. From Fig. 1, one can see that \(\lambda\) decreases by almost four orders of magnitude when the length of the chain is increased from 20 to 50 sites. One can also see from the figures that \(\lambda\) oscillates as the chemical potential is varied. The period of oscillation is more than two times smaller for 50 sites in comparison with 20 sites, and it is also smaller for larger \(|\mu|\), near the topological transition to the trivial phase.

Figure 1: (Color online) Eigenvalues of the Kitaev chain for 20 sites (top) and 50 sites (middle and bottom) as a function of the chemical potential.

For the later discussion of the effects of the nearest-neighbor repulsion \(V\), we show in Fig. 2 the independent expectation values that enter the unrestricted Hartree-Fock approximation, Eq. (4), determined self-consistently. We have chosen \(t^{\prime}=0.2\), \(V=1\), \(\mu=0\) and a chain of 50 sites excluding the quantum dot. The results are rather insensitive to system size. As expected, the occupancy of the dot is near 1 when its energy is negative and large in magnitude compared to \(t^{\prime}\) (\(-\epsilon_{d}\gg t^{\prime}\)), it is equal to 1/2 for \(\epsilon_{d}=0\), and it is near 0 for \(\epsilon_{d}\gg t^{\prime}\). In contrast, the occupancy of the first site of the chain \(\left\langle n_{1}\right\rangle\) follows qualitatively the opposite behavior: when \(\left\langle n_{d}\right\rangle>1/2\), the first site feels the repulsion with the electrons in the dot and its occupancy decreases, but its
hopping with the rest of the chain moderates this effect and the occupancy deviates from 0.5 by less than 0.2. The expectation value of the hopping \(\langle d^{\dagger}c_{1}\rangle\) follows qualitatively the behavior expected for a diatomic heteronuclear molecule with a single orbital per atom, in which the two atomic states are hybridized. The expectation value is maximal when both atomic levels coincide (\(\epsilon_{d}=0\)) and decreases symmetrically with the difference between the atomic levels. The half width of the curve is expected to be of the order of the effective hopping, which in this case is \(t^{\prime}_{\rm eff}=t^{\prime}+V\langle d^{\dagger}c_{1}\rangle\). For \(\epsilon_{d}=0\), this value is near 0.5, considerably larger than the bare value \(t^{\prime}=0.2\). The pairing expectation value \(\langle d^{\dagger}c_{1}^{\dagger}\rangle\) follows qualitatively a similar dependence on the dot energy as the hopping contribution discussed above, but with smaller values. Its dependence on \(\epsilon_{d}\) is also narrower. Its physical origin is a proximity-induced \(p\)-wave superconductivity, which is larger when the energy of the dot is nearer to the chemical potential of the wire. The resulting eigenvalues of the system as a function of the dot energy for \(V=0\) and \(V=1\) are compared in Fig. 3 for a chain of 20 sites. For 50 sites the discussion below is practically the same, but the results are displayed more clearly in the smaller system. For the sake of brevity we omit the results for 50 sites. We discuss first the case \(V=0\). For large \(|\epsilon_{d}|\), the eigenvalues at small energies (of absolute value less than 1) are practically the same as those of the isolated Kitaev chain shown in Fig. 1. For \(\mu=0\), the results are symmetric under a change of sign of \(\epsilon_{d}\). In addition to the states of the isolated chain, there are, roughly speaking, two other symmetric states at energies \(\pm E_{m}\), which to a first approximation correspond to the level of larger absolute value of a heteronuclear molecule (like the one mentioned above) that mixes two states with energies \(\epsilon_{d}\) and zero. For large \(|\epsilon_{d}|\), \(E_{m}\sim\epsilon_{d}\), and for \(\epsilon_{d}=0\), \(E_{m}\sim t^{\prime}\). These states actually hybridize with the states of the isolated Kitaev chain, showing several anticrossings that are evident in Fig. 3. When \(V\) is included, the higher-energy eigenvalues are modified, particularly those related to the mixing of the dot state near \(\epsilon_{d}=0\). Since, as explained above, the effective hopping between the dot and the first site of the chain increases from \(t^{\prime}=0.2\) to \(t^{\prime}\sim 0.5\) when \(V\) is increased from 0 to 1, a similar change takes place for the energies that are near \(\pm t^{\prime}\) in Fig. 3. However, the two energies with lowest absolute value, related to the splitting of the MZMs, are modified very little by \(V\). In Fig. 4 we display the energies related to the MZMs for two values of \(\mu\) and a chain of 50 sites. One can see that for \(\mu=0\), the inclusion of the nearest-neighbor repulsion \(V\), at least within our unrestricted Hartree-Fock approximation, slightly _decreases_ the splitting of the two low-energy states, indicating that the quality of the MZMs is actually _improved_ when the repulsion is added. For \(\mu\neq 0\), the symmetry under a change of sign of the dot energy \(\epsilon_{d}\) is lost, and the asymmetry increases with \(V\).
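The self-consistent cycle behind Fig. 2 can be organized as a simple iteration: build the mean-field BdG matrix from the current expectation values, diagonalize, recompute the four averages of Eq. (4) from the positive-energy eigenvectors, and mix. A minimal numpy sketch follows; the particle-hole convention, the mixing scheme, and the helper name `hf_loop` are our own choices, not code from the paper.

```python
import numpy as np

def hf_loop(N, t=1.0, delta=0.2, mu=0.0, tp=0.2, V=1.0, eps_d=0.0,
            n_iter=300, mix=0.5):
    """Self-consistent Hartree-Fock cycle for Eq. (4); site 0 is the dot."""
    ns = N + 1
    nd, n1, hop, pair = 0.5, 0.5, 0.0, 0.0          # initial guesses
    for _ in range(n_iter):
        h = -mu * np.eye(ns)
        h[0, 0] = eps_d + V * (n1 - 0.5)            # Hartree shift of the dot level
        h[1, 1] = -mu + V * (nd - 0.5)              # ... and of the first chain site
        h[0, 1] = h[1, 0] = -(tp + V * hop)         # Fock-renormalized hopping t'_eff
        D = np.zeros((ns, ns))
        for j in range(1, ns - 1):
            h[j, j + 1] = h[j + 1, j] = -t
            D[j + 1, j], D[j, j + 1] = delta, -delta
        D[0, 1], D[1, 0] = V * pair, -V * pair      # induced dot-chain pairing
        E, W = np.linalg.eigh(np.block([[h, D], [-D, -h.T]]))
        M = W[:, E > 0] @ W[:, E > 0].T             # M[a, b] = <Psi_a Psi_b^+>
        # <a_i^+ a_j> = M[ns+i, ns+j]  and  <a_i^+ a_j^+> = M[ns+i, j]
        new = (M[ns, ns], M[ns + 1, ns + 1], M[ns, ns + 1], M[ns, 1])
        nd, n1, hop, pair = (mix * a + (1 - mix) * b
                             for a, b in zip(new, (nd, n1, hop, pair)))
    return nd, n1, hop, pair

print(hf_loop(50, eps_d=0.0))   # expect <n_d> ~ 0.5 and t'_eff ~ 0.5 for V = 1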
In any case, the effect of \(V\) on the quality of the MZMs remains very small. The shape of the curve is similar to that found in previous experiments [17] and theory [16; 22].

Figure 2: (Color online) Expectation values entering Eq. (4) as a function of the dot energy.

Figure 3: (Color online) Energies of the system as a function of the energy of the dot for \(V=0\) (top) and \(V=1\) (bottom).

In Fig. 5, we show the coefficients of the lowest eigenstate with positive energy for the parameters indicated inside the figure. The fermion is written as \(\sum_{i}\alpha_{i}f_{i}\), where the \(\alpha_{i}\) are the 102 coefficients and the order of the corresponding fermions \(f_{i}\) is \(f_{1}=d^{\dagger}\), \(f_{2}=d\), \(f_{3}=c_{1}^{\dagger}\), \(f_{4}=c_{1}\),... \(f_{102}=c_{50}\). As expected, the state is a mixture of the MZMs at the ends, with negligible weight in the middle of the chain. However, in contrast to the isolated Kitaev chain, there is a significant weight of the state also at the dot, with a probability about 1/10 of that of the first site in the chain. This probability increases with decreasing \(|\epsilon_{d}|\). Finally, in Fig. 6 we display the energies for a short chain of 5 sites, with a significant mixing of both MZMs at the ends of the chain. In this case, the weight of the MZM at the right end is significant also at the left end, and therefore it also feels the repulsion with the quantum dot. For \(V=0\) the shape is the characteristic "diamond" observed in experiment [17] and in calculations [16; 22] when the hopping between the quantum dot and the MZM at the right end \(\gamma_{R}\) is important [16; 22]. In contrast to the previous cases, the effect of adding the Coulomb repulsion is now significant, leading to a strong further splitting of the MZMs, of the order of a fraction of \(t^{\prime}\).

Figure 4: (Color online) Comparison of the two energies nearest to zero in the system as a function of the energy of the dot between \(V=0\) and \(V=1\) for \(\mu=0\) (top) and \(\mu=0.5\) (bottom).

Figure 5: (Color online) Coefficients of the lowest eigenstate of the system.

Figure 6: (Color online) Energies of the system as a function of the energy of the dot for \(V=0\) (top) and \(V=1\) (bottom).

## IV Summary and discussion We have solved a model for a Kitaev chain on a lattice, connected to a quantum dot at one of its ends by a hopping term and a Coulomb repulsion between the relevant state of the quantum dot and the end site of the chain. As the energy of the state of the quantum dot is varied, the energies of the two eigenstates of the system nearest to zero display one of the two characteristic shapes seen in experiment and previous theoretical work, signaling the presence of Majorana zero modes (MZMs) at the ends of the wire, coupled between them. In one of them, the energies of the two states cross when the energy of the quantum dot is near the Fermi energy. In this case, the coupling between the MZMs is weak and, analyzing the wave function of these eigenstates, one sees that one of the MZMs has a substantial weight at the quantum dot. Treating the Coulomb repulsion in the unrestricted Hartree-Fock approximation, we find that it does not essentially affect the quality of the MZMs. In contrast, in the other case, in which the energies of the low-lying states as a function of the dot level have a "diamond" shape, signaling a stronger coupling
between the MZMs (shorter chains), the effect of the interatomic Coulomb repulsion is significant, further splitting the MZMs. ###### Acknowledgements. R. K. T. P. holds a scholarship from Instituto Balseiro. A. A. A. acknowledges financial support provided by PICT 2017-2726 and PICT 2018-01546 of the ANPCyT, Argentina.
topological superconductor wire modeled by a Kitaev chain, one of its ends connected to a quantum dot through nearest-neighbor (NN) hopping and NN Coulomb repulsion. Using an unrestricted Hartree-Fock approximation to decouple the Coulomb term, we obtain that the quality of the Majorana end states is seriously affected by this term only when the dependence of the low-lying energies with the energy of the quantum dot shows a "diamond" shape, characteristic of short wires. We discuss limitations of the simplest effective models to describe the physics. We expect the same behavior in more realistic models for topological superconducting wires.
2302.14744
Tightness of prescriptive tree-based mixed-integer optimization formulations
We focus on modeling the relationship between an input feature vector and the predicted outcome of a trained decision tree using mixed-integer optimization. This can be used in many practical applications where a decision tree or tree ensemble is incorporated into an optimization problem to model the predicted outcomes of a decision. We propose tighter mixed-integer optimization formulations than those previously introduced. Existing formulations can be shown to have linear relaxations that have fractional extreme points, even for the simple case of modeling a single decision tree. A formulation we propose, based on a projected union of polyhedra approach, is ideal for a single decision tree. While the formulation is generally not ideal for tree ensembles or if additional constraints are added, it generally has fewer extreme points, leading to a faster time to solve, particularly if the formulation has relatively few trees. However, previous work has shown that formulations based on a binary representation of the feature vector perform well computationally and hence are attractive for use in practical applications. We present multiple approaches to tighten existing formulations with binary vectors, and show that fractional extreme points are removed when there are multiple splits on the same feature. At an extreme, we prove that this results in ideal formulations for tree ensembles modeling a one-dimensional feature vector. Building on this result, we also show via numerical simulations that these additional constraints result in significantly tighter linear relaxations when the feature vector is low dimensional. We also present instances where the time to solve to optimality is significantly improved using these formulations.
Max Biggs, Georgia Perakis
2023-02-28T16:44:10
http://arxiv.org/abs/2302.14744v1
# Tightness of prescriptive tree-based mixed-integer optimization formulations ###### Abstract We focus on modeling the relationship between an input feature vector and the predicted outcome of a trained decision tree using mixed-integer optimization. This can be used in many practical applications where a decision tree or tree ensemble is incorporated into an optimization problem to model the predicted outcomes of a decision. We propose tighter mixed-integer optimization formulations than those previously introduced. Existing formulations can be shown to have linear relaxations that have fractional extreme points, even for the simple case of modeling a single decision tree. A formulation we propose, based on a projected union of polyhedra approach, is ideal for a single decision tree. While the formulation is generally not ideal for tree ensembles or if additional constraints are added, it generally has fewer extreme points, leading to a faster time to solve, particularly if the formulation has relatively few trees. However, previous work has shown that formulations based on a binary representation of the feature vector perform well computationally and hence are attractive for use in practical applications. We present multiple approaches to tighten existing formulations with binary vectors, and show that fractional extreme points are removed when there are multiple splits on the same feature. At an extreme, we prove that this results in ideal formulations for tree ensembles modeling a one-dimensional feature vector. Building on this result, we also show via numerical simulations that these additional constraints result in significantly tighter linear relaxations when the feature vector is low dimensional. We also present instances where the time to solve to optimality is significantly improved using these formulations. _Key words_ : Tree ensembles, Prescriptive analytics, Mixed-integer optimization ## 1 Introduction A fundamental problem in operations research and management science is decision-making under uncertainty. Recently, attention has been given to modeling uncertain outcomes using machine learning functions, trained from previous decisions made under a variety of circumstances (Bertsimas et al. 2016, Cheng et al. 2017, Tjeng et al. 2017, Boob et al. 2022, Anderson et al. 2018, Bunel et al. 2018, Fischetti and Jo 2018, Kumar et al. 2019, Misic 2020, Biggs et al. 2022, Bergman et al. 2022). Due to the complex nature of real-world decision-making, often the model that best represents the outcomes observed is nonlinear, such as a neural network or a tree ensemble. This leads to a potentially complex optimization problem for the decision-maker to find the best decision, as predicted by the machine learning function. An example of this occurs in reinforcement learning, where the future reward resulting from a decision is uncertain but can be approximated using machine learning models, such as decision trees or tree ensembles. In some applications, such as playing Atari video games (Mnih et al. 2015), the decision set is small so all the decisions can be enumerated and evaluated. In comparison, in many real-world operational problems - for example, dynamic vehicle routing problems (Bent and Van Hentenryck 2007, Pillac et al. 2011) or kidney transplantation (Sonmez and Unver 2017, Ashlagi et al. 2018)- complex decisions whose outcomes are uncertain need to be made at every stage of an online process. 
These decisions are often high dimensional or combinatorial in nature and subject to constraints on what is feasible. This can result in a very large action space. As a result, enumeration is no longer a tractable option, and a more disciplined optimization approach must be taken. Furthermore, the selection of the best action is further complicated by the nonlinear value function approximation. One approach to finding optimal decisions when the outcome is estimated using a complex machine learning method is to use mixed-integer optimization (MIO) to model this relationship. In particular, there has recently been significant interest in modeling trained neural networks, by encoding these relationships using auxiliary binary variables and constraints (Cheng et al. 2017, Tjeng et al. 2017, Anderson et al. 2018, Bunel et al. 2018, Fischetti and Jo 2018, Kumar et al. 2019, Wang et al. 2021). Another popular and powerful approach for supervised learning, yet one that is less studied in the prescriptive setting, is tree ensemble methods. Misic (2020) provides unconstrained optimization examples in drug discovery, where a tree ensemble predicts a measure of the activity of a proposed compound, and customized price optimization, where a tree ensemble predicts the profit as a function of prices and store-level attributes. Biggs et al. (2022) provide examples in real estate development of maximizing the sale price of a new house that is predicted as a function of construction decisions and location features, and a method for creating fair juries based on jurors' predicted a priori propensities to vote guilty or not due to their demographics and beliefs. These applications have nontrivial constraints, but can be represented as polyhedra with integer variables. Additional applications of trained decision trees or tree ensembles embedded in an optimization problem include retail pricing (Ferreira et al. 2015), assortment optimization (Chen et al. 2019, Chen and Misic 2022), last-mile delivery (Liu et al. 2021), optimal power flow (Halilbasic et al. 2018), auction design (Verwer et al. 2017), constraint learning (Maragno et al. 2021) and Bayesian optimization (Thebelt et al. 2021). The goal in these works is often to propose tractable optimization formulations, which allow large problem instances to be solved in a reasonable amount of time. An important consideration when formulating these mixed-integer optimization formulations is how _tight_, or strong, the formulation is. Most methods for optimizing mixed-integer formulations involve relaxing the integrality requirements on variables and solving a continuous optimization problem. In the popular branch and bound algorithm, if the optimal solution is fractional for integer variables, then multiple subproblems are created with added constraints to exclude the fractional solution. If there are fewer fractional solutions for the relaxed problem, corresponding to a tighter formulation, this can result in a significantly faster time to solve. Furthermore, some problems can be formulated in such a way that the linear relaxation doesn't have any fractional extreme points, known as an _ideal_ formulation. Oftentimes these ideal formulations can be solved extremely quickly. Another benefit of stronger formulations is that the linear programming (LP) relaxations provide tighter upper bounds, which are also useful in many applications. An example of this is evaluating the robustness of a machine learning model (Carlini and Wagner 2017, Dvijotham et al. 2018). 
If an input can be perturbed by a practically insignificant amount and result in a significantly different prediction, this suggests that the model is not robust. Evaluating robustness can be formulated as a constrained optimization problem over local inputs to find the maximally different output. As finding the exact optimal bound can be time-consuming, often an upper bound on how much the solution could change is sufficient. ### Contributions We model the relationship between the input feature vector and the predicted output for a trained decision tree. This can be used in a range of optimization applications involving decision trees or tree ensembles. We present a novel mixed-integer optimization formulation based on a projected _union of polyhedra_ approach, which we prove is ideal for a single tree. We show that existing mixed-integer optimization formulations for modeling trees, such as Biggs et al. (2022) or Misic (2020) do not have this property. We also show that the constraints in our model are facet-defining. While this formulation is generally not ideal when we impose polyhedral constraints on the decision, or when multiple trees are used in an ensemble model, the formulation generally excludes fractional extreme points present in Biggs et al. (2022) and Misic (2020), leading to tighter formulations. We also present new formulations that use a binary representation of the feature vector as proposed in Misic (2020). While these variables are more difficult to incorporate into a constrained optimization formulation, they do have some advantages when it comes to the branching behavior in the MIO solver, leading to a faster time to solve in some instances. We propose different constraints that can be added to tighten the formulation from Misic (2020). The _expset_ formulation is based on exploiting the greater than or equal to representation of the feature vector from Misic (2020), leading to larger groups of leaf variables being turned off when a split is made. The _elbow_ formulation removes specific fractional solutions that arise when there are nested branches on the same feature in a tree. We characterize the conditions in which each of these constraints removes fractional solutions, which generally occurs in scenarios where there are multiple splits on the same feature. Extending this, we show that the _expset_ formulation leads to an ideal formulation when all the splits are on the same feature, which occurs for tree ensembles when the feature vector is one-dimensional. This property doesn't hold for the formulation in Misic (2020). In conjunction with the _union of polyhedra_ formulation being ideal for a single tree with multiple features, this result provides insights for the practitioner on when different formulations might be tighter. While not directly comparable due to the use of different variables, when there are many trees in the ensemble but relatively few variables, the _expset_ formulation is likely to be tighter. When there are few trees but many variables, the _union of polyhedra_ formulation is likely to be tighter. We explore the performance of these approaches through extensive simulations. In partial agreement with our theoretical findings, we show that in some instances, the _union of polyhedra_ formulation appears to have significant solve time improvements for tree ensembles with few trees. Similarly, the _elbow_ offers improvements for problems with few features. 
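Before writing down such formulations, note that the leaf data \((b_{l},u_{l},s_{l})\) of Section 2.1 can be recovered from a fitted tree by a single traversal. The sketch below assumes scikit-learn's `tree_` attributes (`children_left`, `children_right`, `feature`, `threshold`, `value`); the helper name `leaf_boxes` is our own.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def leaf_boxes(tree, lo, hi):
    """Recover (b_l, u_l, s_l) for every leaf of a fitted sklearn tree.

    lo, hi: global lower/upper bounds on each feature dimension.
    """
    t = tree.tree_
    boxes = []

    def recurse(node, b, u):
        if t.children_left[node] == -1:                   # leaf node
            boxes.append((b.copy(), u.copy(), float(t.value[node].ravel()[0])))
            return
        f, thr = t.feature[node], t.threshold[node]
        u_left = u.copy(); u_left[f] = min(u_left[f], thr)     # left: w_f <= thr
        recurse(t.children_left[node], b, u_left)
        b_right = b.copy(); b_right[f] = max(b_right[f], thr)  # right: w_f > thr
        recurse(t.children_right[node], b_right, u)

    recurse(0, np.asarray(lo, float), np.asarray(hi, float))
    return boxes

# toy usage
rng = np.random.default_rng(0)
X = rng.random((200, 2)); y = X[:, 0] + 2 * X[:, 1]
reg = DecisionTreeRegressor(max_depth=2).fit(X, y)
for b, u, s in leaf_boxes(reg, lo=[0, 0], hi=[1, 1]):
    print(b, u, round(s, 3))
```

When there are multiple splits on the same feature along a path, the running `min`/`max` implements exactly the bound-tightening rule described above.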
While the _expset_ formulation generally doesn't offer faster solve times, we show that the linear relaxations it provides can be significantly stronger which is useful in many applications where a bound on the optimal solution is desired, particularly for trees with few features. ## 2 Preliminaries Given a feature vector \(\boldsymbol{w}\in D\subseteq\mathbb{R}^{d}\), our goal is to model the output of a decision tree \(f^{(t)}(\boldsymbol{w})\) using a mixed-integer optimization formulation. More formally, we model the graph, \(gr(f^{(t)};D)=\{\boldsymbol{w},y_{t}|\boldsymbol{w}\in D,y_{t}=f^{(t)}( \boldsymbol{w})\}\). With such a formulation, we can easily model a range of practical applications, such as finding the optimal feature vector to maximize the predicted outcome of a tree ensemble \(\sum_{t=1}^{T}y_{t}\), or solving a reinforcement learning subproblem with complex constraints where the value function is given by a decision tree. ### Decision trees A decision tree \(f^{(t)}(\boldsymbol{w})\) with \(p\) leaves is a piecewise constant function, where a constant outcome \(s_{l}\) is predicted if feature vector \(\boldsymbol{w}\) falls within a particular leaf \(\mathcal{L}_{l},l\in[p]\), so that \(f^{(t)}(\boldsymbol{w})=s_{l}\) if \(\boldsymbol{w}\in\mathcal{L}_{l}\). Each leaf, \(\mathcal{L}_{l}\), is a hyperrectangular set defined by an upper \(u_{il}\) and a lower (bottom) \(b_{il}\) bound for each feature dimension \(w_{i},i\in\;[d]\). Throughout, we assume \(w_{i}\) is bounded. A leaf is defined as: \[\mathcal{L}_{l} = \{\boldsymbol{w},y\ |\ w_{i} \leq u_{il}\qquad\forall\ i\in\;[d], \tag{1a}\] \[w_{i} \geq b_{il}\qquad\forall\ i\in\;[d],\] (1b) \[y = s_{l}\} \tag{1c}\] The upper bounds and lower bounds associated with each leaf are defined by a hierarchy of axis-aligned splits. We use the often-used convention that the splits in the tree are of the form \(w_{i}\leq\theta\)(Pedregosa et al., 2011). These splits define the tree and partition the feature space into leaves. We denote \(\mathbf{splits}(t)\) as the set of splits corresponding to tree \(t\in T\), \(\mathbf{left}(s)\) as the set of leaves to the left of split \(s\) in the tree (i.e., those that satisfy the split condition \(w_{i}\leq\theta\)), and \(\mathbf{right}(s)\) as the set of leaves to the right for which \(w_{i}>\theta\). The upper bounds \(u_{il}\) are defined by the threshold of the left splits that lead to the leaf, while the lower bounds \(b_{il}\) are defined by the thresholds of the right splits. In the case where there are multiple axis-aligned splits along a dimension leading to a leaf (i.e., \(w_{1}\leq 5\) then \(w_{1}\leq 2\)), the upper bound will be the minimum of all less than splits, while the lower bound will be the maximum. When there are no splits on a feature, the upper and lower bounds on the leaf are the upper and lower bounds on the feature vector. Figure 1: Examples of decision tree with corresponding notation and partition of the feature space ### Mixed-integer optimization Our goal is to model the graph \(gr(f;D)\) using mixed-integer optimization. To facilitate this, often auxiliary continuous \(\boldsymbol{q}\in\mathbb{R}^{n}\) and integer variables are introduced to help model the complex relationships between variables, although the formulations we study require only binary variables \(\boldsymbol{z}\in\{0,1\}^{m}\). 
A mixed-integer optimization formulation consists of linear constraints on \((\boldsymbol{w},y,\boldsymbol{q},\boldsymbol{z})\in\mathbb{R}^{d+1+n+m}\) which define a polyhedron \(Q\), combined with binary constraints on \(z\in\{0,1\}^{m}\). For a valid formulation, the set \((\boldsymbol{w},y)\) associated with a feasible solution \((\boldsymbol{w},y,\boldsymbol{q},\boldsymbol{z})\in Q\cap\mathbb{R}^{d+1+n} \times\{0,1\}^{m}\) must be the same as the graph we desire to model \((\boldsymbol{w},y)\in gr(f;D)\). More formally, the auxiliary variables \((\boldsymbol{q},\boldsymbol{z})\) are removed via an orthogonal projection \(Proj_{\boldsymbol{w},y}(Q)=\{\boldsymbol{w},y\mid\exists\ \boldsymbol{q}, \boldsymbol{z}\ s.t.\ \boldsymbol{w},y,\boldsymbol{q},\boldsymbol{z}\in Q\}\), to leave a set of feasible \((\boldsymbol{w},y)\). Therefore, a valid mixed-integer optimization formulation may be defined as: Definition 1 (Valid mixed-integer optimization formulation).: \[gr(f;D)=Proj_{\boldsymbol{w},y}(Q\cap\mathbb{R}^{d+1+n}\times\{0,1\}^{m})\] We will refer to \(Q\) as the linear relaxation of the formulation, which is the MIO formulation with the integrality requirements removed. An MIO formulation is ideal if the extreme points of the polyhedron are binary for those variables that are required to be: Definition 2 (Ideal formulation).: \[\operatorname{ext}(Q)\subseteq\mathbb{R}^{d+1+n}\times\{0,1\}^{m}\] where \(\operatorname{ext}(Q)\) is the extreme points of the polyhedron \(Q\). ## 3 Further relevant literature Modeling trained tree ensembles using mixed-integer optimization is studied in Biggs et al. (2022) and Misic (2020). Misic (2020) proved this problem in NP-Hard and proposed formulations for unconstrained optimization problems or problems with simple box constraints on each variable. Mistry et al. (2021) provide a customized branch and bound algorithm for optimizing gradient-boosted tree ensembles based on the MIO formulation in Misic (2020), while Perakis and Thayanaran (2021) also propose a customized branching procedure. Biggs et al. (2022) proposes formulations that include polyhedral constraints. This approach uses the big-M approach to linearize the nonlinear behavior of the trees. To optimize large tree ensembles in a reasonable amount of time, both Misic (2020) and Biggs et al. (2022) offer ways to decompose a large tree ensemble and propose heuristic approaches that involve truncating trees to a limited depth (Misic 2020) or sampling a subset of the trees (Biggs et al. 2022). All of these approaches involve solving a mixed-integer optimization formulation of an ensemble of trees. We follow a "Predict then Optimize" approach, where we study formulations based on an already trained decision tree or tree ensemble, but there has also been significant recent interest in the joint estimation and optimization problem using trees to prescribe actions directly from data (Kallus 2017, Zhou et al. 2018, Bertsimas et al. 2019, Elmachtoub et al. 2020, Biggs et al. 2021, Jo et al. 2021, Amram et al. 2022). ### Formulation from Misic (2020) We review the formulation from Misic (2020) both as a benchmark, and to motivate the formulations we propose. Rather than linking the feature vector \(\mathbf{w}\) directly to the output \(f(\mathbf{w})\), Misic (2020) uses a binary representation of the feature vector \(\mathbf{w}\), which represents whether the feature falls below each split in the tree. 
Specifically, binary variables are introduced with \[x_{ij}=\begin{cases}1&\text{if }w_{i}\leq\theta_{ij}\\ 0&\text{if }w_{i}\geq\theta_{ij}\end{cases}\] where \(\theta_{ij}\) is the \(j^{th}\) largest split threshold associated with dimension \(i\). As a result, the \(\mathbf{x}_{i}\) vector has the structure of consecutive 0's, followed by consecutive 1's. For example, \(\mathbf{x}_{i}=\{0,1,1\}\), would correspond to a solution that falls between the first and second thresholds. A drawback of this approach is that additional constraints are needed to incorporate the binary split representation \(\mathbf{x}\) into a constrained optimization problem for \(\mathbf{w}\). To introduce the formulation from Misic (2020), we need to introduce some additional notation. \(C(s)\) corresponds to the ranking of threshold \(s\) relative to the size of other thresholds for that feature, and \(V(s)\) corresponds to the feature involved in the split. For example, if \(\theta_{ij}\) is the \(j^{th}\) largest threshold for feature \(i\) associated with split \(s\), then \(C(s)=j\) and \(V(s)=i\). \(K_{i}\) denotes the number of thresholds for feature \(i\). Auxiliary variables \(\mathbf{z}\) are introduced, where \(z_{l}=1\) if the feature vector falls in leaf \(l\). The polyhedron \(Q^{misic}\), which links the binary representation \(\mathbf{x}\) to the predicted outcome \(y\), is: \[Q^{misic}=\{\mathbf{x},y,\mathbf{z}\mid \sum_{l\in\mathbf{left}(s)}z_{l}\leq x_{V(s)C(s)}\qquad\forall s \;\in\;\mathbf{splits}(t) \tag{2a}\] \[\sum_{l\in\mathbf{right}(s)}z_{l}\leq 1-x_{V(s)C(s)}\qquad \forall s\;\in\;\mathbf{splits}(t)\] (2b) \[x_{ij}\leq x_{ij+1}\qquad\forall i\;\in\;[d],\;\forall j\;\in\;[K_{i}]\] (2c) \[\sum_{l=1}^{p}z_{l}=1,\quad y=\sum_{l=1}^{p}s_{l}z_{l}\] (2d) \[\mathbf{x}\in[0,1]^{K_{i}}\qquad\forall i\in[d],\;\mathbf{z}\geq 0\} \tag{2e}\] The corresponding MIO formulation imposes binary constraints on \(\mathbf{x}\in\{0,1\}^{K_{i}}\;\forall i\in[d]\), but they are not necessary for \(\mathbf{z}\). Constraint (2a) enforces that if the condition at a split is not satisfied, \(x_{V(s)C(s)}=0\), then the solution does not fall within a leaf to the left of that split in the tree, so \(z_{l}=0\;\forall l\;\in\mathbf{left}(s)\). Conversely in constraint (2b), if the split is satisfied, \(x_{V(s)C(s)}=1\), then all leaves to the right are set to 0. Constraint (2c) links the solution to the feature vector across trees. If the solution is less than the \(j^{th}\) split, \(x_{ij}=1\), then the solution must also be less than all splits greater than this. As such, \(x_{ik}=1\;\forall j<k<K_{i}\), and the vector has the structure of consecutive zeros followed by consecutive ones. An issue with the formulations presented in both Misic (2020) and Biggs et al. (2022) is that the linear relaxation can have many fractional solutions. This can make the MIO slow to solve. In fact, neither formulation is ideal even for the simple case of modeling a single decision tree without any additional constraints on a feasible decision, as we show in the following example. Example 1 (Misic (2020) not ideal for a single tree).: Suppose there is a tree that first branches on the condition \(w\leq 5\) and then on \(w\leq 2\), as shown in Figure 1(a). In this example, \(x_{1}=1\) if \(w\leq 5\), and \(0\) otherwise, while \(x_{2}=1\) if \(w\leq 2\). The variables \(z_{l}=1\) if the solution is in leaf \(l\). 
The resulting linear relaxation from Misic (2020) is: \[\{\boldsymbol{x},\boldsymbol{z}\ |\ z_{2} \leq 1-x_{2}, z_{3} \leq 1-x_{1}, x_{2} \leq x_{1} 0 \leq \boldsymbol{x}\leq 1,\] \[z_{1} \leq x_{2}, z_{1} +z_{2} \leq x_{1}, z_{1} +z_{2} +z_{3} = 1, 0 \leq \boldsymbol{z}\}\] This has an extreme point at \(z_{1}=0,\ z_{2}=0.5,\ z_{3}=0.5,\ x_{1}=0.5,\ x_{2}=0.5\), when constraints \(z_{2}\leq 1-x_{2},\ z_{3}\leq 1-x_{1},\ x_{2}\leq x_{1},\ z_{1}+z_{2}+z_{3}=1, \ z_{1}\geq 0\) are active. Example 2 (Biggs et al. (2022) not ideal for a single tree).: Again, suppose there is a tree that first branches on the condition \(w\leq 5\) and then on \(w\leq 2\), as shown in Figure 1(b). This formulation uses a slightly different notation, where \(x_{ij}=1\) if the arc is on the path to the active leaf, \(i\) corresponds to the parent node, \(j=1\) refers to the left branch, and \(j=2\) refers to the right branch. For example, if \(w\leq 2\), then \(x_{11},x_{21}=1\), while \(x_{12},x_{22}=0\). We also assume \(w\) is bounded, \(0\leq w\leq 10\), and following guidance in Biggs et al. (2022) for choosing the big-M value, we set \(M=15\). The resulting formulation in Biggs et al. (2022) is: \[\{\boldsymbol{x},w\ |\ w-15(1-x_{11})\leq 5, w-15(1-x_{21})\leq 2, x_{21}+x_{22}=x_{11}, 0\leq \boldsymbol{x}\leq 1\] Figure 2: Examples of trees with fractional solutions and notation \[w+15(1-x_{12})\geq 5,\quad\quad w+15(1-x_{22})\geq 2,\quad\quad x_{12}+x_{21}+x_{22}=1, \quad\quad 0\leq w\leq 10\}\] This has an extreme point at \(x_{11}=1/3,\ x_{12}=2/3,\ x_{21}=1/3,\ x_{22}=0,w=0\), when constraints \(w+15(1-x_{12})\geq 5,\ x_{21}+x_{22}=x_{11},\ x_{11}+x_{12}+x_{21}+x_{22}=1,\ w \geq 0,\ x_{22}\geq 0\) are active. Furthermore, this is not just a consequence of the choice of \(M\) but is still an issue regardless of this choice. ## 4 Union of polyhedron formulation We propose an alternative MIO formulation for decision trees, which is tighter in the sense that it is ideal for modeling a single tree, unlike those presented in Example 1 and 2. In contrast with the formulation in Misic (2020), our proposed formulation directly relates the feature vector \(\mathbf{w}\), to the output \(f^{(t)}(\mathbf{w})\), instead of using a binary representation of the feature vector. This has an advantage that constraints can be placed directly on the feature vector \(\mathbf{w}\) for problems with additional constraints that need to be modeled. We can formulate a tree as a union of polyhedra since the solution will always fall into one of the leaves (hyperrectangles) that partition the feature space. This can be achieved using the classical extended formulation from Jeroslow (1987), which introduces many auxiliary variables to model the set. This is also known as a "multiple choice" formulation Vielma and Nemhauser (2011): \[Q^{ext}=\{\mathbf{w},y,\mathbf{w}^{l},y^{l},\mathbf{z}|\ u_{li}z_{l}\geq w_{i}^{l}\quad \quad\forall i\in[d],\ \forall l\in[p] \tag{3a}\] \[b_{li}z_{l}\leq w_{i}^{l}\quad\quad\forall i\in[d],\ \forall l\in[p]\] (3b) \[y^{l}=s_{l}z_{l},\quad\quad\forall l\in[p]\] (3c) \[\sum_{l=1}^{p}z_{l}=1,\] (3d) \[w_{i}=\sum_{l=1}^{p}w_{i}^{l}\quad\quad\forall i\in[d]\] (3e) \[y=\sum_{l=1}^{p}y^{l}\quad\quad\forall l\in[p]\] (3f) \[z_{l}\in[0,1]\quad\quad\forall l\in[p]\} \tag{3g}\] The formulation works by creating \(p\) auxiliary copies of each variable, \(\mathbf{w}^{l}\in\mathbb{R}^{d},y^{l}\in\mathbb{R}\), corresponding to each leaf to make the MIO formulation. 
Auxiliary binary variables \(z_{l}\in\{0,1\}^{p}\) are also introduced, which indicate which leaf the solution falls into. When \(z_{l}=1\), constraints (3a), (3b), and (3c) define the feasible region and score for that leaf. When \(z_{l}=0\), these constraints enforce that \(\mathbf{w}^{l}\) is set to be a vector of zeros. Constraint (3d) ensures that only one leaf is chosen. Constraints (3e) and (3f) in turn define \(\mathbf{w}\) and \(y\) according to which leaf is active. This formulation is ideal, as proved in Jeroslow and Lowe (1984) and Balas (1985), so the linear relaxation is guaranteed to have integer extreme points. However, these formulations often have computational issues when solved in practice (Vielma, 2019). This formulation introduces a large number of auxiliary variables (\((p+1)(d+2)\) variables in total), as well as many constraints (\(2pd+3p+d+1\)). It is well known that these formulations suffer from degeneracy, as many of the auxiliary variables are set to 0, often resulting in poor performance in practice (Vielma, 2019). We can improve upon this formulation by projecting onto \(\mathbf{w}\). This eliminates the variables \(\mathbf{w}^{l}\) and thus results in a significantly smaller formulation. \[Q^{proj}=\{\mathbf{w},y,\mathbf{z}| \sum_{l=1}^{p}u_{li}z_{l}\geq w_{i} \forall i\in[d], \tag{4a}\] \[\sum_{l=1}^{p}b_{li}z_{l}\leq w_{i} \forall i\in[d],\] (4b) \[y=\sum_{l=1}^{p}z_{l}s_{l}\] (4c) \[\sum_{l=1}^{p}z_{l}=1\] (4d) \[z_{l}\in[0,1] \forall l\in[p]\} \tag{4e}\] We can prove that this formulation is still ideal for a single tree after this projection. Theorem 1 (Ideal formulation for a tree): _The polyhedron \(Q^{proj}\) is ideal._ This is proved in Appendix A.1. The main idea behind this proof is that the _union of polyhedra_ formulation (3) is ideal, and therefore the projection onto the variables \(\mathbf{w}\) is also ideal. These ideal projected formulations always exist but, in general, the projection is not a tractable operation and can result in a formulation with exponentially many constraints. In this special case, the resulting formulation (4) has only \(2d+1\) constraints (in addition to binary constraints) and \(p+d+1\) variables. Compared to formulation (3), this has significantly fewer variables and therefore does not suffer from degeneracy to the same extent. We also note that this formulation has considerably fewer constraints than that in Misic (2020), which has approximately \(3p\) constraints and \(2p\) variables, since typically \(d\ll p\). The significance of this result is that it suggests that tree-based optimization approaches that use formulation (4) will be tighter than those used in Biggs et al. (2022) or Misic (2020). Specifically, there are fractional solutions for each tree, as shown in Examples 1 and 2, which do not exist in formulation (4), although in general the intersection of different tree polytopes, as occurs in tree ensemble optimization, introduces additional fractional solutions. This also occurs for the intersection of a tree polytope and additional polyhedral constraints. However, in practice, this formulation often results in a faster time to solve, particularly for forests with relatively few trees.
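For concreteness, formulation (4) is straightforward to assemble with an off-the-shelf modeler once the leaf triples \((b_{l},u_{l},s_{l})\) are available (e.g., from a traversal like the `leaf_boxes` sketch in Section 2). The following is a minimal sketch using PuLP; the choice of modeler and the helper name `add_tree_qproj` are our own assumptions, and for an ensemble the function can be called once per tree and the returned \(y\) variables averaged.

```python
import pulp

def add_tree_qproj(prob, leaves, w, tag):
    """Attach formulation (4) for one tree to a PuLP model.

    leaves: list of (b_l, u_l, s_l) triples (lower bounds, upper bounds, score).
    w:      list of continuous feature variables, one per dimension.
    Returns y, a variable carrying this tree's prediction.
    """
    p, d = len(leaves), len(w)
    z = [pulp.LpVariable(f"z_{tag}_{l}", cat="Binary") for l in range(p)]
    y = pulp.LpVariable(f"y_{tag}")
    prob += pulp.lpSum(z) == 1                                           # (4d)
    prob += y == pulp.lpSum(s * zl for (_, _, s), zl in zip(leaves, z))  # (4c)
    for i in range(d):
        prob += pulp.lpSum(u[i] * zl for (_, u, _), zl in zip(leaves, z)) >= w[i]  # (4a)
        prob += pulp.lpSum(b[i] * zl for (b, _, _), zl in zip(leaves, z)) <= w[i]  # (4b)
    return y

# toy usage: maximize the prediction of the tree from Example 1 on the box [0, 10]
prob = pulp.LpProblem("single_tree", pulp.LpMaximize)
w = [pulp.LpVariable("w_0", lowBound=0, upBound=10)]
leaves = [([0], [2], 1.0), ([2], [5], 4.0), ([5], [10], 2.0)]   # w<=2 | 2<w<=5 | w>5
prob += add_tree_qproj(prob, leaves, w, "t0")
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(prob.objective), pulp.value(w[0]))
```

Note that, as in the formulations themselves, strict inequalities at leaf boundaries are not modeled; additional polyhedral constraints can be added directly on the `w` variables.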
If formulation (4) is reformulated slightly, we can prove some additional favorable properties, including, in particular, that the constraints are facet-defining. Definition 3 (Facet).: A face \(\mathcal{F}\) of a polyhedron \(\mathcal{P}\), represented by the inequality \(\boldsymbol{a}^{\prime}\boldsymbol{x}\geq b\), is called a facet of \(\mathcal{P}\) if \(dim(\mathcal{F})=dim(\mathcal{P})-1\). One of the variables \(z_{p}\) can be eliminated through the substitution \(z_{p}=1-\sum_{l=1}^{p-1}z_{l}\). Consequently, \(\boldsymbol{z}\in\{0,1\}^{p-1}\) and, as a result, \(\boldsymbol{z}=0\) implies \(\boldsymbol{w}\in\mathcal{L}_{p}\). This leads to the following formulation: \[Q^{facet}=\{\boldsymbol{w},y,\boldsymbol{z}| \ u_{pi}+\sum_{l=1}^{p-1}(u_{li}-u_{pi})z_{l}\geq w_{i}\qquad \forall i\in[d], \tag{5a}\] \[b_{pi}+\sum_{l=1}^{p-1}(b_{li}-b_{pi})z_{l}\leq w_{i}\qquad \forall i\in[d],\] (5b) \[y=s_{p}+\sum_{l=1}^{p-1}z_{l}(s_{l}-s_{p})\] (5c) \[z_{l}\in[0,1]\qquad\forall l\in[p-1]\} \tag{5d}\] We can show that under mild assumptions, (5a) and (5b) are facet-defining. Lemma 1: _For all \(l\in[p]\), assume \(\mathcal{L}_{l}\) is non-empty. Furthermore, assume that for some \(k\in[p]\), \(\mathcal{L}_{k}\) is full dimensional, i.e., \(dim(\mathcal{L}_{k})=d\). Then constraints (5a) and (5b) are facet-defining for leaf \(k\)._ This is proved in Appendix A.2 with a proof technique similar to that in Anderson et al. (2018). This result is significant because it suggests there is no redundancy in formulation (5). MIO formulations generally take longer to solve when there are redundant variables and constraints. ### Extensions to tree ensembles and additional constraints The formulation can be applied to tree ensembles such as random forests or gradient-boosted tree ensembles. While the polyhedron modeling an individual tree is ideal, this formulation is not ideal in general, as shown in this section. An alternative, but weaker, notion of tightness is whether a formulation is sharp. For a sharp formulation, the projection of the polyhedron \(Q\) onto the original variables \(\boldsymbol{w},y\) is equal to the convex hull (\(\operatorname{conv}(\cdot)\)) of the graph \(gr(f;D)\). This is formalized as follows: Definition 4 (Sharp formulation): \[\operatorname{conv}(gr(f;D))=Proj_{\boldsymbol{w},y}(Q)\] An ideal formulation is also sharp, but a sharp formulation isn't necessarily ideal. In Example 3 we give a simple tree ensemble that illustrates that the _union of polyhedra_ formulation is neither ideal nor sharp. Example 3 (Intersection of trees is not ideal or sharp): Suppose we have the following two trees in an ensemble: \[f^{(1)}(w)=\begin{cases}1&0\leq w\leq 1\\ 4&1<w\leq 3\end{cases}\qquad\quad f^{(2)}(w)=\begin{cases}2&0\leq w\leq 2\\ 3&2<w\leq 3\end{cases}\] This leads to a tree ensemble: \[0.5(f^{(1)}(w)+f^{(2)}(w))=\begin{cases}1.5&0\leq w\leq 1\\ 3&1<w\leq 2\\ 3.5&2<w\leq 3\end{cases}\] This is visualized in Figure 3, where \(f^{(1)}(w)\) is the blue line, \(f^{(2)}(w)\) is the red line and the ensemble \(0.5(f^{(1)}(w)+f^{(2)}(w))\) is the purple dashed line. The _union of polyhedra_ formulation for this is as follows: \[\{w,y,\boldsymbol{z}\ | \ z_{2}^{(1)}\leq w,\quad 2z_{2}^{(2)}\leq w,\quad y=0.5\left(z_{1}^{(1)}+4z_{2}^{(1)}+2z_{1}^{(2)}+3z_{2}^{(2)}\right),\] \[z_{1}^{(1)}+3z_{2}^{(1)}\geq w,\quad 2z_{1}^{(2)}+3z_{2}^{(2)}\geq w,\quad z_{1}^{(1)}+z_{2}^{(1)}=1,\ z_{1}^{(2)}+z_{2}^{(2)}=1,\ \boldsymbol{z},\boldsymbol{w}\geq 0\}\] A basic feasible solution for this formulation is \(w=1,\ z_{1}^{(1)}=0,\ z_{2}^{(1)}=1,\ z_{1}^{(2)}=0.5,\ z_{2}^{(2)}=0.5,\ y=3.25\), which is not integral, so the formulation is not ideal.
Furthermore, the projected solution, \(w=1,\ y=3.25\), is not in the convex hull of \(0.5(f^{(1)}(w)+f^{(2)}(w))\), so the formulation is not sharp.

Figure 3: Tree ensemble formulation is not ideal or sharp. Extreme points of \(Q^{proj}\) are shown with hollow circles, while the convex hull of the tree ensemble graph is shown in shaded purple.

This can be observed in Figure 3c, where the convex hull of the graph of the tree ensemble is shown in shaded purple. The extreme points of \(Q^{proj}\) projected into \(w,y\) space are shown with hollow circles. As can be observed, there are two extreme points of \(Q^{proj}\) that lie outside the convex hull of the graph. \(\quad\Box\) We also provide an example illustrating that adding additional constraints to the feature vector, which may be useful for many practical applications, does not preserve idealness. Example 4 (Adding additional constraints to a tree is not ideal).: Take the tree from Figure 1. Suppose that we add a simple constraint that \(w_{1}+w_{2}\leq 3\). Suppose additionally that there are upper and lower bounds on each feature, such that \(0\leq w_{1},w_{2}\leq 3\). The _union of polyhedra_ formulation is: \[\{w_{1},w_{2},\boldsymbol{z}\ |\quad 2(z_{1}+z_{2})+3z_{3}\geq w_{1},\quad 2z_{1}+3(z_{2}+z_{3})\geq w_{2},\quad z_{1}+z_{2}+z_{3}=1,\] \[2z_{3}\leq w_{1},\quad 2z_{2}\leq w_{2},\quad w_{1}+w_{2}\leq 3,\ \boldsymbol{z}\geq 0\}\] This has a fractional solution \(w_{1}=2/3,\ w_{2}=7/3,\ z_{1}=2/3,\ z_{2}=0.0,\ z_{3}=1/3\), so it is not ideal. \(\quad\Box\) While the intersection of trees is not ideal or sharp, it still removes a significant number of fractional solutions from the linear relaxation compared to the formulations from Misic (2020) or Biggs et al. (2022), leading to faster solve times, as explored empirically in Section 6. ## 5 Strengthening formulations with binary split variables We next present formulations that build upon the formulation from Misic (2020). In particular, these formulations use the binary variables from Misic (2020), which denote whether the feature vector is below each threshold in the tree. An advantage of this approach is its favorable branching behavior: setting a variable \(x_{ij}=1\) forces all variables with a split threshold above this to also be 1, due to the ordering constraints \(x_{ij}\leq x_{ij+1}\) (2c). In some cases, this results in a faster time to solve than the formulation in the previous section. We propose two ways to tighten this formulation to remove some of the fractional solutions, resulting in tighter linear relaxations and a faster time to solve in certain situations. ### Tighter formulation from variable structure To tighten the formulation from Misic (2020), we exploit the greater-than-or-equal-to representation of \(\mathbf{x}\), which leads to larger groups of leaf variables being turned off when a split is made. In Misic (2020), the \(\mathbf{x}\) variables consist of consecutive 0's followed by consecutive 1's, and if \(x_{ij}=0\), all variables \(z_{l}\) to the left of the split are equal to \(0\) (constraint (2a)). However, a stronger statement can be made. Due to the structure of \(\mathbf{x}\), all variables with lower thresholds are also equal to \(0\), i.e., \(x_{ik}=0\)\(\forall k<j\). This implies that variables \(z_{l}\) to the left of splits with lower thresholds must also equal \(0\). As an illustrative example, we examine the tree in Figure 4(a).
If \(w_{2}>5\) (\(x_{22}=0\)), then not only is the variable to the left of this split equal to \(0\), \(z_{3}=0\), but also \(z_{1}=0\), due to the constraint \(x_{21}\leq x_{22}\) (constraint (2c) from Misic (2020)). Rather than enforcing the relatively weak constraint from Misic (2020) that \(z_{3}\leq x_{22}\), it is tighter to directly enforce \(z_{1}+z_{3}\leq x_{22}\). Similarly, if \(x_{ij}=1\), this implies that the variables \(z_{l}\) to the right of any splits greater than the \(j^{th}\) split are also set to \(0\). For example, in Figure 4(a), if \(w_{2}\leq 2\) (\(x_{21}=1\)), then not only is the variable to the right of this split equal to \(0\) (\(z_{2}=0\)), but also \(z_{4}=0\), since the structure of \(\mathbf{x}\) implies that \(w_{2}\leq 5\) (\(x_{22}=1\)). To formalize this logic, we introduce new sets \(\mathbf{below}(s)\) and \(\mathbf{above}(s)\). The set \(\mathbf{below}(s)\) contains all leaves to the left of splits with thresholds less than or equal to the threshold at split \(s\) for a given tree. The set \(\mathbf{above}(s)\) contains all leaves to the right of splits with a threshold greater than or equal to the threshold at split \(s\). As such, for adjacent splits on the same feature, \(s_{ij}\) and \(s_{ij+1}\), we can define \(\mathbf{below}(s_{ij+1})=\mathbf{below}(s_{ij})\cup\mathbf{left}(s_{ij+1})\) and \(\mathbf{above}(s_{ij})=\mathbf{above}(s_{ij+1})\cup\mathbf{right}(s_{ij})\). For the smallest and largest splits, we have the initial conditions \(\mathbf{below}(s_{i1})=\mathbf{left}(s_{i1})\) and \(\mathbf{above}(s_{iK_{i}})=\mathbf{right}(s_{iK_{i}})\). An equivalent pair of definitions is \(\mathbf{below}(s_{ij})=\bigcup_{k\leq j}\mathbf{left}(s_{ik})\) and \(\mathbf{above}(s_{ij})=\bigcup_{k\geq j}\mathbf{right}(s_{ik})\). An example of these sets is illustrated in Figure 4a. As a result, we can introduce a new formulation \(Q^{expset}\), named after the notion of _expanded sets_, by replacing (2a) and (2b) with the following constraints: \[Q^{expset}= \{\boldsymbol{x},y,\boldsymbol{z}\ |\ \sum_{l\in\mathbf{below}(s)}z_{l}\leq x_{V(s)C(s)}\qquad\forall s\ \in\ \mathbf{splits}(t) \tag{8a}\] \[\sum_{l\in\mathbf{above}(s)}z_{l}\leq 1-x_{V(s)C(s)}\qquad\forall s\ \in\ \mathbf{splits}(t)\] (8b) \[x_{ij}\leq x_{ij+1}\qquad\forall i\ \in\ [d],\ \forall j\ \in\ [K_{i}]\] (8c) \[\sum_{l=1}^{p}z_{l}=1,\quad y=\sum_{l=1}^{p}s_{l}z_{l}\] (8d) \[\boldsymbol{x}\in[0,1]^{K_{i}}\qquad\forall i\in[d],\ \boldsymbol{z}\geq 0\} \tag{8e}\] Constraints (8a) and (8b) are the counterparts of (2a) and (2b). Constraint (8a) enforces that when the condition at the split is not satisfied, \(x_{V(s)C(s)}=0\), the solution does not fall within a leaf to the left of any split in the tree with a lower threshold for the same feature, while constraint (8b) enforces that all leaves to the right of greater splits are set to 0 if \(x_{V(s)C(s)}=1\), as discussed previously. It can be shown that when intersected with a binary lattice on \(\boldsymbol{x}\in\{0,1\}^{p}\), the feasible sets of the MIO formulations (2) and (8) are the same. However, the linear relaxation \(Q^{expset}\) is generally a subset of \(Q^{misic}\). This is shown in Proposition 1, which formalizes the rationale given above. **Proposition 1**: _The feasible sets associated with the MIO formulations of \(Q^{expset}\) and \(Q^{misic}\) are equivalent, but the linear relaxation \(Q^{expset}\) is a subset of \(Q^{misic}\)._
_Formally,_ \[Q^{expset}\cap(\{0,1\}^{p}\times\mathbb{R}^{1+p})=Q^{misic}\cap(\{0,1\}^{p}\times\mathbb{R}^{1+p}),\ \text{but}\ Q^{expset}\subseteq Q^{misic}\] We provide a formal proof in Appendix B. It can be shown that this formulation removes some fractional solutions from the LP relaxation of (2). In particular, this will occur when there are multiple splits on the same feature within the tree. To illustrate this, suppose we have two splits on the same variable, \(s\) and \(s^{\prime}\), where without loss of generality split \(s^{\prime}\) has the larger threshold. Define reduced polyhedra that only include the constraints related to these splits as follows: \[\tilde{Q}^{expset}(s,s^{\prime})=\{\boldsymbol{x},\boldsymbol{z} \ |\ \sum_{l\in\text{below}(s)}z_{l}\leq x_{V(s)C(s)},\ \sum_{l\in\text{above}(s)}z_{l}\leq 1-x_{V(s)C(s)},\] \[\sum_{l\in\text{below}(s^{\prime})}z_{l}\leq x_{V(s^{\prime})C(s^{\prime})},\ \sum_{l\in\text{above}(s^{\prime})}z_{l}\leq 1-x_{V(s^ {\prime})C(s^{\prime})},\ x_{V(s)C(s)}\leq x_{V(s^{\prime})C(s^{\prime})}\}\] \[\tilde{Q}^{misic}(s,s^{\prime})=\{\boldsymbol{x},\boldsymbol{z} \ |\ \sum_{l\in\text{left}(s)}z_{l}\leq x_{V(s)C(s)},\ \sum_{l\in\text{right}(s)}z_{l}\leq 1-x_{V(s)C(s)},\] \[\sum_{l\in\text{left}(s^{\prime})}z_{l}\leq x_{V(s^{\prime})C(s^{ \prime})},\ \sum_{l\in\text{right}(s^{\prime})}z_{l}\leq 1-x_{V(s^{\prime})C(s^{ \prime})},\ x_{V(s)C(s)}\leq x_{V(s^{\prime})C(s^{\prime})}\}\] Examining these polyhedra, we see that \(\tilde{Q}^{expset}(s,s^{\prime})\) is a strict subset of \(\tilde{Q}^{misic}(s,s^{\prime})\) when there are multiple splits on the same variable. Proposition 2: _Suppose we have two splits on the same variable, \(s\) and \(s^{\prime}\), where \(s^{\prime}\) corresponds to the split with the larger threshold. Then_ \[\tilde{Q}^{expset}(s,s^{\prime})\subset\tilde{Q}^{misic}(s,s^{\prime})\] This is proved in Appendix C. The proof involves exploring the potential relationships between splits \(s\) and \(s^{\prime}\) (where split \(s\) is a child of \(s^{\prime}\) in the tree, where \(s^{\prime}\) is a child of \(s\), and where neither is a child of the other) and finding solutions \((\boldsymbol{x},\boldsymbol{z})\) that are in \(\tilde{Q}^{misic}(s,s^{\prime})\) but not in \(\tilde{Q}^{expset}(s,s^{\prime})\). An example that illustrates the strict subset is given in Example 6 from Section 5.3. In this example, we see that formulation (2) has fractional solutions, while formulation (8) has only integer solutions. Generally, the more splits there are on the same feature in the tree, the more these constraints will tighten the formulation. At an extreme, we have the scenario where all splits in the tree are on the same feature. In the one-dimensional setting, it can be shown that the above formulation is ideal even for tree ensembles. Theorem 2 (Ideal formulation for one-dimensional tree ensembles): _The polyhedron defining a tree ensemble \(\cap_{i=1}^{T}Q_{i}^{expset}\) is ideal if the feature is one-dimensional (\(d=1\))._ This result is proved in Appendix E. It follows by proving that the matrix representation of the polyhedron is totally unimodular. In particular, the matrix has a special structure whereby it is possible to provide a bi-coloring of the columns, such that the difference in row sums between the two groups is in \(\{-1,0,1\}\). A result from Ghouila-Houri (1962) proves that such a matrix is totally unimodular.
A linear program \(\{\max\boldsymbol{c}^{\prime}\boldsymbol{x}|A\boldsymbol{x}\leq\boldsymbol{b}\}\) has integer optimal solutions if \(\boldsymbol{b}\) is integer and \(A\) is a totally unimodular matrix (Schrijver 1998). The significance of this result is that it emphasizes the tightness of this formulation, relative to other formulations that are not ideal in this situation and have fractional solutions. In particular, in Example 1, we showed that formulation (2) is not ideal even if the problem is one-dimensional with a single tree. Furthermore, although the formulation isn't ideal when the input vector has multiple dimensions, we empirically show in Section 6.1.1 that the relaxation is tighter when the input vector is low dimensional. It is interesting to contrast this result with Theorem 1, which states that the _union of polyhedra_ formulation is ideal for a single tree even with many features. This contrasts with Theorem 2, which shows the _expset_ formulation is ideal for many trees, but only if the ensemble has a single feature. While it is difficult to directly compare the tightness of these formulations, since they use different variables, this gives practitioners insight into the relative tightness of the different formulations. When there are many trees in the ensemble but relatively few variables, the _expset_ formulation is likely to be tighter. When there are few trees but many variables, the _union of polyhedra_ formulation is likely to be tighter. ### Tighter formulation from nested branches The relaxation of the formulation in the previous section still has some fractional extreme solutions, even in the case where a single tree is being modeled over multiple features. These fractional extreme solutions often arise when there are nested splits on the same feature, where one split follows another on the same branch. This is highlighted in the following example. Example 5 (Nested branches that can be tightened).: Consider a path to a leaf which has two splits on the same variable in opposing directions, as shown in Figure 5(a). Suppose we model this using formulation (2) from Misic (2020): \[\{x_{1},x_{2},z\ |\ z\leq x_{1},\ z\leq 1-x_{2},\ x_{2}\leq x_{1},\ 0\leq x_{1},x_{2}\leq 1,\ 0\leq z\}\] This has an extreme point \(z=0.5,\ x_{1}=0.5,\ x_{2}=0.5\), as shown in Figure 5(b). Consider the following reformulation: \[\{x_{1},x_{2},z\ |\ z\leq x_{1}-x_{2},\ 0\leq x_{1},x_{2}\leq 1,\ 0\leq z\}\] This is shown in Figure 5(c). As can be observed, this removes the fractional extreme point, leaving only integer extreme points.

Figure 5: Example: cuts removing an extreme point

These fractional extreme points generally occur when a split to the left is followed by a split to the right for the same feature, or vice versa. More formally, we can characterize a valid set of constraints as follows: We define \(\textbf{right\_parent}(s)\) as the set of splits that are above and to the right of split \(s\) in the tree, with the additional requirement that these splits be on the same feature. That is, split \(s\) is a left child of another split on the same feature in the tree. For the splits in this set, the thresholds are necessarily larger. We can also define \(\textbf{left\_parent}(s)\) as the set of splits that are above and to the left of split \(s\) for the same feature, for which the threshold is smaller. To illustrate this notation, in Figure 4b the split \(w_{2}\leq 2\) is the \(\textbf{left\_parent}\) of the split \(w_{2}\leq 4\).
We can generalize the constraints from Example 5 as follows:
\[\sum_{l\in\textbf{right}(s)}z_{l}\leq x_{V(s^{\prime})C(s^{\prime})}-x_{V(s)C(s)}\qquad\forall s\in\textbf{splits}(t),\ s^{\prime}\in\textbf{right\_parent}(s) \tag{9a}\]
\[\sum_{l\in\textbf{left}(s)}z_{l}\leq x_{V(s)C(s)}-x_{V(s^{\prime})C(s^{\prime})}\qquad\forall s\in\textbf{splits}(t),\ s^{\prime}\in\textbf{left\_parent}(s) \tag{9b}\]
If we define \(Q^{elbow}\) as the polyhedron created by adding constraints (9a) and (9b) to formulation (2) from Misic (2020), we can show that the relaxation of this formulation is tighter, while still having the same feasible region when \(\boldsymbol{x}\) is restricted to the binary lattice, as shown in Proposition 3.
Proposition 3: _The feasible sets associated with MIO formulations \(Q^{elbow}\) and \(Q^{misic}\) are equivalent, but the linear relaxation \(Q^{elbow}\) is a subset of \(Q^{misic}\). Formally,_
\[Q^{elbow}\cap(\{0,1\}^{p}\times\mathbb{R}^{1+p})=Q^{misic}\cap(\{0,1\}^{p}\times\mathbb{R}^{1+p}),\ \text{but}\ Q^{elbow}\subseteq Q^{misic}\]
This is proved formally in Appendix D. As illustrated in Example 5, the feasible region is often a strict subset when there are nested splits on the same feature (\(Q^{elbow}\subset Q^{misic}\)). This suggests that the more splits there are on the same features in the tree, the greater the improvement from using the _elbow_ formulation over Misic (2020). This also often occurs if the tree has fewer features. This is explored empirically in Section 6. However, simulation results suggest that the formulation is not ideal for tree ensembles with a single feature, unlike the _expset_ formulation.
### Comparison of tightening constraints
In this section, we compare the relative tightness of the _expset_ and _elbow_ formulations ((8) and (9), respectively). We will show that when these constraints are added separately to formulation (2) from Misic (2020), neither formulation is strictly tighter than the other. Rather, there are situations where one formulation is tighter than the other and vice versa, which we illustrate with examples. A simple example where formulation (8) is tighter than formulation (9) is when there are multiple splits on the same variable, but they do not have a nested structure. For example, in the tree in Figure 4a, there are two splits on \(w_{2}\), but these occur in different branches of the tree. In this situation, formulations (2) and (9) are the same, since the constraints are added only for nested pairs on the same feature. Furthermore, formulation (9) is not tight, but formulation (8) is tight.
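Mechanically, adding (9a)-(9b) to an existing model is a double loop over splits and their same-feature ancestors. A hedged gurobipy sketch follows; the containers `left`, `right`, `left_parent`, `right_parent` and the variable lookup `xvar` are hypothetical data structures chosen for illustration, not the paper's code:

```python
# Sketch: add the elbow constraints (9a)-(9b) to a gurobipy model `m`.
import gurobipy as gp

def add_elbow_constraints(m, z, xvar, splits, left, right, left_parent, right_parent):
    for s in splits:
        for sp in right_parent[s]:  # (9a): s is a left child of sp on the same feature
            m.addConstr(gp.quicksum(z[l] for l in right[s]) <= xvar[sp] - xvar[s])
        for sp in left_parent[s]:   # (9b): s is a right child of sp on the same feature
            m.addConstr(gp.quicksum(z[l] for l in left[s]) <= xvar[s] - xvar[sp])
```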
Example 6 (Expset formulation is tighter than elbow formulation). For the tree given in Figure 4a, formulation (9) (and formulation (2)) is:
\[\{\boldsymbol{x},\boldsymbol{z}\ |\ x_{11}\geq z_{1}+z_{2},\quad x_{21}\geq z_{2},\quad x_{22}\geq z_{3},\quad x_{21}\leq x_{22},\quad 0\leq\boldsymbol{x}\leq 1,\]
\[1-x_{11}\geq z_{3}+z_{4},\quad 1-x_{21}\geq z_{2},\quad 1-x_{22}\geq z_{4},\quad z_{1}+z_{2}+z_{3}+z_{4}=1,\quad 0\leq\boldsymbol{z}\}\]
On the other hand, formulation (8) is:
\[\{\boldsymbol{x},\boldsymbol{z}\ |\ x_{11}\geq z_{1}+z_{2},\quad x_{21}\geq z_{2},\quad x_{22}\geq\boxed{z_{1}}+z_{3},\quad x_{21}\leq x_{22},\quad 0\leq\boldsymbol{x}\leq 1,\]
\[1-x_{11}\geq z_{3}+z_{4},\quad 1-x_{21}\geq z_{2}+\boxed{z_{4}},\quad 1-x_{22}\geq z_{4},\quad z_{1}+z_{2}+z_{3}+z_{4}=1,\quad 0\leq\boldsymbol{z}\}\]
For convenience, the difference in the formulations has been highlighted. Formulation (9) has the fractional solutions \(x_{11}=0.5,\ x_{21}=0.5,\ x_{22}=0.5,\ z_{1}=0,\ z_{2}=0.5,\ z_{3}=0,\ z_{4}=0.5\) and \(x_{11}=0.5,\ x_{21}=0.5,\ x_{22}=0.5,\ z_{1}=0.5,\ z_{2}=0,\ z_{3}=0.5,\ z_{4}=0\), while formulation (8) has only integer solutions, since both fractional solutions violate the added constraints. \(\square\)
To further understand the difference between the constraints from formulations (9) and (8), it is useful to examine situations in which they are the same. In particular, suppose we have two nested splits on the same feature, such that \(s^{\prime}\in\mathbf{right\_parent}(s)\), as in the tree in Figure 5a. We will examine constraints (8a) and (8b) and see when they imply the alternative constraint (9a). Specifically, we require that \(\mathbf{above}(s)\) and \(\mathbf{below}(s^{\prime})\) cover the whole set of leaves, that is, \(\mathbf{below}(s^{\prime})\cup\mathbf{above}(s)=p\). This is formally stated in Lemma 2.
Lemma 2: _Suppose \(s^{\prime}\in\mathbf{right\_parent}(s)\). If \(\mathbf{below}(s^{\prime})\cup\mathbf{above}(s)=p\),_
\[Q^{misic}\cap\Big\{\sum_{l\in\mathbf{below}(s^{\prime})}z_{l}\leq x_{V(s^{\prime})C(s^{\prime})}\Big\}\cap\Big\{\sum_{l\in\mathbf{above}(s)}z_{l}\leq 1-x_{V(s)C(s)}\Big\}\implies Q^{misic}\cap\Big\{\sum_{l\in\mathbf{right}(s)}z_{l}\leq x_{V(s^{\prime})C(s^{\prime})}-x_{V(s)C(s)}\Big\}\]
_Similarly, suppose \(s^{\prime}\in\mathbf{left\_parent}(s)\). If \(\mathbf{above}(s^{\prime})\cup\mathbf{below}(s)=p\),_
\[Q^{misic}\cap\Big\{\sum_{l\in\mathbf{below}(s)}z_{l}\leq x_{V(s)C(s)}\Big\}\cap\Big\{\sum_{l\in\mathbf{above}(s^{\prime})}z_{l}\leq 1-x_{V(s^{\prime})C(s^{\prime})}\Big\}\implies Q^{misic}\cap\Big\{\sum_{l\in\mathbf{left}(s)}z_{l}\leq x_{V(s)C(s)}-x_{V(s^{\prime})C(s^{\prime})}\Big\}\]
This is proved in Appendix F. The condition \(\mathbf{below}(s^{\prime})\cup\mathbf{above}(s)=p\) is satisfied when all splits above \(s\) are on the same feature, or, as an extreme case, when the tree contains only one feature (the same condition as in Theorem 2). When these conditions are not met, including constraint (9a) will tighten the formulation. An example where this condition is not met and formulation (9) is tighter than formulation (8) occurs in Figure 3(b).
Example 7 (Elbow formulation is tighter than expset formulation). For the tree from Figure 3(b), formulation (8) is:
\[\{\boldsymbol{x},\boldsymbol{z}\ |\ x_{11}\geq z_{1},\quad x_{21}\geq z_{2},\quad x_{22}\geq z_{2}+\boxed{z_{3}},\quad x_{21}\leq x_{22},\quad 0\leq\boldsymbol{x}\leq 1,\]
\[1-x_{11}\geq z_{2}+z_{3}+z_{4},\quad 1-x_{21}\geq z_{3}+z_{4},\quad 1-x_{22}\geq z_{4},\quad z_{1}+z_{2}+z_{3}+z_{4}=1,\quad 0\leq\boldsymbol{z}\}\]
Formulation (9) is:
\[\{\boldsymbol{x},\boldsymbol{z}\ |\ x_{11}\geq z_{1},\quad x_{21}\geq z_{2},\quad x_{22}\geq z_{2},\quad x_{21}\leq x_{22},\quad z_{1}+z_{2}+z_{3}+z_{4}=1,\]
\[1-x_{11}\geq z_{2}+z_{3}+z_{4},\quad 1-x_{21}\geq z_{3}+z_{4},\quad 1-x_{22}\geq z_{4},\quad\boxed{x_{22}-x_{21}\geq z_{3}},\quad 0\leq\boldsymbol{x}\leq 1,\ 0\leq\boldsymbol{z}\}\]
For convenience, the difference in the formulations has again been highlighted. Formulation (8) has a fractional solution \(x_{11}=0.5,\ x_{21}=0.5,\ x_{22}=0.5,\ z_{1}=0.5,\ z_{2}=0,\ z_{3}=0.5,\ z_{4}=0\), while formulation (9) has only integer solutions. Since each formulation has the advantage of removing different fractional solutions, including both sets of constraints can tighten the formulation further. We empirically explore how much these additional constraints tighten the LP relaxation for various datasets in Section 6.1.1.
## 6 Numerical Experiments
In this section, we study the numerical performance of the formulations on both simulated and real-world data. We study two scenarios of practical interest. The first involves the time taken to solve to optimality for an objective estimated by a tree ensemble. We then focus on finding tight upper bounds to this problem, obtained by solving the linear relaxation.
### Experiments with tree ensembles
In this section, we examine the time taken to solve to optimality for a problem where the objective function is estimated using a random forest. We compare formulation (4), denoted projected, and formulation (9), denoted elbow, to formulation (2) from Misic (2020), denoted misc, and a formulation that uses the big-M method from Biggs et al. (2022), denoted bigM. The random forest is trained on previous decisions where the reward is generated from a simple triangle-shaped function, with noise added to the observed samples:
\[r_{i}=\sum_{j=1}^{d}(1-|w_{ij}|)+d\cdot\epsilon_{i}\]
For this problem, \(r_{i}\) is a sampled reward, \(w_{i}\sim U(-1,1)^{d}\) is a random decision vector with \(d\) features, and \(\epsilon_{i}\sim U(0,1)\) is added noise. There are no additional constraints placed on the variables other than those used to model the tree. We train a random forest on this data using scikit-learn (Pedregosa et al. 2011). We calculate the solve time to optimality with an increasing number of trees in the forest and an increasing number of features. We increase the number of trees according to \(\{1,2,4,8,16,32\}\), and the number of features from \(1\) to \(5\). We repeat the experiment for \(10\) randomly generated datasets for each forest size and number of features. We use default parameters and a maximum depth of \(20\) for each tree. For these parameters, each tree has an average of \(2893\) leaves. Table 1 shows example problem sizes of the formulations when there are \(5\) features: the number of constraints, the number of binary variables, and the sparsity of the constraint matrix measured by the number of nonzero entries. As noted earlier, the number of constraints in the projected formulation is substantially smaller, and the number of binary variables is also smaller than in the other formulations.
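The data-generating process and forest fit described above can be reproduced in a few lines; this is our sketch of the protocol, not the authors' script:

```python
# Generate the triangle-shaped reward with uniform noise and fit a random forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, d = 5000, 5
W = rng.uniform(-1, 1, size=(n, d))                         # decisions w_i ~ U(-1,1)^d
r = (1 - np.abs(W)).sum(axis=1) + d * rng.uniform(0, 1, n)  # r_i = sum_j(1-|w_ij|) + d*eps_i
forest = RandomForestRegressor(n_estimators=32, max_depth=20, random_state=0).fit(W, r)
# Each tree in forest.estimators_ can then be translated into MIO constraints.
```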
The MIO formulations were solved using the Gurobi solver (Gurobi Optimization 2019), with a time limit of 30 minutes (1800 s) for each trial but otherwise default parameters. The experiments were run on a MacBook Pro with an Intel 8-Core i9@2.4GHz and 32GB RAM. In Table 2 we report the time taken to solve to optimality for different-sized forests. Each result is averaged over 50 trials: 10 trials for each input vector of 1 to 5 dimensions. We note that the average time taken includes instances that did not reach optimality, recorded as the maximum time allocated (1800 s), so it is in fact a truncated mean. The percentage of instances that did not reach optimality is recorded in the last four columns.
\begin{table}
\begin{tabular}{llrrr}
\hline \# trees & method & constraints & binary variables & nonzeros \\ \hline
1 & projected & 11 & 2766 & 27709 \\
 & misc & 8276 & 5521 & 54917 \\
 & bigM & 16560 & 8287 & 41398 \\
 & elbow & 8865 & 5521 & 61927 \\ \hline
2 & projected & 22 & 5627 & 56857 \\
 & misc & 16873 & 11242 & 112953 \\
 & bigM & 33720 & 16869 & 84296 \\
 & elbow & 18038 & 11242 & 128593 \\ \hline
4 & projected & 44 & 11312 & 114060 \\
 & misc & 34003 & 22610 & 225842 \\
 & bigM & 67818 & 33922 & 169537 \\
 & elbow & 36404 & 22610 & 254902 \\ \hline
8 & projected & 88 & 22832 & 227507 \\
 & misc & 68964 & 45646 & 453909 \\
 & bigM & 136914 & 68478 & 342269 \\
 & elbow & 73692 & 45646 & 520911 \\ \hline
16 & projected & 176 & 45206 & 455015 \\
 & misc & 137322 & 90386 & 911007 \\
 & bigM & 271110 & 135592 & 677743 \\
 & elbow & 146789 & 90386 & 1032816 \\ \hline
32 & projected & 352 & 91990 & 924083 \\
 & misc & 282939 & 183938 & 1847111 \\
 & bigM & 551718 & 275928 & 1379231 \\
 & elbow & 302335 & 183938 & 2097640 \\ \hline
\end{tabular}
\end{table} Table 1: Problem sizes for instances with 5 features
As can be seen, the projected formulation is on average three to four times faster, and it finds an optimal solution more often within the given time. Figure 6 shows the results further broken down by the number of features, plotted on log-log axes for clarity.
Figure 6: Time taken to solve to optimality for random forests of varying sizes
We observe that the elbow formulation is often faster for tree ensembles with few trees. This might be useful in applications where many MIO problems need to be solved rapidly, such as policy iteration in reinforcement learning with tree-based value function approximations. We also observe a substantial solve-time improvement using the elbow formulation when there is one feature, which agrees with the results presented in Section 5.2. We omitted the expset formulation (8) from these results because, despite having a tighter linear relaxation (which is studied further in the following section), its solve time in practice was significantly slower. We conjecture that this is due to the increased density of its constraints, which contain many more variables, although it could also be due to other idiosyncrasies of MIO solvers.
#### 6.1.1 Tighter linear relaxations
A problem of practical interest is finding tight upper bounds for maximization problems over an objective estimated by a tree ensemble.
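Throughout this subsection, the upper bound is the objective of the LP relaxation. A minimal sketch of how the bound and the reported gap percentage can be computed, using the standard gurobipy API (`Model.relax()` drops integrality); this is our illustration, not the authors' code:

```python
import gurobipy as gp

def relaxation_gap(model: gp.Model) -> float:
    """Optimality gap (%) between the LP relaxation bound and the integer solution."""
    lp = model.relax()             # same constraints, integrality dropped
    lp.optimize()                  # upper bound for a maximization problem
    model.Params.TimeLimit = 1800  # 30-minute cap, as in the experiments above
    model.optimize()               # best integer solution found within the limit
    return 100.0 * (lp.ObjVal - model.ObjVal) / abs(model.ObjVal)
```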
For large problem instances, finding the optimal solution can be prohibitively slow, considering that MIO formulations often exhibit exponential solve times. The relative quality of a fast heuristic solution can be assessed if an upper bound on the objective can be found. Another application of upper bounds is the verification of the robustness of a machine learning model (Carlini and Wagner 2017, Dvijotham et al. 2018), whereby an optimization problem is solved over local inputs to find a maximally different output. Since finding the exact worst case can be prohibitively slow for large instances, a tight upper bound is often used instead (Carlini and Wagner 2017, Dvijotham et al. 2018).
\begin{table}
\begin{tabular}{lrrrrrrrr}
\hline & \multicolumn{4}{c}{truncated mean (s)} & \multicolumn{4}{c}{\% greater than 1800 s} \\ \cline{2-9}
trees & projected & misc & bigM & elbow & projected & misc & bigM & elbow \\ \hline
1 & 0.47 & 0.98 & 1.00 & 0.75 & 0 & 0 & 0 & 0 \\
2 & 0.92 & 2.09 & 1.96 & 1.67 & 0 & 0 & 0 & 0 \\
4 & 2.16 & 6.83 & 6.15 & 5.82 & 0 & 0 & 0 & 0 \\
8 & 8.50 & 49.14 & 56.16 & 36.82 & 0 & 0 & 0 & 0 \\
16 & 103.30 & 1111.25 & 628.49 & 914.28 & 0 & 0.42 & 0.14 & 0.38 \\
32 & 983.29 & 1552.09 & 1477.53 & 1363.65 & 0.32 & 0.76 & 0.66 & 0.7 \\ \hline
geometric mean & 9.67 & 32.52 & 29.27 & 26.35 & & & & \\ \hline
\end{tabular}
\end{table} Table 2: Time taken to solve to optimality
We assess the formulations from Section 5.1 by analyzing the tightness of the linear relaxation. We compare formulations that use the same variables, specifically formulation (8, expset), formulation (2, misc), and formulation (9, elbow). Additionally, we test a formulation that has both sets of tightening constraints (expset+elbow). We use the same data-generating process as in Section 6.1, except rather than solving to find the optimal integer solution, we solve only the linear relaxation. For these experiments, we use forests with \(\{2,4,6,8,10\}\) trees, and increase the number of features according to \(\{1,2,4,8,12\}\). Again, we repeat each experiment with 10 randomly generated datasets. Figure 7 shows the optimality gap percentage, calculated from the difference between the objective of the linear relaxation and the optimal integer solution, as the number of features increases. We observe the effect of Theorem 2, whereby for tree ensembles with one feature, formulations based on expset are ideal. Moreover, for problems with relatively few features, the formulation is significantly tighter than formulation misc, whereas when the number of features is larger, the improvement is smaller. This is likely due to more features being associated with fewer splits per feature. We note that in isolation, the constraints introduced in expset have a greater effect in tightening the formulation than those introduced in elbow, although combining both gives the tightest formulations. We also observe empirically that the elbow formulation is not ideal even in the single-feature case.
Figure 7: Tightness of linear relaxation
### Real-world data
We also study some datasets used to benchmark tree ensemble solve times in Misic (2020). In particular, we study the concrete dataset (Yeh 1998), with 1030 observations. The dependent variable is the compressive strength of concrete, with the independent variables being the characteristics of the concrete mix.\({}^{1}\) Optimization aims to find the concrete with the highest compressive strength. We also study the winequalityred dataset (Cortez et al. 2009), with 1599 observations.
The dependent variable is the quality of the wine, while the independent variables are characteristics of the wine.\({}^{2}\) As such, the optimization problem is to choose characteristics of the wine such that the quality is maximized.
Footnote 2: fixed acidity, volatile acidity, citric acid, residual sugar, chlorides, free sulfur dioxide, total sulfur dioxide, density, pH, sulfates, alcohol.
#### 6.2.1 Solve time
We explore the solve time of the different formulations for random forest tree ensembles of varying size \(\{10,20,40,80,160\}\) and varying feature vector dimension, \(\{1,3,5,7\}\) for concrete and \(\{1,5,10\}\) for winequalityred. To test the effect of dimension, we use the first \(k\) features to predict the output. As in the previous section, we set the maximum solve time to 30 minutes (1800 s).
Figure 9: Tightness of linear relaxation for random forests of varying sizes, concrete data
The results for concrete and winequalityred are in Figures 8 and 10, respectively.
Figure 10: Time taken to solve to optimality for random forests of varying sizes, winequalityred data
We observe that for both datasets, the projected formulation performs relatively better than the formulation from Misic (2020) for instances where the feature vector has a lower dimension (fewer features). On the other hand, for instances with a larger number of features, the formulation of Misic (2020) can be faster to solve. Furthermore, the projected formulation (4) appears to be relatively faster for formulations with a small number of trees, which is particularly pronounced in Figures 8(c) and 10(c). This is potentially an extension of Theorem 1; if (4) is ideal for a single tree, it is plausibly also relatively tighter for a small number of trees. Again, this might have applications where many smaller problems need to be solved quickly, such as in reinforcement learning. For these datasets, the performance of the elbow formulation is generally comparable to Misic (2020), although there are improvements on the concrete dataset when there are few features.
Figure 11: Tightness of linear relaxation for random forests of varying sizes, winequalityred data
#### 6.2.2 Tightness of linear relaxation
We also compare the tightness of the linear relaxations for the concrete and winequalityred datasets in Figures 9 and 11. Across both datasets, we observe a similar outcome to the synthetic data experiments, whereby elbow+expset is generally the tightest, followed by expset, and finally the original misc formulation. We also observe that the difference generally diminishes when there are more features in the data, potentially because there are fewer splits per feature, which is typically where the new formulations remove fractional points.
## 7 Conclusions and future work
In this paper, we have proposed a variety of new mixed-integer optimization formulations for modeling the relationship between an input feature vector and the predicted output of a trained decision tree. We have introduced formulations that build on the variable structure from Misic (2020) and formulations that use the input feature directly. We have shown these formulations are provably tighter than existing formulations in some scenarios and have also characterized when some are tighter than others. We have shown conditions under which these formulations are ideal, which gives further practical insight into when different formulations might be advantageous depending on the number of trees in the ensemble and the number of features the problem has. In addition to these theoretical insights, we have given experimental conditions where the different formulations succeed, both in terms of the time taken to solve to optimality and the tightness of the corresponding linear relaxations. While the experimental results do not always fully agree with the theoretical findings or intuition, due to the complex operations of commercial MIO solvers, we have identified situations where each formulation has advantages and laid the groundwork for future computational studies. For future work, an interesting avenue is exploring the relationship between the formulations we provide and different polyhedral constraints. While in general the formulations we provide are not ideal when combined with additional constraints, there may be special cases when they are, or at least cuts that can be introduced to remove some of the fractional solutions.
2309.16221
Off-the-shelf bin picking workcell with visual pose estimation: A case study on the World Robot Summit 2018 kitting task
The World Robot Summit 2018 Assembly Challenge included four different tasks. The kitting task, which required bin-picking, was the task in which the fewest points were obtained. However, bin-picking is a vital skill that can significantly increase the flexibility of robotic set-ups, and is, therefore, an important research field. In recent years advancements have been made in sensor technology and pose estimation algorithms. These advancements allow for better performance when performing visual pose estimation. This paper shows that by utilizing new vision sensors and pose estimation algorithms, pose estimation in bins can be performed successfully. We also implement a workcell for bin picking along with a force-based grasping approach to perform the complete bin picking. Our set-up is tested on the World Robot Summit 2018 Assembly Challenge and successfully obtains a higher score compared with all teams at the competition. This demonstrates that current technology can perform bin-picking at a much higher level compared with previous results.
Frederik Hagelskjær, Kasper Høj Lorenzen, Dirk Kraft
2023-09-28T07:52:31
http://arxiv.org/abs/2309.16221v1
# Off-the-shelf bin picking workcell with visual pose estimation: A case study on the World Robot Summit 2018 kitting task
###### Abstract
The World Robot Summit 2018 Assembly Challenge included four different tasks. The kitting task, which required bin-picking, was the task in which the fewest points were obtained. However, bin-picking is a vital skill that can significantly increase the flexibility of robotic set-ups, and is, therefore, an important research field. In recent years advancements have been made in sensor technology and pose estimation algorithms. These advancements allow for better performance when performing visual pose estimation. This paper shows that by utilizing new vision sensors and pose estimation algorithms, pose estimation in bins can be performed successfully. We also implement a workcell for bin picking along with a force-based grasping approach to perform the complete bin picking. Our set-up is tested on the World Robot Summit 2018 Assembly Challenge and successfully obtains a higher score compared with all teams at the competition. This demonstrates that current technology can perform bin-picking at a much higher level compared with previous results.
## I Introduction
Bin-picking is a fundamental task for industrial robotics. It allows for object feeding without a need for fixtures or manual insertion. This makes robotic solutions much more flexible and allows for adaptive production. A simple approach for bin picking is visual pose estimation. However, the set-up of a pose estimation solution is a difficult task. It can be very time-consuming to fine-tune parameters to obtain adequate performance [1]. Especially for industrial objects with shiny metallic surfaces, obtaining precise pose estimation can be very difficult. When grasping is integrated, this is further complicated [2]. This was especially demonstrated in the World Robot Summit Assembly Challenge in 2018 (WRSAC18) kitting task [3]. WRSAC18 had robotic workcells from all over the world competing in industrial robotics. The challenge consisted of four different tasks in which a robot should manipulate industrial objects. The competitors showed impressive results with state-of-the-art methods. However, the bin-picking task in particular was very difficult for all teams. In the task, fifteen different objects varying in size and shape were to be grasped. The objects were placed in homogeneous bins and were to be placed in a kitting tray in specific positions. During competition, ten selected objects were to be placed on kitting trays, repeated three times, with 20 minutes for completion, giving 40 seconds per object. Two points were earned per object, with an additional thirty points for a completed board, and with three repetitions this meant that 150 points could potentially be achieved [4]. However, the highest-scoring team only obtained 20 points. This meant that the best-performing team accomplished ten of the thirty bin-picks in twenty minutes. Additional analysis across all teams shows the per-object success rates, with the highest being 28.1 % and the lowest being 1.6 %. Analysis of the time spent on each task by the teams showed that on average 21 % of the time was spent on this task. The lack of performance was thus not simply a result of not prioritizing the task [5]. Additionally, the task was removed for the subsequent challenge in 2020 [6].
Fig. 1: The different steps in the bin picking. Initially the object is placed in the bin. From the point cloud the poses are estimated.
The best grasp pose is found in simulation, and finally the object is grasped.
In this paper we introduce an off-the-shelf bin picking workcell. The developed workcell is tested on the WRSAC18 kitting task for the large objects. These were the objects with the lowest scores during the challenge [3]. The workcell uses a modern depth sensor, the Zivid2, to obtain point clouds. We employ a modified version of a state-of-the-art pose estimation algorithm [7]. A robust bin picking procedure then grasps the objects, and they are finally placed in the kitting tray. Our developed workcell demonstrates good results for bin-picking. The workcell is able to successfully complete the kitting task for all large objects. The kitting lasts 28 seconds on average, and with an average of 40 seconds available per object, completing the full kitting is possible. We believe that the results of this workcell demonstrate the development in bin-picking in recent years compared with the results at WRSAC18. In this paper the following main contributions are presented.
* Adapted state-of-the-art algorithm for colorless pose estimation
* Implemented collision prediction grasp strategy for bin picking
* A flexible workcell for automatic set-up of bin picking
* State-of-the-art results for the WRSAC18 kitting task for large objects
The remaining paper is structured as follows: We first review related papers in Sec. II. In Sec. III, our developed method is explained and the different contributions are elaborated. In Sec. IV, the experiments are performed, and the performance is verified. Finally, in Sec. V, a conclusion is given, and further work is discussed.
## II Related Work
As bin picking is an important topic in creating flexible robotic set-ups, several different approaches have been developed. Solutions range from data-driven [8] to simulation-based [9] and digital-twin solutions [10]. Cameras are generally a part of the solution [11, 12]. At WRSAC18 several different bin picking solutions were presented. SDU Robotics was the team that obtained the overall highest score during the competition [13]. The solution presented by SDU Robotics consisted of two different solutions for small and large objects. For the small objects, a scooping mechanism was developed [14]. This allowed for mechanical singulation by shaking the object after scooping. A second robot then grasps the object from the scoop. However, this approach was not suitable for the larger objects, and a suction approach was created instead. After grasping, the object position in the hand is unknown, and to perform the kitting the pose needs to be found. This was performed by placing the object on a re-grasping table, calculating the pose using template matching [15] and grasping the object with a fingered gripper [13]. The team which obtained the second highest overall score was JAKS [16]. JAKS used a combined 2D/3D vision system for pose estimation combined with pre-registered grasp poses. Using the robot and bin model, they check the grasps for collision. Our method also employs collision checks for grasping; however, we additionally test for collisions against the sensor point cloud, and we allow for small collisions with the fingers. The JAKS pose estimation system is based on LINEMOD [17], a classical pose estimation method based on template matching. The pose estimation system does not work for the smaller objects, and a Hough-based search [18] is used instead.
Tactile sensing is used in the fingers for detecting grasps, and the encoders in the gripper are used to verify the correct grasp. We employ the same strategy of using the encoder to verify that an object is grasped; however, we rely on the gripper rather than tactile sensors to detect grasps. The team which obtained the highest score in the kitting task was Robotic Materials [3]. Their method relied on a gripper with an inbuilt 3D sensor [19, 20]. Their gripper is shaped with long narrow fingers which allow for moving into the bin. We employ the same design for our fingers. They employ grasp primitives, which could potentially lead to grasping of unknown objects. In our application the grasp poses are defined beforehand to ensure the object is in a known pose. The grasping strategy of Robotic Materials allows for approaching at inclined angles, which allows for more grasp poses. Our method utilizes the same approach to allow us to always have possible grasp poses. For some of the small objects, they used jigs to reorient the objects [5]. In this paper we focus on the larger objects and have not utilized any jigs. The team with the second highest score for the kitting challenge was Cambridge Robotics [21]. The team used an alternative gripper with an adhesive pad. The pad is moved towards the object until contact is detected. The adhesive surface holds the object while the robot moves from the bin to the kitting tray. A mechanism then releases the object by pushing it off the adhesive. The objects are detected using a neural network trained on 15,200 images. The paper shows results for only the small objects, as the adhesive would be less effective for heavier objects. The approach obtains good results, with some errors when the objects were smaller than the release mechanism and thus stayed attached to the finger. In a summary of the competition [5] it was speculated that the success of SDU Robotics came from a middle ground between top-of-the-line industrial solutions and custom-made solutions. Our implementation for the kitting task follows the same approach of a middle ground. All hardware for the task, i.e., robot, gripper and 3D sensor, are top-of-the-line industrial products. The software consists of custom-designed systems created for bin-picking. Additionally, the fingers are custom-made for the bin-picking task.
## III Method
The presented method is a pipeline for off-the-shelf bin picking. The method consists of both an offline set-up phase and an online run-time phase. The offline set-up phase consists of training the pose estimation algorithm and defining the grasp poses. During run-time, the robot moves the camera above the bin, object poses are computed, the best grasp pose is calculated and the robot attempts to grasp the object. This process is repeated until an object is grasped. The full pipeline is shown in Fig. 2. In the following sections the different parts of the bin picking set-up are elaborated. The workcell is described in Sec. III-A, in Sec. III-B the pose estimation algorithm is explained, Sec. III-C covers the grasp poses, and finally in Sec. III-D the grasp executor is detailed.
### _The Workcell_
The workcell for the bin picking consists of a UR5 robotic arm, a Robotiq HAND-E gripper with 3D-printed fingers and a Zivid2 robot-mounted depth sensor. The robot is placed on a table along with the trays and bins. The workcell uses a PC environment with an Intel i9-9900K 3.60GHz CPU and an NVIDIA GeForce RTX 2080 TI GPU. In the bin-picking task, the objects are placed randomly in the bin.
Thus the grasp poses cannot be computed offline. A digital twin of the workcell has been created to avoid collisions when grasping. This allows the system to check all grasp poses for collision before execution. Additionally, by configuring the positions in the digital twin, all actions are defined relative to the trays and bins. Thus multiple bins and trays can be placed around the workcell, and the robot can plan collision-free solutions. The real workcell and digital twin are shown in Fig. 3. The digital twin is implemented using RobWork [22].
#### III-B1 Finger Design
The finger design is inspired by the Robotic Materials finger [20]. The sharp curvature at the fingertip allows for moving into the bin without being blocked by minor collisions. Rubber is applied to the inside of the fingers to improve the friction of the grasp. The fingertips are shown grasping an object in Fig. 4.
### _Pose Estimation Algorithm_
To grasp and correctly place the objects, pose estimation is necessary. To perform the pose estimation, a variant of the ParaPose [7] algorithm has been implemented. This algorithm has been selected for several reasons. Firstly, the algorithm has state-of-the-art performance on the challenging Occlusion dataset [23]; secondly, the algorithm has a completely automatic set-up using synthetic data; and thirdly, the algorithm uses 3D point clouds for the pose estimation. The 3D input data is important as the provided CAD models often do not contain surface or color information. This generally holds for most CAD models in the manufacturing industry. Pose estimation algorithms without color information are thus necessary. To accommodate this need, the ParaPose algorithm is adapted to not use color information, and the RGB input is removed from the network.
**Instance Segmentation:** As the objects are placed in bins with homogeneous content, the detection task is vastly simplified. Using the digital twin of the workcell, the position of the bin is approximately known. Iterative Closest Point (ICP) is then used to obtain the actual position of the bin. The bin and remaining scene can then be removed from the point cloud, and only points belonging to the objects remain. As in PointVoteNet [24], we sample anchor points for the algorithm, which are then used for pose estimation. The task is thus reduced to instance segmentation for each point cloud at the anchor points. The full pose estimation procedure is as follows:
* Refine bin position with ICP
* Remove all points outside the bin
* Generate anchor points
* For each anchor point, sample a point cloud based on a radius
* Process each point cloud with the network and compute matches
* Perform multiple pose estimations for each point cloud and refine with ICP
* Sort according to depth check
* Non-maximum suppression based on ADDS [17]
**Set-up:** We use the same set-up for ParaPose as in the original paper [7]. However, the synthetic data is adapted to the bin-picking scenario. The data is generated using the BlenderBin\({}^{1}\) pipeline. BlenderBin is an extension of the BlenderProc [25] pipeline with a focus on bin-picking. The objects are placed in homogeneous bins, replicating the real-world scenario. Thus synthetic data is easily created, and the network is trained using this data.
Footnote 1: https://github.com/hansaskov/BlenderBin/
During the network training the domain randomization is automatically learned. As color information has been omitted, the RGB noise has been removed from the parameter tuning.
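As a concrete illustration of the first two steps of the pose estimation procedure above (bin refinement with ICP and cropping to the bin), here is a minimal open3d sketch. This is our reading of the pipeline rather than the authors' code; the file names, nominal pose and bin extents are hypothetical placeholders:

```python
# Refine the bin pose with ICP, then keep only points inside the bin cavity.
import numpy as np
import open3d as o3d

scene = o3d.io.read_point_cloud("scene.ply")          # depth-sensor capture (placeholder)
bin_model = o3d.io.read_point_cloud("bin_model.ply")  # sampled CAD model of the bin

T_init = np.eye(4)  # approximate bin pose taken from the digital twin
result = o3d.pipelines.registration.registration_icp(
    bin_model, scene, max_correspondence_distance=0.01, init=T_init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
T_bin = result.transformation  # refined pose of the bin in the camera/base frame

# Express the scene in the bin frame and crop to the (hypothetical) bin interior.
scene_in_bin = scene.transform(np.linalg.inv(T_bin))  # note: transforms in place
inside = o3d.geometry.AxisAlignedBoundingBox((-0.15, -0.20, 0.0), (0.15, 0.20, 0.12))
object_points = scene_in_bin.crop(inside)  # anchor points are then sampled from these
```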
While the pose estimation parameters are also optimized automatically in the original approach, we implement heuristic parameters in this method. The heuristic approach was chosen as the workcell remains static. Parameters will thus not change for new objects, and manual tuning is thus feasible.
Fig. 2: The overall workflow for a bin picking operation.
### _Grasp Poses_
As the objects are placed freely in six degrees of freedom (6DOF), the world-relative grasp poses are also freely placed in 6DOF. Thus a grasp pose can easily be invalidated, either because it cannot be reached or as a result of collision with the workspace. To ensure that the found objects can be grasped, grasp poses should be defined all around the object. If only a single grasp pose is defined, the object would often not be graspable. Our workcell thus works best with methods for automatically creating grasp poses. Such approaches could be, e.g., simulation such as GraspIt [26] or geometric primitives [27].
**Cylindrical Grasp Poses:** As all the objects in this task are cylindrical in shape, we have created a grasp pose generator based on this. A single good grasp pose can thus be used to generate a set of grasp poses around the object. By rotating 360 degrees around the object, along with a single 180-degree rotation around the hand (taking the symmetry of the parallel gripper into account), a full set of grasp poses can be created. Using this method, the robot can create effective grasp poses for any cylindrical object. Compared with generating grasp poses from simulation, this allows for consistent grasps independent of the object pose.
**Computing Grasp Solutions:** When the pose estimation has been performed, the set of all grasp poses, \({}_{base}T_{tcp}\), can be computed using Eq. 1, where \({}_{base}T_{obj}\) is the set of object poses and \({}_{obj}T_{tcp}\) is the set of grasp poses.
\[{}_{base}T_{tcp}=\ _{base}T_{obj}^{n}\ {}_{obj}T_{tcp}^{m} \tag{1}\]
This results in \(n\times m\) solutions for possible grasps in TCP space. Using the analytical solver [22], this returns \(8\times n\times m\) possible joint configurations. The goal is then to find the grasp pose with the shortest distance in joint space. As the joints can generally move in parallel, only the largest joint distance is relevant. The best grasp pose can thus be calculated as in Eq. 2,
\[\min_{s\in S}\|j_{s}-\hat{j}\|_{\infty} \tag{2}\]
where \(S\) is the space of all grasping solutions and \(\hat{j}\) is the current joint configuration. For each successive joint configuration, path planning is performed. If a collision-free path cannot be found, the solution is discarded and the next joint configuration is tested. If the path is collision-free, the position is accepted as a viable grasp pose.
Fig. 4: The fingers grasping object type 7 in the bin.
Fig. 3: The used workcell to accomplish the bin picking. The real workcell is shown to the left. To the right the digital twin is shown. The scene consists of the robot with gripper and camera, the bins (center) and kitting trays (bottom left). All actions are relative to the objects, and the bins and trays can, therefore, be moved around in the workcell freely.
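Eqs. (1)-(2) above amount to composing the two pose sets and taking the joint solution with the smallest largest joint move. A short numpy sketch of this selection step (the path-planning check is omitted, and `analytical_ik` is a hypothetical stand-in for the RobWork solver used in the paper):

```python
# Select the grasp whose joint configuration is closest (in the max-norm) to the
# current configuration j_hat, over all object poses, grasp poses and IK branches.
import numpy as np

def best_grasp(base_T_obj_list, obj_T_tcp_list, j_hat, analytical_ik):
    best, best_dist = None, np.inf
    for T_obj in base_T_obj_list:           # n estimated object poses
        for T_grasp in obj_T_tcp_list:      # m grasp poses around the object
            T_tcp = T_obj @ T_grasp         # Eq. (1): candidate TCP pose, 4x4 matrices
            for j in analytical_ik(T_tcp):  # up to 8 joint configurations per pose
                dist = np.max(np.abs(j - j_hat))  # joints move in parallel: Eq. (2)
                if dist < best_dist:
                    best, best_dist = j, dist
    return best
```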
#### III-C1 Additional sorting of poses
When calculating the grasp poses, an additional sorting of the poses is performed. This is done to optimize the success rate of the grasping operations without limiting the number of possible solutions. The sorting is made to prioritize grasps without any collision. Collisions are sorted into two types: collisions with objects and collisions with the bin. By allowing for these collisions, the space of possible solutions increases, avoiding the situation where an object cannot be grasped at all and the bin, therefore, cannot be emptied. However, it is expected that the failure rate for such grasps will be much higher. Thus we prioritize collision-free grasps to reduce the overall run-time. The object collisions are found using the depth data from the sensor. By projecting the fingers into the point cloud data, any collision with the objects can be found. Collisions with the bin are found using the workcell model. The method does not accept any collisions with the model, but if moving the fingers back 2 cm in TCP space removes the collision, there is a good chance that the grasp will succeed. Thus the search for viable grasp poses is first performed with collision-free poses \(T_{none}\), then poses with collisions with objects are used \(T_{obj}\), and finally poses with collisions with the bin are used \(T_{bin}\). The complete set of poses is thus as in Eq. 3.
\[T_{poses}=\{T_{none};T_{obj};T_{bin}\} \tag{3}\]
### _The Grasp Executor_
The grasp execution is designed to compensate for the fact that collisions can occur while grasping the object. The robot thus moves to the grasp pose in collision mode. If a collision is detected, the robot switches into force mode and moves towards the grasp pose. At either timeout or when reaching the position, the robot stops moving. The fingers are then closed, and a grasp can be verified by checking that they are not completely closed. Before starting the grasp, the fingers are opened according to the object size so as to decrease collisions with other objects. The procedure is shown in Fig. 5.
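Schematically, the executor logic of Fig. 5 can be written as below. The `robot` and `gripper` interfaces are hypothetical stand-ins for the UR5 and Robotiq drivers, shown only to make the control flow concrete; this is a sketch of our reading of the procedure, not the actual controller code:

```python
# Executor sketch: approach in collision mode, push in force mode if needed,
# then close the fingers and verify the grasp from the finger opening.
def execute_grasp(robot, gripper, grasp_pose, object_width, timeout=3.0):
    gripper.open(object_width + 0.01)              # pre-open slightly wider than the object
    robot.move_to(grasp_pose, stop_on_collision=True)
    if robot.collision_detected():
        robot.force_move(grasp_pose, timeout=timeout)  # push gently toward the pose
    gripper.close()
    return gripper.width() > 0.001                 # not fully closed => object is held
```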
## IV Experiments
To demonstrate the performance of the developed workcell, tests are performed on the six large objects from WRSAC18. The number of objects in the bins is set to resemble the lineup at WRSAC18 [3]. The objects in bins are shown in Fig. 6. In the experiments, the vision system is used for pose estimation and the grasp planner computes the optimal grasp pose. The grasp executor then finally performs the grasping.
### _Kitting Task_
To demonstrate that the workcell is able to perform the kitting task, we replicated the requirements of the competition, with three sets of trays to complete. Each set of trays was removed after completion, and new empty trays were inserted. During the full run, no manual manipulation was performed on the workcell except for the replacement of the trays. The task is run with all large objects, giving 18 object instances to be bin-picked. The kitting was performed successfully with 9 failed grasp attempts. However, each failure was recovered from. With two points per object this results in 32 points in all. But at the competition only 12 large objects were to be bin-picked, so a maximum of 24 points could be obtained. However, as the best-scoring team at the competition obtained 20 points, our method still outperforms the other methods. The full kitting task had a 20-minute run-time, with 30 objects in all, allowing an average per-object run-time of 40 seconds. The complete run-time of our system for the 18 objects was 8 minutes and 32 seconds. Thus each bin picking lasted 28 seconds on average, well under the 40-second limit for each object. This indicates that the remaining points can still be obtained, and our method is appropriate to complete the full task.
### _Analysis of performance_
To analyze the performance of the workcell, extensive testing is performed. For each object, more than one hundred bin-pickings are executed.
Fig. 5: The workflow when a grasp is performed. The ideal grasp moves to the intended pose, and force mode is only activated if collision is detected.
Fig. 6: The experimental set-up of the six bins. The ids are according to [3].
The bin-picking was performed until the bin was emptied; the bin was then refilled, the objects were scrambled, and the bin-picking resumed. During the bin-picking we record the overall success rate, and the success rate based on whether no collision was predicted, or whether collision was predicted with objects or with the bin. We also record the success rate based on whether collision actually occurred, and whether the grasping reached timeout. Additionally, we present the system's ability to predict collisions correctly. The results of the experiments are shown in Tab. I. From the experiments, several interesting observations are made. It is clear that the collision prediction improves the performance. The success rate when no collision is predicted is much higher than when collision is predicted. It is thus valuable to prioritize poses without collision. Additionally, while the success rate decreases when a collision is predicted, the system is often able to perform a grasp. Thus, if objects are only placed in positions which will result in collisions, the system is still able to perform grasps. This, combined with the system's ability to recover from failures, allows the system to empty bins even if objects are placed in challenging grasp poses. Additionally, the ability to grasp when actual collisions occur is shown. While collision-free grasps are more successful, the method is still able to grasp objects in collision. The success rate for grasps when the timeout was reached is even lower, but grasps still succeed. Increasing the timeout run-time could increase the success rate but would increase the overall run-time. This is an interesting topic for further research. The accuracy of actually predicting a collision is between 72.0 % and 85.1 %. This shows that, while improving performance, the collision prediction is also fairly accurate.
#### IV-B1 Grasp Types
To further analyze the results of grasps, the results for object 7 are split into grasp types. For the object, six different grasps have been created; the grasps are shown in Fig. 7. The results for each of the different grasp types are shown in Tab. II. From the results, several interesting observations are noted. All grasps are used during the process, so having full coverage is important. Type 1 is the most used grasp, which indicates that the object often ends up in a position where this is the most desirable grasp. This is also seen by the fact that the number of collisions is very low. The second most applied grasp type is 6. However, here the success rate is lower, and the collision rate is higher. This is possibly a result of collisions with the object when trying to insert the finger. Grasps 2 through 5 are all variations of the same grasp, with 2 and 5 being the most prevalent. These are possibly the most stable poses in the bin.
## V Conclusion
This paper presented a workcell for off-the-shelf bin-picking. The bin-picking abilities are tested on the large objects from the World Robot Summit Assembly Challenge 2018. Here the workcell shows state-of-the-art performance by successfully kitting all objects within the time limit.
To accomplish this task, a colorless pose estimation approach using point clouds has been developed. Combined with a grasp planner, grasp poses are obtained. Our developed grasp executor then completes the grasp and adapts to any collisions. Extensive testing shows the validity of our approach. We believe that these results demonstrate the progress of robotics and computer vision in recent years. In future work, it would be interesting to apply zero-shot pose estimation methods in the workcell. This could be combined with using real training data. While manual data collection is very expensive, it could here be performed automatically. As the bin-picking successfully grasps objects, the feedback could be used to collect data and improve the pose estimation algorithm. Another interesting direction would be the completion of the full kitting challenge. However, the smaller objects might introduce a need for more components such as re-grasping or fixtures. Additionally, novel large objects could be introduced and the workcell performance could be tested.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline Type & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline \hline
Attempts & 51 & 11 & 3 & 7 & 17 & 38 \\ \hline
Success & 50 & 10 & 2 & 4 & 16 & 24 \\ \hline
Collision & 2 & 0 & 1 & 0 & 3 & 14 \\ \hline
\end{tabular}
\end{table} TABLE II: Results for the different grasp types of object 7.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline Number & 4 & 11 & 13 & 7 & 8 & 5 \\
Type & Motor & Pulley & Idler & Bearing & Shaft & Pulley \\ \hline \hline
Overall & 78.7 & 69.4 & 85.1 & 85.0 & 80.0 & 80.3 \\ \hline
No Col. Pred. & 100 & 76.5 & 96.7 & 87.5 & 100 & 98.6 \\
Obj. Col. Pred. & 81.4 & 57.1 & 50.0 & 79.3 & 100 & 53.8 \\
Bin Col. Pred. & 78.8 & 429.5 & 100 & 50.0 & 73.1 & 58.6 \\ \hline
No Collision & 100 & 77.7 & 97.0 & 90.7 & 85.4 & 96.9 \\
Collision & 50.0 & 40.6 & 57.1 & 55.0 & 71.4 & 29.0 \\
Timeout & 27.8 & 20.0 & 40.0 & 14.3 & 62.3 & 16.7 \\ \hline \hline
Pred. Acc. & 77.0 & 79.2 & 85.1 & 77.2 & 72.0 & 76.4 \\ \hline
\end{tabular}
\end{table} TABLE I: Analysis of the bin picking success rate for the different objects. The success rate is shown overall; when collision is predicted and when it is not; when actual collision occurred and when it did not; and when the grasp resulted in timeout. Additionally, we show the accuracy of the system in predicting collisions.
Fig. 7: The six different grasps defined for object 7. Types 2, 3, 4 and 5 are similar, with the relative angle to z varying.
The World Robot Summit 2018 Assembly Challenge included four different tasks. The kitting task, which required bin-picking, was the task in which the fewest points were obtained. However, bin-picking is a vital skill that can greatly increase the flexibility of robotic systems and is therefore an important field of research. In recent years, advances in sensor technology and pose estimation algorithms have made it possible to improve the performance of visual pose estimation. This paper shows that pose estimation in bins can be performed successfully by using new vision sensors and pose estimation algorithms. We also implement a workcell for bin picking together with a force-based grasping approach to perform the complete bin picking. This set-up was tested on the World Robot Summit 2018 Assembly Challenge and obtained a higher score than the competing teams. This demonstrates that current technology can perform bin-picking at a much higher level compared with previous results.
2310.02337
Hilbert Expansion of Boltzmann Equation with Soft Potentials and Specular Boundary Condition in Half-space
Boundary effects play an important role in the study of hydrodynamic limits in the Boltzmann theory. We justify rigorously the validity of the hydrodynamic limit from the Boltzmann equation of soft potentials to the compressible Euler equations by the Hilbert expansion with multi-scales. Specifically, the Boltzmann solutions are expanded into three parts: interior part, viscous boundary layer and Knudsen boundary layer. Due to the weak effect of the collision frequency of soft potentials, a new difficulty arises when tackling the existence of Knudsen layer solutions with a spatial decay rate, which has been overcome under certain constraint conditions and by velocity-weight-loss arguments.
Jing Ouyang, Yong Wang
2023-09-19T01:44:53
http://arxiv.org/abs/2310.02337v1
Hilbert expansion of Boltzmann equation with soft potentials and specular boundary condition in half-space
###### Abstract.
Boundary effects play an important role in the study of hydrodynamic limits in the Boltzmann theory. We justify rigorously the validity of the hydrodynamic limit from the Boltzmann equation of soft potentials to the compressible Euler equations by the Hilbert expansion with multi-scales. Specifically, the Boltzmann solutions are expanded into three parts: interior part, viscous boundary layer and Knudsen boundary layer. Due to the weak effect of the collision frequency of soft potentials, a new difficulty arises when tackling the existence of Knudsen layer solutions with a spatial decay rate, which has been overcome under certain constraint conditions and by velocity-weight-loss arguments.
Key words and phrases: Boltzmann equation, compressible Euler equations, hydrodynamic limit, Hilbert expansion, viscous boundary layer, Knudsen boundary layer
###### Contents
* 1 Introduction and Main Results
* 1.1 Introduction
* 1.2 Asymptotic expansion
* 1.3 Hilbert expansion
* 2 Some Estimates for Soft Boltzmann Operators
* 2.1 Preliminaries
* 2.2 Estimate for \(\mathbf{L}^{-1}\)
* 3 Existence of a Steady Linear Boltzmann Equation
* 3.1 Approximate solutions and uniform estimate
* 3.2 Proof of Theorem 3.1
* 4 Hilbert Expansions for Boltzmann Equation of Soft Potentials
* 4.1 Linear parts of Hilbert expansion
* 4.2 Estimates on the remainder
* 4.3 Proof of Theorem 1.1
## 1. Introduction and Main Results
### 1.1. Introduction
It is well-known that the Boltzmann equation is closely related to the fluid dynamical systems for both compressible and incompressible flows, since the founding work of Maxwell [36] and Boltzmann [6]. In 1912, Hilbert proposed a systematic formal asymptotic expansion for the Boltzmann equation with respect to the Knudsen number \(\mathscr{K}_{n}\ll 1\). In 1916 and 1917, Enskog and Chapman independently proposed a different formal expansion, respectively. Based on the Hilbert or Chapman-Enskog expansions, the standard fluid theory can be derived formally, for instance: the compressible Euler and Navier-Stokes equations, the incompressible Euler and Navier-Stokes (Fourier) equations, etc. In the past decades, great effort has been devoted to the study of the hydrodynamic limit from the Boltzmann equation to the fluid systems. When the solutions of the compressible Euler equations are smooth, Caflisch [7] rigorously justified the hydrodynamic limit of the Boltzmann equation to the compressible Euler equations by a truncated Hilbert expansion; see also [14, 33, 38, 41], and [17, 18] via a recent \(L^{2}\)-\(L^{\infty}\) framework. When the solutions of the compressible Euler equations consist of the basic wave patterns (singularities), the convergence has been established in [25, 26, 27, 47, 48] in the one-dimensional case, and in [43] for the multi-dimensional planar rarefaction wave. There is also a large literature on the hydrodynamic limit of the Boltzmann equation to the incompressible fluid equations; see [2, 3, 5, 12, 15, 21, 29, 35, 44] for the incompressible Navier-Stokes equations, [23, 28] for the incompressible Euler equations, and the references cited therein. All of the above-mentioned works on the compressible Euler limit were carried out in either a spatially periodic domain or the whole space. However, in many important physical models the physical boundaries occur naturally, and the boundary effects play an important role in the study of hydrodynamic limits in the Boltzmann theory.
For the initial boundary value problem, by a formal analysis, Sone [40] showed that the solutions contain three parts, i.e., the interior part, the viscous boundary layer and the Knudsen boundary layer. Recently, based on a systematic study of the viscous and Knudsen layers and the \(L^{2}\)-\(L^{\infty}\) framework, Guo-Huang-Wang [20] first justified rigorously the validity of the Hilbert expansion for the hard sphere Boltzmann equation with specular reflection boundary condition in half-space, which leads to derivations of both the compressible Euler equations and the acoustic equations; see [31] for the Maxwell reflection boundary condition of hard potentials and [32] for the diffuse reflection boundary condition of hard sphere. In the present paper, we aim to justify the hydrodynamic limit to the compressible Euler equations for the Boltzmann equation of soft potentials. The new difficulty for the soft potentials is that it is hard to establish the existence of solutions of the Knudsen boundary layer with a sufficient space decay rate, which is crucial to close the Hilbert expansion. To our knowledge, for the specular boundary condition, the known results [13, 22] on the existence of the Knudsen boundary layer are for hard sphere, and the exponential space decay was also obtained due to the strong effect of the collision frequency \(\nu\cong 1+|v|\). For other boundary conditions, we refer the readers to [1, 8, 42, 45] for hard potentials and [46] for soft potentials with the in-flow boundary condition, [24] for hard sphere with the diffuse reflection boundary condition, [4] for hard sphere with phase transition, and the references therein. We consider the scaled Boltzmann equation
\[F_{t}+v\cdot\nabla_{x}F=\frac{1}{\mathscr{K}_{n}}Q(F,F), \tag{1.1}\]
where \(F(t,x,v)\geq 0\) is the density distribution function for the gas particles with position \(x\in\mathbb{R}^{3}_{+}=\{x\in\mathbb{R}^{3}:x_{3}>0\}\) and velocity \(v\in\mathbb{R}^{3}\) at time \(t>0\), and \(\mathscr{K}_{n}>0\) is the Knudsen number, which is proportional to the mean free path. The Boltzmann collision term \(Q(F_{1},F_{2})\) on the right is defined in terms of the following bilinear form
\[Q(F_{1},F_{2}) \equiv\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}B(v-u,\omega)F_{1}(u^{\prime})F_{2}(v^{\prime})\,d\omega du-\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}B(v-u,\omega)F_{1}(u)F_{2}(v)\,d\omega du:=Q_{+}(F_{1},F_{2})-Q_{-}(F_{1},F_{2}), \tag{1.2}\]
where the relationship between the post-collision velocities \((v^{\prime},u^{\prime})\) of two particles and the pre-collision velocities \((v,u)\) is given by
\[u^{\prime}=u+[(v-u)\cdot\omega]\omega,\quad v^{\prime}=v-[(v-u)\cdot\omega]\omega,\]
for \(\omega\in\mathbb{S}^{2}\), which can be determined by the conservation laws of momentum and energy
\[u^{\prime}+v^{\prime}=u+v,\quad|u^{\prime}|^{2}+|v^{\prime}|^{2}=|u|^{2}+|v|^{2}.\]
The Boltzmann collision kernel \(B=B(v-u,\omega)\) in (1.2) depends only on \(|v-u|\) and \(\theta\) with \(\cos\theta=(v-u)\cdot\omega/|v-u|\). Throughout this paper, we consider the cutoff soft potential model, i.e.,
\[B(v-u,\omega)=|v-u|^{\kappa}\cdot\beta(\theta),\quad\kappa\in(-3,0),\]
where we assume that the Grad cutoff condition holds, i.e.,
\[0\leq\beta(\theta)\leq\beta_{0}|\cos\theta|,\]
for some constant \(\beta_{0}>0\).
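For orientation, we note the standard asymptotics of the collision frequency induced by this kernel (our addition; this is a well-known estimate for cutoff soft potentials, recalled here for the reader's convenience, with \(\mu\) denoting a global Maxwellian):

```latex
% Collision frequency of the cutoff soft kernel B = |v-u|^kappa * beta(theta):
\nu(v):=\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}B(v-u,\omega)\,\mu(u)\,d\omega\,du
\;\asymp\;(1+|v|)^{\kappa},\qquad \kappa\in(-3,0),
% so nu(v) -> 0 as |v| -> infinity, in contrast to the hard-sphere case
% nu ~ 1 + |v|; this weak damping at large velocities is the source of the
% space-decay difficulties for the Knudsen layer mentioned above.
```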
We split \(\gamma\) into outgoing boundary \(\gamma_{+}\), incoming boundary \(\gamma_{-}\), and grazing boundary \(\gamma_{0}\): \[\gamma_{+} =\{(x,v):x\in\partial\mathbb{R}_{+}^{3},v\cdot\vec{n}=-v_{3}>0\},\] \[\gamma_{-} =\{(x,v):x\in\partial\mathbb{R}_{+}^{3},v\cdot\vec{n}=-v_{3}<0\},\] \[\gamma_{0} =\{(x,v):x\in\partial\mathbb{R}_{+}^{3},v\cdot\vec{n}=-v_{3}=0\}.\] In the present paper, we consider the Boltzmann equation with specular reflection boundary conditions, i.e., \[F(t,x,v)|_{\gamma_{-}}=F(t,x,R_{x}v), \tag{1.3}\] where \[R_{x}v=v-2\{v\cdot\vec{n}\}\vec{n}=(v_{1},v_{2},-v_{3})^{t}. \tag{1.4}\] ### Asymptotic expansion From the formal analysis in [40], we know that the thickness of viscous boundary layer is \(\sqrt{\mathscr{L}_{n}}\). For simplicity, we use the new parameter \(\varepsilon=\sqrt{\mathscr{L}_{n}}\) and denote the Boltzmann solution to be \(F^{\varepsilon}\), then the Boltzmann equation (1.1) is rewritten as \[\partial_{t}F^{\varepsilon}+v\cdot\nabla_{x}F^{\varepsilon}=\frac{1}{ \varepsilon^{2}}Q(F^{\varepsilon},F^{\varepsilon}). \tag{1.5}\] #### 1.2.1. Interior expansion We define the interior expansion \[F^{\varepsilon}(t,x,v)\sim\sum_{k=0}^{\infty}\varepsilon^{k}F_{k}(t,x,v). \tag{1.6}\] Substituting (1.6) into (1.5) and comparing the order of \(\varepsilon\), one obtains \[\begin{split}\frac{1}{\varepsilon^{2}}:& 0=Q(F_{0},F_{0}),\\ \frac{1}{\varepsilon}:& 0=Q(F_{0},F_{1})+Q(F_{1},F_{0}), \\ \varepsilon^{0}:&\{\partial_{t}+v\cdot\nabla_{x}\}F_{0 }=Q(F_{0},F_{2})+Q(F_{2},F_{0})+Q(F_{1},F_{1}),\\ \varepsilon:&\{\partial_{t}+v\cdot\nabla_{x}\}F_{1 }=Q(F_{0},F_{3})+Q(F_{3},F_{0})+Q(F_{1},F_{2})+Q(F_{2},F_{1}),\\ &\vdots\\ \varepsilon^{k}:&\{\partial_{t}+v\cdot\nabla_{x}\}F_{k }=Q(F_{0},F_{k+2})+Q(F_{k+2},F_{0})+\sum_{\begin{subarray}{c}i+j=k+2\\ i,j\geq 1\end{subarray}}Q(F_{i},F_{j}).\end{split} \tag{1.7}\] It follows from \(\eqref{eq:1.1}\) and the celebrated H-theorem that \(F_{0}\) should be a local Maxwellian, i.e., \[\mu(t,x,v):=F_{0}(t,x,v)\equiv\frac{\rho(t,x)}{[2\pi T(t,x)]^{3/2}}\exp\bigg{\{} -\frac{|v-\mathfrak{u}(t,x)|^{2}}{2T(t,x)}\bigg{\}}, \tag{1.8}\] where \(\rho(t,x)\), \(\mathfrak{u}(t,x)=(\mathfrak{u}_{1},\mathfrak{u}_{2},\mathfrak{u}_{3})(t,x)\), and \(T(t,x)\) are defined by \[\int_{\mathbb{R}^{3}}F_{0}dv=\rho,\quad\int_{\mathbb{R}^{3}}vF_{0}dv=\rho \mathfrak{u},\quad\int_{\mathbb{R}^{3}}|v|^{2}F_{0}dv=\rho|\mathfrak{u}|^{2}+ 3\rho T,\] which represent the macroscopic density, velocity and temperature, respectively. Multiplying \(\eqref{eq:1.7}_{3}\) by \(1,v_{i},|v|^{2}\) and integrating on \(\mathbb{R}^{3}\), one obtains that \((\rho,\mathfrak{u},T)\) satisfies the compressible Euler system \[\begin{cases}\partial_{t}\rho+\operatorname{div}(\rho\mathfrak{u})=0,\\ \partial_{t}(\rho\mathfrak{u})+\operatorname{div}(\rho\mathfrak{u}\otimes \mathfrak{u})+\nabla p=0,\\ \partial_{t}[\rho(\frac{3T}{2}+\frac{|\mathfrak{u}|^{2}}{2})]+ \operatorname{div}[\rho\mathfrak{u}(\frac{3T}{2}+\frac{|\mathfrak{u}|^{2}}{2 })]+\operatorname{div}(p\mathfrak{u})=0,\end{cases} \tag{1.9}\] where \(p=\rho T\) is the pressure function. For the compressible Euler equations (1.9), we impose the slip boundary condition \[\mathfrak{u}\cdot\vec{n}|_{x_{3}=0}=\mathfrak{u}_{3}|_{x_{3}=0}=0. 
\tag{1.10}\] and the initial data \[(\rho,\mathfrak{u},T)(0,x)=(1+\delta\varphi_{0},\delta\Phi_{0},1+\delta\vartheta _{0})(x), \tag{1.11}\] with \(\|(\varphi_{0},\Phi_{0},\vartheta_{0})\|_{H^{s_{0}}}\leq 1\) where \(\delta>0\) is a parameter and \(s_{0}\geq 3\) is some given positive number. Choose \(\delta_{1}>0\) so that for any \(\delta\in(0,\delta_{1}]\), the positivity of \(1+\delta\varphi_{0}\) and \(1+\delta\vartheta_{0}\) is guaranteed. Then for each \(\delta\in(0,\delta_{1}]\), there is a family of classical solutions \((\rho^{\delta},\mathfrak{u}^{\delta},T^{\delta})\in C([0,\tau^{\delta}];H^{s_{ 0}}(\mathbb{R}^{3}_{+}))\cap C^{1}([0,\tau^{\delta}];H^{s_{0}-1}(\mathbb{R}^{3 }_{+}))\) of the compressible Euler equations (1.9)-(1.11) such that \(\rho^{\delta}>0\) and \(T^{\delta}>0\). For later use, we define the linearized collision operator \(\mathbf{L}\) by \[\mathbf{L}\mathfrak{h}=-\frac{1}{\sqrt{\mu}}\Big{\{}Q(\mu,\sqrt{\mu} \mathfrak{h})+Q(\sqrt{\mu}\mathfrak{h},\mu)\Big{\}}. \tag{1.12}\] Denote the null space of \(\mathbf{L}\) as \(\mathcal{N}\), it is clear that \[\mathcal{N}=\operatorname{span}\{\chi_{0},\chi_{1},\chi_{2},\chi_{3},\chi_{4 }\},\] where \[\chi_{0}=\frac{1}{\sqrt{\rho}}\sqrt{\mu},\quad\chi_{i}=\frac{1}{\sqrt{\rho T} }(v_{i}-\mathfrak{u}_{i})\sqrt{\mu},\quad\chi_{4}=\frac{1}{\sqrt{6\rho}}( \frac{|v-\mathfrak{u}|^{2}}{T}-3)\sqrt{\mu}.\] For each \(k\geq 1\), decompose \(f_{k}:=\frac{F_{k}}{\sqrt{\mu}}\) as \[f_{k} =\mathbf{P}f_{k}+\{\mathbf{I}-\mathbf{P}\}f_{k}\] \[\equiv\left\{\frac{\rho_{k}}{\sqrt{\rho}}\chi_{0}+\sum_{j=1}^{3} \sqrt{\frac{\rho}{T}}u_{k,j}\cdot\chi_{j}+\sqrt{\frac{\rho}{6}}\frac{\theta_{ k}}{T}\chi_{4}\right\}+\{\mathbf{I}-\mathbf{P}\}f_{k}\] \[\equiv\left\{\frac{\rho_{k}}{\rho}+u_{k}\cdot\frac{v-\mathfrak{u }}{T}+\frac{\theta_{k}}{6T}(\frac{|v-\mathfrak{u}|^{2}}{T}-3)\right\}\sqrt{ \mu}+\{\mathbf{I}-\mathbf{P}\}f_{k}, \tag{1.13}\] where \(\mathbf{P}\) is the macroscopic projection onto \(\mathcal{N}\). #### 1.2.2. Viscous boundary layer expansion Generally, the solution of interior expansion \(F_{i},i=1,2,\cdots\) do not satisfy the specular reflection boundary conditions. To overcome the difficulty coming from the boundary condition, the boundary layer expansion is needed, see [20] and [39, 40]. We define the scaled normal coordinate: \[y:=\frac{x_{3}}{\varepsilon}. \tag{1.14}\] For simplicity of presentation, we denote \[x_{\shortshortshort}=(x_{1},x_{2}),\quad\nabla_{\shortshortshort}=(\partial _{x_{1}},\partial_{x_{2}})\quad\text{and}\quad v_{\shortshort}=(v_{1},v_{2}). 
\tag{1.15}\] Motivated by [40, Section 3.4.1], we define the viscous boundary layer expansion as \[\bar{F}^{\varepsilon}(t,x_{\shortshortshort},y)\sim\sum_{k=1}^{\infty} \varepsilon^{k}\bar{F}_{k}(t,x_{\shortshort},y,v).\] Plugging \(F^{\varepsilon}+\bar{F}^{\varepsilon}\) into the Boltzmann equation (1.5) and comparing the order of \(\varepsilon\), then using (1.7), in the neighborhood of physical boundary, we have \[\begin{split}\frac{1}{\varepsilon}:&\qquad 0=Q(\mu_{0}, \bar{F}_{1})+Q(\bar{F}_{1},\mu_{0}),\\ \varepsilon^{0}:&\quad v_{3}\frac{\partial\bar{F}_{1 }}{\partial y}=[Q(\mu_{0},\bar{F}_{2})+Q(\bar{F}_{2},\mu_{0})]+y[Q(\partial_{ 3}\mu_{0},\bar{F}_{1})+Q(\bar{F}_{1},\partial_{3}\mu_{0})]\\ &\qquad\qquad+Q(F_{1}^{0},\bar{F}_{1})+Q(\bar{F}_{1},F_{1}^{0})+ Q(\bar{F}_{1},\bar{F}_{1}),\\ &\qquad\qquad\vdots\\ \varepsilon^{k}:&\quad\{\partial_{t}+v_{{}_{\shortparallel}} \cdot\nabla_{{}_{\shortparallel}}\}\bar{F}_{k}+v_{3}\frac{\partial\bar{F}_{ k+1}}{\partial y}=Q(\mu_{0},\bar{F}_{k+2})+Q(\bar{F}_{k+2},\mu_{0})\\ &\qquad\qquad+\sum_{\begin{subarray}{c}l+j=k+2\\ 1\leq l\leq b,\,j\geq 1\end{subarray}}\frac{y^{l}}{l!}\big{[}Q(\partial_{3}^{l} \mu_{0},\bar{F}_{j})+Q(\bar{F}_{j},\partial_{3}^{l}\mu_{0})\big{]}\\ &\qquad+\sum_{\begin{subarray}{c}i+j=k+2\\ i,j\geq 1\end{subarray}}\big{[}Q(F_{i}^{0},\bar{F}_{j})+Q(\bar{F}_{j},F_{i}^{0})+ Q(\bar{F}_{i},\bar{F}_{j})\big{]}\\ &\qquad+\sum_{\begin{subarray}{c}i+j+l=k+2\\ 1\leq l\leq b,\,i,j\geq 1\end{subarray}}\frac{y^{l}}{l!}\big{[}Q(\partial_{3}^{l}F_{i}^ {0},\bar{F}_{j})+Q(\bar{F}_{j},\partial_{3}^{l}F_{i}^{0})\big{]},\quad\text{ for }k\geq 1,\end{split} \tag{1.16}\] where we have used the Taylor expansions of \(\mu\) and \(F_{i}\) at \(x_{3}=0\), i.e., \[\mu(t,x_{1},x_{2},x_{3},v)=\mu_{0}+\sum_{l=1}^{\mathfrak{b}}\frac{1}{l!} \partial_{3}^{l}\mu_{0}\cdot x_{3}^{l}+\frac{x_{3}^{\mathfrak{b}+1}}{( \mathfrak{b}+1)!}\partial_{3}^{\mathfrak{b}+1}\tilde{\mu}, \tag{1.17}\] \[F_{i}(t,x_{1},x_{2},x_{3},v)=F_{i}^{0}+\sum_{l=1}^{\mathfrak{b}}\frac{1}{l!} \partial_{3}^{l}F_{i}^{0}\cdot x_{3}^{l}+\frac{x_{3}^{\mathfrak{b}+1}}{( \mathfrak{b}+1)!}\partial_{3}^{\mathfrak{b}+1}\mathfrak{F}_{i},\quad i\geq 1. \tag{1.18}\] Here we have used the simplified notations \[\begin{split}\partial_{3}^{l}\mu_{0}:&=(\partial_{3 }^{l}\mu)(t,x_{1},x_{2},0,v),\quad\partial_{3}^{\mathfrak{b}+1}\tilde{\mu}:=( \partial_{3}^{\mathfrak{b}+1}\mu)(t,x_{1},x_{2},\xi_{0},v),\\ \partial_{3}^{l}F_{i}^{0}:&=(\partial_{3}^{l}F_{i}) (t,x_{1},x_{2},0,v),\quad\partial_{3}^{\mathfrak{b}+1}\mathfrak{F}_{i}:=( \partial_{3}^{\mathfrak{b}+1}F_{i})(t,x_{1},x_{2},\xi_{i},v),\end{split} \tag{1.19}\] for some \(\xi_{i}\in(0,x_{3})\) with \(i\geq 0\). The number \(\mathfrak{b}\in\mathbb{N}_{+}\) will be chosen later. For the macro-micro decomposition of viscous and Knudsen boundary layers, we denote the corresponding linearized operator, macroscopic projection, and null space as \[\mathbf{L}_{0}=\mathbf{L}(t,x_{{}_{\shortparallel}},0,v),\qquad\mathbf{P}_{0 }=\mathbf{P}(t,x_{{}_{\shortparallel}},0,v),\qquad\mathcal{N}_{0}=\mathcal{N} (t,x_{{}_{\shortparallel}},0,v).\] It is noted that \(\mathbf{L}_{0},\mathbf{P}_{0}\) and \(\mathcal{N}_{0}\) are independent of normal variables. 
We define \[\bar{f}_{k}:=\frac{\bar{F}_{k}}{\sqrt{\mu_{0}}}, \tag{1.20}\] then it holds that \[\begin{split}\bar{f}_{k}&=\mathbf{P}_{0}\bar{f}_{k }+\{\mathbf{I}-\mathbf{P}_{\mathbf{0}}\}\bar{f}_{k}\\ &=\left\{\frac{\bar{\rho}_{k}}{\rho^{0}}+\bar{u}_{k}\cdot\frac{v- \mathfrak{u}^{0}}{T^{0}}+\frac{\bar{\theta}_{k}}{6T^{0}}(\frac{|v-\mathfrak{u} ^{0}|^{2}}{T^{0}}-3)\right\}\sqrt{\mu_{0}}+\{\mathbf{I}-\mathbf{P}_{\mathbf{0 }}\}\bar{f}_{k},\end{split}\] where and whereafter we always use the notation \((\rho^{0},\mathfrak{u}^{0},T^{0}):=(\rho,\mathfrak{u},T)(t,x_{{}_{\shortparallel}},0)\). Throughout the present paper, we always assume the far-field condition \[\bar{f}_{k}(t,x_{{}_{\shortparallel}},y,v)\to 0,\quad\text{as }y\to+\infty. \tag{1.21}\] #### 1.2.3. Knudsen boundary layer expansion To construct the solution satisfying the boundary condition at higher orders, we still need the Knudsen boundary layer. We define the new scaled normal coordinate: \[\eta:=\frac{x_{3}}{\varepsilon^{2}}.\] The Knudsen boundary layer expansion is defined as \[\hat{F}^{\varepsilon}(t,x_{\ Using (1.7), (1.16) and (1.22), one can obtain the equation of \(F_{R}^{\varepsilon}\) \[\partial_{t}F_{R}^{\varepsilon}+v\cdot\nabla_{x}F_{R}^{ \varepsilon}-\frac{1}{\varepsilon^{2}}\{Q(\mu,F_{R}^{\varepsilon})+Q(F_{R}^{ \varepsilon},\mu)\}\] \[=\varepsilon^{3}Q(F_{R}^{\varepsilon},F_{R}^{\varepsilon})+\sum_ {i=1}^{N}\varepsilon^{i-2}\{Q(F_{i}+\bar{F}_{i}+\hat{F}_{i},F_{R}^{\varepsilon })+Q(F_{R}^{\varepsilon},F_{i}+\bar{F}_{i}+\hat{F}_{i})\}\] \[\quad+R^{\varepsilon}+\bar{R}^{\varepsilon}+\hat{R}^{\varepsilon}, \tag{1.25}\] where \(R^{\varepsilon},\bar{R}^{\varepsilon}\) and \(\hat{R}^{\varepsilon}\) are defined in (4.8)-(4.10). The main purpose of the present paper is to establish the validity of the Hilbert expansion for the Boltzmann equation around the local Maxwellian \(\mu\) determined by compressible Euler equations (1.9). For later use, we define \[F_{R}^{\varepsilon}=\sqrt{\mu}f_{R}^{\varepsilon}. \tag{1.26}\] To use the \(L^{2}\)-\(L^{\infty}\) framework [17, 16], we also introduce a global Maxwellian \[\mu_{M}:=\frac{1}{(2\pi T_{M})^{3/2}}\exp\bigg{\{}-\frac{|v|^{2}}{2T_{M}} \bigg{\}},\] where \(T_{M}>0\) satisfies the condition \[T_{M}<\min_{x\in\mathbb{R}_{+}^{3}}T(t,x)\leq\max_{x\in\mathbb{R}_{+}^{3}}T(t,x)<2T_{M}. \tag{1.27}\] Using (1.27), one can easily deduce that there exists a positive constant \(C>0\) such that for some \(\frac{1}{2}<\alpha<1\), the following holds: \[\frac{1}{C}\mu_{M}\leq\mu(t,x,v)\leq C\mu_{M}^{\alpha}. \tag{1.28}\] We further define \[F_{R}^{\varepsilon}=\{1+|v|^{2}\}^{-\frac{\varepsilon}{2}}\sqrt{\mu_{M}}h_{R}^ {\varepsilon}\equiv\frac{1}{\varpi_{\mathfrak{t}}(v)}\sqrt{\mu_{M}}h_{R}^{ \varepsilon}, \tag{1.29}\] with \(\mathfrak{k}\geq 0\) and \(\varpi_{\mathfrak{t}}:=(1+|v|^{2})^{\frac{1}{2}}\). **Theorem 1.1**.: _Let \(\tau^{\delta}>0\) be the life-span of smooth solution of compressible Euler equations (1.9). Let \(\mathfrak{k}\geq 16\), \(N\geq 6\) and \(\mathfrak{b}\geq 5\). 
We assume the initial data_ \[F^{\varepsilon}(0,x,v) =\mu(0,x,v)+\sum_{i=1}^{N}\varepsilon^{i}\left\{F_{i}(0,x,v)+ \bar{F}_{i}(0,x_{\text{\tiny{\rm{i}}}},\frac{x_{3}}{\varepsilon},v)+\hat{F}_ {i}(0,x_{\text{\tiny{\rm{i}}}},\frac{x_{3}}{\varepsilon^{2}},v)\right\}\] \[\quad+\varepsilon^{5}F_{R}^{\varepsilon}(0,x,v)\geq 0,\] _and \(F_{i}(0),\bar{F}_{i}(0),i=1,\cdots,N\) satisfy the regularity and compatibility conditions described in Proposition 4.1, and_ \[\Big{\|}(\frac{F_{R}^{\varepsilon}}{\sqrt{\mu}})(0)\Big{\|}_{L^{2}_{x,v}}+ \varepsilon^{3}\Big{\|}(\varpi_{\mathfrak{t}}\frac{F_{R}^{\varepsilon}}{ \sqrt{\mu_{M}}})(0)\Big{\|}_{L^{\infty}_{x,v}}<\infty.\] _Then there exists a small positive constant \(\varepsilon_{0}>0\) such that the IBVP problem (1.5) and (1.3) has a unique solution for \(\varepsilon\in(0,\varepsilon_{0}]\) over the time interval \(t\in[0,\tau^{\delta}]\) in the following form of expansion_ \[F^{\varepsilon}(t,x,v) =\mu(t,x,v)+\sum_{i=1}^{N}\varepsilon^{i}\left\{F_{i}(t,x,v)+\bar {F}_{i}(t,x_{\text{\tiny{\rm{i}}}},\frac{x_{3}}{\varepsilon},v)+\hat{F}_{i}( t,x_{\text{\tiny{\rm{i}}}},\frac{x_{3}}{\varepsilon^{2}},v)\right\}\] \[\quad+\varepsilon^{5}F_{R}^{\varepsilon}(t,x,v)\geq 0, \tag{1.30}\] _with_ \[\sup_{t\in[0,\tau^{\delta}]}\left\{\left\|\frac{F_{R}^{\varepsilon}(t)}{\sqrt{ \mu}}\right\|_{L^{2}_{x,v}}+\varepsilon^{3}\Big{\|}\varpi_{\mathfrak{t}}(v) \frac{F_{R}^{\varepsilon}(t)}{\sqrt{\mu_{M}}}\Big{\|}_{L^{\infty}_{x,v}} \right\}\leq C(\tau^{\delta})<\infty. \tag{1.31}\] _Here the functions \(F_{i}(t,x,v),\tilde{F}_{i}(t,x_{\!v},y,v)\) and \(\hat{F}_{i}(t,x_{\!v},\eta,v)\) are the interior expansion, viscous and Knudsen boundary layers respectively constructed in Proposition 4.1._ **Remark 1.2**.: _From (1.30)-(1.31) and the uniform estimates in Proposition 4.1, it is direct to check that_ \[\sup_{t\in[0,\tau^{\varepsilon}]}\left\{\Big{\|}\Big{(}\frac{F^{ \varepsilon}-\mu}{\sqrt{\mu}}\Big{)}(t)\Big{\|}_{L^{2}(\mathbb{R}^{3}_{+} \times\mathbb{R}^{3})}+\Big{\|}\varpi_{\mathbf{t}}\left(\frac{F^{\varepsilon }-\mu}{\sqrt{\mu_{M}}}\Big{)}(t)\Big{\|}_{L^{\infty}(\mathbb{R}^{3}_{+}\times \mathbb{R}^{3})}\right\}\leq C\varepsilon\to 0.\] _Hence we have established the hydrodynamic limit from the Boltzmann equation to the compressible Euler system for the half-space problem._ **Remark 1.3**.: _For simplicity of presentation, we only give details of proof for the Boltzmann equation of soft potentials in the present paper. And we point out that it is also valid for the cases of hard potentials by similar arguments._ Now we briefly comment the key points of present paper. To estimate the microscopic part of interior expansions and viscous boundary layers, we need some decay property on pseudo-inverse linear operator \(\mathbf{L}^{-1}\) and \(\mathbf{L}^{-1}_{0}\). For \(-\frac{3}{2}<\kappa\leq 1\), the authors [30] obtained \[|\mu^{-\frac{\eta}{2}}\mathbf{L}^{-1}\mathfrak{g}(v)|\lesssim\| \mu^{-\frac{\eta^{\prime}}{2}}\mathfrak{g}\|_{L^{\infty}_{\tau}},\quad 0<q^{ \prime}<1.\] Due to the strong singularity, it is hard to establish above estimate for \(-3<\kappa\leq\frac{3}{2}\). 
In this paper, by observing the feature of Hilbert expansion on interior parts and viscous boundary layers, we can get the following control \[|\mu^{-\frac{\eta}{2}}\mathbf{L}^{-1}\mathfrak{g}(v)|^{2} \lesssim\sum_{0\leq\alpha\leq N}\|\partial_{v}^{\alpha}\{\mu^{- \frac{\eta}{2}}\mathbf{L}^{-1}\mathfrak{g}\}\|_{L^{2}}^{2}\lesssim\sum_{0\leq \alpha\leq N}\|\nu^{-1}\mu^{-\frac{\eta^{\prime}}{2}}\partial_{v}^{\alpha} \mathfrak{g}\|_{L^{2}}^{2},\ N\geq 2,\,0<q<q^{\prime}<1, \tag{1.32}\] by losing velocity derivatives, see section 2.2 for details. We point out that the losing velocity derivatives is natural since the interior parts and viscous boundary layers always possess enough regularity with respect to \(v\in\mathbb{R}^{3}\). The construction of Knudsen layers is more delicate for Boltzmann equation of soft potentials. Noting (1.22), to solve the Knudsen layer, it is equivalent to study the following linear boundary value problem \[\begin{cases}v_{3}\partial_{\eta}f+\nu^{0}(v)f-K^{0}f=\mathfrak{g},\\ f(0,v)|_{v_{3}>0}=f(0,R_{\eta}v),\\ \lim_{\eta\to\infty}f(\eta,v)=0.\end{cases} \tag{1.33}\] Especially, by noting the right hand side (RHS) of (1.22), we have to get at least space polynomial decay for the solution of (1.33) to continue the construction of Hilbert expansion. For hard sphere case, one can obtain even exponential decay with the help of strong effect of collision frequency \(\nu(v)\cong 1+|v|\). However, it is hard for the cases of soft potentials since the effect of collision frequency \(\nu(v)\cong(1+|v|)^{\kappa}\to 0\) as \(|v|\to\infty\) is very weak. To solve (1.33), we first establish the _a prior_ uniform \(L^{\infty}\) estimate for an approximate problem (see (3.17)), i.e., \[\|w_{l}f^{\lambda}\|_{L^{\infty}_{\eta,v}}+|w_{l}f^{\lambda}|_{L^ {\infty}(\gamma_{+})}\leq C\Big{(}\|\sqrt{\nu^{0}}f^{\lambda}\|_{L^{2}_{\eta,v }}+\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}+|w_{l+4}r|_{L^{\infty}( \gamma_{+})}\Big{)}, \tag{1.34}\] see Lemma 3.3 for details. Here we point out that the constant in (1.34) is independent of the length of domain \(\Omega=[0,d]\). For soft potentials, since the collision effect is weak, the key point is to take the number of collisions with the boundaries to depend on \(v\), that is \(k=\tilde{k}_{0}|v_{3}|(1+|v|)^{|\kappa|}\) with \(\tilde{k}_{0}\gg 1\). Under some constraint conditions, we can have the following \(L^{2}_{\eta,v}\) decay estimate by lossing velocity weight arguments \[\int_{0}^{d}(1+\eta)^{n}\|w_{l}f\|_{\nu}^{2}d\eta\leq C_{n}\int_{0}^{d}(1+\eta)^{ 2p_{n}}\|w_{l+2n+2}g\|_{L^{2}_{\pi}}^{2}d\eta,\quad p_{n}>\frac{n}{2}+1, \tag{1.35}\] see Lemmas 3.9-3.10 for details. 
For the space decay rate in \(L^{\infty}_{\eta,v}\), we multiply (1.33) by \((1+\eta)^{n}\) to obtain \[v_{3}\partial_{\eta}\{(1+\eta)^{n}f\}+\nu^{0}(v)\{(1+\eta)^{n}f\}-K^{0}\{(1+ \eta)^{n}f\}=(1+\eta)^{n}\mathfrak{g}+nv_{3}(1+\eta)^{n-1}f,\] which yields that \[\|w_{l}\,(1+\eta)^{n}f\|_{L^{\infty}_{\eta,v}} \lesssim\|w_{l+4}\,(1+\eta)^{n-1}f\|_{L^{\infty}_{\eta,v}}+\|( \nu^{0})^{\frac{1}{2}}(1+\eta)^{n}f\|_{L^{2}_{\eta,v}}\] \[+\|(\nu^{0})^{-1}w_{l}\,(1+\eta)^{n}\mathfrak{g}\|_{L^{\infty}_{ \eta,v}}.\] Then, using (1.35) and an induction arguments on \(n\), we finally obtain that \[\|w_{l}\,(1+\eta)^{n}f\|_{L^{\infty}_{\eta,v}} \lesssim\|w_{l+4n+4}\,(1+\eta)^{q_{n}}\mathfrak{g}\|_{L^{\infty}_ {\eta,v}},\quad\text{for }q_{n}>n+\frac{3}{2}.\] With above estimates, we obtain the existence of Knudsen boundary layer problem with enough space decay estimate in \(L^{\infty}_{\eta,v}\). With the help of above estimates on \(\mathbf{L}^{-1}\) and Knudsen boundary layer, by the same arguments as in [20], we can establish Hilbert expansion of Boltzmann equation of soft potentials with multi-scales in half-space. The paper is organized as follows. In section 2, we give some basic estimates on collision operator and establish the decay estimate of \(\mathbf{L}^{-1}\) for soft potentials. Section 3 is devoted to existence of Knudsen boundary layer of soft potentials with enough space decay rate. In section 4, we construct the Hilbert expansion of soft Boltzmann equation and prove Theorem 1.1. **Notations.** Throughout the present paper, \(C\) denotes a generic positive constant and vary from line to line. And \(C(a),C(b),\cdots\) denote the generic positive constants depending on \(a,\ b,\cdots\), respectively, which also may vary from line to line. We use \(\langle\cdot,\cdot\rangle\) to denote the standard \(L^{2}\) inner product in \(\mathbb{R}^{3}_{v}\). \(\|\cdot\|_{L^{2}}\) denotes the standard \(L^{2}(\mathbb{R}^{3}_{+}\times\mathbb{R}^{3}_{v})\)-norm, \(\|\cdot\|_{L^{\infty}}\) denotes the \(L^{\infty}(\mathbb{R}^{3}_{+}\times\mathbb{R}^{3}_{v})\)-norm and \(\|\cdot\|_{\nu}\) denotes \(\langle\nu\cdot,\cdot\rangle^{\frac{1}{2}}\). ## 2. Some Estimates for Soft Boltzmann Operators ### Preliminaries It follows from (1.12) that \(\mathbf{L}\mathfrak{h}=\nu(v)\mathfrak{h}-K\mathfrak{h}\) with \[\nu(v)=\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}B(v-u,\omega)\mu(u)\,d \omega du\cong(1+|v|)^{\kappa}\quad\text{and}\quad K\mathfrak{h}=K_{1} \mathfrak{h}-K_{2}\mathfrak{h}, \tag{2.1}\] where \[(K_{1}\mathfrak{h})(v)=\mu^{\frac{1}{2}}(v)\iint_{\mathbb{R}^{3}\times\mathbb{ S}^{2}}\mathfrak{h}(u)\mu^{\frac{1}{2}}(u)B(v-u,\omega)\,d\omega du, \tag{2.2}\] and \[(K_{2}\mathfrak{h})(v) =\frac{1}{\sqrt{\mu}}\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}} B(v-u,\omega)\mu(v^{\prime})[\sqrt{\mu(u^{\prime})}\mathfrak{h}(u^{\prime})+\mu(u^{ \prime})\sqrt{\mu(v^{\prime})}\mathfrak{h}(v^{\prime})]dud\omega\] \[=\mu^{\frac{1}{2}}(v)\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}[ \mu^{-\frac{1}{2}}(v^{\prime})\mathfrak{h}(v^{\prime})+\mu^{-\frac{1}{2}}(u^{ \prime})\mathfrak{h}(u^{\prime})]\mu(u)B(v-u,\omega)\,d\omega du,\] \[=2\mu^{\frac{1}{2}}(v)\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}} \mu^{-\frac{1}{2}}(v^{\prime})\mathfrak{h}(v^{\prime})\mu(u)B(v-u,\omega)\,d \omega du. 
\tag{2.3}\] By standard arguments as in [11], we can rewrite \(K_{i}\mathfrak{h}=\int_{\mathbb{R}^{3}}k_{i}(u,v)\mathfrak{h}(u)du\) with \[k_{1}(u,v)=C|v-u|^{\kappa}\mu^{\frac{1}{2}}(v)\mu^{\frac{1}{2}}(u),\] \[0\leq k_{2}(u,v)\leq\frac{C_{\kappa}}{|u-v|^{\frac{3-\kappa}{2}}}\exp\Big{\{}- \frac{1}{8}|v-u|^{2}-\frac{1}{8}\frac{(|v-u|^{2}-|u-u|^{2})^{2}}{|u-v|^{2}}\Big{\}}.\] It is well-known that there is a positive constant \(c_{0}>0\) such that \[\langle\mathbf{L}\mathfrak{h},\mathfrak{h}\rangle\geq c_{0}\| \{\mathbf{I}-\mathbf{P}\}\mathfrak{h}\|_{\nu}^{2}.\] For soft potentials, motivated by [37], we define a monotone cutoff function \(\chi_{z}(s)\in C^{\infty}(0,\infty)\) satisfying \[\chi_{z}(s)\equiv 0\text{ for }0\leq s\leq z,\quad\chi_{z}(s)\equiv 1\text{ for }s\geq 2z,\quad 0\leq\chi_{z}(s)\leq 1\text{ for all }s>0, \tag{2.4}\] where \(z\) is a parameter. Define \[(K^{m}\mathfrak{h})(v) =\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}B(|v-u|,\theta)\tilde{ \chi}_{m}(|v-u|)\sqrt{\mu(u)\mu(v)}\mathfrak{h}(u)d\omega du\] \[-\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}B(v-u,\theta)\tilde{ \chi}_{m}(|v-u|)\sqrt{\mu(u)\mu(u^{\prime})}\mathfrak{h}(v^{\prime})d\omega du\] \[-\int_{\mathbb{R}^{3}}B(|v-u|,\theta)\tilde{\chi}_{m}(|v-u|) \sqrt{\mu(u)\mu(v^{\prime})}\mathfrak{h}(u^{\prime})d\omega du\] \[=:K_{1}^{m}\mathfrak{h}(v)-K_{2}^{m}\mathfrak{h}(v),\] and \(K^{c}=K-K^{m}\), where \(\tilde{\chi}_{m}=1-\chi_{m}\). We denote \[(K^{m}\mathfrak{h})(v)=\int_{\mathbb{R}^{3}}k^{m}(v,u)\mathfrak{ h}(u)du,\quad(K^{c}\mathfrak{h})(v)=\int_{\mathbb{R}^{3}}k^{c}(v,u)\mathfrak{ h}(u)du. \tag{2.5}\] **Lemma 2.1** ([9]).: _For any \(0<m\leq 1\), it holds that_ \[|(K^{m}\mathfrak{h})(v)|\leq Cm^{3+\kappa}e^{-\frac{|v-u|^{2}}{ 10}}\|\mathfrak{h}\|_{L^{\infty}_{v}}, \tag{2.6}\] _where \(C>0\) is independent of \(m\). The kernels \(k^{m}(v,u)\) and \(k^{c}(v,u)\) satisfy_ \[|k^{m}(v,u)|\leq C_{\kappa}\{|v-u|^{\kappa}+|v-u|^{-\frac{3-\kappa }{2}}\}e^{-\frac{|v-u|^{2}+|u-u|^{2}}{16}}, \tag{2.7}\] _and_ \[|k^{c}(v,u)|\leq \frac{C_{\kappa}m^{a(\kappa-1)}}{|v-u|^{1+\frac{(1-a)}{2}(1- \kappa)}}\frac{1}{(1+|v-\mathfrak{u}|+|u-\mathfrak{u}|)^{a(1-\kappa)}}e^{- \frac{|v-u|^{2}-|u-u|^{2}|^{2}}{16|v-u|^{2}}}\] \[+C|v-u|^{\kappa}e^{-\frac{|v-u|^{2}}{4}}e^{-\frac{|u-u|^{2}}{4}}, \tag{2.8}\] _where \(a\in[0,1]\) is an arbitrary constant and \(C_{\kappa}\) depending only on \(\kappa\)._ **Remark 2.2**.: _The original version of Lemma 2.1 was proved in [9] for the global Maxwellian. And it is direct to check that it is still valid for local Maxwellian. We omit the details for simplicity of presentation._ Denote \[\tilde{w}(v)=(1+|v|^{2})^{\frac{1}{2}}\mu^{-\mathfrak{a}}\quad \text{with }l\geq 0,0\leq\mathfrak{a}<\frac{1}{2},\] and \[K^{c}_{\tilde{w}}h\equiv\tilde{w}K^{c}(\frac{h}{\tilde{w}})= \int_{\mathbb{R}^{3}}k^{c}_{\tilde{w}}(v,u)h(u)du.\] Then, from Lemma 2.1, it is clear that \[\int_{\mathbb{R}^{3}}|k^{c}_{\tilde{w}}(v,u)|e^{\frac{|v-u|^{2}}{ 32}}du \leq Cm^{\kappa-1}(1+|v|)^{\kappa-2}, \tag{2.9}\] \[\int_{\mathbb{R}^{3}}|k^{c}_{\tilde{w}}(v,u)|e^{\frac{|v-u|^{2}}{ 32}}du \leq C(1+|v|)^{-1}, \tag{2.10}\] where \(C\) is a constant independent of \(m\). **Lemma 2.3** ([34]).: _Let \(\Gamma(\mathfrak{h},\mathfrak{g})=\frac{1}{\sqrt{\mu}}Q(\sqrt{\mu}\mathfrak{h}, \sqrt{\mu}\mathfrak{g})\). 
For \(\kappa\in(-3,0)\), it holds that_ \[\Big{|}\int_{\mathbb{R}^{3}}\Gamma(\mathfrak{g}_{1},\mathfrak{g}_{2}) \mathfrak{g}_{3}dv\Big{|}\leq C\{\|\mathfrak{g}_{3}\|_{\nu}\|\mathfrak{g}_{2 }\|_{\nu}\|\varpi_{k}\mathfrak{g}_{1}\|_{L^{\infty}}+\|\mathfrak{g}_{3}\|_{ \nu}\|\mathfrak{g}_{1}\|_{\nu}\|\varpi_{k}\mathfrak{g}_{2}\|_{L^{\infty}}\}, \quad k>\frac{3}{2}. \tag{2.11}\] ### Estimate for \(\mathbf{L}^{-1}\) To consider the derivatives for operators \(K_{1},K_{2}\), we denote \(\xi:=u-v\). Then one can rewrite \(K_{1}\mathfrak{h}\) as \[K_{1}\mathfrak{h}(v) =\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}|u-v|^{\kappa}\beta( \theta)\mu^{\frac{1}{2}}(v)\mu^{\frac{1}{2}}(u)\mathfrak{h}(u)\,d\omega du\] \[=\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}|\xi|^{\kappa}\beta( \theta)\mu^{\frac{1}{2}}(v)\mu^{\frac{1}{2}}(v+\xi)\mathfrak{h}(v+\xi)\,d \omega d\xi,\] which yields that \[\partial_{v}^{\alpha}(K_{1}\mathfrak{h}) =\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}|\xi|^{\kappa}\beta( \theta)\mu^{\frac{1}{2}}(v)\mu^{\frac{1}{2}}(v+\xi)\partial_{v}^{\alpha} \mathfrak{h}(v+\xi)\,d\omega d\xi\] \[\quad+\sum_{0\leq\alpha^{\prime}<\alpha}C_{\alpha}^{\alpha^{ \prime}}\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}|\xi|^{\kappa}\partial_{v}^ {-\alpha^{\prime}}\big{(}\beta(\theta)\mu^{\frac{1}{2}}(v)\mu^{\frac{1}{2}}(v+ \xi)\big{)}\partial_{v}^{\alpha^{\prime}}\mathfrak{h}(v+\xi)\,d\omega d\xi,\] where \(\alpha=(\alpha_{1},\alpha_{2},\alpha_{3})\) is the multi-index, and \(\partial_{v}^{\alpha}:=\partial_{v_{1}}^{\alpha_{1}}\partial_{v_{2}}^{\alpha_ {2}}\partial_{v_{3}}^{\alpha_{3}}\). For small positive number \(\epsilon\), it is direct to check that \[|\partial_{v}^{\alpha-\alpha^{\prime}}\big{(}\beta(\theta)\mu^{\frac{1}{2}}( v)\mu^{\frac{1}{2}}(v+\xi)\big{)}|\leq C_{\epsilon,N}\mu^{\frac{1}{2}-\epsilon}(v) \mu^{\frac{1}{2}-\epsilon}(v+\xi).\] Hence one obtains \[|\partial_{v}^{\alpha}(K_{1}\mathfrak{h})| \leq\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}|\xi|^{\kappa}\beta (\theta)\mu^{\frac{1}{2}}(v)\mu^{\frac{1}{2}}(v+\xi)|\partial_{v}^{\alpha} \mathfrak{h}(v+\xi)|\,d\omega d\xi \tag{2.12}\] \[\quad+C_{\epsilon,N}\sum_{0\leq\alpha^{\prime}<\alpha}\int_{ \mathbb{R}^{3}}|\xi|^{\kappa}\mu^{\frac{1}{2}-\epsilon}(v)\mu^{\frac{1}{2}- \epsilon}(v+\xi)|\partial_{v}^{\alpha^{\prime}}\mathfrak{h}(v+\xi)|\,d\xi\] \[\leq\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}|v-u|^{\kappa}\beta (\theta)\mu^{\frac{1}{2}}(v)\mu^{\frac{1}{2}}(u)|\partial_{u}^{\alpha} \mathfrak{h}(u)|\,d\omega du\] \[\quad+C_{\epsilon,N}\sum_{0\leq\alpha^{\prime}<\alpha}\int_{ \mathbb{R}^{3}}|v-u|^{\kappa}\mu^{\frac{1}{2}-\epsilon}(v)\mu^{\frac{1}{2}- \epsilon}(u)|\partial_{u}^{\alpha^{\prime}}\mathfrak{h}(u)|\,du\] \[=:I_{1}+I_{2}.\] It follows from \(\mu^{\frac{1}{2}}(v)\mu^{\frac{1}{2}}(u)=\mu^{\frac{1}{2}}(v^{\prime})\mu^{ \frac{1}{2}}(u^{\prime})\) and (2.3) that \[K_{2}\mathfrak{h} =2\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}\mu^{\frac{1}{2}}(u^ {\prime})\mu^{\frac{1}{2}}(u)\mathfrak{h}(v^{\prime})|v-u|^{\kappa}\beta( \theta)\,d\omega du\] \[=2\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}\mu^{\frac{1}{2}}(v+ \xi_{\perp})\mu^{\frac{1}{2}}(v+\xi)\mathfrak{h}(v+\xi_{\parallel})|\xi|^{ \kappa}\beta(\theta)\,d\omega d\xi,\] which implies that \[\partial_{v}^{\alpha}K_{2}\mathfrak{h}=2\iint_{\mathbb{R}^{3} \times\mathbb{S}^{2}}\mu^{\frac{1}{2}}(v+\xi_{\perp})\mu^{\frac{1}{2}}(v+\xi) \partial_{v}^{\alpha}\mathfrak{h}(v+\xi_{\parallel})|\xi|^{\kappa}\beta( \theta)\,d\omega d\xi\] \[\quad+2\sum_{0\leq\alpha^{\prime}<\alpha}C_{\alpha}^{\alpha^{ 
\prime}}\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}\partial_{v}^{\alpha-\alpha^{ \prime}}\big{(}\mu^{\frac{1}{2}}(v+\xi_{\perp})\mu^{\frac{1}{2}}(v+\xi)\big{)} \partial_{v}^{\alpha^{\prime}}\mathfrak{h}(v+\xi_{\parallel})|\xi|^{\kappa} \beta(\theta)\,d\omega d\xi,\] where \(\xi_{\shortparallel}:=[(u-v)\cdot\omega]\omega\) and \(\xi_{\perp}:=\xi-\xi_{\shortparallel}\). It is clear that \[|\partial_{v}^{\alpha-\alpha^{\prime}}\big{(}\mu^{\frac{1}{2}}(v+\xi_{\perp}) \mu^{\frac{1}{2}}(v+\xi))|\leq C_{\epsilon,N}\mu^{\frac{1}{2}-\epsilon}(v+\xi_{ \perp})\mu^{\frac{1}{2}-\epsilon}(v+\xi).\] Then we can obtain \[|\partial_{v}^{\alpha}(K_{2}\mathfrak{h})| \leq 2\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}|\xi|^{\kappa}\beta( \theta)\mu^{\frac{1}{2}}(v+\xi_{\perp})\mu^{\frac{1}{2}}(v+\xi)|\partial_{v}^{ \alpha}\mathfrak{h}(v+\xi_{\ and \[\begin{split}|\langle\nu^{-1}\mu^{-\frac{q}{2}}I_{2}^{1-\chi_{r}},\mu^ {-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\rangle|&\leq C \sum_{0\leq\alpha^{\prime}<\alpha}\left(\int_{\mathbb{R}^{3}}|\partial_{u}^{ \alpha^{\prime}}\mathfrak{h}(u)|^{2}\nu(u)\,du\right)^{\frac{1}{2}}\\ &\qquad\qquad\times\left(\int_{\mathbb{R}^{3}}|\partial_{v}^{ \alpha}\mathfrak{h}(v)|^{2}\nu(v)\,dv\right)^{\frac{1}{2}}\\ &\leq\frac{1}{2}\|\partial_{v}^{\alpha}\mathfrak{h}\|_{\nu}^{2}+C \sum_{0\leq\alpha^{\prime}<\alpha}\|\partial_{v}^{\alpha^{\prime}}\mathfrak{h} \|_{\nu}^{2},\end{split} \tag{2.18}\] where \(C\) depends on \(\rho,\mathfrak{u},T,r,q\) and we have chosen \(0<\epsilon<\frac{1-q}{2}\) in the expression of \(I_{2}^{1-\chi_{r}}\). For \(|\langle\nu^{-1}\mu^{-\frac{q}{2}}(J_{i})^{1-\chi_{r}},\mu^{-\frac{q}{2}} \partial_{v}^{\alpha}\mathfrak{h}\rangle|\), using similar arguments as in [30, Lemma 3.2], one can get \[|\langle\nu^{-1}\mu^{-\frac{q}{2}}J_{i}^{1-\chi_{r}},\mu^{-\frac{q}{2}} \partial_{v}^{\alpha}\mathfrak{h}\rangle|\leq C\sum_{0\leq\alpha^{\prime}\leq \alpha}\|\partial_{v}^{\alpha}\mathfrak{h}\|_{\nu}^{2},\quad i=1,2,\] which, together with (2.17) and (2.18), yields (2.16). Therefore the proof of Lemma 2.6 is completed. **Lemma 2.7**.: _Let \(N\in\mathbb{N}\), \(|\alpha|\leq N\), \(0<q<q^{\prime}<1\), \(-3<\kappa<0\). For any \(r>0\), there exists a constant \(C>0\) such that the following estimates hold:_ \[\big{|}\langle\nu^{-1}\mu^{-\frac{q}{2}}(\partial_{v}^{\alpha}K_{1}\mathfrak{ h})^{\chi_{r}},\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\rangle \big{|}\leq C\exp\big{(}-\frac{(1-q)r^{2}}{32T}\big{)}\sum_{0\leq\alpha^{ \prime}\leq\alpha}\|\mu^{-\frac{q}{2}}\partial_{v}^{\alpha^{\prime}}\mathfrak{ h}\|_{L_{v}^{2}}^{2}, \tag{2.19}\] _and_ \[|\langle\nu^{-1}\mu^{-\frac{q}{2}}(\partial_{v}^{\alpha}K_{2}\mathfrak{h})^{ \chi_{r}},\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\rangle|\leq C \Big{\{}\frac{1}{1+r}\sum_{0\leq\alpha^{\prime}\leq\alpha}\|\mu^{-\frac{q}{2} }\partial_{v}^{\alpha^{\prime}}\mathfrak{h}\|_{L_{v}^{2}}^{2}+\exp(\frac{2qr^{ 2}}{T})\sum_{0\leq\alpha^{\prime}\leq\alpha}\|\partial_{v}^{\alpha^{\prime}} \mathfrak{h}\|_{\nu}^{2}\,\Big{\}}. \tag{2.20}\] _The constant \(C\) depends only on \(\rho,\mathfrak{u},T,q,N\)._ **Proof.** We divide it into several steps. We point out that the constants \(C\) in the proof do not depend on \(r\). _Step 1._ Estimates on \(|\langle\nu^{-1}\mu^{-\frac{q}{2}}(\partial_{v}^{\alpha}K_{1}\mathfrak{h})^{ \chi_{r}},\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\rangle|\). 
Noting (2.12) and the definition of \(\chi_{r}(s)\), we have \[\begin{split}&\big{|}\langle\nu^{-1}\mu^{-\frac{q}{2}}I_{1}^{ \chi_{r}},\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\rangle\big{|}\\ &\leq C\Big{(}\iint_{|u-v|\geq r}|\partial_{v}^{\alpha}\mathfrak{h} (v)\mu^{-\frac{q}{2}}(v)|^{2}\mu^{\frac{3(1-q)}{8}}(v)\mu^{\frac{1+q}{2}}(u)|u -v|^{\kappa}dudv\Big{)}^{\frac{1}{2}}\\ &\quad\times\Big{(}\iint_{|u-v|\geq r}|\partial_{u}^{\alpha} \mathfrak{h}(u)\mu^{-\frac{q}{2}}(u)|^{2}\mu^{\frac{3(1-q)}{8}}(v)\mu^{\frac{1+q }{2}}(u)|u-v|^{\kappa}dudv\Big{)}^{\frac{1}{2}}\,,\end{split}\] where we have used \(|\nu^{-1}(v)\mu^{\frac{1-q}{8}}(v)|\leq C(\rho,\mathfrak{u},T,q)<\infty\). Then it is direct to obtain \[\big{|}\langle\nu^{-1}\mu^{-\frac{q}{2}}I_{1}^{\chi_{r}},\mu^{-\frac{q}{2}} \partial_{v}^{\alpha}\mathfrak{h}\rangle\big{|}\leq C\exp\left(-\frac{(1-q)r^{ 2}}{32T}\right)\|\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\|_{L_{v}^{ 2}}^{2}. \tag{2.21}\] Taking \(0<\epsilon<\frac{1-q}{8}\), one has \[\begin{split}\big{|}\langle\nu^{-1}\mu^{-\frac{q}{2}}I_{2}^{ \chi_{r}},\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\rangle\big{|}& \leq C\exp\left(-\frac{(1-q)r^{2}}{32T}\right)\|\mu^{-\frac{q}{2}} \partial_{v}^{\alpha}\mathfrak{h}\|_{L_{v}^{2}}\sum_{0\leq\alpha^{\prime}<\alpha }\|\mu^{-\frac{q}{2}}\partial_{v}^{\alpha^{\prime}}\mathfrak{h}\|_{L_{v}^{2}} ^{2}\\ &\leq C\exp\left(-\frac{(1-q)r^{2}}{32T}\right)\sum_{0\leq\alpha^{ \prime}\leq\alpha}\|\mu^{-\frac{q}{2}}\partial_{v}^{\alpha^{\prime}}\mathfrak{h} \|_{L_{v}^{2}}^{2},\end{split}\] which, together with (2.21), yields (2.19). _Step 2._ Recall (2.13) and [30, Lemma 3.3], it is direct to have \[|\langle\nu^{-1}\mu^{-\frac{q}{2}}J_{1}^{\chi_{r}},\mu^{-\frac{q}{2}}\partial_{v }^{\alpha}\mathfrak{h}\rangle|\leq C\Big{\{}\frac{1}{1+r}\|\mu^{-\frac{q}{2}} \partial_{v}^{\alpha}\mathfrak{h}\|_{L_{v}^{2}}^{2}+\exp(\frac{2qr^{2}}{T})\| \partial_{v}^{\alpha}\mathfrak{h}\|_{L_{v}^{2}}^{2}\Big{\}}.\] For \(|\langle\nu^{-1}\mu^{-\frac{q}{2}}J_{2}^{\chi_{r}},\mu^{-\frac{q}{2}}\partial_{v}^{ \alpha}\mathfrak{h}\rangle|\), taking \(0<\epsilon<\frac{1-q}{2}\), we can obtain \[|\langle\mu^{-\frac{q}{2}}J_{2}^{\chi_{r}},\mu^{-\frac{q}{2}} \partial_{v}^{\alpha}\mathfrak{h}\rangle|\] \[\leq\frac{C}{1+r}\|\mu^{-\frac{q}{2}}\partial_{v}^{\alpha} \mathfrak{h}\|_{L_{v}^{2}}\sum_{0\leq\alpha^{\prime}<\alpha}\|\mu^{-\frac{q}{ 2}}\partial_{v}^{\alpha^{\prime}}\mathfrak{h}\|_{L_{v}^{2}}+C\exp(\frac{2qr^{2} }{T})\|\partial_{v}^{\alpha}f\|_{\nu}\sum_{0\leq\alpha^{\prime}<\alpha}\| \partial_{v}^{\alpha^{\prime}}\mathfrak{h}\|_{\nu}\] \[\leq\frac{C}{1+r}\sum_{0\leq\alpha^{\prime}\leq\alpha}\|\mu^{- \frac{q}{2}}\partial_{v}^{\alpha^{\prime}}\mathfrak{h}\|_{L_{v}^{2}}^{2}+C \exp(\frac{2qr^{2}}{T})\sum_{0\leq\alpha^{\prime}\leq\alpha}\|\partial_{v}^{ \alpha^{\prime}}\mathfrak{h}\|_{\nu}^{2}.\] Therefore the proof of Lemma 2.7 is completed. **Lemma 2.8** (Weighted hypocoercivity of \(\partial_{v}^{\alpha}\mathbf{L}\)).: _Let \(N\in\mathbb{N}\), \(|\alpha|\leq N\), \(0<q<1\) and \(-3<\kappa<0\). 
Then there is a constant \(C=C(\rho,\mathfrak{u},T,q,N)>0\) such that_ \[\langle\nu^{-1}\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}(\mathbf{L}\mathfrak{h}),\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\rangle\geq\frac{1}{2}\| \mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\|_{L_{v}^{2}}^{2}-C\sum_{ 0\leq\alpha^{\prime}<\alpha}\|\mu^{-\frac{q}{2}}\partial_{v}^{\alpha^{\prime} }\mathfrak{h}\|_{L_{v}^{2}}^{2}-C\sum_{0\leq\alpha^{\prime}\leq\alpha}\| \partial_{v}^{\alpha}\mathfrak{h}\|_{\nu}^{2}. \tag{2.22}\] Proof.: A direct calculation shows that \[\nu^{-1}(\partial_{v}^{\alpha}\mathbf{L}\mathfrak{h})=\nu^{-1} \partial_{v}^{\alpha}(\nu\mathfrak{h})-\nu^{-1}(\partial_{v}^{\alpha}K)^{1- \chi_{r}}\mathfrak{h}-\nu^{-1}(\partial_{v}^{\alpha}K_{1}\mathfrak{h})^{\chi _{r}}+\nu^{-1}(\partial_{v}^{\alpha}K_{2}\mathfrak{h})^{\chi_{r}}.\] Noting \(\nu^{-1}(v)\partial_{v}^{\alpha}(\nu(v))\leq C_{N}\), we have \[\langle\nu^{-1}\mu^{-\frac{q}{2}}(\partial_{v}^{\alpha}(\nu \mathfrak{h})),\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\rangle\geq \frac{7}{8}\|\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\|_{L_{v}^{2}}^ {2}-C_{N}\sum_{0\leq\alpha^{\prime}<\alpha}\|\mu^{-\frac{q}{2}}\partial_{v}^{ \alpha^{\prime}}\mathfrak{h}\|_{L_{v}^{2}}^{2}. \tag{2.23}\] Then it follows from (2.23) and Lemmas 2.6-2.7 that \[\langle\nu^{-1}\mu^{-\frac{q}{2}}(\partial_{v}^{\alpha}\mathbf{L} \mathfrak{h}),\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\rangle\] \[\geq\frac{7}{8}\|\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{ h}\|_{L_{v}^{2}}^{2}-C\big{\{}\sum_{0\leq\alpha^{\prime}<\alpha}\|\mu^{-\frac{q}{2}} \partial_{v}^{\alpha^{\prime}}\mathfrak{h}\|_{L_{v}^{2}}^{2}-\sum_{0\leq\alpha ^{\prime}\leq\alpha}\|\partial_{v}^{\alpha^{\prime}}\mathfrak{h}\|_{\nu}^{2} \big{\}}\] \[\quad-C(\rho,\mathfrak{u},T,q,N)[\exp(-\frac{(1-q)r^{2}}{32T})+ \frac{1}{1+r}]\|\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\|_{L_{v}^{2 }}^{2}.\] Taking \(r\) large enough, one gets (2.22). Therefore the proof of Lemma 2.8 is completed. For later use, we recall a result on the hypercoercivity in [15]. **Lemma 2.9** ([15]).: _Let \(-3<\kappa<0\) and \(|\alpha|\leq N\). Then there exists a constant \(C(\rho,\mathfrak{u},T,N)>0\) such that_ \[\langle\partial_{v}^{\alpha}(\mathbf{L}\mathfrak{h}),\partial_{v}^{\alpha} \mathfrak{h}\rangle\geq\frac{1}{2}\|\partial_{v}^{\alpha}\mathfrak{h}\|_{\nu}^ {2}-C\|\mathfrak{h}\|_{\nu}^{2}.\] Proof of Proposition 2.5.: Let \(\mathfrak{g}\in\mathcal{N}^{\perp}\), we denote \(\mathfrak{h}:=\mathbf{L}^{-1}\mathfrak{g}\), that is, \(\mathbf{L}\mathfrak{h}=\nu\mathfrak{h}-K\mathfrak{h}=\mathfrak{g}\). By Sobolev's embedding theorem, we have for \(N\geq 2\) that \[|\mu^{-\frac{q}{2}}\mathbf{L}^{-1}\mathfrak{g}| \leq C\sum_{|\alpha|\leq N}\|\partial_{v}^{\alpha}(\mu^{-\frac{q}{ 2}}\mathfrak{h})\|_{L_{v}^{2}} \tag{2.24}\] \[=C\sum_{|\alpha|\leq N}\|\mu^{-\frac{q}{2}}\partial_{v}^{\alpha} \mathfrak{h}\|_{L_{v}^{2}}+C\sum_{|\alpha|\leq 2}\sum_{0\leq\alpha^{\prime}< \alpha}C_{\alpha}^{\alpha^{\prime}}\|(\partial_{v}^{\alpha-\alpha^{\prime}}\mu^{- \frac{q}{2}})(\partial_{v}^{\alpha^{\prime}}\mathfrak{h})\|_{L_{v}^{2}}\] \[\leq C\sum_{|\alpha|\leq N}\|\mu^{-\frac{q^{\prime}}{2}}\partial_{v}^ {\alpha}\mathfrak{h}\|_{L_{v}^{2}},\] where we have used the fact that \[\mu^{\frac{q^{\prime}}{2}}\sum_{0\leq\alpha^{\prime}<2}(\partial_{v}^{\alpha- \alpha^{\prime}}\mu^{-\frac{q}{2}})\leq C\qquad\text{for any }0<q<q^{\prime}<1,\] in the last inequality. 
It follows from Lemmas 2.8-2.9 and \(\|\mathfrak{h}\|_{\nu}^{2}\lesssim\langle\mathbf{L}\mathfrak{h},\mathfrak{h}\rangle\) that \[\|\mu^{-\frac{q^{\prime}}{2}}\partial_{v}^{\alpha}\mathfrak{h}\|_{ L_{v}^{2}}^{2} \leq 2\langle\nu^{-1}\mu^{-\frac{q^{\prime}}{2}}\partial_{v}^{ \alpha}(\mathbf{L}\mathfrak{h}),\mu^{-\frac{q^{\prime}}{2}}\partial_{v}^{ \alpha}\mathfrak{h}\rangle+C\sum_{0\leq\alpha^{\prime}<\alpha}\|\mu^{-\frac{q ^{\prime}}{2}}\partial_{v}^{\alpha^{\prime}}\mathfrak{h}\|_{L_{v}^{2}}^{2}\] \[\quad+C\|\partial_{v}^{\alpha}\mathfrak{h}\|_{\nu}^{2}\] \[\leq 16\|\nu^{-1}\mu^{-\frac{q^{\prime}}{2}}\partial_{v}^{\alpha} \mathfrak{g}\|_{L_{v}^{2}}^{2}+\frac{1}{4}\|\mu^{-\frac{q^{\prime}}{2}} \partial_{v}^{\alpha}\mathfrak{h}\|_{L_{v}^{2}}^{2}+C\sum_{0\leq\alpha^{ \prime}<\alpha}\|\mu^{-\frac{q^{\prime}}{2}}\partial_{v}^{\alpha^{\prime}} \mathfrak{h}\|_{L_{v}^{2}}^{2}\] \[\quad+C\|\nu^{-1}\partial_{v}^{\alpha}\mathfrak{g}\|_{L_{v}^{2}} ^{2}+\frac{1}{4}\|\mathfrak{h}\|_{\nu}^{2},\] which, together with (2.24), yields that \[|\mu^{-\frac{q}{2}}\mathbf{L}^{-1}\mathfrak{g}|\leq C\sum_{| \alpha|\leq N}\|\mu^{-\frac{q^{\prime}}{2}}\partial_{v}^{\alpha}\mathfrak{h} \|_{L_{v}^{2}}\leq C\sum_{|\alpha|\leq N}\|\nu^{-1}\mu^{-\frac{q^{\prime}}{2}} \partial_{v}^{\alpha}\mathfrak{g}\|_{L_{v}^{2}},\] where the constant depend only \(\rho,\mathfrak{u},T\) and \(q^{\prime}\). Thus the proof of Proposition 2.5 is finished. **Remark 2.10**.: _Denote \(\mathbf{L}_{0}\mathfrak{h}=\nu^{0}(v)\mathfrak{h}-K^{0}\mathfrak{h}=\mathbf{ L}\mathfrak{h}|_{x_{3}=0}\). Similarly, we have \(K^{0}\mathfrak{h}=K^{0,c}\mathfrak{h}+K^{0,m}\mathfrak{h}\). Define_ \[w_{l}(v)=(1+|v|^{2})^{\frac{l}{2}}\mu_{0}^{-\mathfrak{a}}, \tag{2.25}\] _then we can define \(k_{w}^{0,m},k_{w}^{0,c}\) similarly as in section 2.1. It is obvious that we can have similar results as in (2.6)-(2.10) for \(K^{0,m},K^{0,c},k_{w}^{0,m},k_{w}^{0,c}\)._ _For \(\mathbf{L}_{0}\), one also has_ \[\langle\mathbf{L}_{0}\mathfrak{h},\mathfrak{h}\rangle\geq c_{1} \|\{\mathbf{I}-\mathbf{P}_{0}\}\mathfrak{h}\|_{\nu}^{2}, \tag{2.26}\] _since \(\nu^{0}\cong\nu\cong(1+|v|)^{\kappa}\). \(\mathbf{L}_{0}^{-1}\) can be defined as_ \[(\mathbf{L}_{0}|_{\mathcal{N}_{0}^{\perp}})^{-1}:\mathcal{N}_{0}^{\perp} \rightarrow\mathcal{N}_{0}^{\perp}.\] _Let \(\mathfrak{g}\in\mathcal{N}_{0}^{\perp}\), from Proposition 2.5, it is direct to know that_ \[|\mu_{0}^{-\frac{q}{2}}\mathbf{L}_{0}^{-1}\mathfrak{g}(v)|\leq C \sum_{|\alpha|\leq N}\|\partial_{v}^{\alpha}(\mu_{0}^{-\frac{q}{2}}\mathbf{L}_ {0}^{-1}\mathfrak{g})\|_{L_{v}^{2}}\leq C\sum_{|\alpha|\leq N}\|(\nu^{0})^{-1} \mu_{0}^{-\frac{q^{\prime}}{2}}\partial_{v}^{\alpha}\mathfrak{g}\|_{L_{v}^{2}},\quad\text{for }v\in\mathbb{R}^{3}. \tag{2.27}\] _where the constant \(C=C(\rho^{0},\mathfrak{u}^{0},T^{0},q^{\prime},N)>0\). These estimates will be used to study the viscous and Knudsen boundary layers._ **Remark 2.11**.: _We point out that all above results for soft potentials in this section are also valid for hard potentials. The proofs are very similar._ ## 3. 
Existence of a Steady Linear Boltzmann Equation To construct the Knudsen layer solutions, we study equation: \[\begin{cases}v_{3}\partial_{\eta}\mathfrak{f}+\mathbf{L}_{0}\mathfrak{f}=S, \quad(\eta,v)\in[0,\infty)\times\mathbb{R}^{3},\\ \mathfrak{f}(0,v)|_{v_{3}>0}=\mathfrak{f}(0,R_{x}v)+f_{b}(v),\\ \lim_{\eta\rightarrow\infty}\mathfrak{f}(\eta,v)=0.\end{cases} \tag{3.1}\] where \(S\) is a given function and \(f_{b}(v)\) is defined only for \(v_{3}<0\), and we always assume that it is extended to be \(0\) for \(v_{3}>0\). For soft potentials, there has not been work for Knudsen layer solutions with specular reflection boundary. **Theorem 3.1**.: _Recall \(w_{l}\) in (2.25). Assume \(l>2,\ 0\leq\mathfrak{a}<\frac{1}{2}\),_ \[\int_{\mathbb{R}^{3}}(1,v_{1}-\mathfrak{u}_{1}^{0},v_{2}-\mathfrak{ u}_{2}^{0},|v-\mathfrak{u}^{0}|^{2})\sqrt{\mu_{0}}Sdv =0, \tag{3.2}\] \[\int_{\mathbb{R}^{3}}(1,v_{1}-\mathfrak{u}_{1}^{0},v_{2}- \mathfrak{u}_{2}^{0},|v-\mathfrak{u}^{0}|^{2})v_{3}\sqrt{\mu_{0}}f_{b}dv =0,\] _and_ \[\|(1+\eta)^{q_{k}}w_{l+4k+4}S\|_{L^{\infty}_{q,v}}<\infty,\qquad \text{for}\quad k\in\mathbb{N}_{+},\ q_{k}>\max\{3,k+\frac{3}{2}\}, \tag{3.3}\] \[\|w_{l+4k+5}f_{b}\|_{L^{\infty}_{v}}<\infty,\qquad\text{for} \quad k\in\mathbb{N}_{+},\ q_{k}>\max\{3,k+\frac{3}{2}\},\] _then there exists a unique solution \(\mathfrak{f}\) of (3.1) such that_ \[\|(1+\eta)^{k}w_{l}\mathfrak{f}\|_{L^{\infty}_{q,v}}+|(1+\eta)^{k }w_{l}\mathfrak{f}(0,\cdot)|_{L^{\infty}_{v}}\] \[\leq C\Big{\{}\|(1+\eta)^{q_{k}}w_{l+4k+4}S\|_{L^{\infty}_{q,v}}+ \|w_{l+4k+5}f_{b}\|_{L^{\infty}_{v}}\Big{\}}, \tag{3.4}\] _where \(C>0\) is a positive constant._ **Remark 3.2**.: _As indicated in [19], in general, it is hard to obtain the normal derivatives estimates for the boundary value problem (3.1). Fortunately, it is easy to obtain the tangential and time derivatives estimates for the solution of (3.1), i.e.,_ \[\sum_{i+j\leq r}\|(1+\eta)^{k}w_{l}\partial_{t}^{i}\nabla_{{}_{ \shortparallel}}^{j}\mathfrak{f}(t,x_{{}_{\shortparallel}},\cdot,\cdot)\|_{L^ {\infty}_{q,v}}+\|w_{l}\partial_{t}^{i}\nabla_{{}_{\shortparallel}}^{j} \mathfrak{f}(t,x_{{}_{\shortparallel}},0,\cdot)\|_{L^{\infty}_{v}}\] \[\leq C\sum_{i+j\leq r}\Big{\{}\|w_{l+4k+5}\partial_{t}^{i}\nabla _{{}_{\shortparallel}}^{j}f_{b}(t,x_{{}_{\shortparallel}},\cdot)\|_{L^{ \infty}_{v}}+\|(1+\eta)^{q_{k}}w_{l+4k+4}\partial_{t}^{i}\nabla_{{}_{ \shortparallel}}^{j}S\|_{L^{\infty}_{q,v}}\Big{\}}, \tag{3.5}\] _provided the right hand side of (3.5) is bounded and \(q_{k}>\max\{3,k+\frac{3}{2}\}\). We point out that such an estimate (3.5) is enough for us to establish the Hilbert expansion. To prove the estimate (3.5), we study the equation of \(\partial_{t}^{i}\nabla_{{}_{\shortparallel}}^{j}(\sqrt{\mu_{0}}\mathfrak{f})\). 
It is direct to check that the new source term and boundary perturbation term satisfy the solvability conditions in Theorem 3.1, hence one can obtain the estimate for \(\partial_{t}^{i}\nabla_{{}_{\shortparallel}}^{j}(\sqrt{\mu_{0}}\mathfrak{f})\) by applying Theorem 3.1, therefore (3.5) follows immediately._ _Moreover, taking \(L^{\infty}_{x_{{}_{\shortparallel}}}\cap L^{2}_{x_{{}_{\shortparallel}}}\) over (3.5), one obtains_ \[\sum_{i+j\leq r}\sup_{t\in[0,\tau^{s}]}\Big{\{}\|(1+\eta)^{k}w_{l }\partial_{t}^{i}\nabla_{{}_{\shortparallel}}^{j}f(t)\|_{L^{\infty}_{x_{{}_{ \shortparallel}},\eta,v}\cap L^{2}_{x_{{}_{\shortparallel}}}L^{\infty}_{v,v} }+\|w_{l}\partial_{t}^{i}\nabla_{{}_{\shortparallel}}^{j}(t,\cdot,0,\cdot)\|_{ L^{\infty}_{x_{{}_{\shortparallel}}}\cap L^{2}_{x_{{}_{\shortparallel}}}L^{\infty}_{v}} \Big{\}}\] \[\leq C\sup_{t\in[0,\tau^{s}]}\Big{\{}\sum_{i+j\leq r}\Big{\{}\|w_ {l+4k+5}\partial_{t}^{i}\nabla_{{}_{\shortparallel}}^{j}f_{b}(t)\|_{L^{ \infty}_{x_{{}_{\shortparallel}},\tau}\cap L^{2}_{x_{{}_{\shortparallel}}}L^{ \infty}_{v}}\] \[\qquad\qquad+\sum_{i+j\leq r}\|(1+\eta)^{q_{k}}w_{l+4k+4} \partial_{t}^{i}\nabla_{{}_{\shortparallel}}^{j}S(t)\|_{L^{\infty}_{x_{{}_{ \shortparallel}},\eta,v}\cap L^{2}_{x_{{}_{\shortparallel}}}L^{\infty}_{v,v} }\Big{\}}\Big{\}},\quad q_{k}>\max\{3,k+\frac{3}{2}\}. \tag{3.6}\] Let \(\Upsilon(\eta)\) be a monotonic smooth cut-off function \[\Upsilon(\eta)\equiv 1,\ \text{for}\ \eta\in[0,1],\quad\text{and}\quad \Upsilon(\eta)\equiv 0,\ \text{for}\ \eta\in[2,+\infty).\] Define \[f(x,v):=\mathfrak{f}(x,v)-\Upsilon(\eta)f_{b}(v),\quad x=(x_{{}_{ \shortparallel}},\eta,v)\] then (3.1) is rewritten as \[\begin{cases}v_{3}\partial_{\eta}f+\mathbf{L}_{0}f=g:=S-v_{3} \partial_{\eta}\Upsilon(\eta)f_{b}(v)-\Upsilon(\eta)\mathbf{L}_{0}f_{b},\quad( \eta,v)\in[0,\infty)\times\mathbb{R}^{3},\\ f(0,v)|_{v_{3}>0}=f(0,R_{\eta}v),\\ \lim_{\eta\to\infty}f(\eta,v)=0,\end{cases} \tag{3.7}\] where \(x_{{}_{\shortparallel}}\) is regarded as parameters. The conditions (3.2) deduces that \[\int_{\mathbb{R}^{3}}(1,v_{1}-\mathfrak{u}_{1}^{0},v_{2}-\mathfrak{u}_{2}^{0},|v-\mathfrak{u}^{0}|^{2})\sqrt{\mu_{0}}g\ dv=0. \tag{3.8}\] Define the viscosity and thermal conductivity coefficients by \[\begin{split}\mu(T^{0})&:=T^{0}\langle\mathcal{A}^{0}_{3 1},\ \mathbf{L}_{0}^{-1}\mathcal{A}^{0}_{31}\rangle\equiv T^{0}\langle\mathcal{A}^{0}_{ ij},\ \mathbf{L}_{0}^{-1}\mathcal{A}^{0}_{ij}\rangle,\quad\forall i\neq j,\\ \kappa(T^{0})&:=\frac{2}{3}T^{0}\langle\mathcal{B}^ {0}_{3},\ \mathbf{L}_{0}^{-1}\mathcal{B}^{0}_{3}\rangle\equiv\frac{2}{3}T^{0}\langle \mathcal{B}^{0}_{i},\ \mathbf{L}_{0}^{-1}\mathcal{B}^{0}_{i}\rangle,\end{split} \tag{3.9}\] where \(i,j=1,2,3\) and \(\mathcal{A}^{0}_{ij}\), \(\mathcal{B}^{0}_{i}\) are \[\begin{split}\mathcal{A}^{0}_{ij}&:=\left\{\frac{(v _{i}-\mathfrak{u}^{0}_{i})(v_{j}-\mathfrak{u}^{0}_{j})}{T^{0}}-\delta_{ij}\frac {|v-\mathfrak{u}^{0}|^{2}}{3T^{0}}\right\}\sqrt{\mu_{0}},\\ \mathcal{B}^{0}_{i}&:=\frac{v_{i}-\mathfrak{u}^{0}_ {i}}{2\sqrt{T^{0}}}\left(\frac{|v-\mathfrak{u}^{0}|^{2}}{T^{0}}-5\right)\sqrt {\mu_{0}}.\end{split} \tag{3.10}\] Using Lemma 4.4 in [3], one has \(\langle T^{0}\mathcal{A}^{0}_{33},\mathbf{L}_{0}^{-1}\mathcal{A}^{0}_{33} \rangle=\frac{4}{3}\mu(T^{0})\). ### Approximate solutions and uniform estimate This section is devoted to the existence result for the linearized problem (3.7). 
To prove the existence of solution, we first consider a truncated approximate problem with penalized term: \[\begin{cases}\delta f^{\delta}+v_{3}\partial_{\eta}f^{\delta}+\mathbf{L}_{0} f^{\delta}=g,\\ f^{\delta}(\eta,v)|_{\gamma_{-}}=f^{\delta}(\eta,R_{\eta}v),\end{cases}\quad( \eta,v)\in\Omega_{d}\times\mathbb{R}^{3}, \tag{3.11}\] where \(\Omega_{d}=(0,d)\), \(d\geq 1\) and \(\delta\in(0,1]\). We define \[h^{\delta}(\eta,v):=w_{l}(v)f^{\delta}(\eta,v),\] then (3.11) can be rewritten as \[\begin{cases}\delta h^{\delta}+v_{3}\partial_{\eta}h^{\delta}+\nu^{0}(v)h^{ \delta}=K^{0}_{w_{l}}h^{\delta}+w_{l}g,\\ h^{\delta}(\eta,v)|_{\gamma_{-}}=h^{\delta}(\eta,R_{\eta}v),\end{cases} \tag{3.12}\] where \(K^{0}_{w_{l}}h=w_{l}K^{0}(\frac{h}{w_{l}}).\) Then it is clear that \[K^{0}_{w_{l}}h(v)=\int_{\mathbb{R}^{3}}k^{0}_{w_{l}}(v,u)h(u)du\quad\text{with }\quad k^{0}_{w_{l}}(v,u)=w_{l}(v)k^{0}(v,u)w_{l}(u)^{-1}. \tag{3.13}\] For the approximate problem (3.12), the most difficult part is to obtain the \(L^{\infty}_{\eta,v}\)-bound. Motivated by [10], multiplying (3.12) by \((1+|v|^{2})^{\frac{|\kappa|}{2}}\), one gets \[\begin{cases}(\nu^{0}(v)+\delta)(1+|v|^{2})^{\frac{|\kappa|}{2}}h^{\delta}+v_ {3}(1+|v|^{2})^{\frac{|\kappa|}{2}}\partial_{\eta}h^{\delta}=(1+|v|^{2})^{ \frac{|\kappa|}{2}}K^{0}_{w_{l}}h^{\delta}+(1+|v|^{2})^{\frac{|\kappa|}{2}}w_{ l}g,\\ h^{\delta}(\eta,v)|_{\gamma_{-}}=h^{\delta}(\eta,R_{\eta}v).\end{cases} \tag{3.14}\] Denote \(\hat{\nu}_{\delta}=(\nu^{0}(v)+\delta)(1+|v|^{2})^{\frac{|\kappa|}{2}},\hat{ \nu}=\nu^{0}(v)(1+|v|^{2})^{\frac{|\kappa|}{2}},\hat{v}_{3}=v_{3}(1+|v|^{2})^{ \frac{|\kappa|}{2}},\) then \(\eqref{eq:11}_{1}\) becomes \[\hat{\nu}_{\delta}h^{\delta}+\hat{v}_{3}\partial_{\eta}h^{\delta}=(1+|v|^{2})^ {\frac{|\kappa|}{2}}K^{0}_{w_{l}}h^{\delta}+(1+|v|^{2})^{\frac{|\kappa|}{2}}w_ {l}g.\] For given \((t,\eta,v)\), let \([X(s),V(s)]\) be the speeded backward characteristics for (3.14). Then \([X(s),V(s)]\) is determined by \[\begin{cases}\frac{dX(s)}{ds}=\hat{V}_{3}(s):=V_{3}(s)(1+|V|^{2})^{\frac{| \kappa|}{2}},\quad\frac{dV(s)}{ds}=0,\\ [X(t),V(t)]=[\eta,v],\end{cases}\] which yields that \[[X(s),V(s)]=[X(s;t,\eta,v),V(s;t,\eta,v)]=[\eta-(t-s)\hat{v}_{3},v].\] Now for each \((\eta,v)\) with \(\eta\in\bar{\Omega}_{d}\) and \(v_{3}\neq 0\), we define its backward exit time \(t_{\mathbf{b}}(\eta,v)\geq 0\) to be the last moment at which the back-time straight line \([X(-\tau;0,\eta,v),V(-\tau;0,\eta,v)]\) remains in \(\bar{\Omega}\): \[t_{\mathbf{b}}(\eta,v)=\sup\{s\geq 0:\eta-\tau\hat{v}_{3}\in\bar{\Omega}_{d}\text{ for }0 \leq\tau\leq s\}.\] We also define the last position \[\eta_{\mathbf{b}}(\eta,v)=\eta(t_{\mathbf{b}})=\eta-t_{\mathbf{b}}(\eta,v)\hat{v}_ {3}\in\partial\Omega_{d}.\] It is obvious that \(X(s)\), \(t_{\mathbf{b}}(\eta,v)\) and \(\eta_{\mathbf{b}}(x,v)\) are independent of the horizontal velocity \(v_{\shortparallel}:=(v_{1},v_{2})\). 
Let \(\eta\in\bar{\Omega}_{d}\), \((\eta,v)\notin\gamma_{0}\cup\gamma_{-}\) and \((t_{0},\eta_{0},v_{0})=(t,\eta,v)\), we inductively define \[(t_{k+1},\eta_{k+1},v_{k+1})=(t_{k}-t_{\mathbf{b}}(\eta_{k},v_{k}),\eta_{ \mathbf{b}}(\eta_{k},v_{k}),R_{\eta_{k+1}}v_{k}),\quad k\geq 1,\] and the back-time cycle as \[\begin{cases}X_{cl}(s;t,\eta,v)=\sum_{k}\mathbf{1}_{(t_{k+1},t_{k})}(s)\{\eta _{k}-\hat{v}_{k,3}(t_{k}-s)\},\\ \\ V_{cl}(s;t,\eta,v)=\sum_{k}\mathbf{1}_{(t_{k+1},t_{k})}(s)v_{k}.\end{cases} \tag{3.15}\] Clearly, for \(k\geq 1\) and \((\eta,v)\notin\gamma_{0}\cup\gamma_{-}\), it holds that \[\begin{split}&\eta_{k}=\frac{1-(-1)^{k}}{2}\eta_{1}+\frac{1+(-1 )^{k}}{2}\eta_{2},\quad v_{k,\shortparallel}=v_{0,\shortparallel},\quad v_{k,3}=(-1)^{k}v_{0,3},\\ & t_{k}-t_{k+1}=t_{1}-t_{2}=\frac{d}{|\hat{v}_{0,3}|}>0,\quad\nu^{0}(v) \equiv\nu^{0}(v_{k}).\end{split} \tag{3.16}\] Now we are in a position to construct solutions to (3.11) or equivalently (3.12). We first present a useful \(L^{\infty}\)_a priori_ uniform estimate which will be used frequently. **Lemma 3.3**.: _For any given \(\lambda\in[0,1]\), let \(f^{\lambda}\) be the solution of the following system:_ \[\begin{cases}\delta f^{\lambda}+v_{3}\partial_{\eta}f^{\lambda}+\nu^{0}(v)f^{ \lambda}-\lambda K^{0}f^{\lambda}=g,\\ \\ f^{\lambda}(\eta,v)|_{\gamma_{-}}=(1-\frac{1}{n})f^{\lambda}(\eta,R_{\eta}v)+ r(\eta,R_{\eta}v),\end{cases} \tag{3.17}\] _where \(n>1\) is an integer and \(g,r\) are given. Assume \(\|w_{l}f^{\lambda}\|_{L^{\infty}_{\eta,v}}+|w_{l}f^{\lambda}|_{L^{\infty}( \gamma_{+})}<\infty\), \(l>2\), then it holds that_ \[\|w_{l}f^{\lambda}\|_{L^{\infty}_{\eta,v}}+|w_{l}f^{\lambda}|_{L^{\infty}( \gamma_{+})}\leq C\|\sqrt{\nu^{0}}f^{\lambda}\|_{L^{2}_{\eta,v}}+C\{\|(\nu^{0 })^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}+|w_{l+4}r|_{L^{\infty}(\gamma_{+})}\}. \tag{3.18}\] _We point out the constant \(C>0\) is independent of \(\lambda\), \(d\) and \(n\)._ **Remark 3.4**.: _For hard potentials, similar uniform estimate has been obtained in [22]. For soft potentials, since the effect of collision frequency is weak, i.e., \(\nu^{0}(v)=(1+|v|)^{\kappa}\to 0\) as \(|v|\to\infty\), we have to be more careful. In fact, one has to loss some weight to control the boundary perturbation \(r\), see (3.18)_ **Proof.** Denote \(h^{\lambda}:=w_{l}f^{\lambda}\), then it holds that \[\begin{cases}\hat{\nu}_{\delta}h^{\lambda}+\frac{dh^{\lambda}}{ds}=(1+|v|^{2} )^{\frac{|\kappa|}{2}}\lambda K^{0}_{w_{l}}h^{\lambda}+(1+|v|^{2})^{\frac{| \kappa|}{2}}w_{l}g,\\ \\ h^{\lambda}(\eta,v)|_{\gamma_{-}}=(1-\frac{1}{n})h^{\lambda}(\eta,R_{\eta}v)+ w_{l}r(\eta,R_{\eta}v).\end{cases}\] Integrating along the characteristic line, one gets \[h^{\lambda}(\eta,v) =(1-\frac{1}{n})^{k}h^{\lambda}(\eta_{k},v_{k})e^{-\hat{\nu}_{ \delta}(v)(t-t_{k})}+\lambda\sum_{i=0}^{k-1}(1-\frac{1}{n})^{i}\int_{t_{i+1}} ^{t_{i}}e^{-\hat{\nu}_{\delta}(v)(t-s)}(1+|v|^{2})^{\frac{|\kappa|}{2}}K^{0}_ {w_{l}}h^{\lambda}ds\] \[+\sum_{i=0}^{k-1}(1-\frac{1}{n})^{i}\int_{t_{i+1}}^{t_{i}}e^{- \hat{\nu}_{\delta}(v)(t-s)}(1+|v|^{2})^{\frac{|\kappa|}{2}}w_{l}gds+\sum_{i=0} ^{k-1}(1-\frac{1}{n})^{i}(w_{l}r)(\eta_{i},v_{i+1})e^{-\hat{\nu}_{\delta}(v)(t- t_{i})}\] \[=:I_{1}+I_{2}+I_{3}+I_{4}. \tag{3.19}\] Taking \(k=\tilde{k}_{0}|v_{3}|(1+|v|^{2})^{\frac{|\kappa|}{2}}\) with \(\tilde{k}_{0}\gg 1\) chosen later. 
Then it holds that \[I_{1}\leq e^{-\nu_{0}(k-1)t_{\mathbf{b}}}\|h^{\lambda}\|_{L^{\infty}_{\eta,v}} \leq e^{-\frac{1}{2}\nu_{0}\tilde{k}_{0}d}\|h^{\lambda}\|_{L^{\infty}_{\eta,v}}, \tag{3.20}\] where \(\nu_{0}>0\) is a constant depending on \(\rho^{0},\mathfrak{u}^{0},T^{0}\). It is obvious that \[I_{3}\leq\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}. \tag{3.21}\] For \(I_{4}\), noting \(|v_{i}|=|v|\), one has \[I_{4}\leq k(1+|v|)^{-4}|w_{l+4}r|_{L^{\infty}(\gamma_{+})}\leq C|w_{l+4}r|_{L^{ \infty}(\gamma_{+})}. \tag{3.22}\] To estimate \(I_{2}\), we divide it into two parts: \[\sum_{i=0}^{k-1}(1-\frac{1}{n})^{i}\int_{t_{i+1}}^{t_{i}}e^{-\hat {\nu}_{\delta}(v)(t-s)}\lambda(1+|v|^{2})^{\frac{|v|}{2}}K_{w_{l}}^{0}h^{ \lambda}(X_{cl}(s),v_{i})ds\] \[\leq\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}_{\delta}( v)(t-s)}(1+|v|^{2})^{\frac{|v|}{2}}|K_{w_{l}}^{0,c}h^{\lambda}(X_{cl}(s),v_{i} )|ds\] \[+\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}_{\delta}(v)( t-s)}(1+|v|^{2})^{\frac{|v|}{2}}|K_{w_{l}}^{0,m}h^{\lambda}(X_{cl}(s),v_{i} )|ds. \tag{3.23}\] For the second term on the RHS of (3.23), one has from (2.6) that \[\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}_{\delta}(v)(t-s)}(1+|v|^{2 })^{\frac{|v|}{2}}|K_{w_{l}}^{0,m}h^{\lambda}(X_{cl}(s),v_{i})|ds\leq Cm^{3+ \kappa}e^{-\frac{|v|^{2}}{20}}\|h^{\lambda}\|_{L^{\infty}_{\eta,v}}. \tag{3.24}\] For the first term on the RHS of (3.23), we use (3.19) again to obtain \[\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}_{\delta}(v)(t- s)}(1+|v|^{2})^{\frac{|v|}{2}}|K_{w_{l}}^{0,c}h^{\lambda}(X_{cl}(s),v_{i})|ds\] \[=\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}_{\delta}(v)( t-s)}(1+|v|^{2})^{\frac{|v|}{2}}\Big{|}\int_{\mathbb{R}^{3}}k_{w_{l}}^{0,c}(v_{i},v ^{\prime})h^{\lambda}(X_{cl}(s),v^{\prime})dv^{\prime}\Big{|}ds\] \[\leq\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}_{\delta}( v)(t-s)}(1+|v|^{2})^{\frac{|v|}{2}}\int_{\mathbb{R}^{3}}|k_{w_{l}}^{0,c}(v_{i},v ^{\prime})|\times(1+|v^{\prime}|^{2})^{\frac{|v|}{2}}dv^{\prime}ds\] \[\qquad\times\sum_{j=0}^{k^{\prime}-1}\int_{t^{\prime}_{j+1}}^{t^{ \prime}_{j}}e^{-\hat{\nu}_{\delta}(v^{\prime})(s-s_{1})}\int_{\mathbb{R}^{3}}| k_{w_{l}}^{0,c}(v^{\prime}_{j},v^{\prime\prime})h^{\lambda}(X^{\prime}_{cl}(s_{1}),v ^{\prime\prime})|dv^{\prime\prime}ds_{1}\] \[\quad+C\big{(}m^{3+\kappa}+m^{\kappa-1}e^{-\frac{1}{2}\nu_{0}k_{0} d}\big{)}\|h^{\lambda}\|_{L^{\infty}_{\eta,v}}+Cm^{\kappa-1}\big{\{}\|(\nu^{0})^{-1}w_{l}g \|_{L^{\infty}_{\eta,v}}+|w_{i+4}r|_{L^{\infty}(\gamma_{+})}\big{\}}, \tag{3.25}\] where we have used (3.20)-(3.22), (3.24) and (2.9)-(2.10), and denoted \(X^{\prime}_{cl}(s_{1})=X_{cl}(s_{1};s,X_{cl}(s),v^{\prime})\), and \(t^{\prime}_{j},v^{\prime}_{j}\) are the corresponding times and velocities for specular cycles. Here \(k^{\prime}=\tilde{k}_{0}|v^{\prime}_{3}|(1+|v^{\prime}|^{2})^{\frac{|v|}{2}}\). For the first term on RHS of (3.25), we divide the proof into several cases. _Case 1._\(|v|\geq N\). Using (2.9), the first term on the RHS of (3.25) is bounded by \[Cm^{\kappa-1}\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu }_{\delta}(v)(t-s)}(1+|v|^{2})^{\frac{|v|}{2}}\int_{\mathbb{R}^{3}}|k_{w_{l}}^{0,c}(v_{i},v^{\prime})|(1+|v^{\prime}|)^{-2}dv^{\prime}ds\cdot\|h^{\lambda}\|_{L^ {\infty}_{\eta,v}}\] \[\leq Cm^{2(\kappa-1)}(1+|v|)^{-2}\|h^{\lambda}\|_{L^{\infty}_{ \eta,v}}\leq C\frac{m^{2(\kappa-1)}}{N^{2}}\|h^{\lambda}\|_{L^{\infty}_{\eta,v}}, \tag{3.26}\] where we have used the fact \(|v|\equiv|v_{i}|\) for \(i=0,1,\cdots\). 
It is important that the constant in (3.26) is independent of \(k\). _Case 2._\(|v|\leq N,|v^{\prime}|\geq 2N\) _or_\(|v^{\prime}|\leq 2N,|v^{\prime\prime}|\geq 3N\). Noting \(|v_{i}|=|v|\) and \(|v^{\prime}_{j}|=|v^{\prime}|\), we get either \(|v_{i}-v^{\prime}|\geq N\) or \(|v^{\prime}_{j}-v^{\prime\prime}|\geq N\), then either one of the following is valid for some small positive constant \(0<c_{2}\leq\frac{1}{32}\): \[\begin{split}|k^{0,c}_{w_{l}}(v_{i},v^{\prime})|&\leq e ^{-c_{2}N^{2}}|k^{0,c}_{w_{l}}(v_{i},v^{\prime})\exp\big{(}c_{2}|v_{i}-v^{ \prime}|^{2}\big{)}|,\\ |k^{0,c}_{w_{l}}(v^{\prime}_{j},v^{\prime\prime})|&\leq e ^{-c_{2}N^{2}}|k^{0,c}_{w_{l}}(v^{\prime}_{j},v^{\prime\prime})\exp\big{(}c_{ 2}|v^{\prime}_{j}-v^{\prime\prime}|^{2}\big{)}|,\end{split} \tag{3.27}\] which, together with (2.9), yields that \[\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}_{\delta}(v)(t-s)}\left\{ \iint_{|v|\leq N,|v^{\prime}|\geq 2N}+\iint_{|v^{\prime}|\leq 2N,|v^{\prime \prime}|\geq 3N}\right\}(\cdots)dv^{\prime\prime}ds_{1}dv^{\prime}ds\] \[\leq Cm^{\kappa-1}e^{-c_{2}N^{2}}\|h^{\lambda}\|_{L^{\infty}_{\eta,v}}\leq\frac{ Cm^{\kappa-1}}{N}\|h^{\lambda}\|_{L^{\infty}_{\eta,v}}. \tag{3.28}\] We also point out that the constant in (3.28) is independent of \(k\). _Case 3. \(|v|\leq N,|v^{\prime}|\leq 2N\), \(|v^{\prime\prime}|\leq 3N\)._ We denote \(\mathcal{D}=\{|v|\leq N,\,|v^{\prime}|\leq 2N,\,|v^{\prime\prime}|\leq 3N\}\). Noting \(\hat{\nu}_{\delta}(v)\geq\nu_{0}\), the corresponding part is bounded by \[\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\nu_{0}(t-s)}\iint_{ \mathcal{D}}|k^{0,c}_{w_{l}}(v_{i},v^{\prime})k^{0,c}_{w_{l}}(v^{\prime}_{j},v ^{\prime\prime})|(1+|v|^{2})^{\frac{|\kappa|}{2}}(1+|v^{\prime}|^{2})^{\frac{| \kappa|}{2}}dv^{\prime\prime}dv^{\prime}ds\] \[\qquad\qquad\times\sum_{j=0}^{k^{\prime}-1}\left(\int_{t^{\prime }_{j}-\frac{1}{N^{6}}}^{t^{\prime}_{j}}+\int_{t^{\prime}_{j+1}}^{t^{\prime}_{j }-\frac{1}{N^{6}}}\right)e^{-\nu_{0}(s-s_{1})}|h^{\lambda}(X^{\prime}_{cl}(s_{ 1}),v^{\prime\prime})|ds_{1}=:P_{1}+P_{2}.\] For \(P_{1}\), noting \(|v^{\prime}|\leq 2N\), one has \[P_{1}\leq C\frac{k^{\prime}m^{2\kappa-2}}{N^{6}}\|h^{\lambda}\|_{L^{\infty}_{\eta,v}}\leq C\frac{\tilde{k}_{0}m^{2(\kappa-1)}N^{4}}{N^{6}}\|h^{\lambda}\|_{L^{ \infty}_{\eta,v}}\leq\frac{Cm^{2(\kappa-1)}}{N^{2}}\|h^{\lambda}\|_{L^{\infty}_{ \eta,v}}.\] For \(P_{2}\), a direct calculation shows \[P_{2} \leq\sum_{i=0}^{k-1}\sum_{j=0}^{k^{\prime}-1}\int_{t_{i+1}}^{t_{i }}\int_{t^{\prime}_{j+1}}^{t^{\prime}_{j}-\frac{1}{N^{6}}}\iint_{\mathcal{D}}| k^{0,c}_{w_{l}}(v_{i},v^{\prime})k^{0,c}_{w_{l}}(v^{\prime}_{j},v^{\prime\prime})|(1+|v |^{2})^{\frac{|\kappa|}{2}}(1+|v^{\prime}|^{2})^{\frac{|\kappa|}{2}}\] \[\qquad\qquad\times|e^{-\nu_{0}(t-s_{1})}h^{\lambda}(X^{\prime}_{ cl}(s_{1}),v^{\prime\prime})|dv^{\prime\prime}dv^{\prime}ds_{1}ds\] \[\leq C_{N}\sum_{i=0}^{k-1}\sum_{j=0}^{k^{\prime}-1}\int_{t_{i+1}}^{t_{i}}\int_ {t^{\prime}_{j+1}}^{t^{\prime}_{j}-\frac{1}{N^{6}}}e^{-\nu_{0}(t-s_{1})}\Big{[} \iint_{\mathcal{D}}\nu^{0}(v^{\prime\prime})|f^{\lambda}(X^{\prime}_{cl}(s_{1 }),v^{\prime\prime})|^{2}dv^{\prime\prime}dv^{\prime}\Big{]}^{\frac{1}{2}}ds_{1}ds, \tag{3.29}\] where we used the fact \[\iint_{\mathcal{D}}|k^{0,c}_{w_{l}}(v_{i},v^{\prime})k^{0,c}_{w_{l}}(v^{\prime} _{j},v^{\prime\prime})|^{2}(1+|v|^{2})^{|\kappa|}(1+|v^{\prime}|^{2})^{|\kappa| }{w_{l}}^{2}(v^{\prime\prime})(\nu^{0})^{-1}(v^{\prime\prime})dv^{\prime}dv^{ \prime\prime}\leq C_{N}.\] Define \(y:=\eta^{\prime}_{j}-\hat{v}^{\prime}_{j,3}(t^{\prime}_{j}-s_{1})=X^{\prime}_{ cl}(s_{1})\).
We have \(\eta^{\prime}_{j}=0\,\mathrm{or}\,d\) and \(\hat{v}^{\prime}_{j,3}=(-1)^{j}\hat{v}^{\prime}_{0,3}\). For \(t^{\prime}_{j}=t^{\prime}_{j}(s_{1};s,X_{cl}(s),v^{\prime})\), it holds that \[s-t^{\prime}_{j}=\begin{cases}\frac{X_{cl}(s)}{|\hat{v}^{\prime}_{0,3}|}+(j-1) \frac{d}{|\hat{v}^{\prime}_{0,3}|},\quad\text{for }v^{\prime}_{0,3}>0,\\ \frac{d-X_{cl}(s)}{|\hat{v}^{\prime}_{0,3}|}+(j-1)\frac{d}{|\hat{v}^{\prime}_{0,3 }|},\quad\text{for }v^{\prime}_{0,3}<0,\end{cases}\] which yields that \[y=\begin{cases}\eta^{\prime}_{j}-(-1)^{j}\Big{\{}\hat{v}^{\prime}_{0,3}(s-s_{1} )-[X_{cl}(s)+(j-1)d]\Big{\}},\quad\text{for }v^{\prime}_{0,3}>0,\\ \eta^{\prime}_{j}-(-1)^{j}\Big{\{}\hat{v}^{\prime}_{0,3}(s-s_{1})+[jd-X_{cl}(s) ]\Big{\}},\quad\text{for }v^{\prime}_{0,3}<0.\end{cases}\] Since \(\eta^{\prime}_{j}=0\,\mathrm{or}\,d\) is independent of \(v^{\prime}_{0,3}\), we have \[\left|\frac{dy}{dv^{\prime}_{0,3}}\right|=(s-s_{1})\Big{\{}(1+|v^{\prime}|^{2})^{ \frac{|\kappa|}{2}}+|\kappa|(1+|v^{\prime}|^{2})^{\frac{|\kappa|}{2}-1}(v^{\prime}_{0,3}) ^{2}\Big{\}}\geq\frac{1}{N^{6}},\quad\text{for }s_{1}\in[t^{\prime}_{j+1},t^{\prime}_{j}-\frac{1}{N^{6}}],\] which yields that \[\left(\iint_{\mathcal{D}}\nu^{0}(v^{\prime\prime})|f^{\lambda}(\eta^{\prime}_{j}-\hat{v} ^{\prime}_{j,3}(t^{\prime}_{j}-s_{1}),v^{\prime\prime})|^{2}dv^{\prime}dv^{\prime \prime}\right)^{\frac{1}{2}}\leq C_{m,N}\|\sqrt{\nu^{0}}f^{\lambda}\|_{L^{2}_{ \eta,v}}.\] Combining the above estimates, the RHS of (3.29) is bounded by \[\frac{Cm^{2(\kappa-1)}}{N^{2}}\|h^{\lambda}\|_{L^{\infty}_{\eta,v}}+C_{m,N}\| \sqrt{\nu^{0}}f^{\lambda}\|_{L^{2}_{\eta,v}}.\] Combining all the estimates above, we obtain \[\|h^{\lambda}\|_{L^{\infty}_{\eta,v}}+|h^{\lambda}|_{L^{\infty}( \gamma_{+})} \leq C\big{(}m^{\kappa+3}+m^{\kappa-1}e^{-\frac{1}{2}\nu_{0}\tilde{k} _{0}d}+\frac{m^{2(\kappa-1)}}{N}\big{)}\big{\{}\|h^{\lambda}\|_{L^{\infty}_{ \eta,v}}+|h^{\lambda}|_{L^{\infty}(\gamma_{+})}\big{\}}\] \[+C_{m,N}\|\sqrt{\nu^{0}}f^{\lambda}\|_{L^{2}_{\eta,v}}+C_{m}\{\|( \nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}+|w_{l+4}r|_{L^{\infty}(\gamma_{+} )}\}.\] First taking \(m\) sufficiently small, and then taking \(\tilde{k}_{0}\) and \(N\) suitably large so that \[C\big{(}m^{\kappa+3}+m^{\kappa-1}e^{-\frac{1}{2}\nu_{0}\tilde{k}_{0}d}+\frac{m^ {2(\kappa-1)}}{N}\big{)}\leq\frac{1}{2},\] one has \[\|h^{\lambda}\|_{L^{\infty}_{\eta,v}}+|h^{\lambda}|_{L^{\infty}(\gamma_{+})} \leq C\|\sqrt{\nu^{0}}f^{\lambda}\|_{L^{2}_{\eta,v}}+C\big{\{}\|(\nu^{0})^{-1} w_{l}g\|_{L^{\infty}_{\eta,v}}+|w_{l+4}r|_{L^{\infty}(\gamma_{+})}\big{\}}.\] Therefore the proof of Lemma 3.3 is finished.

**Lemma 3.5**.: _Let \(\delta>0,d\geq 1\), \(n\geq n_{0}\), and \(l>2\). Assume \(\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}<\infty\). Then there exists a unique solution \(f^{n}\) to the following boundary value problem_ \[\begin{cases}\delta f^{n}+v_{3}\partial_{\eta}f^{n}+\nu^{0}(v)f^{n}-K^{0}f^{n} =g,\\ f^{n}(\eta,v)|_{\gamma_{-}}=(1-\frac{1}{n})f^{n}(\eta,R_{\eta}v),\end{cases} \quad(\eta,v)\in\Omega_{d}\times\mathbb{R}^{3}, \tag{3.30}\] _satisfying_ \[\|w_{l}f^{n}\|_{L^{\infty}_{\eta,v}}+|w_{l}f^{n}|_{L^{\infty}(\gamma_{+})} \leq C_{\delta,d}\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}},\] _where the positive constant \(C_{\delta,d}>0\) depends only on \(\delta,d\).
Moreover, if \(g\) is continuous in \(\Omega_{d}\times\mathbb{R}^{3}\), then \(f^{n}\) is continuous away from the grazing set \(\gamma_{0}\)._

**Proof.** We consider the solvability of the following boundary value problem \[\begin{cases}\mathcal{L}_{\lambda}f:=\delta f+v_{3}\partial_{\eta}f+\nu^{0}(v )f-\lambda K^{0}f=g,\\ f(\eta,v)|_{\gamma_{-}}=(1-\frac{1}{n})f(\eta,R_{\eta}v),\end{cases} \tag{3.31}\] for \(\lambda\in[0,1]\). For brevity, we denote by \(\mathcal{L}_{\lambda}^{-1}\) the solution operator associated with this problem, i.e., \(f:=\mathcal{L}_{\lambda}^{-1}g\) is a solution to the BVP (3.31). Our idea is to prove the existence of \(\mathcal{L}_{0}^{-1}\), and then to obtain the existence of \(\mathcal{L}_{1}^{-1}\) by a continuity argument in \(\lambda\). We split the proof into several steps.

_Step 1._ In this step, we prove the existence of \(\mathcal{L}_{0}^{-1}\). We consider the following approximate sequence \[\begin{cases}\mathcal{L}_{0}f^{i+1}=\delta f^{i+1}+v_{3}\partial_{\eta}f^{i+1} +\nu^{0}(v)f^{i+1}=g,\\ f^{i+1}(\eta,v)|_{\gamma_{-}}=(1-\frac{1}{n})f^{i}(\eta,R_{\eta}v),\end{cases} \tag{3.32}\] for \(i=0,1,2,\cdots\), where we have set \(f^{0}\equiv 0\). We will construct \(L^{\infty}\) solutions to (3.32) for \(i=0,1,2,\cdots\), and establish uniform \(L^{\infty}\)-estimates. Firstly, multiplying (3.32) by \(f^{i+1}\) and integrating the resultant equality over \(\Omega_{d}\times\mathbb{R}^{3}\), one obtains that \[\delta\|f^{i+1}\|_{L^{2}_{\eta,v}}^{2}+\frac{1}{2}|f^{i+1}|_{L^{2 }(\gamma_{+})}^{2}+\|\sqrt{\nu^{0}}f^{i+1}\|_{L^{2}_{\eta,v}}^{2}\] \[\leq\frac{1}{2}(1-\frac{1}{n})^{2}|f^{i}|_{L^{2}(\gamma_{+})}^{2}+ C_{\delta}\|g\|_{L^{2}_{\eta,v}}^{2}+\frac{\delta}{2}\|f^{i+1}\|_{L^{2}_{\eta,v}}^{2}, \tag{3.33}\] which yields that \[\delta\|f^{i+1}\|^{2}_{L^{2}_{\eta,v}}+|f^{i+1}|^{2}_{L^{2}(\gamma_{+})}\leq(1- \frac{1}{n})^{2}|f^{i}|^{2}_{L^{2}(\gamma_{+})}+C_{\delta}\|g\|^{2}_{L^{2}_{\eta, v}}.\] Considering the equation of \(f^{i+1}-f^{i}\), by a similar energy estimate as above, one obtains \[\delta\|f^{i+1}-f^{i}\|^{2}_{L^{2}_{\eta,v}}+|f^{i+1}-f^{i}|^{2}_ {L^{2}(\gamma_{+})}\] \[\leq(1-\frac{1}{n})^{2}|f^{i}-f^{i-1}|^{2}_{L^{2}(\gamma_{+})} \leq\cdots\leq(1-\frac{1}{n})^{2i}|f^{1}|^{2}_{L^{2}(\gamma_{+})}\] \[\leq C_{\delta}(1-\frac{1}{n})^{2i}\|g\|^{2}_{L^{2}_{\eta,v}}<\infty. \tag{3.34}\] Noting \(1-\frac{1}{n}<1\), \(\{f^{i}\}_{i=0}^{\infty}\) is a Cauchy sequence in \(L^{2}\), i.e., \[|f^{i}-f^{j}|^{2}_{L^{2}(\gamma_{+})}+\|f^{i}-f^{j}\|^{2}_{L^{2}_{\eta,v}}\to 0,\quad\text{as }i,j\to\infty,\] and we have, for \(i=0,1,2,\cdots\), \[|f^{i}|^{2}_{L^{2}(\gamma_{+})}+\|f^{i}\|^{2}_{L^{2}_{\eta,v}}\leq C_{\delta} \|g\|^{2}_{L^{2}_{\eta,v}}. \tag{3.35}\] Next we consider the uniform \(L^{\infty}_{\eta,v}\) estimate. Let \(h^{i}=w_{l}f^{i}\); one has that \[h^{i+1}(\eta,v)e^{\hat{\nu}_{\delta}(v)t}=(1-\frac{1}{n})h^{i}(\eta_{1},v_{1})e^{ \hat{\nu}_{\delta}(v)t_{1}}+\int_{t_{1}}^{t}e^{\hat{\nu}_{\delta}(v)s}(1+|v|^{2})^{\frac{| \kappa|}{2}}w_{l}gds.\] Then it is easy to obtain \[\|h^{1}\|_{L^{\infty}_{\eta,v}}+|h^{1}|_{L^{\infty}(\gamma_{+})}\leq\|(\nu^{ 0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}.\] Also, by iteration, it holds that \[\|h^{i}\|_{L^{\infty}_{\eta,v}}+|h^{i}|_{L^{\infty}(\gamma_{+})}\leq C_{i}\|( \nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}},\quad i=0,1,2,\cdots,\] where the constants \(C_{i}\) depend on \(i\).
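Although the constants \(C_{i}\) above depend on \(i\), a bound uniform in \(i\) follows at once from the contraction estimate for the differences derived next, via the telescoping sum \[\|h^{i}\|_{L^{\infty}_{\eta,v}}\leq\|h^{1}\|_{L^{\infty}_{\eta,v}}+\sum_{j=1}^{i-1}\|h^{j+1}-h^{j}\|_{L^{\infty}_{\eta,v}}\leq\sum_{j=0}^{i-1}\Big{(}1-\frac{1}{n}\Big{)}^{j}\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}\leq n\,\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}};\] this elementary observation is all that is needed when passing to the limit below.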
Considering \(h^{i+1}-h^{i}\), similarly, one obtains \[(h^{i+1}-h^{i})(\eta,v)=(1-\frac{1}{n})e^{-\hat{\nu}_{\delta}(t-t_{1})}(h^{i}- h^{i-1})(\eta_{1},v_{1}),\] which yields that \[\|h^{i+1}-h^{i}\|_{L^{\infty}_{\eta,v}}+|h^{i+1}-h^{i}|_{L^{\infty }(\gamma_{+})}\leq(1-\frac{1}{n})|(h^{i}-h^{i-1})|_{L^{\infty}(\gamma_{+})}\] \[\leq\cdots\leq(1-\frac{1}{n})^{i}|h^{1}|_{L^{\infty}(\gamma_{+})} \leq(1-\frac{1}{n})^{i}\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}.\] Since \(1-\frac{1}{n}<1\), \(\{h^{i}\}_{i=0}^{\infty}\) is a Cauchy sequence in \(L^{\infty}\). Hence \(\mathcal{L}_{0}^{-1}\) exists, with weighted \(L^{2}\) and \(L^{\infty}\) bounds.

_Step 2._ Assume \(f\) is a solution to (3.31) and \(\|w_{l}f\|_{L^{\infty}_{\eta,v}}+|w_{l}f|_{L^{\infty}(\gamma_{+})}<\infty\). Multiplying (3.31) by \(f\) and integrating over \(\mathbb{R}^{3}\), one obtains \[\delta\|f\|^{2}_{L^{2}_{v}}+\frac{d}{d\eta}\int_{\mathbb{R}^{3}}v_{3}f^{2}\ dv+\lambda c_{1}\|(\mathbf{I}-\mathbf{P}_{0})f\|^{2}_{ \nu}\leq\frac{\delta}{4}\|f\|^{2}_{L^{2}_{v}}+\frac{C}{\delta}\|g\|^{2}_{L^{2} _{v}}, \tag{3.36}\] where we used \[\langle f,\nu^{0}(v)f\rangle-\lambda\langle f,K^{0}f\rangle\geq\lambda c_{1}\| (\mathbf{I}-\mathbf{P}_{0})f\|^{2}_{\nu}+C(1-\lambda)\|f\|^{2}_{\nu}.\] A direct calculation shows that \[\int_{0}^{d}\int_{\mathbb{R}^{3}}\frac{d}{d\eta}(v_{3}f^{2})\ dvd\eta =\int_{\mathbb{R}^{3}}v_{3}|f|^{2}(d)\ dv-\int_{\mathbb{R}^{3}}v_{ 3}|f|^{2}(0)\ dv\] \[=\int_{v_{3}>0}v_{3}f^{2}(d,v)\ dv+\int_{v_{3}<0}(1-\frac{1}{n})^ {2}v_{3}|f|^{2}(d,Rv)\ dv\] \[\quad-\int_{v_{3}>0}(1-\frac{1}{n})^{2}v_{3}|f|^{2}(0,Rv)\ dv- \int_{v_{3}<0}v_{3}|f|^{2}(0,v)\ dv\] \[\|w_{l}f^{n}\|_{L^{\infty}_{\eta,v}}+|w_{l}f^{n}|_{L^{\infty}(\gamma_{+})}\leq C \Big{\{}\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}+\|f^{n}\|_{L^{2}_{\eta,v}} \Big{\}}\leq C_{\delta,d}\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}. \tag{3.43}\] Taking the difference \(f^{n_{1}}-f^{n_{2}}\) with \(n_{1},n_{2}\geq n_{0}\), we know that \[\begin{cases}\delta(f^{n_{1}}-f^{n_{2}})+v_{3}\partial_{\eta}(f^{n_{1}}-f^{n_{ 2}})+\mathbf{L}_{0}(f^{n_{1}}-f^{n_{2}})=0,\\ (f^{n_{1}}-f^{n_{2}})(\eta,v)|_{\gamma_{-}}=(1-\frac{1}{n_{1}})(f^{n_{1}}-f^{n_ {2}})(\eta,R_{\eta}v)+(\frac{1}{n_{2}}-\frac{1}{n_{1}})f^{n_{2}}(\eta,R_{\eta} v).\end{cases} \tag{3.44}\] Multiplying (3.44) by \(f^{n_{1}}-f^{n_{2}}\) and integrating it over \(\Omega_{d}\times\mathbb{R}^{3}\), we can obtain \[\delta\|(f^{n_{1}}-f^{n_{2}})\|^{2}_{L^{2}_{\eta,v}}+c_{1}\int_{0}^{d}\|( \mathbf{I}-\mathbf{P}_{0})(f^{n_{1}}-f^{n_{2}})\|^{2}_{\nu}\ d\eta\] \[\|w_{l}f\|_{L^{\infty}_{\eta,v}}+|w_{l}f|_{L^{\infty}(\gamma_{+})}\leq C_{d}\|(\nu^{ 0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}. \tag{3.50}\] _Moreover, if \(g\) is continuous in \(\Omega_{d}\times\mathbb{R}^{3}\), then \(f\) is continuous away from the grazing set \(\gamma_{0}\)._

**Proof.** Let \(f^{\delta}\) be the solution of (3.11) constructed in Lemma 3.6. We shall consider the limit \(\delta\to 0\) to obtain a solution of (3.49). By similar arguments as in [22, Lemma 3.7], we can obtain \[\|\mathbf{P}_{0}f^{\delta}\|^{2}\leq Cd^{6}\Big{(}\|(\mathbf{I}-\mathbf{P}_{0 })f^{\delta}\|^{2}_{\nu}+\|g\|^{2}_{L^{2}_{\eta,v}}\Big{)}. \tag{3.51}\] It is noted that [22, Lemma 3.7] was proved for the hard sphere case, but the proof can be generalized to both hard and soft potentials without any difficulty.
Multiplying (3.11) by \(f^{\delta}\) and integrating over \(\Omega_{d}\times\mathbb{R}^{3}\), we have \[\delta\|f^{\delta}\|^{2}_{L^{2}_{\eta,v}}+c_{0}\|(\mathbf{I}-\mathbf{P}_{0})f^{ \delta}\|^{2}_{\nu}\leq\vartheta\|f^{\delta}\|^{2}_{L^{2}_{\eta,v}}+C_{\vartheta }\|g\|^{2}_{L^{2}_{\eta,v}}, \tag{3.52}\] which, together with (3.51) and taking \(\vartheta\) small enough (depending on \(d\)), yields that \[\|\sqrt{\nu^{0}}f^{\delta}\|^{2}_{L^{2}_{\eta,v}}\leq C_{d}\|g\|^{2}_{L^{2}_{\eta,v}}. \tag{3.53}\] Applying (3.18) to \(f^{\delta}\) and using (3.53), one obtains \[\|w_{l}f^{\delta}\|_{L^{\infty}_{\eta,v}}+|w_{l}f^{\delta}|_{L^{\infty}(\gamma_ {+})}\leq C_{d}\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}. \tag{3.54}\] Next we consider the convergence of \(f^{\delta}\) as \(\delta\to 0+\). For any \(\delta_{1},\delta_{2}>0\), we consider the difference \(f^{\delta_{2}}-f^{\delta_{1}}\) satisfying \[\begin{cases}v_{3}\partial_{\eta}(f^{\delta_{2}}-f^{\delta_{1}})+ \mathbf{L}_{0}(f^{\delta_{2}}-f^{\delta_{1}})=-\delta_{2}f^{\delta_{2}}+\delta _{1}f^{\delta_{1}},\\ (f^{\delta_{2}}-f^{\delta_{1}})|_{\gamma_{-}}=(f^{\delta_{2}}-f^{\delta_{1}}) (\eta,R_{\eta}v).\end{cases} \tag{3.55}\] Multiplying (3.55) by \(f^{\delta_{2}}-f^{\delta_{1}}\), integrating the resultant equation and using similar arguments as in (3.52)-(3.54), one gets \[\|\sqrt{\nu^{0}}(f^{\delta_{2}}-f^{\delta_{1}})\|^{2}_{L^{2}_{\eta,v}} \leq C_{d}\|\delta_{2}f^{\delta_{2}}-\delta_{1}f^{\delta_{1}}\|^{2}_{L^{2}_{ \eta,v}}\leq C_{d}(\delta_{1}^{2}+\delta_{2}^{2})\cdot\|(\nu^{0})^{-1}w_{l}g \|^{2}_{L^{\infty}_{\eta,v}}\to 0, \tag{3.56}\] as \(\delta_{1}\), \(\delta_{2}\to 0+\). Finally, applying (3.18) to \(f^{\delta_{2}}-f^{\delta_{1}}\) and using (3.56), we obtain \[\|w_{l}(f^{\delta_{2}}-f^{\delta_{1}})\|_{L^{\infty}_{\eta,v}}+|w_ {l}(f^{\delta_{2}}-f^{\delta_{1}})|_{L^{\infty}(\gamma_{+})}\] \[\leq C\Big{\{}\|(\nu^{0})^{-1}w_{l}(\delta_{2}f^{\delta_{2}}- \delta_{1}f^{\delta_{1}})\|_{L^{\infty}_{\eta,v}}+\|\sqrt{\nu^{0}}(f^{\delta_{ 2}}-f^{\delta_{1}})\|_{L^{2}_{\eta,v}}\Big{\}}\] \[\leq C_{d}(\delta_{1}+\delta_{2})\|(\nu^{0})^{-1}w_{l}g\|_{L^{ \infty}_{\eta,v}}\to 0, \tag{3.57}\] as \(\delta_{1},\delta_{2}\to 0+\). With (3.57), we know that there exists a function \(f\) such that \(\|w_{l}\,(f^{\delta}-f)\|_{L^{\infty}_{\eta,v}}\to 0\) as \(\delta\to 0+\), and it is direct to see that \(f\) solves (3.49). Also, (3.50) follows immediately from (3.54). The continuity of \(f\) directly follows from the \(L^{\infty}_{\eta,v}\)-convergence and the continuity of \(f^{\delta}\). Therefore the proof of Lemma 3.7 is complete.

To obtain the solution of the half-space problem, we need some uniform estimates independent of \(d\), so that we can take the limit \(d\to\infty\). Let \(f\) be the solution of (3.49); we denote \[\mathbf{P}_{0}f(\eta,v)=\big{[}a(\eta)+b(\eta)\cdot(v-\mathfrak{u}^{0})+c( \eta)(|v-\mathfrak{u}^{0}|^{2}-3T^{0})\big{]}\sqrt{\mu_{0}}.\] Multiplying (3.49) by \(\sqrt{\mu_{0}}\) and using (3.8), we have \[0=\frac{d}{d\eta}\int_{\mathbb{R}^{3}}v_{3}\sqrt{\mu_{0}}f(\eta,v)dv=\frac{d} {d\eta}b_{3}(\eta)\equiv 0. \tag{3.58}\] Since \(f\) satisfies the specular boundary condition, it holds that \(b_{3}(\eta)|_{\eta=0}=b_{3}(\eta)|_{\eta=d}=0\), which, together with (3.58), yields \[b_{3}(\eta)=0,\quad\text{for }\eta\in[0,d].
\tag{3.59}\] Let \((\phi_{0},\phi_{1},\phi_{2},\phi_{3})\) be some constants chosen later, we define \[\bar{f}(\eta,v) :=f(\eta,v)+[\phi_{0}+\phi_{1}(v_{1}-\mathfrak{u}^{0}_{1})+\phi_{ 2}(v_{2}-\mathfrak{u}^{0}_{2})+\phi_{3}(|v-\mathfrak{u}^{0}|^{2}-3T^{0})]\sqrt {\mu_{0}}\] \[=[\bar{a}(\eta)+\bar{b}_{1}(\eta)(v_{1}-\mathfrak{u}^{0}_{1})+ \bar{b}_{2}(\eta)(v_{2}-\mathfrak{u}^{0}_{2})+\bar{c}(\eta)(|v-\mathfrak{u}^{0}| ^{2}-3T^{0})]\sqrt{\mu_{0}}\] \[\qquad+(\mathbf{I}-\mathbf{P}_{0})\bar{f},\] where \[\begin{cases}\bar{a}(\eta)=a(\eta)+\phi_{0},\\ \bar{b}_{i}(\eta)=b_{i}(\eta)+\phi_{i},\quad i=1,2,\\ \bar{c}(\eta)=c(\eta)+\phi_{3}.\end{cases}\] It follows from (3.59) that \[\bar{b}_{3}(\eta)\equiv 0\quad\text{and}\quad(\mathbf{I}-\mathbf{P}_{0})\bar{f} (\eta,v)\equiv(\mathbf{I}-\mathbf{P}_{0})f(\eta,v)\quad\forall\eta\in[0,d]. \tag{3.60}\] The equation for \(\bar{f}\) is \[\begin{cases}v_{3}\partial_{\eta}\bar{f}+\mathbf{L}_{0}\bar{f}=g,\quad(\eta,v) \in\Omega_{d}\times\mathbb{R}^{3},\\ \bar{f}(\eta,v)|_{\gamma_{-}}=\bar{f}(\eta,R_{\eta}v).\end{cases} \tag{3.61}\] Hence it follows from (3.50) that \[\|w_{l}\,\bar{f}\|_{L^{\infty}_{\eta,v}}+|w_{l}\,\bar{f}\|_{L^{\infty}(\gamma_{ +})}\leq C_{d}\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}+C_{d}|(\phi_{0}, \phi_{1},\phi_{2},\phi_{3})|.\] Multiplying \(\eqref{eq:3.61}_{1}\) by \((v_{1}-\mathfrak{u}_{1}^{0},v_{2}-\mathfrak{u}_{2}^{0},|v-\mathfrak{u}^{0}|^{2}-5T ^{0})\sqrt{\mu_{0}}\) and using (3.8), we get \[\begin{split}\int_{\mathbb{R}^{3}}v_{3}(v_{i}-\mathfrak{u}_{i}^{0} )\sqrt{\mu_{0}}\bar{f}(\eta,v)dv&=0,\quad\forall\,\eta\in[0,d], \quad i=1,2,\\ \int_{\mathbb{R}^{3}}v_{3}(|v-\mathfrak{u}^{0}|^{2}-5T^{0})\sqrt{ \mu_{0}}\bar{f}(\eta,v)dv&=0,\quad\forall\,\eta\in[0,d].\end{split} \tag{3.62}\] It follows from (3.60) and (3.62) that \[\int_{\mathbb{R}^{3}}v_{3}|\mathbf{P}_{0}\bar{f}(\eta,v)|^{2}dv\equiv\int_{ \mathbb{R}^{3}}v_{3}\mathbf{P}_{0}\bar{f}(\eta,v)\cdot(\mathbf{I}-\mathbf{P}_ {0})\bar{f}(\eta,v)dv\equiv 0, \tag{3.63}\] which yields that \[\int_{\mathbb{R}^{3}}v_{3}|\bar{f}(\eta,v)|^{2}dv=\int_{\mathbb{R}^{3}}v_{3}|( \mathbf{I}-\mathbf{P}_{0})\bar{f}(\eta,v)|^{2}dv,\quad\forall\eta\in[0,d]. 
\tag{3.64}\] Multiplying (3.61) by \(\bar{f}\) and using (3.64), (3.8), we obtain \[\frac{d}{d\eta}\int_{\mathbb{R}^{3}}v_{3}|(\mathbf{I}-\mathbf{P}_{0})\bar{f}| ^{2}dv+\frac{1}{2}c_{1}\|(\mathbf{I}-\mathbf{P}_{0})\bar{f}\|_{\nu}^{2}\leq C \|(\nu^{0})^{-\frac{1}{2}}g\|_{L_{v}^{2}}^{2},\] which yields that \[\int_{0}^{d}\|(\mathbf{I}-\mathbf{P}_{0})\bar{f}\|_{\nu}^{2}\ d\eta\leq C\int _{0}^{d}\|(\nu^{0})^{-\frac{1}{2}}g\|_{L_{v}^{2}}^{2}d\eta, \tag{3.65}\] where we have used (3.8) to derive \[\int_{\mathbb{R}^{3}}g\bar{f}dv=\int_{\mathbb{R}^{3}}g(\mathbf{I}-\mathbf{P}_ {0})\bar{f}dv\leq\frac{1}{2}c_{1}\|(\mathbf{I}-\mathbf{P}_{0})\bar{f}\|_{\nu }^{2}+C\|(\nu^{0})^{-\frac{1}{2}}g\|_{L_{v}^{2}}^{2}.\] **Lemma 3.8**.: _There exist constants \((\phi_{0},\phi_{1},\phi_{2},\phi_{3})\) such that_ \[\begin{split}&\int_{\mathbb{R}^{3}}v_{3}\bar{f}(d,v)\cdot v_{3} \sqrt{\mu_{0}}dv=0,\\ &\int_{\mathbb{R}^{3}}v_{3}\bar{f}(d,v)\cdot\mathbf{L}_{0}^{-1}( \mathcal{A}_{3i}^{0})dv=0,\ i=1,2,\\ &\int_{\mathbb{R}^{3}}v_{3}\bar{f}(d,v)\cdot\mathbf{L}_{0}^{-1}( \mathcal{B}_{3}^{0})dv=0.\end{split} \tag{3.66}\] **Proof.** A direct calculation shows that \[\begin{split}\int_{\mathbb{R}^{3}}v_{3}\bar{f}(\eta,v)\cdot v_{3 }\sqrt{\mu_{0}}dv&=\rho^{0}T^{0}\bar{a}(\eta)+2\rho^{0}(T^{0})^{2 }\bar{c}(\eta)+T^{0}\int_{\mathbb{R}^{3}}\mathcal{A}_{33}^{0}(v)\cdot(\mathbf{ I}-\mathbf{P}_{0})\bar{f}(\eta,v)dv\\ &=\rho^{0}T^{0}\phi_{0}+2\rho^{0}(T^{0})^{2}\phi_{3}+\rho^{0}T^{ 0}a(\eta)+2\rho^{0}(T^{0})^{2}c(\eta)\\ &\quad+T^{0}\int_{\mathbb{R}^{3}}\mathcal{A}_{33}^{0}(v)\cdot( \mathbf{I}-\mathbf{P}_{0})f(\eta,v)dv,\end{split} \tag{3.67}\] \[\begin{split}\int_{\mathbb{R}^{3}}v_{3}\bar{f}(\eta,v)\cdot \mathbf{L}_{0}^{-1}(\mathcal{A}_{31}^{0})dv&=\mu(T^{0})\bar{b}_{1}( \eta)+\int_{\mathbb{R}^{3}}v_{3}(\mathbf{I}-\mathbf{P}_{0})\bar{f}(\eta,v)\cdot \mathbf{L}_{0}^{-1}(\mathcal{A}_{31}^{0})dv\\ &=\mu(T^{0})\phi_{1}+\mu(T^{0})b_{1}(\eta)+\int_{\mathbb{R}^{3}}v _{3}(\mathbf{I}-\mathbf{P}_{0})f(\eta,v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{A} _{31}^{0})dv,\end{split} \tag{3.68}\] \[\begin{split}\int_{\mathbb{R}^{3}}v_{3}\bar{f}(\eta,v)\cdot \mathbf{L}_{0}^{-1}(\mathcal{A}_{32}^{0})dv&=\mu(T^{0})\bar{b}_{2}( \eta)+\int_{\mathbb{R}^{3}}v_{3}(\mathbf{I}-\mathbf{P}_{0})\bar{f}(\eta,v)\cdot \mathbf{L}_{0}^{-1}(\mathcal{A}_{32}^{0})dv\\ &=\mu(T^{0})\phi_{2}+\mu(T^{0})b_{2}(\eta)+\int_{\mathbb{R}^{3}}v _{3}(\mathbf{I}-\mathbf{P}_{0})f(\eta,v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{A} _{32}^{0})dv,\end{split} \tag{3.69}\] \[\begin{split}\int_{\mathbb{R}^{3}}v_{3}\bar{f}(\eta,v)\cdot \mathbf{L}_{0}^{-1}(\mathcal{B}_{3}^{0})dv&=\kappa(T^{0})\bar{c}( \eta)+\int_{\mathbb{R}^{3}}v_{3}(\mathbf{I}-\mathbf{P}_{0})\bar{f}(\eta,v)\cdot \mathbf{L}_{0}^{-1}(\mathcal{B}_{3}^{0})dv\\ &=\kappa(T^{0})\phi_{3}+\kappa(T^{0})c(\eta)+\int_{\mathbb{R}^{3}} v_{3}(\mathbf{I}-\mathbf{P}_{0})f(\eta,v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{B} _{3}^{0})dv,\end{split} \tag{3.70}\] where we have used the notations in (3.9). 
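Before solving for \((\phi_{0},\phi_{1},\phi_{2},\phi_{3})\), note that the \(4\times 4\) linear system displayed next is upper triangular, with determinant \[1\cdot\mu(T^{0})\cdot\mu(T^{0})\cdot\kappa(T^{0})=\mu(T^{0})^{2}\kappa(T^{0})>0,\] provided, as is standard, that the viscosity \(\mu(T^{0})\) and the heat conductivity \(\kappa(T^{0})\) are positive; this is the non-singularity that guarantees the shift constants exist and are unique.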
Using (3.67)-(3.70), (3.66) is equivalent to \[\left(\begin{array}{cccc}1&0&0&2T^{0}\\ 0&\mu(T^{0})&0&0\\ 0&0&\mu(T^{0})&0\\ 0&0&0&\kappa(T^{0})\end{array}\right)\left(\begin{array}{c}\phi_{0}\\ \phi_{1}\\ \phi_{2}\\ \phi_{3}\end{array}\right)=-\left(\begin{array}{c}a(d)+2T^{0}c(d)+\frac{1}{ \rho^{0}}\int_{\mathbb{R}^{3}}(\mathbf{I}-\mathbf{P}_{0})f(d,v)\cdot\mathcal{A }_{33}^{0}(v)dv\\ \mu(T^{0})b_{1}(d)+\int_{\mathbb{R}^{3}}v_{3}(\mathbf{I}-\mathbf{P}_{0})f(d,v) \cdot\mathbf{L}_{0}^{-1}(\mathcal{A}_{31}^{0})dv\\ \mu(T^{0})b_{2}(d)+\int_{\mathbb{R}^{3}}v_{3}(\mathbf{I}-\mathbf{P}_{0})f(d,v) \cdot\mathbf{L}_{0}^{-1}(\mathcal{A}_{32}^{0})dv\\ \kappa(T^{0})c(d)+\int_{\mathbb{R}^{3}}v_{3}(\mathbf{I}-\mathbf{P}_{0})f(d,v) \cdot\mathbf{L}_{0}^{-1}(\mathcal{B}_{3}^{0})dv\end{array}\right).\] Since the matrix is non-singular, \((\phi_{0},\phi_{1},\phi_{2},\phi_{3})\) can be found. Therefore the proof of Lemma 3.8 is completed.

From now on, the proof is quite different from the hard sphere case, since we do not have \(\nu^{0}(v)\geq\sigma|v_{3}|\) for soft potentials. Hence it is hard to obtain exponential decay in space as in the hard sphere case. Our strategy is to obtain the spatial decay at the expense of particle velocity weight.

**Lemma 3.9**.: _Let \((\phi_{0},\phi_{1},\phi_{2},\phi_{3})\) be the ones determined in Lemma 3.8, then it holds that_ \[\int_{0}^{d}\|\bar{f}\|_{\nu}^{2}d\eta\leq C\int_{0}^{d}\int_{\mathbb{R}^{3}}(1 +\eta)^{2p_{0}}(\nu^{0})^{-1}g^{2}dvd\eta,\quad p_{0}>1, \tag{3.71}\] _where the constant \(C>0\) is independent of \(d\)._

**Proof.** Multiplying (3.61) by \(\mathbf{L}_{0}^{-1}(\mathcal{A}_{31}^{0}),\mathbf{L}_{0}^{-1}(\mathcal{A}_{32 }^{0})\) and \(\mathbf{L}_{0}^{-1}(\mathcal{B}_{3}^{0})\), respectively, one has from (3.62) that \[\frac{d}{d\eta}\left(\begin{array}{c}\int_{\mathbb{R}^{3}}v_{3}\bar{f}(\eta, v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{A}_{31}^{0})dv\\ \int_{\mathbb{R}^{3}}v_{3}\bar{f}(\eta,v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{A}_ {32}^{0})dv\\ \int_{\mathbb{R}^{3}}v_{3}\bar{f}(\eta,v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{B} _{3}^{0})dv\end{array}\right)=\left(\begin{array}{c}\int_{\mathbb{R}^{3}}g \cdot\mathbf{L}_{0}^{-1}(\mathcal{A}_{31}^{0})dv\\ \int_{\mathbb{R}^{3}}g\cdot\mathbf{L}_{0}^{-1}(\mathcal{A}_{32}^{0})dv\\ \int_{\mathbb{R}^{3}}g\cdot\mathbf{L}_{0}^{-1}(\mathcal{B}_{3}^{0})dv\end{array} \right).\] Integrating the above system over \([\eta,d]\) and using (3.66), one obtains \[\left(\begin{array}{c}\int_{\mathbb{R}^{3}}v_{3}\bar{f}(\eta, v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{A}_{31}^{0})dv\\ \int_{\mathbb{R}^{3}}v_{3}\bar{f}(\eta,v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{A}_ {32}^{0})dv\\ \int_{\mathbb{R}^{3}}v_{3}\bar{f}(\eta,v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{B} _{3}^{0})dv\end{array}\right)=-\int_{\eta}^{d}\left(\begin{array}{c}\int_{ \mathbb{R}^{3}}g\cdot\mathbf{L}_{0}^{-1}(\mathcal{A}_{31}^{0})dv\\ \int_{\mathbb{R}^{3}}g\cdot\mathbf{L}_{0}^{-1}(\mathcal{A}_{32}^{0})dv\\ \int_{\mathbb{R}^{3}}g\cdot\mathbf{L}_{0}^{-1}(\mathcal{B}_{3}^{0})dv\end{array} \right)(z)dz,\] which, together with (3.68)-(3.70) and Proposition 2.5, yields that \[|(\mu(T^{0})\bar{b}_{1},\mu(T^{0})\bar{b}_{2},\kappa(T^{0})\bar{c })(\eta)|\leq C\|(\mathbf{I}-\mathbf{P}_{0})f(\eta)\|_{\nu}+C\int_{\eta}^{d}\|g(z )\|_{L_{v}^{2}}dz, \tag{3.72}\] where we used Proposition 2.5 to derive the decay estimates for \(v_{3}\mathbf{L}_{0}^{-1}(\mathcal{A}_{31}^{0}),v_{3}\mathbf{L}_{0}^{-1}( \mathcal{A}_{32}^{0}),v_{3}\mathbf{L}_{0}^{-1}(\mathcal{B}_{3}^{0})\).
It follows from (3.67) that \[\bar{a}(\eta)=-2T^{0}\bar{c}(\eta)-\frac{1}{\rho^{0}T^{0}}\int_{\mathbb{R}^{3}} (\mathbf{I}-\mathbf{P}_{0})\bar{f}(\eta,v)\cdot v_{3}^{2}\sqrt{\mu_{0}}dv-\frac {1}{\rho^{0}T^{0}}\int_{\eta}^{d}\int_{\mathbb{R}^{3}}g\cdot v_{3}\sqrt{\mu_{0} }dvdz,\] which yields that \[|\bar{a}(\eta)|\leq C\|(\mathbf{I}-\mathbf{P}_{0})f(\eta)\|_{\nu}+C\int_{\eta}^{d} \|g(z)\|_{L_{v}^{2}}dz. \tag{3.73}\] Using (3.65), (3.72)-(3.73), one gets \[\int_{0}^{d}\|\mathbf{P}_{0}\bar{f}\|_{\nu}^{2}d\eta\leq C\int_{0}^{d}\|(\mathbf{I}-\mathbf{P}_{0})f(\eta)\|_{\nu}^{2}d\eta+C \int_{0}^{d}\Big{\{}\int_{\eta}^{d}\|g(z)\|_{L_{v}^{2}}dz\Big{\}}^{2}d\eta\] \[\leq C\int_{0}^{d}\int_{\mathbb{R}^{3}}(\nu^{0})^{-1}g^{2}dvd\eta+C \int_{0}^{d}\int_{\eta}^{d}(1+z)^{-2p_{0}}dzd\eta\cdot\int_{0}^{d}\int_{\mathbb{R} ^{3}}(1+\eta)^{2p_{0}}g^{2}dvd\eta\] \[\leq C\int_{0}^{d}\int_{\mathbb{R}^{3}}(1+\eta)^{2p_{0}}(\nu^{0})^{-1 }g^{2}dvd\eta,\quad p_{0}>1. \tag{3.74}\] We conclude (3.71) from (3.74) and (3.65). The proof of Lemma 3.9 is completed. Since we will encounter some space weight \(\eta^{l}\) in the formulation of Hilbert expansion (see (1.22) for details), then we have to derive at least polynomial decay for Knudsen boundary layer so that the analysis can be closed. **Lemma 3.10**.: _Let \((\phi_{0},\phi_{1},\phi_{2},\phi_{3})\) be the ones determined in Lemma 3.8, then it holds that_ \[\int_{0}^{d}(1+\eta)^{k}\|w_{l}\bar{f}\|_{\nu}^{2}d\eta\leq C_{k}\int_{0}^{d}(1 +\eta)^{2p_{k}}\|w_{l+2k+2}g\|_{L_{v}^{2}}^{2}d\eta,\quad p_{k}>\frac{k}{2}+1, \tag{3.75}\] _where \(k\) is a non-negative integer and the constant \(C_{k}\) depends only on \(k\)._ **Proof.** We divide the proof into three steps. _Step 1._ Let \(l\) be any positive constant. From [37, Corollary 1], it holds that \[\langle w_{l}^{2}\mathbf{L}_{0}\mathfrak{h},\mathfrak{h}\rangle\geq\frac{1}{2 }\|w_{l}\mathfrak{h}\|_{\nu}^{2}-C\|\mathfrak{h}\|_{\nu}^{2}. \tag{3.76}\] Multiplying (3.61) by \(w_{l}^{2}\bar{f}\) and integrating on \(\mathbb{R}^{3}\), one has \[\frac{1}{2}\frac{d}{d\eta}\int_{\mathbb{R}^{3}}v_{3}w_{l}^{2}\bar{f}^{2}dv+ \int_{\mathbb{R}^{3}}w_{l}^{2}\bar{f}\cdot\mathbf{L}_{0}\bar{f}dv=\int_{ \mathbb{R}^{3}}w_{l}^{2}\bar{f}gdv. \tag{3.77}\] Integrating (3.77) on \([0,d]\) and using (3.76), one gets \[\int_{0}^{d}\|w_{l}\bar{f}\|_{\nu}^{2}\ d\eta \lesssim\int_{0}^{d}\|\bar{f}\|_{\nu}^{2}d\eta+\int_{0}^{d}\int_{ \mathbb{R}^{3}}w_{l}^{2}\bar{f}g\ dvd\eta\] \[\lesssim\int_{0}^{d}\|\bar{f}\|_{\nu}^{2}d\eta+\int_{0}^{d}\|( \nu^{0})^{-\frac{1}{2}}w_{l}g\|_{L_{v}^{2}}^{2}d\eta\] \[\lesssim\int_{0}^{d}\|\bar{f}\|_{\nu}^{2}d\eta+\int_{0}^{d}\|w_{ l+2}g\|_{L_{v}^{2}}^{2}d\eta\] \[\lesssim\int_{0}^{d}(1+\eta)^{2p_{0}}\|w_{l+2}g\|_{L_{v}^{2}}^{2} d\eta,\quad p_{0}>1, \tag{3.78}\] where we have used Lemma 3.9. _Step 2._ Multiplying (3.61) by \(\bar{f}\) and integrating over \(\mathbb{R}^{3}\), one has \[\frac{1}{2}\frac{d}{d\eta}\int_{\mathbb{R}^{3}}v_{3}\bar{f}^{2}dv+\int_{ \mathbb{R}^{3}}\bar{f}\mathbf{L}_{0}\bar{f}dv=\int_{\mathbb{R}^{3}}\bar{f}gdv,\] which implies that \[\frac{d}{d\eta}\int_{\mathbb{R}^{3}}v_{3}\bar{f}^{2}dv+c_{1}\|(\mathbf{I}- \mathbf{P}_{0})\bar{f}\|_{\nu}^{2}\lesssim\|(\nu^{0})^{-\frac{1}{2}}g\|_{L_{v }^{2}}^{2}. 
\tag{3.79}\] Multiplying (3.79) by \((1+\eta)^{k}\) with \(k\) being some positive integer, we get \[\partial_{\eta}\big{\{}(1+\eta)^{k}\int_{\mathbb{R}^{3}}v_{3}\bar {f}^{2}dv\big{\}}+c_{1}(1+\eta)^{k}\|(\mathbf{I}-\mathbf{P}_{0})\bar{f}\|_{\nu }^{2}\] \[\lesssim k(1+\eta)^{k-1}\int_{\mathbb{R}^{3}}|v_{3}|\bar{f}^{2}dv+( 1+\eta)^{k}\|(\nu^{0})^{-\frac{1}{2}}g\|_{L_{v}^{2}}^{2}. \tag{3.80}\] Then, integrating (3.80) on \([0,d]\), one obtains \[\int_{0}^{d}(1+\eta)^{k}\|(\mathbf{I}-\mathbf{P}_{0})\bar{f}\|_{ \nu}^{2}d\eta \lesssim k\int_{0}^{d}(1+\eta)^{k-1}\int_{\mathbb{R}^{3}}|v_{3}|\bar{f}^{2} dvd\eta+\int_{0}^{d}(1+\eta)^{k}\|(\nu^{0})^{-\frac{1}{2}}g\|_{L_{v}^{2}}^{2}d\eta\] \[\lesssim k\int_{0}^{d}(1+\eta)^{k-1}\|w_{2}\bar{f}\|_{\nu}^{2}d \eta+\int_{0}^{d}(1+\eta)^{k}\|w_{2}g\|_{L_{v}^{2}}^{2}d\eta. \tag{3.81}\] On the other hand, from (3.72)-(3.73), one has that \[\int_{0}^{d}(1+\eta)^{k}\|\mathbf{P}_{0}\bar{f}\|_{L_{v}^{2}}^{2}d\eta \lesssim_{k}\int_{0}^{d}(1+\eta)^{k}\|(\mathbf{I}-\mathbf{P}_{0})\bar{f}\|_{ \nu}^{2}d\eta+\int_{0}^{d}(1+\eta)^{k}\Big{\{}\int_{\eta}^{d}\|g(z)\|_{L_{v}^{ 2}}dz\Big{\}}^{2}d\eta\] \[\lesssim_{k}\int_{0}^{d}(1+\eta)^{k-1}\|w_{2}\bar{f}\|_{\nu}^{2}d \eta+\int_{0}^{d}(1+\eta)^{2p_{k}}\|w_{2}g\|_{L^{2}_{v}}^{2}d\eta, \tag{3.82}\] where \(p_{k}>\frac{k}{2}+1\). It follows from (3.81)-(3.82) that \[\int_{0}^{d}(1+\eta)^{k}\|\bar{f}\|_{\nu}^{2}d\eta\lesssim k\int_{0}^{d}(1+ \eta)^{k-1}\|w_{2}\bar{f}\|_{\nu}^{2}d\eta+\int_{0}^{d}(1+\eta)^{2p_{k}}\|w_{2 }g\|_{L^{2}_{v}}^{2}d\eta,\quad p_{k}>\frac{k}{2}+1. \tag{3.83}\] _Step 3._ Multiplying (3.77) by \((1+\eta)^{k}\), one has \[\frac{1}{2}\frac{d}{d\eta}\Big{\{}(1+\eta)^{k}\int_{\mathbb{R}^{ 3}}v_{3}w_{l}^{2}\bar{f}^{2}dv\Big{\}}-\frac{k}{2}(1+\eta)^{k-1}\int_{\mathbb{ R}^{3}}v_{3}w_{l}^{2}\bar{f}^{2}dv\] \[\quad+(1+\eta)^{k}\int_{\mathbb{R}^{3}}w_{l}^{2}\bar{f}\mathbf{ L}_{0}\bar{f}dv=(1+\eta)^{k}\int_{\mathbb{R}^{3}}w_{l}^{2}\bar{f}gdv. \tag{3.84}\] We have from (3.76) that \[(1+\eta)^{k}\int_{\mathbb{R}^{3}}w_{l}^{2}\bar{f}\mathbf{L}_{0}\bar{f}dv\geq \frac{1}{2}(1+\eta)^{k}\|w_{l}\bar{f}\|_{\nu}^{2}-C(1+\eta)^{k}\|\bar{f}\|_{ \nu}^{2},\] which, together with (3.83)-(3.84), yields that \[\int_{0}^{d}(1+\eta)^{k}\|w_{l}\bar{f}\|_{\nu}^{2}d\eta \lesssim\int_{0}^{d}(1+\eta)^{k}\|\bar{f}\|_{\nu}^{2}d\eta+\int_{0 }^{d}\int_{\mathbb{R}^{3}}(1+\eta)^{k}(\nu^{0})^{-1}w_{l}^{2}g^{2}dvd\eta\] \[\quad+k\int_{0}^{d}\int_{\mathbb{R}^{3}}(1+\eta)^{k-1}|v_{3}|w_{l }^{2}\bar{f}^{2}dvd\eta\] \[\lesssim k\int_{0}^{d}(1+\eta)^{k-1}\|w_{2}\bar{f}\|_{\nu}^{2}d \eta+\int_{0}^{d}(1+\eta)^{2p_{k}}\|w_{2}g\|_{L^{2}_{v}}^{2}d\eta\] \[\quad+\int_{0}^{d}(1+\eta)^{k}\|w_{l+2}g\|_{L^{2}_{v}}^{2}d\eta+k \int_{0}^{d}(1+\eta)^{k-1}\|w_{l+2}\bar{f}\|_{\nu}^{2}d\eta\] \[\lesssim k\int_{0}^{d}(1+\eta)^{k-1}\|w_{l+2}\bar{f}\|_{\nu}^{2}d \eta+\int_{0}^{d}(1+\eta)^{2p_{k}}\|w_{l+2}g\|_{L^{2}_{v}}^{2}d\eta, \tag{3.85}\] where \(p_{k}>\frac{k}{2}+1\).
Using (3.78), (3.85), and induction arguments, one can deduce that \[\int_{0}^{d}(1+\eta)^{k}\|w_{l}\bar{f}\|_{\nu}^{2}d\eta \lesssim_{k}\int_{0}^{d}(1+\eta)^{k-1}\|w_{l+2}\bar{f}\|_{\nu}^{2} d\eta+\int_{0}^{d}(1+\eta)^{2p_{k}}\|w_{l+2}g\|_{L^{2}_{v}}^{2}d\eta\] \[\lesssim_{k}\int_{0}^{d}(1+\eta)^{k-2}\|w_{l+4}\bar{f}\|_{\nu}^{2} d\eta+\int_{0}^{d}(1+\eta)^{2p_{k}}\|w_{l+4}g\|_{L^{2}_{v}}^{2}d\eta\] \[\lesssim_{k}\cdots\lesssim_{k}\int_{0}^{d}\|w_{l+2k}\bar{f}\|_{ \nu}^{2}d\eta+\int_{0}^{d}(1+\eta)^{2p_{k}}\|w_{l+2k}g\|_{L^{2}_{v}}^{2}d\eta\] \[\lesssim_{k}\int_{0}^{d}(1+\eta)^{2p_{k}}\|w_{l+2k+2}g\|_{L^{2}_{ v}}^{2}d\eta,\quad p_{k}>\frac{k}{2}+1. \tag{3.86}\] Therefore the proof of Lemma 3.10 is completed.

**Lemma 3.11**.: _Let \((\phi_{0},\phi_{1},\phi_{2},\phi_{3})\) be the ones determined in Lemma 3.8, then it holds that_ \[\|(1+\eta)^{k}w_{l}\bar{f}\|_{L^{\infty}_{\eta,v}}+|w_{l}\bar{f}|_{L^{\infty}( \gamma_{+})}\leq C_{k}\|(1+\eta)^{q_{k}}w_{l+4k+4}g\|_{L^{\infty}_{\eta,v}}, \quad q_{k}>k+\frac{3}{2}, \tag{3.87}\] _where \(k\) is a non-negative integer, and the constant \(C_{k}\) is independent of \(d\)._

**Proof.** Let \(h_{0}=w_{l}\bar{f}\), then it holds that \[\begin{cases}v_{3}\partial_{\eta}h_{0}+\nu^{0}h_{0}=K^{0}_{w_{l}}h_{0}+w_{l}g,\\ h_{0}(\eta,v)|_{\gamma_{-}}=h_{0}(\eta,R_{\eta}v).\end{cases} \tag{3.88}\] Applying Lemma 3.3, one has that \[\|w_{l}\bar{f}\|_{L^{\infty}_{\eta,v}}+|w_{l}\bar{f}|_{L^{\infty}( \gamma_{+})} \leq C\Big{\{}\int_{0}^{d}\int_{\mathbb{R}^{3}}\nu^{0}\bar{f}^{2} dvd\eta\Big{\}}^{\frac{1}{2}}+C\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}\] \[\leq C\Big{\{}\int_{0}^{d}(1+\eta)^{2p_{0}}\|w_{2}g\|_{L^{2}_ {v}}^{2}d\eta\Big{\}}^{\frac{1}{2}}+C\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{ \eta,v}}\] \[\leq C\|(1+\eta)^{q_{0}}w_{4}g\|_{L^{\infty}_{\eta,v}}+C\|w_{l+3} g\|_{L^{\infty}_{\eta,v}}\] \[\leq C\|(1+\eta)^{q_{0}}w_{l+4}g\|_{L^{\infty}_{\eta,v}},\quad \text{for }q_{0}>p_{0}+\frac{1}{2}, \tag{3.89}\] where we have used (3.75) to derive \[\int_{0}^{d}(1+\eta)^{2k}\|w_{l}\bar{f}\|_{\nu}^{2}d\eta \lesssim_{k}\int_{0}^{d}(1+\eta)^{2p_{2k}}\|w_{l+4k+2}g\|_{L^{2}_{ v}}^{2}d\eta\] \[\lesssim_{k}\|(1+\eta)^{q_{k}}w_{l+4k+4}g\|_{L^{\infty}_{\eta,v}} ^{2}\cdot\int_{0}^{d}(1+\eta)^{2p_{2k}-2q_{k}}d\eta\int_{\mathbb{R}^{3}}w_{2} ^{-2}dv\] \[\lesssim_{k}\|(1+\eta)^{q_{k}}w_{l+4k+4}g\|_{L^{\infty}_{\eta,v}} ^{2},\quad\text{for }q_{k}>p_{2k}+\frac{1}{2}. \tag{3.90}\] Let \(h_{k}=(1+\eta)^{k}w_{l}\bar{f}\), then it holds that \[v_{3}\partial_{\eta}h_{k}+\nu^{0}h_{k}=K^{0}_{w_{l}}h_{k}+k(1+\eta)^{k-1}v_{3 }w_{l}\bar{f}+(1+\eta)^{k}w_{l}g. \tag{3.91}\] Applying Lemma 3.3 and (3.90), one gets that \[\|h_{k}\|_{L^{\infty}_{\eta,v}}+|h_{k}|_{L^{\infty}(\gamma_{+})} \leq Ck\|(1+\eta)^{k-1}(\nu^{0})^{-1}v_{3}w_{l}\bar{f}\|_{L^{\infty }_{\eta,v}}+C\|(1+\eta)^{k}(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}\] \[\quad+C\|(1+\eta)^{k}(\nu^{0})^{\frac{1}{2}}\bar{f}\|_{L^{2}_{ \eta,v}}\] \[\leq C_{k}\|(1+\eta)^{k-1}w_{l+4}\bar{f}\|_{L^{\infty}_{\eta,v}}+ C_{k}\|(1+\eta)^{q_{k}}w_{\max\{4k+4,l+3\}}g\|_{L^{\infty}_{\eta,v}}, \tag{3.92}\] where \(q_{k}>p_{2k}+\frac{1}{2}\).
Using (3.89), (3.92) and induction arguments, one obtains that \[\|(1+\eta)^{k}w_{l}\bar{f}\|_{L^{\infty}_{\eta,v}}+|(1+\eta)^{k}w_ {l}\bar{f}|_{L^{\infty}(\gamma_{+})}\] \[\leq C_{k}\|(1+\eta)^{k-1}w_{l+4}\bar{f}\|_{L^{\infty}_{\eta,v}}+ C_{k}\|(1+\eta)^{q_{k}}w_{\max\{4k+4,l+3\}}g\|_{L^{\infty}_{\eta,v}}\] \[\leq C_{k}\|(1+\eta)^{k-2}w_{l+8}\bar{f}\|_{L^{\infty}_{\eta,v}}+ C_{k}\|(1+\eta)^{q_{k}}w_{\max\{4k+4,l+7\}}g\|_{L^{\infty}_{\eta,v}}\] \[\cdots\] \[\leq C_{k}\|w_{l+4k}\bar{f}\|_{L^{\infty}_{\eta,v}}+C_{k}\|(1+\eta) ^{q_{k}}w_{\max\{4k+4,l+4k\}}g\|_{L^{\infty}_{\eta,v}}\] \[\leq C_{k}\|(1+\eta)^{q_{k}}w_{l+4k+4}g\|_{L^{\infty}_{\eta,v}}, \quad q_{k}>p_{2k}+\frac{1}{2}.\] Recall the range of \(p_{k}\) in (3.75); then the proof of Lemma 3.11 is finished.

With the help of the decay estimate in Lemma 3.11, we shall prove Theorem 3.1 by taking the limit \(d\to\infty\). From now on, we denote the solution \(\bar{f}(\eta,v)\) of (3.61) as \(\bar{f}_{d}(\eta,v)\) to emphasize the dependence on \(d\). We denote \[\tilde{f}(\eta,v)=\bar{f}_{d_{2}}(\eta,v)-\bar{f}_{d_{1}}(\eta,v),\quad 1\leq d_{1 }\leq d_{2}.\] Then \(\tilde{f}\) satisfies the following equation \[\begin{cases}v_{3}\partial_{\eta}\tilde{f}+\mathbf{L}_{0}\tilde{f}=0,\quad \eta\in[0,d_{1}],\ v\in\mathbb{R}^{3},\\ \tilde{f}(0,v)|_{v_{3}>0}=\tilde{f}(0,Rv).\end{cases} \tag{3.93}\]

### Proof of Theorem 3.1

We divide the proof into two steps.

_Step 1. Convergence in \(L^{2}\)-norm._ Multiplying (3.93) by \(\tilde{f}\) and integrating on \([0,d_{1}]\times\mathbb{R}^{3}\), one obtains that \[\int_{0}^{d_{1}}\int_{\mathbb{R}^{3}}\nu^{0}|(\mathbf{I}-\mathbf{P }_{0})\tilde{f}(\eta,v)|^{2}dvd\eta\] \[\leq C\int_{\mathbb{R}^{3}}|v_{3}|\cdot|\tilde{f}(d_{1},v)|^{2}dv \leq C\big{\{}\|w_{l}\bar{f}_{d_{2}}(d_{1})\|_{L^{\infty}_{v}}^{2}+|w_{l}\bar{ f}_{d_{1}}(d_{1})|_{L^{\infty}(\gamma_{+})}^{2}\big{\}}\] \[\leq C\|(1+\eta)^{\mathfrak{q}}w_{l+4}g\|_{L^{\infty}_{\eta,v}}^{ 2}\cdot d_{1}^{-2},\quad\mathfrak{q}\geq 3. \tag{3.94}\] We still need to control the macroscopic part. Denote \[\mathbf{P}_{0}\tilde{f}=[\tilde{a}(\eta)+\tilde{b}_{1}(\eta)(v_{1}-\mathfrak{ u}_{1}^{0})+\tilde{b}_{2}(\eta)(v_{2}-\mathfrak{u}_{2}^{0})+\tilde{c}(\eta)(|v- \mathfrak{u}^{0}|^{2}-3T^{0})]\sqrt{\mu_{0}}.\] Similarly as in Lemma 3.9, we can obtain \[\left(\begin{array}{c}\int_{\mathbb{R}^{3}}v_{3}\tilde{f}(\eta,v)\cdot \mathbf{L}_{0}^{-1}(\mathcal{A}_{31}^{0})dv\\ \int_{\mathbb{R}^{3}}v_{3}\tilde{f}(\eta,v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{ A}_{32}^{0})dv\\ \int_{\mathbb{R}^{3}}v_{3}\tilde{f}(\eta,v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{B}_{3}^{0}) dv\end{array}\right)=\left(\begin{array}{c}\int_{\mathbb{R}^{3}}v_{3} \tilde{f}(d_{1},v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{A}_{31}^{0})(d_{1})dv\\ \int_{\mathbb{R}^{3}}v_{3}\tilde{f}(d_{1},v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{ A}_{32}^{0})(d_{1})dv\\ \int_{\mathbb{R}^{3}}v_{3}\tilde{f}(d_{1},v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{ B}_{3}^{0})(d_{1})dv\end{array}\right),\] which, together with (3.68)-(3.70), yields that \[|(\mu(T^{0})\tilde{b}_{1},\mu(T^{0})\tilde{b}_{2},\kappa(T^{0})\tilde{c})( \eta)|\leq C\Big{\{}\|w_{l}\bar{f}_{d_{2}}(d_{1})\|_{L^{\infty}_{v}}+|w_{l} \bar{f}_{d_{1}}(d_{1})|_{L^{\infty}(\gamma_{+})}\Big{\}}+C\|(\mathbf{I}- \mathbf{P}_{0})\tilde{f}(\eta)\|_{\nu}.
\tag{3.95}\] Integrating (3.95) over \([0,d_{1}]\), using (3.87), (3.94), one has \[\int_{0}^{d_{1}}|(\tilde{b}_{1},\tilde{b}_{2},\tilde{c})(\eta)|^{2}d\eta\leq C \|(1+\eta)^{\mathfrak{q}}w_{l+4}g\|_{L^{\infty}_{\eta,v}}^{2}\cdot d_{1}^{-1}, \quad\mathfrak{q}\geq 3. \tag{3.96}\] Multiplying (3.93) by \(v_{3}\sqrt{\mu_{0}}\), we have that \[\frac{d}{d\eta}\int_{\mathbb{R}^{3}}\tilde{f}(\eta,v)\cdot v_{3}^{2}\sqrt{\mu _{0}}dv=0.\] Integrating the above equation over \([\eta,d_{1}]\) and using (3.67), one obtains \[\tilde{a}(\eta)=-2T^{0}\tilde{c}(\eta)-\frac{1}{\rho^{0}T^{0}}\int_{\mathbb{R }^{3}}(\mathbf{I}-\mathbf{P}_{0})\tilde{f}(\eta,v)\cdot v_{3}^{2}\sqrt{\mu_{0 }}dv+\frac{1}{\rho^{0}T^{0}}\int_{\mathbb{R}^{3}}\tilde{f}(d_{1},v)\cdot v_{3} ^{2}\sqrt{\mu_{0}}dv. \tag{3.97}\] Using (3.87), (3.94) and (3.96), one can get that \[\int_{0}^{d_{1}}|\tilde{a}(\eta)|^{2}d\eta\leq C\|(1+\eta)^{\mathfrak{q}}w_{l+ 4}g\|_{L^{\infty}_{\eta,v}}^{2}\cdot d_{1}^{-1},\quad\text{for $\mathfrak{q}\geq 3$},\] which, together with (3.94) and (3.96), yields that \[\int_{0}^{d_{1}}\int_{\mathbb{R}^{3}}\nu^{0}|\tilde{f}(\eta,v)|^{2}dvd\eta \leq C\|(1+\eta)^{\mathfrak{q}}w_{l+4}g\|_{L^{\infty}_{\eta,v}}^{2}\cdot d_{1} ^{-1},\quad\mathfrak{q}\geq 3. \tag{3.98}\] _Step 2. Convergence in \(L^{\infty}\)-norm._ We shall write \(t_{k}=t_{k}(t,\eta,v),X_{cl}(s;t,\eta,v),\eta_{k}=\eta_{k}(\eta,v)\) for the back-time cycles defined for the domain \([0,d_{1}]\times\mathbb{R}^{3}\). For later use, we denote \(\tilde{h}:=w_{l}\tilde{f}\). Let \((\eta,v)\in[0,d_{1}]\times\mathbb{R}^{3}\backslash(\gamma_{0}\cup\gamma_{-})\); then it follows from (3.93) that \[\tilde{h}(\eta,v)=e^{-\hat{\nu}(v)(t-t_{k})}\tilde{h}(d_{1},v_{k-1})+\sum_{i=0 }^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}(v)(t-s)}(1+|v|^{2})^{\frac{|\kappa|}{2}} K_{w_{l}}^{0}\tilde{h}(X_{cl}(s),v_{i})ds, \tag{3.99}\] with \(k=1\) for \(v_{0,3}<0\), and \(k=2\) for \(v_{0,3}>0\). We will use this summation convention in the rest of this proof. We always have \[|e^{-\hat{\nu}(v)(t-t_{k})}\tilde{h}(d_{1},v_{k-1})| \leq C\Big{(}\|w_{l}\bar{f}_{d_{2}}(d_{1})\|_{L^{\infty}_{v}}+|w_{l} \bar{f}_{d_{1}}(d_{1})|_{L^{\infty}(\gamma_{+})}\Big{)}\] \[\leq C\|(1+\eta)^{\mathfrak{q}}w_{l+4}g\|_{L^{\infty}_{\eta,v}} \cdot d_{1}^{-1},\quad\mathfrak{q}\geq 3. \tag{3.100}\] For the second term on the RHS of (3.99), we use (3.99) again to obtain \[\left|\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}(v)(t-s)}(1+ |v|^{2})^{\frac{|\kappa|}{2}}K_{w_{l}}^{0}\tilde{h}(X_{cl}(s),v_{i})ds\right|\] \[\leq\left|\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}(v)(t-s) }(1+|v|^{2})^{\frac{|\kappa|}{2}}K_{w_{l}}^{0,c}\tilde{h}(X_{cl}(s),v_{i})ds\right|\] \[+\left|\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}(v)(t-s) }(1+|v|^{2})^{\frac{|\kappa|}{2}}K_{w_{l}}^{0,m}\tilde{h}(X_{cl}(s),v_{i})ds\right|\] \[\leq\frac{1}{4}\|\tilde{h}\|_{L_{\eta,v}^{\infty}}+C\|(1+\eta)^{ \mathfrak{q}}w_{l+4}g\|_{L_{\eta,v}^{\infty}}\cdot d_{1}^{-1}\] \[+\Big{|}\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}(v)(t- s)}(1+|v|^{2})^{\frac{|\kappa|}{2}}\int_{\mathbb{R}^{3}}k_{w_{l}}^{0,c}(v_{i},v^{ \prime})(1+|v^{\prime}|^{2})^{\frac{|\kappa|}{2}}\] \[\quad\times\sum_{j=0}^{k-1}\int_{t_{j+1}^{\prime}}^{t_{j}^{\prime}}e^{- \hat{\nu}(v^{\prime})(s-s_{1})}\int_{\mathbb{R}^{3}}k_{w_{l}}^{0,c}(v_{j}^{ \prime},v^{\prime\prime})\tilde{h}(X_{cl}^{\prime}(s_{1}),v^{\prime\prime})dv^{\prime \prime}ds_{1}dv^{\prime}ds\Big{|}.
\tag{3.101}\] where we have used (3.100) and denoted \(X_{cl}^{\prime}(s_{1})=X_{cl}(s_{1};s,X_{cl}(s),v^{\prime})\), \(t_{j}^{\prime}=t_{j}^{\prime}(s_{1};s,X_{cl}(s),v^{\prime})\) and \(v_{j}^{\prime}\) to be the back-time cycle of \((s,X_{cl}(s),v^{\prime})\). Then, by the same arguments as in Lemma 3.3, we get \[\|\tilde{h}\|_{L^{\infty}([0,d_{1}]\times\mathbb{R}^{3})}+|\tilde {h}(0)|_{L^{\infty}(\gamma_{+})}\] \[\leq\frac{1}{2}(\|\tilde{h}\|_{L^{\infty}([0,d_{1}]\times\mathbb{ R}^{3})}+|\tilde{h}(0)|_{L^{\infty}(\gamma_{+})})\] \[\quad+Cd_{1}^{-1}\|(1+\eta)^{\mathfrak{q}}w_{l+4}g\|_{L_{\eta,v}^ {\infty}}+C\|(\nu^{0})^{\frac{1}{2}}\tilde{f}\|_{L^{2}([0,d_{1}]\times\mathbb{ R}^{3})}\] \[\leq Cd_{1}^{-\frac{1}{2}}\|(1+\eta)^{\mathfrak{q}}w_{l+4}g\|_{L_{ \eta,v}^{\infty}},\quad\mathfrak{q}\geq 3. \tag{3.102}\] With the help of (3.102), there exists a function \(f(\eta,v)\) with \((\eta,v)\in\mathbb{R}_{+}\times\mathbb{R}^{3}\) so that \(\|w_{l}(\bar{f}_{d}-f)\|_{L^{\infty}([0,d]\times\mathbb{R}^{3})}\to 0\) as \(d\to\infty\). The uniform bound (3.4) follows from (3.87) and the strong convergence in \(L_{\eta,v}^{\infty}\). It is direct to see that \(f(\eta,v)\) solves (3.7). The continuity of \(f\) follows directly from the \(L_{\eta,v}^{\infty}\)-convergence and the continuity of \(\bar{f}_{d}\). For the uniqueness, let \(\mathbf{f}_{1},\mathbf{f}_{2}\) be two solutions of (3.7) with the bound (3.4); then it holds that \[\begin{cases}v_{3}\partial_{\eta}(\mathbf{f}_{1}-\mathbf{f}_{2})+\mathbf{L}_{0 }(\mathbf{f}_{1}-\mathbf{f}_{2})=0,\\ \mathbf{f}_{i}(0,v)|_{v_{3}>0}=\mathbf{f}_{i}(0,Rv),\ i=1,2,\\ \lim_{\eta\to\infty}\mathbf{f}_{i}(\eta,v)=0,\ i=1,2.\end{cases} \tag{3.103}\] Multiplying (3.103) by \((\mathbf{f}_{1}-\mathbf{f}_{2})\), it is direct to prove that \[\int_{0}^{\infty}\|(\mathbf{I}-\mathbf{P}_{0})(\mathbf{f}_{1}-\mathbf{f}_{2}) \|_{\nu}^{2}d\eta=0.\] That is, \((\mathbf{f}_{1}-\mathbf{f}_{2})=\mathbf{P}_{0}(\mathbf{f}_{1}-\mathbf{f}_{2})\). Then by the same arguments as in (3.72)-(3.73), one has that \[\int_{0}^{\infty}\|\mathbf{P}_{0}(\mathbf{f}_{1}-\mathbf{f}_{2})\|_{L_{v}^{2}} ^{2}d\eta=0.\] Thus, we prove \(\mathbf{f}_{1}\equiv\mathbf{f}_{2}\). Finally, let \(\mathfrak{f}:=f+\Upsilon(\eta)\,f_{b}(v)\); then it is direct to check that \(\mathfrak{f}\) solves (3.1). The proof of Theorem 3.1 is completed.

## 4. Hilbert Expansions for Boltzmann Equation of Soft Potentials

In this section, we aim to construct solutions of the Boltzmann equation with soft potentials through a Hilbert expansion with multiple scales.

### Linear parts of Hilbert expansion

In this subsection, we construct the linear parts of the multi-scale Hilbert expansion. Recalling \(\varpi_{\mathfrak{t}}\) in (1.29), we define the velocity weight functions \[\tilde{w}_{\kappa_{i}}(v)=\varpi_{\kappa_{i}}(v)\mu^{-\mathfrak{a}_{i}},\quad \mathfrak{w}_{\bar{\kappa}_{i}}(v)=\varpi_{\bar{\kappa}_{i}}(v)\mu_{0}^{- \mathfrak{a}_{i}}\,\text{and}\,\,\mathfrak{w}_{\hat{\kappa}_{i}}(v)=\varpi_{ \hat{\kappa}_{i}}(v)\mu_{0}^{-\mathfrak{a}_{i}}, \tag{4.1}\] for constants \(\kappa_{i},\bar{\kappa}_{i},\hat{\kappa}_{i}\geq 0\), \(1\leq i\leq N\) and \(0\leq\mathfrak{a}_{i}<\frac{1}{2}\). Note that the weight function \(\tilde{w}_{\kappa_{i}}\) depends on \((t,x)\), while \(\mathfrak{w}_{\bar{\kappa}_{i}}\) and \(\mathfrak{w}_{\hat{\kappa}_{i}}\) depend on \((t,x_{\bar{\nu}})\).
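A word on the restriction \(0\leq\mathfrak{a}_{i}<\frac{1}{2}\) in (4.1); the following is only a heuristic reading, based on the pointwise Gaussian decay \(|\partial_{t}\nabla_{x}f_{i}|\lesssim\mu^{\frac{q}{2}}\), \(q\in(0,1)\), recorded in the proof of Proposition 4.1 below. For such \(f_{i}\), \[\tilde{w}_{\kappa_{i}}|f_{i}|\lesssim\varpi_{\kappa_{i}}(v)\,\mu^{\frac{q}{2}-\mathfrak{a}_{i}},\] which still decays in \(v\) provided \(q\) is chosen with \(2\mathfrak{a}_{i}<q<1\); this is why \(\mathfrak{a}_{i}<\frac{1}{2}\) is needed for the weighted norms in Proposition 4.1 to have a chance of being finite.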
For later use, we define \[\hat{x}=(x_{\bar{\nu}},\eta)\in\mathbb{R}_{+}^{3},\quad\nabla_{\hat{x}}:=( \nabla_{\bar{\nu}},\partial_{\eta}),\] and recall \(\bar{x}=(x_{\bar{\nu}},y)\in\mathbb{R}_{+}^{3},\,\,\nabla_{\bar{x}}=(\nabla_{ \bar{\nu}},\partial_{y})\), and the weighted \(L_{l}^{2}\)-norm with \((1+y)^{l}\) weight. **Proposition 4.1**.: _Let \(\tau^{\delta}>0\) be the life-span of compressible Euler equations. Let \(0\leq\mathfrak{a}_{i}<\frac{1}{2}\) in (4.1) and \(\mathfrak{a}_{i}>\mathfrak{a}_{i+1}\). Let \(s_{0},s_{i},\bar{s}_{i},\hat{s}_{i},\zeta_{i}\in\mathbb{N}_{+}\), \(\kappa_{i},\bar{\kappa}_{i},\hat{\kappa}_{i}\in\mathbb{R}_{+}\) for \(1\leq i\leq N\); and define \(l_{j}^{i}:=\bar{l}_{i}+2(\bar{s}_{i}-j)\) for \(1\leq i\leq N,\,\,0\leq j\leq\bar{s}_{i}\). For these parameters, we have chosen \(s_{i},\bar{s}_{i},\hat{s}_{i}\) such that_ \[s_{0}\geq s_{1}+\mathfrak{b}+6,\quad s_{1}=\bar{s}_{1}=\hat{s}_{ 1}\gg 1;\] \[s_{1}>s_{i}>\bar{s}_{i}>\hat{s}_{i}\geq s_{i+1}>\bar{s}_{i+1}> \hat{s}_{i+1}\geq...\gg 1,\,\,i=2,...,N-1;\] \[s_{i+1}\leq\min\{\hat{s}_{i},\frac{1}{2}\bar{s}_{i}-3\},\,\,\bar{ s}_{i+1}\leq s_{i+1}-8-\mathfrak{b},\,\,\hat{s}_{i+1}\leq\frac{1}{2}\bar{s}_{i+1} -2-\mathfrak{b},\,\,i=1,...,N-1, \tag{4.2}\] _and taken \(l_{j}^{i}=\bar{l}_{j}+2(\bar{s}_{i}-j)\) with \(0\leq j\leq\bar{s}_{i}\) so that_ \[l_{j}^{N}\gg 2\mathfrak{b}\quad\text{and}\quad l_{j}^{i}\geq 2l_{j}^{i+1}+18+2 \mathfrak{b},\,\,\text{for}\,\,1\leq i\leq N-1, \tag{4.3}\] _and_ \[\kappa_{i}\gg\bar{\kappa}_{i}\gg\hat{\kappa}_{i}\gg\kappa_{i+1} \gg\bar{\kappa}_{i+1}\gg\hat{\kappa}_{i+1}\gg 1,\] \[\zeta_{i+1}-\zeta_{i}\geq\mathfrak{b}+3\quad\text{and}\quad\zeta _{1}\gg\zeta_{2}...\gg\zeta_{i}...\gg\mathfrak{b}. \tag{4.4}\] _Let \((\rho_{i},u_{i},\theta_{i})(0)\) be the initial data for interior expansions, and \((\bar{u}_{i,\bar{\nu}},\bar{\theta}_{i})(0)\) be the initial data of viscous boundary layer. Assume_ \[\sum_{i=0}^{N}\Big{\{}\sum_{\gamma+\beta\leq s_{i}}\|\partial_{t}^{\gamma} \nabla_{x}^{\beta}(\rho_{i},u_{i},\theta_{i})(0)\|_{L_{x}^{2}}+\sum_{j=0}^{ \bar{s}_{i}}\sum_{j=2\gamma+\beta}\|\partial_{t}^{\gamma}\nabla_{\bar{x}}^{ \beta}(\bar{u}_{i,\bar{\nu}},\bar{\theta}_{i})(0)\|_{L_{\bar{l}_{j}^{i}}^{2}}^{ 2}\Big{\}}<\infty. \tag{4.5}\] _And we also assume that the compatibility conditions for initial data \((\rho_{i},u_{i},\theta_{i})(0)\) and \((\bar{u}_{i,\bar{\nu}},\bar{\theta}_{i})(0)\) are satisfied. 
Then there exist solutions \(F_{i}=\sqrt{\mu}f_{i},\,\bar{F}_{i}=\sqrt{\mu_{0}}\bar{f}_{i},\,\,\hat{F}_{i}= \sqrt{\mu_{0}}\hat{f}_{i}\) to the interior expansion (1.7), the viscous boundary layer (1.16) and the Knudsen layer (1.22) over the time interval \(t\in[0,\tau^{\delta}]\) so that the specular boundary condition is satisfied in the following form:_ \[(F_{i}+\bar{F}_{i}+\hat{F}_{i})(t,x_{\bar{\nu}},0,v_{\bar{\nu}},v_{3})|_{v_{3}>0 }=(F_{i}+\bar{F}_{i}+\hat{F}_{i})(t,x_{\bar{\nu}},0,v_{\bar{\nu}},-v_{3}).\] _Moreover, it holds that_ \[\sup_{t\in[0,\tau^{\delta}]}\sum_{i=1}^{N}\Bigg{\{}\sum_{\gamma+ \beta\leq s_{i}}\|\tilde{w}_{\kappa_{i}}\partial_{t}^{\gamma}\nabla_{x}^{\beta}f _{i}(t)\|_{L_{x}^{2}L_{v}^{\infty}}+\sum_{j=0}^{\bar{s}_{i}}\sum_{j=2 \gamma+\beta}\|\mathfrak{w}_{\bar{\kappa}_{i}}\partial_{t}^{\gamma}\nabla_{\bar{x}}^ {\beta}\bar{f}_{i}(t)\|_{L_{\bar{l}_{j}^{i}}^{2}L_{v}^{\infty}}\] \[+\sum_{\gamma+\beta\leq\hat{s}_{i}}\|(1+\eta)^{\zeta_{i}}\mathfrak{w }_{\hat{\kappa}_{i}}\partial_{t}^{\gamma}\nabla_{\hat{x}}^{\beta}\hat{f}_{ i}(t)\|_{L_{x,v}^{\infty}\cap L_{x_{\bar{\nu}}}^{2}L_{v}^{ \infty}}\Bigg{\}}\] \[\leq C\Bigg{(}\tau^{\delta},\|(\varphi_{0},\Phi_{0},\vartheta_{0})\| _{H^{s_{0}}}+\sum_{i=0}^{N}\sum_{\gamma+\beta\leq s_{i}}\|\partial_{t}^{\gamma} \nabla_{x}^{\beta}(\rho_{i},u_{i},\theta_{i})(0)\|_{L_{x}^{2}}\] \[+\sum_{i=0}^{N}\sum_{j=0}^{\bar{s}_{i}}\sum_{j=2\gamma+\beta}\| \partial_{t}^{\gamma}\nabla_{\bar{x}}^{\beta}(\bar{u}_{i,\bar{\nu}},\bar{ \theta}_{i})(0)\|_{L_{\bar{l}_{j}^{i}}^{2}}^{2}\Bigg{)}. \tag{4.6}\]

**Proof.** With the help of Proposition 2.5 and Theorem 3.1, by similar arguments as in [20, Proposition 5.1], one can construct \(f_{i},\bar{f}_{i}\) and \(\hat{f}_{i}\). Here we briefly explain how to use Proposition 2.5 and Theorem 3.1 in the soft potential case. Noting that \(f_{i},\bar{f}_{i}\) are smooth in \((t,x,v)\) and \((t,\bar{x},v)\), by using Proposition 2.5 one can always get exponential decay in \(v\), i.e., \[|\partial_{t}\nabla_{x}f_{i}|\lesssim\mu^{\frac{q}{2}},\quad|\partial_{t} \nabla_{\bar{x}}\bar{f}_{i}|\lesssim\mu_{0}^{\frac{q}{2}}\quad\text{for }q\in(0,1).\] With the help of Theorem 3.1, we can construct the solutions of the Knudsen boundary layers \(\hat{f}_{i}\) with sufficiently strong polynomial decay in space.

### Estimates on the remainder

We first consider the \(L^{2}\)-energy estimate.
Recall the definition of \(f_{R}^{\varepsilon}\) in (1.26), we rewrite the equation of \(f_{R}^{\varepsilon}\) as \[\partial_{t}f_{R}^{\varepsilon}+v\cdot\nabla_{x}f_{R}^{ \varepsilon}+\frac{1}{\varepsilon^{2}}\mathbf{L}f_{R}^{\varepsilon}\] \[=-\frac{\{\partial_{t}+v\cdot\nabla_{x}\}\sqrt{\mu}}{\sqrt{\mu}} f_{R}^{\varepsilon}+\varepsilon^{3}\frac{1}{\sqrt{\mu}}Q(\sqrt{\mu}f_{R}^{ \varepsilon},\sqrt{\mu}f_{R}^{\varepsilon})\] \[\quad+\sum_{i=1}^{N}\varepsilon^{i-2}\frac{1}{\sqrt{\mu}}\Big{\{} Q(F_{i}+\bar{F}_{i}+\hat{F}_{i},\sqrt{\mu}f_{R}^{\varepsilon})+Q(\sqrt{\mu}f_{R}^{ \varepsilon},F_{i}+\bar{F}_{i}+\hat{F}_{i})\Big{\}}\] \[\quad+\frac{1}{\sqrt{\mu}}R^{\varepsilon}+\frac{1}{\sqrt{\mu}} \bar{R}^{\varepsilon}+\frac{1}{\sqrt{\mu}}\hat{R}^{\varepsilon}, \tag{4.7}\] where \[R^{\varepsilon}=-\varepsilon^{N-6}\{\partial_{t}+v\cdot\nabla_{x}\}(F_{N-1}+ \varepsilon F_{N})+\varepsilon^{N-6}\sum_{\begin{subarray}{c}i+j\geq N+1\\ 1\leq i,j\leq N\end{subarray}}\varepsilon^{i+j-N-1}Q(F_{i},F_{j}), \tag{4.8}\] \[\bar{R}^{\varepsilon}=-\varepsilon^{N-6}\{\partial_{t}+v_{{}_{ \shortmid}}\cdot\nabla_{{}_{\shortmid}}\}(\bar{F}_{N-1}+\varepsilon\bar{F}_{N })-\varepsilon^{N-6}v_{3}\partial_{y}\bar{F}_{N}\] \[\quad+\varepsilon^{N-6}\sum_{\begin{subarray}{c}i+j\geq N+1\\ 1\leq i,j\leq N,\ 1\leq l\leq b\end{subarray}}\varepsilon^{i+j-N-1}\cdot\frac{y^{l}}{l!} \big{[}Q(\partial_{3}^{l}\mu_{0},\bar{F}_{j})+Q(\bar{F}_{j},\partial_{3}^{l} \mu_{0})\big{]}\] \[\quad+\varepsilon^{N-6}\sum_{\begin{subarray}{c}i+j\geq N+1\\ 1\leq i,j\leq N\end{subarray}}\varepsilon^{i+j-N-1}\big{[}Q(F_{i}^{0},\bar{F}_ {j})+Q(\bar{F}_{j},F_{i}^{0})+Q(\bar{F}_{i},\bar{F}_{j})\big{]}\] \[\quad+\varepsilon^{N-6}\sum_{\begin{subarray}{c}i+j+l\geq N+1\\ 1\leq i,j\leq N,\ 1\leq l\leq b\end{subarray}}\varepsilon^{i+j-N-1}\cdot\frac{y^{l}}{l!} \big{[}Q(\partial_{3}^{l}F_{i}^{0},\bar{F}_{j})+Q(\bar{F}_{j},\partial_{3}^{l }F_{i}^{0})\big{]}\] \[\quad+\varepsilon^{b-5}\frac{y^{b+1}}{(b+1)!}\sum_{j=1}^{N} \varepsilon^{j-1}[Q(\partial_{3}^{b+1}\bar{\mu},\bar{F}_{j})+Q(\bar{F}_{j}, \partial_{3}^{b+1}\bar{\mu})]\] \[\quad+\varepsilon^{b-4}\frac{y^{b+1}}{(b+1)!}\sum_{i,j=1}^{N} \varepsilon^{i+j-2}\big{[}Q(\partial_{3}^{b+1}\mathfrak{F}_{i},\bar{F}_{j})+Q (\bar{F}_{j},\partial_{3}^{b+1}\mathfrak{F}_{i})\big{]}, \tag{4.9}\] and \[\hat{R}^{\varepsilon}=-\varepsilon^{N-6}\{\partial_{t}+v_{{}_{ \shortmid}}\cdot\nabla_{{}_{\shortmid}}\}(\hat{F}_{N-1}+\varepsilon\hat{F}_{N})\] \[\quad+\varepsilon^{N-6}\sum_{\begin{subarray}{c}i+2l\geq N+1\\ 1\leq j\leq N,1\leq l\leq b\end{subarray}}\varepsilon^{i+2l-N-1}\cdot\frac{\eta }{l!}\big{[}Q(\partial_{3}^{l}\mu_{0},\hat{F}_{j})+Q(\hat{F}_{j},\partial_{3}^ {l}\mu_{0})\big{]}\] \[\quad+\varepsilon^{N-6}\sum_{\begin{subarray}{c}i+j\geq N+1\\ 1\leq i,j\leq N\end{subarray}}\varepsilon^{i+j-N-1}\big{[}Q(F_{i}^{0}+\bar{F}_{i}^ {0},\hat{F}_{j})+Q(\hat{F}_{j},F_{i}^{0}+\bar{F}_{i}^{0})+Q(\hat{F}_{i},\hat{F}_ {j})\big{]}\] \[+\varepsilon^{N-6}\sum_{\begin{subarray}{c}i+j+2l>N+1\\ 1\leq i,j\leq N,1\leq l\leq b\end{subarray}}\varepsilon^{i+j+l-N-1}\cdot\frac{ \eta^{l}}{l!}\big{[}Q(\partial_{3}^{l}F_{i}^{0},\hat{F}_{j})+Q(\hat{F}_{j}, \partial_{3}^{l}F_{i}^{0})\big{]}\] \[+\varepsilon^{N-6}\sum_{\begin{subarray}{c}i+j+l>N+1\\ 1\leq i,j\leq N,1\leq l\leq b\end{subarray}}\varepsilon^{i+j+l-N-1}\cdot\frac{ \eta^{l}}{l!}\big{[}Q(\partial_{y}^{l}\bar{F}_{i}^{0},\hat{F}_{j})+Q(\hat{F}_{ j},\partial_{y}^{l}\bar{F}_{i}^{0})\big{]}\] \[+\varepsilon^{2b-4}\frac{\eta^{b+1}}{(b+1)!}\sum_{j=1}^{N} 
\varepsilon^{j-1}\big{[}Q(\partial_{3}^{b+1}\tilde{\mu},\hat{F}_{j})+Q(\hat{F }_{j},\partial_{3}^{b+1}\tilde{\mu})\big{]}\] \[+\varepsilon^{2b-3}\frac{\eta^{b+1}}{(b+1)!}\sum_{i,j=1}^{N} \varepsilon^{i+j-2}\big{[}Q(\partial_{3}^{b+1}\mathfrak{F}_{i},\hat{F}_{j})+Q (\hat{F}_{j},\partial_{3}^{b+1}\mathfrak{F}_{i})\big{]}\] \[+\varepsilon^{b-4}\frac{\eta^{b+1}}{(b+1)!}\sum_{i,j=1}^{N} \varepsilon^{i+j-2}\big{[}Q(\partial_{y}^{b+1}\bar{\mathfrak{F}}_{i},\hat{F}_{j})+Q (\hat{F}_{j},\partial_{y}^{b+1}\bar{\mathfrak{F}}_{i})\big{]}, \tag{4.10}\] where \(\partial_{3}^{l}\mu_{0},\partial_{3}^{b+1}\tilde{\mu}\), \(\partial_{3}^{l}F_{i}^{0},\partial_{3}^{b+1}\mathfrak{F}_{i}\) and \(\partial_{y}^{l}\bar{F}_{i}^{0},\partial_{y}^{b+1}\bar{\mathfrak{F}}_{i}\) are defined in (1.19), (1.23). From Proposition 4.1, we know that \(f_{R}^{\varepsilon}\) satisfies the specular reflection boundary condition \[f_{R}^{\varepsilon}(t,x_{1},x_{2},0,v_{1},v_{2},v_{3})|_{v_{3}>0}=f_{R}^{ \varepsilon}(t,x_{1},x_{2},0,v_{1},v_{2},-v_{3}). \tag{4.11}\]

**Lemma 4.2**.: _Recall \(\alpha\) in (1.28). Let \(0<\frac{1}{2\alpha}(1-\alpha)<\mathfrak{a}_{i}<\frac{1}{2}\), \(\mathfrak{k}\geq 18\), \(N\geq 6\) and \(\mathfrak{b}\geq 5\). Let \(\tau^{\delta}>0\) be the life-span of the compressible Euler solution; then there exists a suitably small constant \(\varepsilon_{0}>0\) such that for all \(\varepsilon\in(0,\varepsilon_{0})\), it holds that_ \[\frac{d}{dt}\|f_{R}^{\varepsilon}(t)\|_{L^{2}}^{2}+\frac{c_{0}}{2 \varepsilon^{2}}\|\{\mathbf{I}-\mathbf{P}\}f_{R}^{\varepsilon}(t)\|_{\nu}^{2}\] \[\leq C\big{\{}1+\varepsilon^{8}\|h_{R}^{\varepsilon}(t)\|_{L^{ \infty}}^{2}\big{\}}\cdot(\|f_{R}^{\varepsilon}(t)\|_{L^{2}}^{2}+1),\text{ for }t\in[0,\tau^{\delta}]. \tag{4.12}\]

**Proof.** Multiplying (4.7) by \(f_{R}^{\varepsilon}\) and integrating over \(\mathbb{R}_{+}^{3}\times\mathbb{R}^{3}\), one obtains that \[\frac{1}{2}\frac{d}{dt}\|f_{R}^{\varepsilon}\|_{L^{2}}^{2}+\frac {c_{0}}{2\varepsilon^{2}}\|\{\mathbf{I}-\mathbf{P}\}f_{R}^{\varepsilon}\|_{\nu}^ {2}\] \[=-\int_{\mathbb{R}_{+}^{3}}\int_{\mathbb{R}^{3}}\frac{\{\partial_{ t}+v\cdot\nabla_{x}\}\sqrt{\mu}}{\sqrt{\mu}}|f_{R}^{\varepsilon}|^{2}+\varepsilon^{3}\int_{ \mathbb{R}_{+}^{3}}\int_{\mathbb{R}^{3}}\frac{1}{\sqrt{\mu}}Q(\sqrt{\mu}f_{R}^ {\varepsilon},\sqrt{\mu}f_{R}^{\varepsilon})f_{R}^{\varepsilon}\] \[+\int_{\mathbb{R}_{+}^{3}}\int_{\mathbb{R}^{3}}\sum_{i=1}^{N} \varepsilon^{i-2}\frac{1}{\sqrt{\mu}}\Big{\{}Q(F_{i}+\bar{F}_{i}+\hat{F}_{i}, \sqrt{\mu}f_{R}^{\varepsilon})+Q(\sqrt{\mu}f_{R}^{\varepsilon},F_{i}+\bar{F}_{ i}+\hat{F}_{i})\Big{\}}f_{R}^{\varepsilon}\] \[+\int_{\mathbb{R}_{+}^{3}}\int_{\mathbb{R}^{3}}\bigg{\{}\frac{1}{ \sqrt{\mu}}R^{\varepsilon}+\frac{1}{\sqrt{\mu}}\bar{R}^{\varepsilon}+\frac{1}{ \sqrt{\mu}}\hat{R}^{\varepsilon}\bigg{\}}\,f_{R}^{\varepsilon}, \tag{4.13}\] where we have used (4.11) so that the boundary term vanishes. Recall the definition of \(h_{R}^{\varepsilon}\) in (1.29).
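The requirement \(\mathfrak{k}\geq 18\) enters through elementary exponent counting in the large-velocity region. As a rough check (assuming only that the weight \(\varpi_{\mathfrak{k}}\) contributes a factor \((1+|v|)^{\mathfrak{k}}\), so that \(|f_{R}^{\varepsilon}|\lesssim(1+|v|)^{-\mathfrak{k}}\|h_{R}^{\varepsilon}\|_{L^{\infty}}\) pointwise), \[\Big{\{}\int_{|v|\geq\lambda\varepsilon^{-1/3}}(1+|v|)^{6-2\mathfrak{k}}dv\Big{\}}^{\frac{1}{2}}\lesssim_{\lambda}\varepsilon^{\frac{2\mathfrak{k}-9}{6}}\leq\varepsilon^{4}\quad\text{for }\mathfrak{k}\geq 18,\] which is what produces the harmless \(\varepsilon^{4}\|f_{R}^{\varepsilon}\|_{L^{2}}\|h_{R}^{\varepsilon}\|_{L^{\infty}}\) term in the estimate below.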
For any \(\lambda>0\), motivated by [18], we take \(\mathfrak{k}\geq 18\) to get \[\int_{\mathbb{R}_{+}^{3}}\int_{\mathbb{R}^{3}}\frac{\{\partial_{ t}+v\cdot\nabla_{x}\}\sqrt{\mu}}{\sqrt{\mu}}|f_{R}^{\varepsilon}|^{2}dvdx\] \[\leq C\int_{\mathbb{R}_{+}^{3}}\int_{\mathbb{R}^{3}}|(\nabla_{x} \rho,\nabla_{x}\mathfrak{u},\nabla_{x}T)|(1+|v|)^{3}|f_{R}^{\varepsilon}|^{2} dvdx\] \[\leq C\left\{\int_{\mathbb{R}_{+}^{3}}\int_{|v|\geq\frac{\lambda}{ \varepsilon^{1/3}}}+\int_{\mathbb{R}_{+}^{3}}\int_{|v|\leq\frac{\lambda}{ \varepsilon^{1/3}}}\right\}(\cdots)dvdx\] \[\leq C\frac{\lambda}{\varepsilon^{2}}\|\{\mathbf{I}-\mathbf{P}\}f _{R}^{\varepsilon}\|_{\nu}^{2}+C_{\lambda}(1+\varepsilon^{4}\|h_{R}^{\varepsilon} \|_{L^{\infty}})\|f_{R}^{\varepsilon}\|_{L^{2}},\] where we have used \[\int_{\mathbb{R}^{3}}\int_{|v|\leq\frac{\lambda}{\varepsilon^{1/3}}}|\nabla_{x}(\rho,\mathfrak{u},T)|(1+|v|)^{3}|f_{R}^{\varepsilon}|^{2 }dvdx\] \[\leq\int_{\mathbb{R}^{3}}\int_{|v|\leq\frac{\lambda}{\varepsilon^{1/3}}}|\nabla_{x}(\rho,\mathfrak{u},T)|(1+|v|)^{3}\Big{\{}|\mathbf{P} f_{R}^{\varepsilon}|^{2}+|(\mathbf{I}-\mathbf{P})f_{R}^{\varepsilon}|^{2}\Big{\}}dvdx\] \[\leq C_{\lambda}\|f_{R}^{\varepsilon}\|_{L^{2}}^{2}+C\|(\mathbf{ I}-\mathbf{P})f_{R}^{\varepsilon}\|_{\nu}^{2}\cdot\max_{|v|\leq\frac{\lambda}{ \varepsilon^{1/3}}}(1+|v|)^{3-\kappa}\] \[\leq C_{\lambda}\|f_{R}^{\varepsilon}\|_{L^{2}}^{2}+C\frac{ \lambda}{\varepsilon^{2}}\|(\mathbf{I}-\mathbf{P})f_{R}^{\varepsilon}\|_{\nu} ^{2},\] and \[\int_{\mathbb{R}^{3}}\int_{|v|\geq\frac{\lambda}{\varepsilon^{1/3}}}|\nabla_{x}(\rho,\mathfrak{u},T)|(1+|v|)^{3}|f_{R}^{ \varepsilon}|^{2}dvdx\] \[\leq C\|f_{R}^{\varepsilon}\|_{L^{2}}\|h_{R}^{\varepsilon}\|_{L^ {\infty}}\cdot\Big{\{}\int_{|v|\geq\frac{\lambda}{\varepsilon^{\frac{1}{3}}}} (1+|v|)^{6-2\mathfrak{k}}dv\Big{\}}^{\frac{1}{2}}\] \[\leq C_{\lambda}\varepsilon^{\frac{1}{3}\mathfrak{k}-2}\|f_{R}^{ \varepsilon}\|_{L^{2}}\|h_{R}^{\varepsilon}\|_{L^{\infty}}\leq C_{\lambda} \varepsilon^{4}\|f_{R}^{\varepsilon}\|_{L^{2}}\|h_{R}^{\varepsilon}\|_{L^{ \infty}}.\] Using Lemma 2.3, one has \[\varepsilon^{3}\int_{\mathbb{R}^{3}_{+}}\int_{\mathbb{R}^{3}} \frac{1}{\sqrt{\mu}}Q(\sqrt{\mu}f_{R}^{\varepsilon},\sqrt{\mu}f_{R}^{ \varepsilon})f_{R}^{\varepsilon}dvdx\] \[=\varepsilon^{3}\int_{\mathbb{R}^{3}_{+}}\int_{\mathbb{R}^{3}} \frac{1}{\sqrt{\mu}}Q(\sqrt{\mu}f_{R}^{\varepsilon},\sqrt{\mu}f_{R}^{ \varepsilon})\{\mathbf{I}-\mathbf{P}\}f_{R}^{\varepsilon}dvdx\] \[\leq\varepsilon^{3}\|\{\mathbf{I}-\mathbf{P}\}f_{R}^{\varepsilon }\|_{\nu}\|h_{R}^{\varepsilon}\|_{L^{\infty}}\|f_{R}^{\varepsilon}\|_{L^{2}}\] \[\leq\frac{\lambda}{\varepsilon^{2}}\|\{\mathbf{I}-\mathbf{P}\}f_ {R}^{\varepsilon}\|_{\nu}^{2}+C_{\lambda}\varepsilon^{8}\|h_{R}^{\varepsilon }\|_{L^{\infty}}^{2}\|f_{R}^{\varepsilon}\|_{L^{2}}^{2}.\] From (4.2), we have \[s_{N}>\bar{s}_{N}\geq 2\mathfrak{b}+4+\hat{s}_{N},\quad\hat{s}_{N}\geq 1,\] which, together with Proposition 4.1 and Sobolev embedding theorem, yields that, for \(1\leq i\leq N\) and \(t\in[0,\tau^{\delta}]\), \[\sum_{k=0}^{2\mathfrak{b}+2}\Big{\{}\left\|\tilde{w}_{\kappa_{i} }(v)\nabla_{t,x}^{k}f_{i}(t)\right\|_{L^{2}_{x,v}}+\left\|\tilde{w}_{\kappa_{i }}\nabla_{t,x}^{k}f_{i}(t)\right\|_{L^{\infty}_{x,v}}\Big{\}}\leq C_{R}(\tau^{ \delta}), \tag{4.14}\] \[\sum_{k=0}^{\mathfrak{b}+2}\Bigg{\{}\left\|\mathfrak{w}_{\bar{ \kappa}_{i}}(1+\eta)^{\mathfrak{b}+9}\nabla_{t,\bar{x}}^{k}\hat{f}_{i}(t) \right\|_{L^{2}_{x,v}}+\left\|\mathfrak{w}_{\bar{\kappa}_{i}}(1+\eta)^{
From (4.2), we have \[s_{N}>\bar{s}_{N}\geq 2\mathfrak{b}+4+\hat{s}_{N},\quad\hat{s}_{N}\geq 1,\] which, together with Proposition 4.1 and the Sobolev embedding theorem, yields that, for \(1\leq i\leq N\) and \(t\in[0,\tau^{\delta}]\), \[\sum_{k=0}^{2\mathfrak{b}+2}\Big{\{}\left\|\tilde{w}_{\kappa_{i}}(v)\nabla_{t,x}^{k}f_{i}(t)\right\|_{L^{2}_{x,v}}+\left\|\tilde{w}_{\kappa_{i}}\nabla_{t,x}^{k}f_{i}(t)\right\|_{L^{\infty}_{x,v}}\Big{\}}\leq C_{R}(\tau^{\delta}), \tag{4.14}\] \[\sum_{k=0,1}^{\mathfrak{b}+2}\Bigg{\{}\left\|\mathfrak{w}_{\bar{\kappa}_{i}}(1+\eta)^{\mathfrak{b}+9}\nabla_{t,\bar{x}}^{k}\hat{f}_{i}(t)\right\|_{L^{2}_{x,v}}+\left\|\mathfrak{w}_{\bar{\kappa}_{i}}(1+\eta)^{\mathfrak{b}+9}\nabla_{t,\bar{x}}^{k}\hat{f}_{i}(t)\right\|_{L^{\infty}_{x,v}}\Bigg{\}}\leq C_{R}(\tau^{\delta}),\] where we have denoted \[C_{R}(\tau^{\delta}):=C\Bigg{(}\tau^{\delta},\|(\varphi_{0},\Phi_{0},\vartheta_{0})\|_{H^{s_{0}}}+\sum_{i=0}^{N}\sum_{\gamma+\beta\leq s_{i}}\|\partial_{t}^{\gamma}\nabla_{x}^{\beta}(\rho_{i},u_{i},\theta_{i})(0)\|_{L^{2}_{x}}+\sum_{i=0}^{N}\sum_{j=0}^{\bar{s}_{i}}\sum_{j=2\gamma+\beta}\|\partial_{t}^{\gamma}\nabla_{\bar{x}}^{\beta}(\bar{u}_{i,u},\bar{\theta}_{i})(0)\|_{L^{2}_{t^{j}_{j}}}^{2}\Bigg{)}.\] Recall \(\varpi_{\mathbf{t}}\) in (1.28). For \(1\leq i\leq N\), it is clear that \[\left|\varpi_{\mathbf{t}}(v)\frac{\sqrt{\mu_{0}}}{\sqrt{\mu}}\check{f}_{i}(t,x,v)\right|\leq C_{R}(\tau^{\delta}).\] **Lemma 4.3** ([18]).: _It holds that_ \[|\hat{K}^{m}g(v)|\leq Cm^{3+\kappa}\nu(\mu)\|g\|_{L^{\infty}},\] _and \(\hat{K}^{c}g(v)=\int_{\mathbb{R}^{3}}l(v,v^{\prime})g(v^{\prime})dv^{\prime}\), where the kernel \(l(v,v^{\prime})\) satisfies_ \[|l(v,v^{\prime})|\leq C_{m}\frac{\exp(-c|v-v^{\prime}|^{2})}{|v-v^{\prime}|(1+|v|+|v^{\prime}|)^{1-\kappa}}.\] Denoting \(K_{\varpi}g\equiv\varpi_{\mathbf{t}}\hat{K}(\frac{g}{\varpi_{\mathbf{t}}})\), we deduce from (4.7) and (1.29) that \[\partial_{t}h_{R}^{\varepsilon}+v\cdot\nabla_{x}h_{R}^{\varepsilon}+\frac{\nu(\mu)}{\varepsilon^{2}}h_{R}^{\varepsilon}-\frac{1}{\varepsilon^{2}}K_{\varpi}h_{R}^{\varepsilon}\] \[=\sum_{i=1}^{N}\varepsilon^{i-2}\frac{\varpi_{\mathbf{t}}(v)}{\sqrt{\mu_{M}(v)}}\Big{\{}Q(F_{i}+\bar{F}_{i}+\hat{F}_{i},\frac{\sqrt{\mu_{M}}h_{R}^{\varepsilon}}{\varpi_{\mathbf{t}}})+Q(\frac{\sqrt{\mu_{M}}h_{R}^{\varepsilon}}{\varpi_{\mathbf{t}}},F_{i}+\bar{F}_{i}+\hat{F}_{i})\Big{\}}\] \[\quad+\varepsilon^{3}\frac{\varpi_{\mathbf{t}}}{\sqrt{\mu_{M}}}Q\Big{(}\frac{\sqrt{\mu_{M}}h_{R}^{\varepsilon}}{\varpi_{\mathbf{t}}},\frac{\sqrt{\mu_{M}}h_{R}^{\varepsilon}}{\varpi_{\mathbf{t}}}\Big{)}+\frac{\varpi_{\mathbf{t}}}{\sqrt{\mu_{M}}}\big{[}R^{\varepsilon}+\bar{R}^{\varepsilon}+\hat{R}^{\varepsilon}\big{]}. \tag{4.18}\] Using Lemma 4.3, by similar arguments as in [20, Lemma 6.3] (see also [18, Lemma 2.2]), we can obtain the following \(L^{\infty}\) estimate. We omit the details for simplicity of presentation. **Lemma 4.4**.: _For \(t\in[0,\tau^{\delta}]\), it holds that_ \[\sup_{0\leq s\leq t}\|\varepsilon^{3}h_{R}^{\varepsilon}(s)\|_{L^{\infty}}\leq C(t)\{\|\varepsilon^{3}h_{R}^{\varepsilon}(0)\|_{L^{\infty}}+\varepsilon^{N-1}+\varepsilon^{\mathfrak{b}}\}+\sup_{0\leq s\leq t}\|f_{R}^{\varepsilon}(s)\|_{L^{2}}.\] ### Proof of Theorem 1.1 With Lemma 4.2 and Lemma 4.4, one can close the proof by the same arguments as in [18]; we omit the details for simplicity of presentation. The proof of Theorem 1.1 is therefore complete. **Acknowledgments.** Yong Wang's research is partially supported by the National Key R&D Program of China (No. 2021YFA1000800), the National Natural Science Foundation of China (Nos. 12022114, 12288201), the CAS Project for Young Scientists in Basic Research (Grant No. YSBR-031), and the Youth Innovation Promotion Association of the Chinese Academy of Sciences (No. 2019002). We thank Weiqiang Wang for his valuable discussions. **Conflict of interest.** The authors declare that they have no conflict of interest.
Boundary effects play an important role in the study of hydrodynamic limits in Boltzmann theory. We justify the compressible Euler limit of the Boltzmann equation with soft potentials, rigorously proving the validity of the hydrodynamic limit by means of a Hilbert expansion. Specifically, the Boltzmann solution is expanded into three parts: an interior part, a viscous boundary layer, and a Knudsen boundary layer. Since the effect of the collision frequency is weak for soft potentials, the existence of Knudsen-layer solutions with spatial decay rates is subject to certain constraint conditions, a difficulty that we overcome by arguments based on velocity weights.
2309.04625
Democracy from topology
Chiral form fields in $d$ dimensions can be effectively described as edge modes of topological Chern-Simons theories in $d+1$ dimensions. At the same time, manifestly Lorentz-invariant Lagrangian description of such fields directly in terms of a $d$-dimensional field theory is challenging and requires introducing nontrivial auxiliary gauge fields eliminated on-shell with extra gauge symmetries. A recent work by Arvanitakis et al.\ demonstrates (emphasizing the case of 2d chiral bosons) that the two approaches are related, and a peculiar reduction on the $(d+1)$-dimensional topological Lagrangian automatically leads to $d$-dimensional Lagrangians with appropriate sets of auxiliary fields. We develop this setup in three distinct directions. First, we demonstrate how arbitrary Abelian self-interactions for chiral forms can be included using nonlinear boundary terms in the Chern-Simons theory. Second, by generalizing the Chern-Simons theory to the BF theory, we obtain an analogous democratic description of non-chiral form fields, where electric and magnetic potentials appear as explicit dynamical variables. Third, we discuss the effects of introducing topological interactions in the higher-dimensional bulk, which produce extra interaction terms in the boundary theory. When applied to a topological 4-form field in 12 dimensions, this construction results in a democratic description of the 3-form gauge field of the 11-dimensional supergravity.
Oleg Evnin, Euihun Joung, Karapet Mkrtchyan
2023-09-08T22:31:56
http://arxiv.org/abs/2309.04625v1
# Democracy from topology ###### Abstract Chiral form fields in \(d\) dimensions can be effectively described as edge modes of topological Chern-Simons theories in \(d+1\) dimensions. At the same time, manifestly Lorentz-invariant Lagrangian description of such fields directly in terms of a \(d\)-dimensional field theory is challenging and requires introducing nontrivial auxiliary gauge fields eliminated on-shell with extra gauge symmetries. A recent work by Arvanitakis et al. demonstrates (emphasizing the case of 2d chiral bosons) that the two approaches are related, and a peculiar reduction on the \((d+1)\)-dimensional topological Lagrangian automatically leads to \(d\)-dimensional Lagrangians with appropriate sets of auxiliary fields. We develop this setup in three distinct directions. First, we demonstrate how arbitrary Abelian self-interactions for chiral forms can be included using nonlinear boundary terms in the Chern-Simons theory. Second, by generalizing the Chern-Simons theory to the BF theory, we obtain an analogous democratic description of non-chiral form fields, where electric and magnetic potentials appear as explicit dynamical variables. Third, we discuss the effects of introducing topological interactions in the higher-dimensional bulk, which produce extra interaction terms in the boundary theory. When applied to a topological 4-form field in 12 dimensions, this construction results in a democratic description of the 3-form gauge field of the 11-dimensional supergravity. ## I Introduction It has been known for a long time [1; 2; 3; 4; 5] that the topological Chern-Simons theory and its BF generalizations can describe (chiral) \(p\)-form degrees of freedom on the boundary. However, the generality and systematics of this approach is not fully understood yet. While the description of chiral fields as edge modes of topological theory is graceful and simple, the fact that one inevitably starts in a fictitious spacetime of one dimension higher may be seen as a drawback. Attempts to describe chiral fields as Lagrangian theories without introducing extra dimensions, on the other hand, have met difficulties of their own. Early ventures in this direction sacrificed manifest Lorentz invariance [6; 7; 8]. The elegant Pasti-Sorokin-Tonin (PST) approach [9; 10; 11] offers an economical Lorentz-invariant formulation, but suffers from non-polynomial dependence of the action on an auxiliary scalar field, and furthermore encounters difficulties when including self-interactions [11]. (We mention additionally the approach of [12], where chiral fields are necessarily accompanied by decoupled but propagating additional degrees of freedom. See also [13; 14].) Recently [15], Lorentz-covariant Lagrangians for arbitrary self-interacting chiral \(p\)-forms were found. The description includes a doubled set of gauge fields and an auxiliary scalar, which are gauged on-shell to a single propagating self-interacting chiral \(p\)-form. A comparison of this formalism with other approaches in the literature can be found in [16]. The topological field theory approaches to chiral forms have been pursued historically rather independently of the line of research that builds Lagrangian descriptions of chiral forms using auxiliary fields without introducing extra spacetime dimensions. A bridge connecting the two approaches was set up in a recent work by Arvanitakis et al. [17] who found a reduction procedure1 that allows deriving the boundary theory from the Chern-Simons theory in the bulk. 
The procedure naturally leads to a boundary theory in the form of [15] (which, for the case of free forms, can be related to the PST formulation [19] by integrating out auxiliary gauge fields). Footnote 1: The reduction procedure of [17] assumes a topologically trivial bulk with a single boundary. The nontrivial features of the bulk theory on manifolds of more complicated topology (see, e.g., [18]) thus do not enter the game in this setting. We thank Massimo Porrati for emphasizing the importance of this point. Our present purpose is to extend and generalize the formulation of [17] in a few different directions. First, arbitrary Abelian self-interactions can be introduced to the setup of [17] by adding nonlinear boundary terms to the Chern-Simons action. One thus recovers the full scope of self-interacting theories in [15]. Second, the problem of the Lagrangian description of chiral forms is often discussed side-by-side with the problem of a 'democratic' description of ordinary (non-chiral) forms, where the dual electric and magnetic potentials appear as explicit dynamical variables. As we shall see, such democratic theories emerge from boundary reductions of the topological BF theory, a cousin of the Chern-Simons theory evoked in [17]. Finally, in the BF setup, it is possible to introduce topological interactions in the bulk. This, correspondingly, affects the boundary theory, inducing self-interactions that essentially involve the gauge potential (as opposed to being expressible through the field strength alone). In this way, in particular, one obtains a democratic description of the self-interacting 3-form appearing in the 11-dimensional supergravity. ## II Chiral fields Here, we give a short derivation similar to that undertaken in [17] for free chiral forms, adding Abelian interactions. The starting point is the Chern-Simons theory given by the action \[S=\int_{M}H\wedge\mathrm{d}H \tag{1}\] (for our purposes the overall factor, aka the Chern-Simons level, does not have to be explicit) where \(M\) is a \(d+1=2p+3\) (\(p\) is even) dimensional manifold with a boundary \(\partial M\) and \(H\) is a \((p+1)\)-form field. The variation of this Lagrangian contains a boundary term \(\int_{\partial M}\delta H\wedge H\), which would be incompatible with the least action principle. To remedy this inconsistency, we add a boundary term \(-\frac{1}{2}H\wedge\star H\) to the action to obtain \[S_{\mbox{\tiny free}}=\int_{M}H\wedge\mathrm{d}H-\frac{1}{2}\int_{\partial M}H\wedge\star H\,. \tag{2}\] The variation is then \[\delta S_{\mbox{\tiny free}}=2\int_{M}\delta H\wedge\mathrm{d}H-\frac{1}{2}\int_{\partial M}\delta H^{+}\wedge H^{-}\,. \tag{3}\] Here and in what follows, we use the shorthand notation \[H^{\pm}=H\pm\star H, \tag{4}\] and the pullback of \(H\) onto the boundary is denoted by the same symbol \(H\). Note that \(\star\) shall denote throughout the Hodge dual associated with an arbitrary metric on the boundary with Lorentzian signature (the bulk Hodge dual will not appear in the formalism we consider, hence no danger of confusion). We may impose the Dirichlet boundary condition, \(\delta H^{+}=0\), or the Neumann one, \(H^{-}=0\): \(H^{+}\) and \(H^{-}\) play the roles of 'position' and 'momentum' respectively. The Neumann condition can also be viewed as the dynamical equation with respect to the boundary variation. We shall take the latter point of view as it is more convenient for introducing interactions. 
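For orientation, a one-line check (ours, using only the definition (4)) that the split into \(H^{\pm}\) is consistent: in Lorentzian signature the Hodge star squares to \(+1\) on \((p+1)\)-forms of the \(d=2p+2\) dimensional boundary when \(p\) is even, \[\star\star\,\omega=(-1)^{(p+1)(d-p-1)}\,(-1)\,\omega=(-1)^{(p+1)^{2}+1}\,\omega=+\,\omega\qquad(p\ \text{even}),\] so that \(\star H^{\pm}=\star H\pm\star\star H=\pm(H\pm\star H)=\pm H^{\pm}\), i.e., \(H^{+}\) and \(H^{-}\) are genuinely selfdual and antiselfdual. 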
As discussed in [15; 16], general equations describing self-interactions of a chiral field are given as \[H^{-}=f(H^{+})\,,\qquad\mathrm{d}H=0\,, \tag{5}\] where \(f:\Lambda^{+}\to\Lambda^{-}\) is an antiselfdual form valued function of a selfdual variable (here \(\Lambda^{+}\) and \(\Lambda^{-}\) represent the space of selfdual and antiselfdual forms respectively). In order to reproduce these equations, one can introduce a boundary term to the Chern-Simons theory, given by an arbitrary function of \(H^{+}\) as \[S=\int_{M}H\wedge\mathrm{d}H-\int_{\partial M}\frac{1}{2}\,H\wedge\star H+g(H^ {+})\,. \tag{6}\] The function \(g(H^{+})\) is a top form function of the selfdual argument \(H^{+}\). The addition of \(g(H^{+})\) is analogous to the addition of an arbitrary potential term to a free Hamiltonian. The bulk equations of motion stemming from the action (6) are simply \(\mathrm{d}H=0\), describing pure gauge configurations, while the boundary equations reproduce (5), where \(f(Y)=\partial g(Y)/\partial Y\) is an anti-selfdual \((p+1)\)-form function of a selfdual variable \(Y=H^{+}\). The action (6) describes arbitrary Abelian interacting theories of a single chiral \(2k-\)form field in \(d=4k+2\) dimensional spacetime (the boundary \(\partial M\)) endowed with a metric of Lorentzian signature. In six dimensions, there is a unique functionally independent scalar made of a selfdual 3-form, therefore, (6) describes an infinite number of consistent theories parameterized by a function of one variable [15]. In ten and higher dimensions such theories are parametrized by a function of more than one variable, as many as the number of independent Lorentz scalars constructed from a selfdual form. In two dimensions, there is no polynomial scalar constructed from a selfdual vector, therefore the only option of the form (6) is the free Abelian theory. For multiple fields, however, interactions via bulk non-Abelian deformations are possible [17]. ## III Democratic description for \(p\)-forms We will use now the same logic to derive democratic Lagrangians for arbitrary \(p\)-forms (including arbitrary Abelian interactions from [15]). The starting point is the topological theory given by the action (occasionally referred to as the BF theory) \[S_{\mbox{\tiny bulk}}=\int_{M}(-1)^{d-p}\,G\wedge\mathrm{d}F+\mathrm{d}G \wedge F\,, \tag{7}\] where \(M\) is a \((d+1)\)-dimensional manifold with \(d\)-dimensional boundary, \(F\) is a \((p+1)-\)form and \(G\) is a \((d-p-1)-\)form. Here, both \(d\) and \(p\) are arbitrary, as opposed to the previous section. The gauge symmetry is given by \[\delta F=\mathrm{d}\alpha\,,\quad\delta G=\mathrm{d}\beta\,. \tag{8}\] The Lagrangian is gauge invariant up to boundary terms. The bulk equations of motion are \(\mathrm{d}F=0=\mathrm{d}G\), implying that these fields are pure gauge, therefore there are no bulk degrees of freedom. The boundary term in the variation of the bulk Lagrangian is given by \(\int_{\partial M}\delta G\wedge F-G\wedge\delta F\,.\) Adding to the action (7) a boundary term, \[-\int_{\partial M}\frac{1}{2}(F\wedge\star F+G\wedge\star G)\,, \tag{9}\] modifies the boundary variation as \[\int_{\partial M}\delta F\wedge((-1)^{p+d+pd}\,G-\star F)+\delta G \wedge(F-\star G)\] \[\qquad=(-1)^{p+d+pd}\int_{\partial M}\star\delta(F+\star G)\wedge( F-\star G)\,. 
\tag{10}\] Here, again, we take the Neumann boundary condition \(F-\star G=0\), which can be viewed as the dynamical equations with respect to the boundary variation, so that the variational principle gives the equations \(\mathrm{d}F=0=\mathrm{d}G\) supplemented with these boundary conditions. The boundary term (9) again uses a metric with Lorentzian signature. Generalization to the self-interacting case is given as \[S=\int_{M}(-1)^{d-p}\,G\wedge\mathrm{d}F+\mathrm{d}G\wedge F\] \[\quad-\int_{\partial M}\frac{1}{2}\left(F\wedge\star F+G\wedge \star G\right)+g(F+\star G)\,, \tag{11}\] which gives the same bulk equations \(\mathrm{d}F=0=\mathrm{d}G\) and the following modified boundary conditions: \[F-\star G=f(F+\star G)\,. \tag{12}\] Here again, \(f(Y)=\partial g(Y)/\partial Y\) for a \((p+1)-\)form argument \(Y\). This reproduces the democratic theory of general Abelian self-interactions for \(p\)-forms (the reduction to the democratic Lagrangians of [15] will be demonstrated below). An interesting observation [20] is that, as opposed to the chiral case, now we also have the option to describe the boundary theory in a non-democratic manner by simply integrating out one of the fields. E.g., we can solve the bulk equation for \(G\), that is \(\mathrm{d}F=0\), which implies \(F=\mathrm{d}A\). Substituting this into the action reduces the whole system to a boundary Lagrangian that is algebraic in \(F=\mathrm{d}A\), while the only field variable is now \(A\). In the case of free theory, we will simply get a Maxwell Lagrangian \(F\wedge\star F\). Instead, for nontrivial \(g(Y)\), we get a nonlinear algebraic equation expressing \(G\) in terms of \(F\), similar to those discussed in [21; 15]. Such relations are not always easy to solve explicitly even for nonlinear electrodynamics in \(3+1\) dimensions, where some simplifications occur compared to general \(d\) and \(p\). These equations, however, explicitly capture the essence of the conversion procedure between democratic and ordinary single-field formalisms. Note that we could equally integrate out \(F\) instead of \(G\) arriving at different but equivalent \(d\)-dimensional descriptions. The two theories, corresponding to two different reductions (either integrating out \(G\) or \(F\)), are related by duality [20]. This is somewhat similar to the dualization procedure where we integrate out the field \(A\) and \(F\) from the action \(S=\int_{\partial M}-\frac{1}{2}\,F\wedge\star F+G\wedge(F-\mathrm{d}A)\). In the non-Abelian case, this procedure leads to non-polynomial action in terms of the variable \(G\), with no smooth free limit [22]. The democratic action (11) for \(p=2k\)-forms in \(d=4k+2\) dimensions can be diagonalized by introducing new variables \(C=(F+G)/\sqrt{2}\) and \(D=(F-G)/\sqrt{2}\) as \[S=\int_{M}C\wedge\mathrm{d}C-D\wedge\mathrm{d}D\] \[-\int_{\partial M}\frac{1}{2}\left(C\wedge\star C+D\wedge\star D \right)+g(C_{+}+D_{-})\,, \tag{13}\] thus explicitly describing one chiral and one antichiral \(p\)-forms. Note that the Abelian interaction term \(g(C_{+}+D_{-})\) can be viewed as a function of two independent variables \(C_{+}\) and \(D_{-}\), which are simply the selfdual and anti-selfdual projections of \(C_{+}+D_{-}\), which means that (13) actually represents the most general interactions for one chiral and one antichiral fields \(C\) and \(D\). 
Note that the normalization of the fields in the democratic setup is not unique: one can rescale the fields \(F\) and \(G\) in an opposite manner, arriving at the action, \[S= \ \int_{M}(-1)^{d-p}\,G\wedge\mathrm{d}F+\mathrm{d}G\wedge F\] \[-\int_{\partial M}\left[\frac{1}{2}\left(\lambda^{-2}\,F\wedge \star F+\lambda^{2}\,G\wedge\star G\right)\right.\] \[\qquad\qquad+\left.g(\lambda^{-1}\,F+\lambda\,\star G)\,\right], \tag{14}\] with boundary equations of motion, \[\mathrm{d}F=0=\mathrm{d}G\,,\quad\lambda^{-1}\,F-\lambda\,\star G=f(\lambda^{ -1}\,F+\lambda\,\star G)\,. \tag{15}\] When coupled to charged matter (see for example [23]), this rescaling is related to the change in the coupling constant, which requires opposite rescaling for electric and magnetic couplings. This rescaling freedom is consistent with the Dirac-Schwinger quantization of the charges since the product of their coupling constants is invariant (the quantization applies only to the linear combination of pairwise product of electric and magnetic charges). ### Nonlinear electrodynamics and \(So(2)\) duality When \(d=4k\), and both \(F\) and \(G\) are \(p+1=2k\)-forms, it is convenient to label them as \(F=H^{1}\) and \(G=H^{2}\,\). The Abelian nonlinear \(p\)-form theory in the democratic form, given in [21], can be derived from a \(d+1=4k+1\)-dimensional topological action with a boundary term, \[S= \int_{M}\epsilon_{bc}\,H^{b}\wedge\mathrm{d}H^{c}\] \[-\int_{\partial M}\frac{1}{2}\,H^{b}\wedge\star H^{b}+g(\star H^{ b}+\epsilon^{bc}H^{c})\,. \tag{16}\] This action transmutes under the reduction procedure of [17] to that of [21]. The function \(g(Y)\) is further restricted [21] if we require the \(SO(2)\) duality symmetry rotating \(H^{1}\) and \(H^{2}\). When \(d=4\), the duality-symmetric theories of nonlinear electrodynamics are given by the five-dimensional action of type (16) where the Abelian interaction term is reduced to a function of a single variable, \(g(W^{ab}\,W_{ab})\). Here, \(W^{ab}\) is the duality covariant Lorentz scalar, \[W^{ab}=\star[(\star H^{a}+\epsilon^{ac}H^{c})\wedge\star(\star H^{b}+\epsilon^ {bd}H^{d})]\,,\] whose trace vanishes identically: \(W^{a}{}_{a}=0\,\). The next example is \(d=8\), where the interactions in the general democratic 3-form theory will be parameterized by a function of 14 variables, two for each order in fields -- from second to eighth. The duality-symmetric condition leaves only half of these variables -- seven: one for each order. ## IV Reduction to boundary theories We now proceed to the dimensional reduction procedure introduced in [17] to show that the action (6) can be reduced to the nonlinear chiral \(p\)-form actions of [15]. For that, one introduces a closed one-form \(v\) (and corresponding vector which we will denote with the same letter) and decomposes the bulk field as: \[H=\hat{H}+v\wedge\check{H}\,, \tag{17}\] with a gauge redundancy \[\delta\hat{H}=-v\wedge\alpha\,,\qquad\delta\check{H}=\alpha\,, \tag{18}\] which was fixed by the choice \(i_{v}\hat{H}=0\) in [17]. Plugging this decomposition into the Lagrangian, we notice that the field \(\check{H}\) becomes a Lagrange multiplier enforcing a constraint on the field \(\hat{H}\), \[v\wedge\mathrm{d}\hat{H}=0\,, \tag{19}\] which can be solved following the Appendix C of [24], arriving at \[H=\mathrm{d}A+v\wedge R\,, \tag{20}\] where \(A\) and \(R\) are \(p\)-forms. Then, one can see that the bulk Chern-Simons term of the action becomes a total derivative taking into account that \(\mathrm{d}v=0\). 
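As a quick consistency check (ours, not spelled out in [17]), the ansatz (20) indeed solves the constraint (19) when \(\mathrm{d}v=0\): \[v\wedge\mathrm{d}\big{(}\mathrm{d}A+v\wedge R\big{)}=v\wedge\big{(}\mathrm{d}v\wedge R-v\wedge\mathrm{d}R\big{)}=-\,v\wedge v\wedge\mathrm{d}R=0\,,\] using \(\mathrm{d}^{2}=0\) and \(v\wedge v=0\) for the one-form \(v\). 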
Therefore, the full action reduces to a bulk-term contribution on the boundary, \(\mathrm{d}A\wedge v\wedge R\), plus the boundary term, where the field \(H\) is replaced by \(\mathrm{d}A+v\wedge R\). Thus the final boundary action is given as \[S=\int_{\partial M}-\frac{1}{2}\,H\wedge\star H+\mathrm{d}A\wedge v\wedge R+g(\star H+H)\,, \tag{21}\] where \(H=\mathrm{d}A+v\wedge R\). Equation (21) reproduces the Lagrangian for the arbitrary interacting theory of a chiral \(p\)-form given in [15] with one small difference: there, \(v\) is parameterized as \(v=\mathrm{d}a\) with a dynamical field \(a\), thus avoiding the need for a prescribed one-form in the theory that naively breaks the Lorentz symmetry. The shift symmetry of the field \(a\), which we call henceforth 'PST symmetry' due to its close relation to the similar symmetry featured in the PST theory [9], is hard to anticipate from the Chern-Simons point of view.2 This symmetry, however, is crucial for the consistency of the theory and furthermore makes it possible to gauge-fix the field \(a\) to a non-dynamical fixed function, at the expense of manifest Lorentz symmetry (thus making contact with the Chern-Simons derivation above). One may add a top-form term \(J\wedge\mathrm{d}v\) to the Lagrangian (where \(J\) is a Lagrange multiplier) and keep the field \(v\) unconstrained. This formulation (for the free theory) was the starting point in [19] (where the one-form \(v\) was denoted as \(c\)). Note that the condition \(v^{2}\neq 0\) is essential for the theory given by the action (21) to describe a chiral form. One way to exclude the space \(v^{2}=0\) from the theory could be an extra condition \(v^{2}=1\) imposed by a Lagrange multiplier \(\mu\), i.e., adding3 a term \(\mu(v^{2}-1)\) to the Lagrangian (21). Footnote 2: Naively, in order to get the boundary Lagrangian, one needs to use a specific \(v\). However, any non-null \(v\) gives a consistent theory on the boundary, and all such theories are equivalently encoded in the action (6), which has manifest Lorentz symmetry. This gives an intuitive picture of why there should be extra gauge symmetries in the boundary theory that provide for Lorentz invariance, as in [9; 10; 11; 15; 16; 21], though it is not obvious how to make these symmetries explicit in the bulk theory language. Footnote 3: We thank Chris Ishid for discussions on this matter. Within the boundary theory, the expression \(\star H+H\) is gauge-invariant with respect to the enlarged set of gauge symmetries shifting the auxiliary fields [15]. Thus, these gauge symmetries guide us to the action (21) in the language of the boundary theory of [15], while in the Chern-Simons language, the structure of the corresponding boundary terms is guessed so that they give rise to self-interacting chiral edge modes. Now that we have reviewed the derivation of [17] and generalized it to include Abelian interactions of chiral forms, we proceed to the democratic formulation for arbitrary \(p\)-forms. Using the same reduction procedure as in the chiral case, one can show that (11) leads to the general Abelian self-interactions for the \(p\)-forms, with the democratic boundary Lagrangian given in [15]. For that, one decomposes the fields \(F\) and \(G\) using a closed one-form \(v\) (and the corresponding vector, which we will denote by the same letter): \[F=\hat{F}+v\wedge\check{F}\,,\qquad G=\hat{G}+v\wedge\check{G}\,. 
\tag{22}\] Substituting this in the bulk Lagrangian, we can see that the fields \(\check{F}\) and \(\check{G}\) are Lagrange multipliers, imposing the constraints on the fields \(\hat{F}\) and \(\hat{G}\), \[v\wedge\mathrm{d}\hat{F}=0=v\wedge\mathrm{d}\hat{G}\,, \tag{23}\] which can be solved as earlier. Substitution of the latter expressions in the action leads to a purely boundary theory with the Lagrangian \[\mathcal{L}=v\wedge S\wedge\mathrm{d}A-\mathrm{d}B\wedge v\wedge R+\frac{1}{2}\left(F\wedge\star F+G\wedge\star G\right)+g(\star G+F)\,, \tag{24}\] where \(F\) and \(G\) are given by \[F=\mathrm{d}A+v\wedge R\,, \tag{25}\] \[G=\mathrm{d}B+v\wedge S\,. \tag{26}\] This Lagrangian coincides with [15] after solving the constraint \(\mathrm{d}v=0\) as \(v=\mathrm{d}a\) and a simple field redefinition discussed in [24]. ## V Bulk-induced interactions The interactions introduced above only enter the higher-dimensional topological description through the boundary terms. Consequently, the interactions in the resulting boundary theory are expressed through the field strength alone, but not through the gauge potential. It is possible to construct more general interactions by considering topological interactions in the bulk. The simplest example of such interactions would be the non-Abelian Chern-Simons Lagrangian discussed in [17]. More generally, one can add bulk interaction terms that are top-form wedge products of the fields involved. Such interactions are very limited for a single field, which we will discuss here, completing the discussion of Abelian self-interactions and leaving the less constrained cases with multiple fields for future work. For the chiral case, the only field is the \((p+1)\)-form \(H\), so the interactions may have the form \(H\wedge H\wedge H\). Such a term is only legitimate in three bulk dimensions, where \(H\) is a one-form, and even there, it is trivial for a single field \(H\). For higher dimensions, self-interactions of a single chiral field can only be introduced via the boundary terms discussed earlier. For democratic fields, the situation is different. In special cases, there is a possibility to add interaction terms for a single field. This happens when \(d=3p+2\) for odd \(p\), and the corresponding bulk term is \(F\wedge F\wedge F\) (we recall that \(F\) is a \((p+1)\)-form and therefore the latter term is nontrivial for odd \(p\) and is a top form in \(d+1=3(p+1)\) dimensions). Therefore, the full action is given as \[S=\int_{M}G\wedge\mathrm{d}F+\mathrm{d}G\wedge F+\frac{2}{3}\,\lambda_{3}\,F\wedge F\wedge F-\int_{\partial M}\frac{1}{2}\left(F\wedge\star F+G\wedge\star G\right)+g(F+\star G)\,. \tag{27}\] In the first non-trivial case, \(p=1\), the \(\lambda_{3}\) term in the action (27) describes Abelian Chern-Simons interactions for five-dimensional nonlinear electrodynamics. This can be quickly verified by integrating out the field \(G\), most easily done in the case \(g(Y)=0\), leading to the Maxwell-Chern-Simons theory. In the next case, \(p=3\), the \(\lambda_{3}\) term describes the Chern-Simons interactions for the three-form in eleven dimensions. This interaction is essential for the 11d supergravity and was the missing element for the democratic formulation of the latter along the same lines as the type II supergravities in ten dimensions [25]. More generally, bulk Abelian interactions are possible in the dimensions \(d=np+n-1\) (assuming that \(p\) is odd) and are given by a wedge product of \(n\) copies of \(F\). 
For the quartic interactions, the first nontrivial case is the seven-dimensional Abelian Chern-Simons term, given by the bulk interaction \(\lambda_{4}\,F\wedge F\wedge F\wedge F\). The reduction procedure of [17] works smoothly also in the presence of the bulk interaction (27). The same procedure as performed above in the case of \(\lambda_{3}=0\) leads to a neat cancellation of all bulk terms and leaves a boundary theory with the Lagrangian, \[\mathcal{L} = v\wedge S\wedge\mathrm{d}A-\mathrm{d}B\wedge v\wedge R-\frac{ \lambda_{3}}{3}A\wedge\mathrm{d}A\wedge\mathrm{d}A \tag{28}\] \[+\frac{1}{2}\left(F\wedge\star F+G\wedge\star G\right)+g(\star G +F)\,,\] where \(F\) takes the same form as in (25) while \(G\) is modified to \[G=\mathrm{d}B+v\wedge S-\lambda_{3}\,A\wedge\mathrm{d}A\,. \tag{29}\] This Lagrangian describes democratically nonlinear Maxwell-Chern-Simons theory in five dimensions for 1-form \(A\) and 2-form \(B\). The same Lagrangian describes democratically the 3-form \(A\) in eleven-dimensions on equal footing with its dual 6-form \(B\). ## VI Maximal supergravities in \(d=10,11\) We can now quickly derive the type II supergravities in the democratic form of [25] from a topological theory in eleven dimensions. The starting point is the Chern-Simons action on the 11-dimensional manifold \(M\) with a Lorentzian \(10d\) boundary \(\partial M\), \[S_{{}_{\rm RR}}=\int_{M}G\wedge DG+\int_{\partial M}\frac{1}{2}(G,\star G)\,, \tag{30}\] where \(\star\) is defined with a factor \(\star\alpha=(-1)^{\left\lfloor\frac{\mathrm{d}\alpha\,\alpha}{2}\right\rfloor+ \mathrm{d}\varepsilon\,\alpha}\) compared to Hodge star denoted in this section as \(\ast\), and we use Mukai pairing \((\alpha,\beta):=(-1)^{\left\lfloor\frac{\mathrm{d}\alpha\,\alpha}{2}\right\rfloor }(\alpha\wedge\beta)^{\mathrm{top}}\), and finally \(D=\mathrm{d}+H\wedge\), where \(H\) is a closed 3-form curvature of the Kalb-Ramond field (see details in [25]). Here, \(G\) encodes all the curvatures of RR fields: \[G =G_{2}+G_{4}+G_{6}+G_{8}+G_{10},\qquad\text{(IIA case)} \tag{31}\] \[G =G_{1}+G_{3}+G_{5}+G_{7}+G_{9}.\qquad\text{(IIB case)} \tag{32}\] The action (30) can be reduced to ten dimensions via the procedure of [17] to reproduce the RR sector actions of [25]. It is straightforward to add the NSNS sector and gravity, which are not described democratically. An analogous description can be proposed for the 11-dimensional supergravity [26]. Here, we introduce a 12-dimensional BF theory with a 11-dimensional boundary term and describe democratically the 3-form field with 4-form curvature \(F\) and its dual 7-form curvature \(G\) of the 6-form potential. Therefore, the action takes the form of (27) where the coupling constant is fixed by supersymmetry as \(\lambda_{3}=1\), whose value is responsible for the remarkable exceptional symmetries of the dimensional reductions of \(11d\) supergravity [27]. When \(g(Y)=0\), we can integrate out the \(G\) field from (27) to recover the standard 11d action involving a single three-form potential field. Instead, if we reduce the \(12d\) action (27) via the procedure of [17], we find the democratic description of the \(11d\) Lagrangian of the form (28) (with \(\lambda_{3}=1\)). Integrating out the auxiliary fields \(R\) and \(S\), we recover the PST form of the action from [28]. Note that deformations similar to \(\alpha^{\prime}-\)corrections in String Theory are suggested by a non-trivial interaction term \(g(\star G+F)\). 
## VII Discussion We have provided a simple derivation of arbitrary self-interacting Abelian \(p\)-form theories with first-order equations of motion -- democratic or chiral -- starting from familiar topological theories, making use of the ideas introduced in [17]. We also introduced large classes of Abelian self-interactions for these fields. The last missing piece of the puzzle was the Abelian interactions that cannot be written in terms of curvatures and are given by Abelian Chern-Simons terms that are only gauge invariant up to boundary terms. This setup builds a connection between Lagrangian formulations for the nonlinear (twisted) selfduality equations [15] and other influential considerations in the literature (see, e.g. [6; 7; 29; 30; 31; 32; 33; 34; 35; 36] for a sample of historical references). More general interactions between multiple different fields will be studied systematically elsewhere. The topological description of the RR fields in ten-dimensional supergravities discussed in this letter also provides supporting explanations on the resolution [37; 25] of the puzzles of supergravity on-shell actions [37], which have to be contrasted with the expectations from holography. This resolution, which does not rely on a specific vacuum solution, is made at the level of the democratic \(d\)-dimensional Lagrangians with a unique \((d-1)\)-dimensional boundary term protected by the PST symmetry. From the perspective of the \((d+1)\)-dimensional topological theories, this boundary term lives on the boundary of the boundary, and hence it is not surprising that any ambiguity in such a term is resolved. We expect that the analogous puzzle of \(11d\) supergravity related to the electric solution [38] admits a similar resolution. The democratic descriptions discussed here require a Lorentzian metric on the boundary because the (twisted) self-duality equations with signature \((t,d-t)\) admit non-trivial solutions only for \(+1(-1)\) values of the Hodge star squared \(\star^{2}=(-1)^{p(d-p)+t}\). Gravitational theories involving such actions may use path integral over the metric with arbitrary signature (see for example [39]). Then, the degrees of freedom described by the democratic (or chiral) formulations of \(p\)-forms will be switched off in even-time signatures, going to a lower-dimensional phase space compared to the Lorentzian signature. ## Acknowledgements We are grateful to Alex Arvanitakis, Chris Hull, Massimo Porrati, Arkady Tseytlin, and Fridrich Valach for helpful discussions, and Zhirayr Avetisyan, Calvin Chen, Lewis Cole, and Alexander Sevrin for feedback on the manuscript. O. E. is supported by Thailand NSRF via PMU-B (grant numbers B01F650006 and B05F650021). E. J. was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2022R1F1A1074977). K. M. was supported by the European Union's Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie Grant No. 844265, UKRI and STFC Consolidated Grant ST/T000791/1.
Chiral form fields in $d$ dimensions can be effectively described as edge modes of topological Chern-Simons theories in $d+1$ dimensions. At the same time, a manifestly Lorentz-invariant Lagrangian description of such fields directly in terms of a $d$-dimensional field theory is challenging and requires introducing nontrivial auxiliary gauge fields that are eliminated on-shell. A recent work by Arvanitakis et al. demonstrates (emphasizing the case of 2d chiral bosons) that the two approaches are related: a peculiar reduction of the $(d+1)$-dimensional topological Lagrangian automatically leads to $d$-dimensional Lagrangians with appropriate sets of auxiliary fields. We develop this setup in three distinct directions. First, arbitrary Abelian self-interactions for chiral forms can be included using nonlinear boundary terms in the Chern-Simons theory.
2309.13359
Analysis of the Gravitational Wave Background Using Gamma-Ray Pulsar Timing Arrays with Next-Generation Detectors
In this work, we investigate the potential of gamma-ray pulsar timing arrays (PTAs) for the gravitational wave background (GWB) using future gamma-ray detectors with larger effective areas. We consider both spaceborne detectors and ground-based imaging air Cherenkov telescope arrays (IACTs). We simulated the photons detected from pulsars using the response of hypothetical detectors, taking into account the backgrounds, and analyzed the sensitivities. Our results show that, thanks to the higher statistics of IACTs, a PTA using IACTs can significantly improve the performance compared with a PTA using Fermi-LAT data.
Zhen Xie, Zhipeng Zhang, Jieshuang Wang, Ruizhi Yang
2023-09-23T12:44:08
http://arxiv.org/abs/2309.13359v1
Analysis of the Gravitational Wave Background Using Gamma-Ray Pulsar Timing Arrays with Next-Generation Detectors ###### Abstract In this work, we investigate the potential of gamma-ray pulsar timing arrays (PTAs) for the gravitational wave background (GWB) using future gamma-ray detectors with larger effective areas. We consider both spaceborne detectors and ground-based imaging air Cherenkov telescope arrays (IACTs). We simulated the photons detected from pulsars using the response of hypothetical detectors, taking into account the backgrounds, and analyzed the sensitivities. Our results show that, thanks to the higher statistics of IACTs, a PTA using IACTs can significantly improve the performance compared with a PTA using Fermi-LAT data. ## I Introduction Pulsars are ideal cosmic laboratories thanks to their excellent periodicity. The pulsar timing array (PTA) is so far the only method to detect low-frequency gravitational waves (GWs) in the nHz band [1]. The GWs can be detected using ensembles of millisecond pulsars (MSPs) arranged into such arrays. PTAs monitor the arrival times of steady pulses from each pulsar, which are affected by spacetime perturbations and may arrive earlier or later than expected. For observations taken on Earth, low-frequency GWs are expected to produce a signature quadrupolar pattern in the TOAs of the photons that come from the pulsars, known as the Hellings-Downs correlation[2]. Low-frequency GWs have many origins, and they can provide a wealth of information about the universe. Supermassive black hole (SMBH) binaries are expected to emit GWs, and the superposition of GWs from many SMBH binaries throughout the universe is predicted to build up a GW background (GWB). GWs from inflation would help describe the universe at its earliest moments[3] and are also an important way to test cosmological theories. Cosmic strings are theorized topological defects produced by phase transitions in the early universe, vibrating and losing energy via gravitational wave emission over the history of the universe [4]. If cosmic strings exist, they will create a stochastic GWB, and the observation of such a GWB would bring confirmation of physics beyond the Standard Model [5]. As mentioned above, since many processes can produce GW signals, the information derived from the stochastic GWB would provide significant insight into astrophysical processes over the history of the universe[6]. Recently, the Fermi-LAT Collaboration has performed for the first time a study of the gravitational wave background using a PTA observed in the gamma-ray band[7], which demonstrates its great potential to study the GWB. Gamma PTA has many advantages compared with traditional radio PTAs. For example, a main noise source for radio PTAs is the effect of radio propagation through plasma, including the solar wind and the ionized interstellar medium (IISM). These effects are time-dependent and introduce noise similar to the GW signals. On the other hand, the effects of the IISM and solar wind can be ignored for gamma-ray photons. In this regard, gamma PTA has smaller noise and much simpler data analysis. But gamma PTA also suffers from poor angular resolution and the limited exposure of current instruments. In this letter, we investigated the potential improvement of gamma PTA[8] with future detectors. We considered two types of instruments. 
One is future spaceborne telescopes (FSTs) like Fermi-LAT but with a larger effective area; the other is imaging air Cherenkov telescopes (IACTs): these ground-based telescopes have a much larger effective area with high timing accuracy. Our work follows this structure: we describe the method used to simulate the observation of pulsars with the hypothetical instruments in section 2, we analyze the simulated data and investigate the sensitivities of gamma PTA with future instruments in section 3, and the last section is the conclusion. ## II Simulated Data Based on Future Detectors In the Fermi-LAT gamma PTA, the pulsar PSR J1231-1411 gave the best constraint in the photon-by-photon method, so we used this object as an example in the following simulation. In the simulation, two different types of detectors are considered. First, we consider FSTs similar to Fermi-LAT but with a 10 times larger effective area. The other type is low-threshold IACTs. In this work, we adopt 5@5 as an example of such a detector. 5@5 is a large ground-based Cherenkov telescope array planned for the mountains of the Atacama Desert in northern Chile. Due to its low energy threshold, it shows great potential for pulsar research. In this paper, we used the response of 5@5 to perform the simulation. In the analysis we did not consider the true geographical location of the arrays; instead, we simply assumed a 100-hour exposure of the pulsar with the fiducial telescope response. We admit that the true instrument response will depend on the site location as well as the source declination, but for a single pulsar at a reasonable declination, it is easy to find 100 hours of observation time every year. Thus, in this work, we used a uniform instrumental response for IACTs for simplicity. The effective area of the telescope can be described as calculated by Aharonian _et al._[9]: \[A_{eff}=8.5E^{5.2}[1+(E/5~{}GeV)^{4.7}]^{-1}~{}\rm{m}^{2}, \tag{1}\] and the point spread function (PSF) of 5@5 can be described as \[\phi=0.8(E/1~{}GeV)^{-0.4}~{}\rm{degree}. \tag{2}\] By integrating the effective area with the spectrum of the pulsar, we can derive the expected photon number for the IACTs. Fig. 1 shows the photon numbers for Fermi-LAT and for 5@5 observing 100 h per year over 12.5 years. We found that the ground-based telescopes perform well in collecting photons, thanks to their large effective area. For J1231-1411, with a conservative assumption of 100 h of observation per year, the number of photons an IACT can collect is 30 times larger than that from Fermi-LAT over the same time span. We note that a significant disadvantage of IACTs is their much smaller FOV and lower duty cycles. Fermi-LAT results showed that the combined likelihood of more than 20 pulsars can further improve the sensitivity by a factor of two. In this regard, IACTs cannot compete because of their limited sky coverage. But thanks to advances in photosensors, next-generation IACTs can also operate on moonlit nights [10], so the observation time every year can be increased to nearly 2000 hours. It would thus be easy to observe more than 10 pulsars every year with an exposure of about 100 hours each, which will also allow us to perform the joint likelihood analysis. For Fermi-LAT, the gamma-ray data are recorded in terms of energy \(E_{i}\), spatial position \(\mathbf{r}_{i}\), and arrival time \(t_{i}\) for the \(i\)-th photon. So in simulations of photons detected by the hypothetical detectors, we also sample these quantities. 
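As a rough illustration of this photon-number estimate, the minimal sketch below integrates the effective area of Eq. (1) against a pulsar-like spectrum over 1-10 GeV for 100 hours of exposure; the spectral shape and normalization are invented placeholders, not the catalog parameters of PSR J1231-1411.

```python
import numpy as np

# Effective area of the hypothetical 5@5-like IACT, Eq. (1): E in GeV, result in m^2.
def a_eff(E):
    return 8.5 * E**5.2 / (1.0 + (E / 5.0)**4.7)

# Placeholder pulsar photon spectrum dN/dE in photons / (m^2 s GeV);
# K, gamma and E_cut are illustrative values, not fitted parameters.
def dnde(E, K=1e-7, gamma=1.5, E_cut=3.0):
    return K * E**(-gamma) * np.exp(-E / E_cut)

E = np.geomspace(1.0, 10.0, 2000)                    # energy grid, GeV
f = a_eff(E) * dnde(E)                               # detected rate, photons / (s GeV)
rate = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E))   # trapezoidal integral over E
t_obs = 100 * 3600.0                                 # 100 h of on-source time, in s
print(f"expected photons in 1-10 GeV: {rate * t_obs:.0f}")
```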
The energy of a photon from the pulsar can be described by the parameterized function PLSuperExpCutoff4 used by Fermi-LAT[11]: \[\frac{dN}{dE}=K\Big{(}\frac{E}{E_{0}}\Big{)}^{\frac{d}{b}-\Gamma_{s}}\exp\Big{[}\frac{d}{b^{2}}\Big{(}1-\Big{(}\frac{E}{E_{0}}\Big{)}^{b}\Big{)}\Big{]}~{}~{}~{}~{}(b\,\ln\tfrac{E}{E_{0}}>10^{-2})\,, \tag{3}\] where each parameter can be looked up in the catalog provided by Fermi-LAT. We first sample the energies of the photons from this distribution. For the spatial position, we chose a circle of 3\({}^{\circ}\) radius around the pulsar, as in the Fermi-LAT PTA, and then sampled the position of each detected photon by taking into account the point spread function (PSF) of the detector, as well as the flux from both the pulsar and a flat background. Note that the PSF is always energy-dependent. Due to the high, sometimes even dominating, backgrounds in gamma-ray astronomy, it is always difficult to recognize whether a photon comes from the pulsar itself or from the background. The background for Fermi-LAT (and other space-borne detectors) is mainly the diffuse Galactic gamma-ray emission (DGE). For Fermi-LAT it is described by the standard background file _gll_iem_v07.fits_[12]. It is taken into account in the Fermi PTA data analysis to calculate the _weight_ of the photons: in gamma PTA, a _weight_ is assigned to each photon to quantify the probability that the photon comes from the pulsar rather than from the background. For IACTs, however, in addition to the DGE, there are unavoidable contaminations from cosmic-ray (CR) protons and electrons, which are also detected by IACTs. In the energy range we are interested in in this work (\(1-10\) GeV), CR electrons cannot be detected by IACTs due to geomagnetic cutoff effects. As calculated in [13], the background from CR protons can also be neglected in this energy range due to the much lower trigger rate at low energy. In this case, the dominating background in IACTs would also be the DGE, and the analysis for IACTs would be identical to that of Fermi-LAT and the FSTs. However, we cannot exclude the possibility that an IACT could suffer further CR backgrounds in configurations different from the one used in [13]. As a conservative check, in this work we estimated the CR proton background based on the results in Aharonian _et al._[9]: the background for 1-10 GeV gamma-rays mainly comes from protons with energies of 10-100 GeV. Considering also a gamma/p separation power of about 1/10, the flux of the background from CR protons can be written as \(F_{\rm bkg}=2\times 10^{-7}~{}(E/1~{\rm GeV})^{-2.7}~{\rm MeV^{-1}\,sr^{-1}\,cm^{-2}\,s^{-1}}\), which is at least one order of magnitude larger than the DGE in the Galactic plane in the same energy range. As a result, we consider only the background induced by CR protons in the calculation for IACTs. We also assume it is spatially uniform due to the homogeneity of the CR proton arrival directions. In addition to the primary electrons and CRs, the secondary electrons produced in the interactions of primary CRs with the atmosphere could be another background. But these secondary electrons are part of the hadronic shower induced by the primary CR protons, which is already included in the proton background and the gamma/p separation procedure discussed above. The arrival time of each photon can be translated into a phase with the _PINT_ software[14]; by accumulation, we can obtain the pulse profile. The profile can be described by a superposition of several Gaussian distributions, called the template function. In our simulation, we used the profile folded from 12.5 years of Fermi-LAT observation data. 
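Since the per-photon _weight_ plays a central role in what follows, here is a minimal numerical sketch of the idea, assuming a Gaussian PSF with the width of Eq. (2) and a spatially flat background; the flux values in the example call are invented placeholders.

```python
import numpy as np

def psf_sigma_deg(E):
    # angular scale of the PSF, Eq. (2): E in GeV, result in degrees
    return 0.8 * E ** (-0.4)

def weight(theta_deg, E, f_psr, f_bkg_per_sr):
    # probability that a photon at angular offset theta_deg from the pulsar,
    # with energy E, originates from the pulsar rather than the flat background
    sigma = np.radians(psf_sigma_deg(E))
    theta = np.radians(theta_deg)
    # 2-D Gaussian PSF in the small-angle approximation, normalized per steradian
    psf = np.exp(-0.5 * (theta / sigma) ** 2) / (2.0 * np.pi * sigma ** 2)
    s = f_psr * psf                # PSF-convolved pulsar flux at this offset
    return s / (s + f_bkg_per_sr)  # pulsar flux divided by total flux

# e.g. a 2 GeV photon detected 0.3 deg away from the pulsar (made-up fluxes):
print(weight(0.3, 2.0, f_psr=1e-9, f_bkg_per_sr=5e-9))
```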
The sampling of the arrival time of a photon consists of two parts: an integer multiple of the period of the pulsar, plus a phase (time) distributed according to the pulsar's pulse profile, which is described by the template functions for PSR J1231-1411 derived in the Fermi PTA [7]. The last step of the simulation is to calculate the _weight_ of each photon. We calculated the predicted photon flux from the pulsar by convolving the flux of the pulsar with the PSF at each position, as well as the flux from the background. We then calculated the _weight_ of each photon by dividing the photon flux from the pulsar by the total photon flux (pulsar plus background) at that position. Through the above steps, we simulated the energy, time (phase), position, and _weight_ of each incident photon, and we used them in the analysis with the gamma PTA pipelines. ## III Gamma PTA data analysis The log-likelihood function of a single pulsar is given by the unbinned (photon-by-photon) method[7]: \[\log\mathcal{L}=\sum_{i}\log\left[w_{i}f(\phi_{i})+(1-w_{i})\right]-\frac{1}{2}\beta^{T}C_{\rm tn}^{-1}\beta-\frac{1}{2}\log(|C_{\rm tn}|). \tag{4}\] Here, \(\phi\) is the photon phase for an individual pulsar, and \(f(\phi)\) is the pulse profile, defined by a sum over one (or many) Gaussian distributions \(g(\phi;\mu,\sigma)\) with mean \(\mu\) and variance \(\sigma\); each photon is assigned a _weight_ which characterizes its probability of originating from the pulsar or the background, as described earlier. The second part represents a Gaussian noise process of Fourier amplitudes \(\mathbf{\beta}\): \[\mathcal{L}_{\rm tn}\propto\frac{1}{\sqrt{|C_{\rm tn}|}}\exp\left(-\frac{1}{2}\beta^{T}C_{\rm tn}^{-1}\beta\right). \tag{5}\] To compare these results with radio PTAs, we assumed that both IACTs and FSTs will start observations in 2035, when such large instruments are likely to be put into operation. We then calculated the sensitivities as a function of observation duration. We considered the constraints for the single source PSR J1231-1411, which gave the best constraints for Fermi-LAT. For IACTs, we calculated the sensitivities both with and without the hypothetical CR background, assuming an effective exposure time of 100 hours every year. We found that the IACTs considered here have a sensitivity significantly better than the FSTs, even though we assumed that the FSTs have a 10 times larger effective area than Fermi-LAT, which is already rather optimistic. The gamma PTA with IACTs can surpass the Fermi-LAT sensitivity within a decade or so after the start of operation. We also compared these results with the recent NANOGrav 15-year Data Set result, which gave evidence of a GWB with an amplitude of \(2.6\times 10^{-15}\) at a reference frequency of 1 yr\({}^{-1}\)[15]. The Square Kilometre Array (SKA) can greatly enhance pulsar timing precision thanks to its unprecedented collecting area and bandwidth, and the levels expected to be reached by the SKA are about \(10^{-16}-10^{-17}\) at a reference frequency of 1 yr\({}^{-1}\)[16]. These results are shown in Fig. 2. Figure 1: The number of effective photons of 35 Fermi-LAT pulsars[7] measured in 12.5 years, compared with the expected number of photons from 5@5 observations of 100 h/yr over 12.5 years, at energies from 1 to 10 GeV. 
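As a toy illustration of the first (template) term of the log-likelihood in Eq. (4), the sketch below evaluates \(\sum_{i}\log[w_{i}f(\phi_{i})+(1-w_{i})]\) for a single-Gaussian-plus-constant template; the template parameters and the simulated phases and weights are invented placeholders, not the fitted template of PSR J1231-1411.

```python
import numpy as np

def template(phi, mu=0.5, sigma=0.05, peak_frac=0.6):
    # wrapped Gaussian plus constant, so f integrates to one over a cycle
    g = sum(np.exp(-0.5 * ((phi - mu + k) / sigma) ** 2) for k in (-1.0, 0.0, 1.0))
    g /= sigma * np.sqrt(2.0 * np.pi)
    return peak_frac * g + (1.0 - peak_frac)

def log_like(phases, weights):
    # first term of Eq. (4); low-weight photons contribute ~log(1) = 0
    return np.sum(np.log(weights * template(phases) + (1.0 - weights)))

rng = np.random.default_rng(0)
phases = rng.random(1000)                     # toy phases, uniform in [0, 1)
weights = rng.uniform(0.1, 0.9, size=1000)    # toy per-photon weights
print(log_like(phases, weights))
```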
For an ideal PTA, the signal-to-noise ratio grows proportionally to \(A_{gwb}^{2}\times t_{obs}^{\Gamma}\)[17]. Since \(\Gamma=13/3\) for an SMBH-generated GWB[18], the relation between \(A_{gwb}\) and the observation time \(t_{obs}\) is \(A_{gwb}\propto t_{obs}^{-13/6}\); here the dimensionless strain amplitude \(A_{gwb}\) incorporates the growth, masses, and merger rates of SMBHs, and \(\Gamma\) is the spectral index of the power spectral density of the GWB. We calculated the Fermi-LAT upper limit on \(A_{gwb}\) for different \(t_{obs}\) using the real Fermi-LAT data; the results are shown in Fig. 3. We also calculated the sensitivity as a function of time for IACTs, assuming an exposure of 100 hours per year. The results are shown in Fig. 4. We can see that the sensitivity gradually approaches the expectation for both Fermi-LAT and IACTs as the observation time increases. In order to quantify how strongly the background of IACTs influences the sensitivity of the GWB analysis, we also simulated data with different background levels, as Fig. 5 shows. From our calculated results, we found that the influence is relatively small when the background contributes less than 80% of the total photons, since the photons that come from the background have lower _weight_ and thus seldom affect the pulsar profile. At higher rates, however, the background weakens the sensitivity sharply; this may be due to the pulse profile of the pulsar being washed out by the background. At very high background ratios (\(>95\%\)), even fitting the profile of the pulsar fails. Figure 3: Change in the \(A_{gwb}\) limit for J1231-1411 with increasing observation time using Fermi-LAT data. The dashed line represents the relationship \(A_{gwb}\propto t_{obs}^{-13/6}\). Figure 2: Constraints on the GWB from radio and gamma-ray PTAs; the radio PTA data are from Fermi-LAT[7]. Assuming that both IACTs and FSTs start observations in 2035 **(note that the data points before 2045 are above the \(A_{gwb}\) range shown in this figure due to the steep rise of the sensitivity curve)**, the points in the right half show the results for about 7.5- and 12.5-year observations of J1231-1411. The solid line shows the Fermi-LAT result, in which the sensitivity is proportional to \(t_{obs}^{-13/6}\). The dot-dashed line shows the results of IACTs with and without background, as in Fig. 4. The green line shows the level that the SKA can reach when it goes into operation in 2028. The orange star is the NANOGrav 15-year Data Set result. 
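For orientation, the \(t_{obs}^{-13/6}\) extrapolation used in Figs. 2 and 3 can be evaluated with a few lines; the 12.5-yr anchor value below is an arbitrary placeholder, not a measured limit.

```python
# Ideal-PTA scaling of the GWB amplitude limit with observation time,
# A_gwb ∝ t_obs^(-13/6), valid for a Gamma = 13/3 (SMBH-binary) spectrum.
def a_gwb_limit(t_obs_yr, t_ref_yr=12.5, a_ref=1e-14):
    # a_ref is a placeholder anchor limit at t_ref_yr, not a measured value
    return a_ref * (t_obs_yr / t_ref_yr) ** (-13.0 / 6.0)

for t in (12.5, 15.0, 20.0, 25.0):
    print(f"t_obs = {t:5.1f} yr  ->  A_gwb limit ~ {a_gwb_limit(t):.2e}")
```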
This is not only due to the limitation of existing instruments but also on account of the short time length of gamma PTA observations. The sensitivity of gamma PTA is hard to compare with radio PTA in the short term. But as we had discussed in this letter, gamma PTA shows great potential to match radio PTA in a decade, especially with future detectors. Beyond that, due to the much easier data reduction procedure and less impact from ISM plasma for gamma PTA, we believe that the cross-check from multi-wavelength observation is also necessary and important to limit the GWB and other physical progress. Looking ahead, large gamma-ray instruments have been planned or are already under construction, such as VLAST[19] and Cherenkov Telescope Array (CTA) [20]. There is also a plan to build IACTs on the site of the Large High Altitude Air Shower Observatory (LHAASO) [21], [22]. But for low threshold IACTs the LHAASO site may be not good enough because of the limited weather conditions, other better sites for optical astronomy, such as Lenghu [23] are more suitable for such an instrument. The gamma PTA is a supplementation and cross-checking tool for radio PTA. With the continued development of new detection tools, we expect further progress in understanding these elusive phenomena.
In this work, we investigate the potential of gamma-ray pulsar timing arrays (PTAs) for the gravitational wave background (GWB) using future gamma-ray detectors with larger effective areas. We consider both spaceborne detectors and ground-based imaging air Cherenkov telescope arrays (IACTs). We simulated the photons detected from pulsars using the response of hypothetical detectors, taking backgrounds into account, and analyzed the sensitivities. As a result, thanks to the increased statistics, a PTA using IACTs is shown to improve the performance markedly compared with a PTA using Fermi-LAT data.
2301.00261
Cluster radioactivity in trans-lead region: A systematic study with modified empirical formulas
The possibility of cluster emission from the trans-lead (86$\leq$Z$\leq$96) region of the periodic chart has been explored comprehensively by employing a few empirical formulas which are modified by adding angular-momentum-dependent ($l$), isospin-dependent ($I=(N-Z)/A$), or both terms for the calculation of cluster decay half-lives. These modified versions of the formulas are found to have a smaller ${\chi}^2$ per degree of freedom and root-mean-square error, in addition to smaller values of some other statistical parameters, when compared to their corresponding old versions on the available 61 experimental data of cluster radioactivity. By applying the modified version of the formula given by Balasubramaniam \textit{et al.} [PRC 70 (2004) 017301], the most accurate formula among these, half-lives of several clusters, i.e. isotopes of Be, B, C, N, O, F, Ne, Na, Mg, and Si, are predicted systematically for several isotopes in the trans-lead region. The contest of cluster emission with $\alpha$-decay has been investigated in the form of the branching ratio, which brings several potential cluster emissions into the probable decay modes of these nuclei. The accurate prediction of the half-lives of such clusters is expected to be crucial for future experimental observations where $\alpha$-decay is observed dominantly.
A. Jain, P. K. Sharma, S. K. Jain, J. K. Deegwal, G. Saxena
2022-12-31T18:03:03
http://arxiv.org/abs/2301.00261v1
# Cluster radioactivity in trans-lead region: A systematic study with modified empirical formulas ###### Abstract The possibility of cluster emission from the trans-lead (86\(\leq\)Z\(\leq\)96) region of the periodic chart has been explored comprehensively by employing a few empirical formulas which are modified by adding angular momentum (\(l\)) or isospin-dependent (\(I=(N-Z)/A\)) terms, or both, for the calculation of cluster decay half-lives. These modified versions of the formulas are found to have a smaller \(\chi^{2}\) per degree of freedom and root-mean-square error, in addition to smaller values of several other statistical parameters, when compared to their corresponding older versions on the 61 available experimental data of cluster radioactivity. By applying the modified version of the formula given by Balasubramaniam _et al._ [PRC 70 (2004) 017301], the most accurate formula among these, the half-lives of several clusters, i.e. isotopes of Be, B, C, N, O, F, Ne, Na, Mg, and Si, are predicted systematically for several isotopes in the trans-lead region. The competition of cluster emission with \(\alpha\)-decay has been investigated in the form of the branching ratio, which brings several potential cluster emissions into the probable decay modes of these nuclei. The accurate prediction of the half-lives of such clusters is expected to be crucial for future experimental observations where \(\alpha\)-decay is observed dominantly. keywords: Cluster decay, Trans-lead Nuclei, Empirical formulas, \(\alpha\)-decay. + Footnote †: journal: Nuclear Physics A ## 1 Introduction In 1980, Sandulescu _et al._ [1] first predicted a new type of radioactivity, cluster radioactivity, on the basis of fragmentation theory, in which fusion and fission reaction valleys are generated by the shell-closure effect [2]. Later, in 1984, Rose and Jones experimentally confirmed the existence of this new type of exotic decay [3], in which \({}^{14}\)C is emitted from the actinide parent nucleus \({}^{223}\)Ra, forming the stable doubly magic (Z=82, N=126) nucleus \({}^{208}\)Pb. To date, many cluster decays, from light to heavy clusters (\({}^{14}\)C to \({}^{32}\)Si), have been observed from various trans-lead nuclei (Fr, Ra, Ac, Pa, Th, U, Pu, etc.), with the corresponding daughter nuclei being magic (Z=82) or neighboring ones (Z=80, 81, and 83), which indicates the importance of shell and pairing effects in cluster radioactivity [4; 5; 6]. These clusters are observed with long half-lives (T\({}_{1/2}\)) in the range 10\({}^{11}\)-10\({}^{30}\) sec. [7]. Theoretically, the half-lives of cluster emission are predicted using various models such as the unified fission model (UFM) [8], the generalised liquid drop model (GLDM) [9], the super-asymmetric fission model (SAFM) [10], the preformed cluster model (PCM) [11], etc. Cluster decay half-lives are also calculated by using various semi-empirical formulas, such as (i) the empirical relation suggested by Balasubramaniam _et al._ (BKAG formula) for cluster decay half-lives with only three parameters [12], and (ii) the empirical relation suggested by Ren _et al._ (RenA formula) using a microscopic density-dependent cluster model with the renormalized M3Y nucleon-nucleon interaction [13].
Concomitantly, based on experimental observations of the characteristics of exotic cluster decays, a scaling law was proposed by Horoi [14], in which the logarithmic half-life is proportional to the scaling variable \((Z_{c}Z_{d})^{0.6}/\sqrt{Q}\) as well as to \(\sqrt{\mu}\), where \(\mu\) is the reduced mass of the cluster and daughter nuclei. This was followed by another semi-empirical formula (NRDX), proposed by Ni _et al._ [15], based on the WKB barrier penetration probability with some approximations. In 2009, Qi _et al._ introduced the universal decay law (UDL) [16], which originates from the mechanism of charged-particle decay and R-matrix theory and applies to all kinds of cluster decays, including monopole radioactive decays. Poenaru _et al._ [17] plotted a universal curve (UNIV) which turns out to be a single straight line for both cluster decay and \(\alpha\)-decay. All the above-mentioned formulas have been fitted to the available experimental data without considering the dependence of the half-lives on the angular momentum carried away by the cluster, which is expected to be as crucial as in \(\alpha\)-decay [18] for describing all sets of experimental data. The importance of angular momentum for \(\alpha\)-decay half-lives has already been established in a few of our recent works [19; 20], which has motivated us to probe a similar dependence of the cluster decay half-lives. In addition, the isospin (\(I=(N-Z)/A\)) of the parent nucleus is found to be pivotal for \(\alpha\)-decay in heavy and superheavy nuclei [20; 21; 22; 23; 24; 25], pointing towards its significance for cluster decay as well. Considering these two effects together, the modified UDL formula (new UDL) by Soylu and Qi [26] and the improved NRDX formula (named the improved unified formula (IUF)) by Ismail _et al._ [27] have recently shown that angular momentum and isospin are indeed crucial quantities in determining cluster decay half-lives. The importance of the isospin effect has also been probed with an improved semi-empirical formula (ISEM) for cluster radioactivity in Ref. [28]. In this article, we have modified the BKAG [12], RenA [13], Horoi [14], NRDX [15], UDL [16], and UNIV [17] formulas by investigating the effect of centrifugal-barrier and isospin terms. These six modified formulas are fitted using 61 experimental cluster decay data [7; 9; 26; 29]. The comparison of the RMSE (root-mean-square error) between the older and modified versions manifestly shows the significance of including angular-momentum and isospin-dependent terms for cluster emission. Furthermore, one of the modified formulas, the MBKAG formula (which emerges with the least RMSE), is employed to calculate the cluster decay half-lives for various cluster emissions, namely isotopes of Be, B, C, N, O, F, Ne, Na, Mg, and Si, in the trans-lead region (86\(\leq\)Z\(\leq\)96). For these theoretical estimates, the required disintegration energies (\(Q\)-values) are tested against the 121 available experimental \(Q\)-values [7; 9; 26; 29] using various mass models [30; 31; 32; 33]. Consequently, various potential clusters are proposed in the trans-lead region, together with an accurate estimation of their half-lives. ## 2 Formalism In 2004, Balasubramaniam _et al._ fitted a formula (BKAG) [12] for cluster decay. In the course of that year, Ren _et al._ established a formula [13] that can be treated as a natural extension of the Geiger-Nuttall law [34] as well as of the Viola-Seaborg formula [35] from simple \(\alpha\)-decay to complex cluster radioactivity.
In the same year, Horoi also suggested an independent model for \(\alpha\)-decay, which was generalized to cluster emission [14]. In 2008, Ni _et al._ established the NRDX semi-empirical formula for the calculation of the half-lives of \(\alpha\) and cluster decays [15]. Afterwards, Qi _et al._ introduced the universal decay law (UDL) [16], which is widely used for the estimation of cluster radioactivity half-lives. In 2011, Poenaru _et al._ fitted the UNIV formula [17], representing \(\alpha\)-decay and cluster decay by a single line of the universal curve. The original versions of these formulas are given below: \[log_{10}T_{1/2}^{BKAG}(sec.)=[aA_{c}(A_{d}-A_{c})/A+bZ_{c}(Z_{d}-Z_{c})/Z]Q^{-1/2}+c \tag{1}\] \[log_{10}T_{1/2}^{RenA}(sec.)=aZ_{d}Z_{c}Q^{-1/2}+bZ_{d}Z_{c}+c \tag{2}\] \[log_{10}T_{1/2}^{Horoi}(sec.)=(a\sqrt{\mu}+b)[(Z_{c}Z_{d})^{0.607}Q^{-1/2}-7]+(c\sqrt{\mu}+d) \tag{3}\] \[log_{10}T_{1/2}^{NRDX}(sec.)=aZ_{c}Z_{d}\sqrt{\frac{\mu}{Q}}+b\sqrt{\mu}(Z_{c}Z_{d})^{1/2}+c \tag{4}\] \[log_{10}T_{1/2}^{UDL}(sec.) = aZ_{c}Z_{d}\sqrt{\frac{\mu}{Q}}+b[\mu Z_{c}Z_{d}({A_{c}}^{1/3}+{A_{d}}^{1/3})]^{1/2}+c \tag{5}\] \[log_{10}T_{1/2}^{UNIV}(sec.) = -logP+log_{10}S-[log_{10}(ln2)-log_{10}v] \tag{6}\] In the above formulas \(A_{d}\), \(A_{c}\) and \(Z_{d}\), \(Z_{c}\) denote the mass numbers and atomic numbers of the daughter nucleus and cluster, respectively. \(Q\) (in MeV) is the energy released in cluster decay, and \(\mu=A_{d}A_{c}/(A_{d}+A_{c})\) is the reduced mass. In Eqn. (6), \(-logP\) is given by \(a(\mu Z_{c}Z_{d}R_{b})^{1/2}[arccos\sqrt{r}-\sqrt{r(1-r)}]\), \(r=R_{a}/R_{b}\), with \(R_{a}=1.2249({A_{c}}^{1/3}+{A_{d}}^{1/3})\) fm and \(R_{b}=1.43998Z_{d}Z_{c}/Q\) fm; the logarithmic form of the preformation factor is given by \(log_{10}S=-b(A_{c}-1)\), and \([log_{10}(ln2)-log_{10}v]=d\) is an additive constant. The values of the fitting coefficients a, b, c, and d of the above formulas can be found in their respective Refs. [12-17]. In view of the importance of the angular momentum (\(l\)) mentioned above, in the present work we first modify these formulas by adding an \(l\)-dependent term (\(l(l+1)\)), where \(l\) is the minimum angular momentum of the cluster, obtained from the following selection rules: \[l=\left\{\begin{array}{ll}\triangle_{j}&\mbox{for even $\triangle_{j}$ and $\pi_{i}=\pi_{f}$}\\ \triangle_{j}+1&\mbox{for even $\triangle_{j}$ and $\pi_{i}\neq\pi_{f}$}\\ \triangle_{j}&\mbox{for odd $\triangle_{j}$ and $\pi_{i}\neq\pi_{f}$}\\ \triangle_{j}+1&\mbox{for odd $\triangle_{j}$ and $\pi_{i}=\pi_{f}$}\end{array}\right. \tag{7}\] here, \(\triangle_{j}=|j_{p}-j_{d}-j_{c}|\), where j\({}_{p}\) and \(\pi_{i}\) are the spin and parity of the parent nucleus, respectively, j\({}_{d}\) is the spin of the daughter nucleus, and \(\pi_{f}=(\pi_{d})(\pi_{c})\), in which \(\pi_{d}\) and \(\pi_{c}\) are the parities of the daughter nucleus and cluster, respectively. For the purpose of fitting, the spin and parity data are taken from NUBASE2020 [36]. In the next step, the formulas are also modified by adding an isospin-dependent term (\(I(I+1)\)), with \(I=(N-Z)/A\).
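As an illustration of the selection rules of Eqn. (7), the following Python sketch (ours, not part of the original fitting) evaluates the minimum cluster angular momentum; the example uses the NUBASE spin-parities 5/2\({}^{-}\), 1/2\({}^{+}\), and 0\({}^{+}\) for \({}^{221}\)Fr, \({}^{207}\)Tl, and \({}^{14}\)C, and reproduces \(l=3\) for that decay.

```python
def min_cluster_l(j_p, pi_p, j_d, pi_d, j_c, pi_c):
    """Minimum angular momentum carried by the cluster, Eq. (7).

    j_*  : spins of parent, daughter and cluster (assumed to combine to an
           integer Delta_j for the decays considered here)
    pi_* : parities, +1 or -1
    """
    delta_j = int(abs(j_p - j_d - j_c))
    pi_f = pi_d * pi_c                      # parity of the final state
    parity_conserved = (pi_p == pi_f)
    if delta_j % 2 == 0:                    # even Delta_j
        return delta_j if parity_conserved else delta_j + 1
    # odd Delta_j: the rule is inverted
    return delta_j + 1 if parity_conserved else delta_j

# 221Fr (5/2-) -> 207Tl (1/2+) + 14C (0+): Delta_j = 2, parity changes -> l = 3
print(min_cluster_l(2.5, -1, 0.5, +1, 0.0, +1))  # prints 3
```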
The accuracy of, and the need for, the additional terms in the modified formulas are checked through the \(\chi^{2}\) per degree of freedom (\(\chi^{2}\)) and the RMSE of the various versions, which are listed in Table 1 and calculated using the following relations: \[\chi^{2}=\frac{1}{N_{nucl}-N_{p}}\sum_{i=1}^{N_{nucl}}\left(log\frac{T_{Th.}^{i}}{T_{Exp.}^{i}}\right)^{2} \tag{8}\] \[\text{RMSE}=\sqrt{\frac{1}{N_{nucl}}\sum_{i=1}^{N_{nucl}}\left(log\frac{T_{Th.}^{i}}{T_{Exp.}^{i}}\right)^{2}} \tag{9}\] where \(N_{nucl}\) is the total number of nuclei (data points) and \(N_{p}\) is the number of fitted coefficients; \(T_{Exp.}^{i}\) and \(T_{Th.}^{i}\) are the experimental and theoretical half-lives for the \(i^{th}\) data point, respectively. The investigation of the added terms leads to the following conclusions from Table 1: (i) the addition of the \(l\)-dependent term, which reflects the hindrance effect of the centrifugal barrier, significantly reduces \(\chi^{2}\) and RMSE for all six formulas considered; (ii) the addition of the \(I\)-dependent term further reduces \(\chi^{2}\) and RMSE only for the BKAG and RenA formulas. As a result, the final versions of the modified formulas adopted in the present article are given by: \[log_{10}T_{1/2}^{MBKAG}(sec.)=[aA_{c}(A_{d}-A_{c})/A+bZ_{c}(Z_{d}-Z_{c})/Z]Q^{-1/2}+cl(l+1)+dI(I+1)+e \tag{10}\] \[log_{10}T_{1/2}^{MRenA}(sec.)=aZ_{d}Z_{c}Q^{-1/2}+bZ_{d}Z_{c}+cl(l+1)+dI(I+1)+e \tag{11}\] \[log_{10}T_{1/2}^{MHoroi}(sec.)=(a\sqrt{\mu}+b)[(Z_{c}Z_{d})^{0.607}Q^{-1/2}-7]+(c\sqrt{\mu}+d)+el(l+1) \tag{12}\] \[log_{10}T_{1/2}^{MNRDX}(sec.) = aZ_{c}Z_{d}\sqrt{\frac{\mu}{Q}}+b\sqrt{\mu}(Z_{c}Z_{d})^{1/2}+cl(l+1)+d \tag{13}\] \[log_{10}T_{1/2}^{MUDL}(sec.) = aZ_{c}Z_{d}\sqrt{\frac{\mu}{Q}}+b[\mu Z_{c}Z_{d}({A_{c}}^{1/3}+{A_{d}}^{1/3})]^{1/2}+cl(l+1)+d \tag{14}\] \[log_{10}T_{1/2}^{MUNIV}(sec.) = -logP-log_{10}S+cl(l+1)+d \tag{15}\] The coefficients a, b, c, d, and e of these modified formulas are given in Table 2. ## 3 Results and discussions To ascertain the impact of the added terms on the accuracy of the estimated cluster decay half-lives, we have plotted the ratio of decay widths \(W_{Exp.}/W_{Th.}=log_{10}T_{1/2}^{Th.}/log_{10}T_{1/2}^{Exp.}\) as a function of A for our six modified formulas (MBKAG, MRenA, MHoroi, MNRDX, MUDL, and MUNIV) along with their original versions in Fig. 1. \begin{table} \begin{tabular}{c|c c c c c c c c c c c c} \hline Formula & \multicolumn{2}{c}{BKAG} & \multicolumn{2}{c}{RenA} & \multicolumn{2}{c}{Horoi} & \multicolumn{2}{c}{NRDX} & \multicolumn{2}{c}{UDL} & \multicolumn{2}{c}{UNIV} \\ \cline{2-13} & \(\chi^{2}\) & RMSE & \(\chi^{2}\) & RMSE & \(\chi^{2}\) & RMSE & \(\chi^{2}\) & RMSE & \(\chi^{2}\) & RMSE & \(\chi^{2}\) & RMSE \\ \hline Original & 1.01 & 0.98 & 1.10 & 0.95 & 1.45 & 1.16 & 0.85 & 0.90 & 1.88 & 1.34 & 0.87 & 0.91 \\ With \(l\) term only & 0.66 & 0.78 & 0.92 & 0.93 & 0.76 & 0.84 & 0.66 & 0.78 & 0.51 & 0.69 & 0.65 & 0.78 \\ With \(l\) and I terms & 0.44 & 0.63 & 0.68 & 0.79 & 0.77 & 0.83 & 0.66 & 0.77 & 0.49 & 0.67 & 0.67 & 0.77 \\ \hline \end{tabular} \end{table} Table 1: The \(\chi^{2}\) and RMSE of the various versions of the BKAG, RenA, Horoi, NRDX, UDL, and UNIV formulas for the 61 cluster decay data.
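The two measures of Eqns. (8) and (9) can be evaluated in a few lines; the following is a minimal Python sketch assuming the inputs are arrays of log\({}_{10}\) half-lives.

```python
import numpy as np

def chi2_per_dof(log_t_th, log_t_exp, n_params):
    """Chi-squared per degree of freedom, Eq. (8)."""
    r = np.asarray(log_t_th) - np.asarray(log_t_exp)  # log(T_th / T_exp)
    return np.sum(r**2) / (len(r) - n_params)

def rmse(log_t_th, log_t_exp):
    """Root-mean-square error, Eq. (9)."""
    r = np.asarray(log_t_th) - np.asarray(log_t_exp)
    return np.sqrt(np.mean(r**2))
```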
Most of the points corresponding to our modified formulas (red diamonds) lie within half an order of magnitude, while the points corresponding to the original formulas (blue triangles) are more widely scattered, which indicates the improvement in the estimation of cluster decay half-lives after the addition of the angular momentum (\(l\)) or isospin-dependent (\(I=(N-Z)/A\)) terms, or both. For the comparison of our modified formulas with a few recently fitted/modified formulas [26; 27; 28] for cluster decay half-lives, we have calculated some additional statistical parameters, namely the standard deviation (\(\sigma\)), uncertainty (\(u\)), average deviation factor (\(\overline{x}\)), and mean deviation (\(\overline{\delta}\)), for the 61 experimentally known cluster decay half-lives [7; 9; 26; 29]. All these statistical parameters of the formulas are given in Table 3. \begin{table} \begin{tabular}{l|c c c c c} \hline Formula & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) \\ \hline MBKAG & 6.5279 & 89.2684 & 0.0798 & 70.0439 & -100.4122 \\ MRenA & 1.2947 & -0.0423 & 0.0771 & 89.9255 & -101.5076 \\ MHoroi & 10.1451 & -23.1954 & 4.4835 & -10.9094 & 0.0567 \\ MNRDX & 0.3590 & -1.0063 & 0.0634 & -18.8444 & - \\ MUDL & 0.3564 & -0.3199 & 0.0737 & -24.8301 & - \\ MUNIV & 0.2369 & 0.6104 & 0.0648 & -23.7267 & - \\ \hline \end{tabular} \end{table} Table 2: The coefficients of the MBKAG, MRenA, MHoroi, MNRDX, MUDL, and MUNIV formulas proposed in the present work. Figure 1: (Colour online) Ratio of experimental to theoretical decay widths \(W_{Exp.}/W_{Th.}=log_{10}T_{1/2}^{Th.}/log_{10}T_{1/2}^{Exp.}\) for the comparison of our six modified formulas with their respective original versions, using the 61 cluster emission data. The RMSE values are indicated next to the name of each formula. These statistical parameters are defined as: \[\sigma=\sqrt{\frac{1}{N_{nucl}-1}\sum_{i=1}^{N_{nucl}}\left(log\frac{T_{Th.}^{i}}{T_{Exp.}^{i}}\right)^{2}} \tag{16}\] \[u=\sqrt{\frac{1}{N_{nucl}(N_{nucl}-1)}\sum_{i=1}^{N_{nucl}}\left(log\frac{T_{Th.}^{i}}{T_{Exp.}^{i}}-\mu\right)^{2}} \tag{17}\] \[\overline{x}=\frac{1}{N_{nucl}}\sum_{i=1}^{N_{nucl}}\left(\frac{|logT_{Exp.}^{i}-logT_{Th.}^{i}|}{logT_{Exp.}^{i}}\right) \tag{18}\] \[\overline{\delta}=\frac{1}{N_{nucl}}\sum_{i=1}^{N_{nucl}}\left|log\frac{T_{Th.}^{i}}{T_{Exp.}^{i}}\right| \tag{19}\] The terms in the above equations are defined as in Eqns. (8) and (9); \(\mu\) in Eqn. (17) refers to the mean of the full data set. It is clear from Table 3 that the isospin (only for BKAG and RenA) and the angular momentum play a crucial role in improving the cluster decay formulas, leading to smaller statistical parameters \(\sigma\), \(u\), \(\overline{x}\), and \(\overline{\delta}\) for the modified formulas introduced in the present work, as compared with a few of the latest fitted/modified formulas (new UDL, IUF, and ISEF) for cluster decay. It is to be noted that, among all the modified formulas, the MBKAG formula renders the most accurate half-lives according to all the statistical parameters. Hence, the MBKAG formula can be employed to predict more precise cluster decay half-lives and probable decay modes. With this in view, the possibility of cluster emission from the experimentally known trans-lead (86\(\leq\)Z\(\leq\)96) isotopes is probed by considering daughter nuclei near the proton shell closure, i.e., the emission of a cluster is chosen in such a way that the proton number of the daughter nucleus \(Z_{d}\) is close to 82 (Pb). \begin{table} \begin{tabular}{l c c c c} \hline \hline Formula & \(\sigma\) & \(u\) & \(\overline{x}\) & \(\overline{\delta}\) \\ \hline MBKAG (Present Work) & 0.64 & 0.08 & 0.02 & 0.51 \\ MRenA (Present Work) & 0.80 & 0.10 & 0.02 & 0.62 \\ MHoroi (Present Work) & 0.84 & 0.11 & 0.03 & 0.66 \\ MNRDX (Present Work) & 0.79 & 0.10 & 0.02 & 0.60 \\ MUDL (Present Work) & 0.70 & 0.09 & 0.03 & 0.53 \\ MUNIV (Present Work) & 0.79 & 0.10 & 0.03 & 0.59 \\ New UDL [26] & 0.81 & 0.10 & 0.03 & 0.68 \\ IUF [27] & 0.84 & 0.11 & 0.03 & 0.64 \\ ISEF [28] & 0.93 & 0.12 & 0.04 & 0.76 \\ \hline \end{tabular} \end{table} Table 3: Comparison of the MBKAG, MRenA, MHoroi, MNRDX, MUDL, and MUNIV formulas with a few other formulas.
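As a worked example of Eqn. (10) with the Table 2 coefficients, the following Python sketch (ours; it assumes that A and Z in Eqn. (10) denote the mass and proton numbers of the parent nucleus) reproduces the MBKAG half-life of the \({}^{222}\)Ra \(\rightarrow\) \({}^{208}\)Pb + \({}^{14}\)C entry quoted below in Table 4.

```python
import math

# Coefficients of the MBKAG formula, Eq. (10), from Table 2.
A_COEF, B_COEF, C_COEF, D_COEF, E_COEF = 6.5279, 89.2684, 0.0798, 70.0439, -100.4122

def log10_t_mbkag(a_p, z_p, a_c, z_c, q, l):
    """log10 of the cluster-decay half-life (s) from the MBKAG formula.

    a_p, z_p : parent mass and proton numbers (taken here as the A and Z of Eq. (10))
    a_c, z_c : cluster mass and proton numbers
    q        : decay energy in MeV; l : minimum cluster angular momentum
    """
    a_d, z_d = a_p - a_c, z_p - z_c          # daughter nucleus
    iso = (a_p - 2 * z_p) / a_p              # isospin asymmetry I = (N - Z)/A
    structural = A_COEF * a_c * (a_d - a_c) / a_p + B_COEF * z_c * (z_d - z_c) / z_p
    return (structural / math.sqrt(q)
            + C_COEF * l * (l + 1)
            + D_COEF * iso * (iso + 1)
            + E_COEF)

# 222Ra -> 208Pb + 14C with Q = 33.05 MeV and l = 0:
print(f"{log10_t_mbkag(222, 88, 14, 6, 33.05, 0):.2f}")  # 11.46, cf. Table 4
```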
Before predicting possibilities of new cluster decays in trans-lead regions, we first calculate the half-lives of experimentally known cluster decay using the MBKAG formula which are listed in Table 4. We have taken only one parent-cluster combination out of 61 experimental data of cluster decay, to compare with \(\alpha\)-decay half-lives. For the \(\alpha\)-decay half-lives, we have used the NMHF (new modified Horoi formula) whose accuracy in determining the half-lives has already \begin{table} \begin{tabular}{l c c c c} \hline \hline Formula & \(\sigma\) & \(u\) & \(\overline{x}\) & \(\overline{\delta}\) \\ \hline MBKAG (Present Work) & 0.64 & 0.08 & 0.02 & 0.51 \\ MRENA(Present Work) & 0.80 & 0.10 & 0.02 & 0.62 \\ MHori (Present Work) & 0.84 & 0.11 & 0.03 & 0.66 \\ MNRDX (Present Work) & 0.79 & 0.10 & 0.02 & 0.60 \\ MUDL (Present Work) & 0.70 & 0.09 & 0.03 & 0.53 \\ MUNIV (Present Work) & 0.79 & 0.10 & 0.03 & 0.59 \\ New UDL [26] & 0.81 & 0.10 & 0.03 & 0.68 \\ IUF [27] & 0.84 & 0.11 & 0.03 & 0.64 \\ ISEF [28] & 0.93 & 0.12 & 0.04 & 0.76 \\ \hline \end{tabular} \end{table} Table 3: Comparison of MBKAG, MRENA, MHori, MNRDX, MUDL, and MUNIV formulas with few others formulas. \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline Parent & Daughter & Emitted & \(Q\) & \(Q_{\alpha}\) & \(l\) & \multicolumn{3}{c}{log\({}_{10}\)T\({}_{1/2}\)(sec.)} & BR\({}_{Exp.}\) & BR \\ nucleus & nucleus & cluster & (MeV) & (MeV) & & Exp. & MBKAG & NMHF & \\ & & & & & & & (Cluster) & (\(\alpha\)) & & \\ \hline \({}^{221}\)Fr & \({}^{207}\)Tl & \({}^{14}\)C & 31.28 & 6.46 & 3 & 14.52 & 15.44 & 2.96 & -11.56 & -12.48 \\ \({}^{221}\)Ra & \({}^{207}\)Pb & \({}^{14}\)C & 32.39 & 6.88 & 3 & 13.39 & 13.01 & 1.74 & -11.65 & -11.27 \\ \({}^{222}\)Ra & \({}^{208}\)Pb & \({}^{14}\)C & 33.05 & 6.68 & 0 & 11.22 & 11.46 & 2.32 & -8.90 & -9.14 \\ \({}^{223}\)Ra & \({}^{209}\)Pb & \({}^{14}\)C & 31.85 & 5.98 & 4 & 15.25 & 15.18 & 5.17 & -10.08 & -10.01 \\ \({}^{223}\)Ac & \({}^{209}\)Bi & \({}^{14}\)C & 33.06 & 6.78 & 2 & 12.60 & 11.54 & 2.38 & -10.22 & -9.16 \\ \({}^{223}\)Ac & \({}^{208}\)Pb & \({}^{15}\)N & 39.47 & 6.78 & 2 & 14.76 & 14.36 & 2.38 & -12.38 & -11.98 \\ \({}^{224}\)Ra & \({}^{210}\)Pb & \({}^{14}\)C & 30.54 & 5.79 & 0 & 15.90 & 15.99 & 5.87 & -10.03 & -10.12 \\ \({}^{225}\)Ac & \({}^{211}\)Bi & \({}^{14}\)C & 30.48 & 5.94 & 4 & 17.16 & 17.30 & 5.70 & -11.46 & -11.60 \\ \({}^{226}\)Ra & \({}^{212}\)Pb & \({}^{14}\)C & 28.21 & 4.87 & 0 & 21.19 & 20.68 & 10.52 & -10.67 & -10.16 \\ \({}^{226}\)Th & \({}^{212}\)Po & \({}^{14}\)C & 30.67 & 6.45 & 0 & 15.30 & 15.02 & 3.79 & -11.51 & -11.24 \\ \({}^{228}\)Th & \({}^{208}\)Pb & \({}^{20}\)O & 44.72 & 5.52 & 0 & 20.72 & 21.34 & 7.82 & -12.90 & -13.52 \\ \({}^{230}\)Th & \({}^{206}\)Hg & \({}^{24}\)Ne & 57.78 & 4.77 & 0 & 24.64 & 25.78 & 11.91 & -12.73 & -13.87 \\ \({}^{230}\)U & \({}^{208}\)Pb & \({}^{22}\)Ne & 61.40 & 5.99 & 0 & 19.57 & 20.38 & 6.32 & -13.25 & -14.06 \\ \({}^{231}\)Pa & \({}^{207}\)Tl & \({}^{24}\)Ne & 60.42 & 5.15 & 1 & 23.23 & 23.33 & 10.11 & -13.12 & -13.22 \\ \({}^{232}\)Th & \({}^{208}\)Hg & \({}^{24}\)Ne & 55.62 & 4.08 & 0 & 29.20 & 28.56 & 16.63 & -12.57 & -11.94 \\ \({}^{232}\)Th & \({}^{206}\)Hg & \({}^{26}\)Ne & 55.97 & 4.08 & 0 & 29.20 & 29.21 & 16.63 & -12.57 & -12.59 \\ \({}^{232}\)Th & \({}^{208}\)Pb & \({}^{24}\)Ne & 62.31 & 5.41 & 0 & 21.06 & 21.32 & 9.08 & -11.98 & -12.24 \\ \({}^{232}\)U & \({}^{204}\)Hg & \({}^{28}\)Mg & 74.32 & 5.41 & 0 & 22.26 & 25.01 & 9.08 & -13.18 & -15.93 \\ \({}^{233}\)U & \({}^{209}\)Pb & 
\({}^{24}\)Ne & 60.50 & 4.91 & 2 & 24.82 & 23.71 & 11.86 & -12.96 & -11.85 \\ \({}^{233}\)U & \({}^{208}\)Pb & \({}^{25}\)Ne & 60.75 & 4.91 & 2 & 24.82 & 23.97 & 11.86 & -12.96 & -12.12 \\ \({}^{233}\)U & \({}^{205}\)Hg & \({}^{28}\)Mg & 74.24 & 4.91 & 3 & 27.59 & 26.38 & 11.86 & -15.73 & -14.53 \\ \({}^{234}\)U & \({}^{210}\)Pb & \({}^{24}\)Ne & 58.84 & 4.86 & 0 & 25.88 & 25.06 & 12.19 & -13.69 & -12.87 \\ \({}^{234}\)U & \({}^{208}\)Pb & \({}^{26}\)Ne & 59.47 & 4.86 & 0 & 25.88 & 25.46 & 12.19 & -13.69 & -13.27 \\ \({}^{234}\)U & \({}^{206}\)Hg & \({}^{28}\)Mg & 74.13 & 4.86 & 0 & 25.14 & 25.86 & 12.19 & -12.95 & -13.67 \\ \({}^{235}\)U & \({}^{211}\)Pb & \({}^{24}\)Ne & 57.36 & 4.68 & 1 & 27.42 & 26.95 & 13.37 & -14.05 & -13.58 \\ \({}^{235}\)U & \({}^{210}\)Pb & \({}^{25}\)Ne & 57.83 & 4.68 & 3 & 27.42 & 27.81 & 13.37 & -14.05 & -14.43 \\ \({}^{235}\)U & \({}^{207}\)Hg & \({}^{28}\)Mg & 72.20 & 4.68 & 1 & 28.09 & 27.81 & 13.37 & -14.72 & -14.44 \\ \({}^{235}\)U & \({}^{206}\)Hg & \({}^{29}\)Mg & 72.61 & 4.68 & 3 & 28.09 & 28.70 & 13.37 & -14.72 & -15.32 \\ \({}^{236}\)U & \({}^{212}\)Pb & \({}^{24}\)Ne & 55.96 & 4.57 & 0 & 25.90 & 28.50 & 14.04 & -11.86 & -14.46 \\ \({}^{236}\)U & \({}^{210}\)Pb & \({}^{26}\)Ne & 56.75 & 4.57 & 0 & 25.90 & 28.73 & 14.04 & -11.86 & -14.69 \\ \({}^{236}\)U & \({}^{208}\)Hg & \({}^{28}\)Mg & 71.69 & 4.57 & 0 & 27.58 & 28.40 & 14.04 & -13.54 & -14.36 \\ \({}^{236}\)U & \({}^{206}\)Hg & \({}^{30} been demonstrated in Ref. [20]. The first, second, and third columns of Table 4 show the parent, daughter, and cluster nuclei, respectively. Next two columns represent the disintegration energies of cluster decay and \(\alpha\)-decay taken from Refs. [7; 9; 26; 29] and from AME2020 [37], respectively. The sixth column lists angular momentum taken away by cluster particle after emission which is calculated by using selection rules explained in the Eqn. (7). We have calculated logarithmic half-lives of cluster decay (using Eqn. (10)), tabulated them in the eighth column, and compared these results with the experimental results (presented in the seventh column). It is clear from the Table 4 that calculated half-lives of cluster emission by using the MBKAG formula (present work) are very close to experimental results. Branching ratio (BR) which quantifies comparison between cluster decay to the \(\alpha\)-decay and is defined as the ratio of \(\alpha\)-decay half-life (listed in the ninth column) to the cluster decay half-life as below: \[BR=log_{10}b_{c}=log_{10}(\lambda_{c}/\lambda_{\alpha})=log_{10}(T_{\alpha}/T_{ c}) \tag{20}\] where, \(\lambda_{\alpha}\) and \(\lambda_{c}\) are referred as the decay constants of \(\alpha\)-decay and cluster emission, respectively. The calculated branching ratios are shown in the last column which are indeed close to experimental branching ratios [7; 9; 26; 29] (presented in the second last column). In fact, an excellent match of half-lives of almost all mentioned clusters in Table 4 validates the pertinence of MBKAG formula. Furthermore, one can note that the experimental cluster decay half-life goes maximum nearly upto \(10^{30}\) sec., therefore, it can be reasoned out that the clusters with a half-life less than \(10^{30}\) sec. seemingly be of experimental interest. In the next step of our study, we have utilized the degree of accuracy of MBKAG formula, as exhibited in Table 4, to predict the logarithmic half-lives of unknown cluster emissions in the trans-lead region. 
For this estimation, the \(Q\)-values are calculated from the following relation: \[Q(MeV)=B.E.(d)+B.E.(c)-B.E.(p)+k[Z_{p}^{\epsilon}-Z_{d}^{\epsilon}] \tag{21}\] where the term \(k[Z_{p}^{\epsilon}-Z_{d}^{\epsilon}]\) represents the screening effect caused by the electrons surrounding the nuclei [38], with k=8.7 eV [8.7 \(\times\)\(10^{-6}\)MeV] and \(\epsilon\)=2.517 for proton number Z \(\geq\) 60, and k=13.6 eV [13.6 \(\times\)\(10^{-6}\)MeV] and \(\epsilon\)=2.408 for Z \(<\) 60, as deduced from the data of Huang _et al._ [39]. For an accurate prediction of the theoretical \(Q\)-values, we have selected the most effective and reliable treatment among various theoretical approaches, viz. relativistic mean-field theory (RMF) [32; 40; 41; 42; 43; 44], the Finite Range Droplet Model (FRDM) [31], the nonrelativistic Skyrme Hartree-Fock-Bogoliubov approach (HFB) [33], and the Weizsacker-Skyrme mass model (WS4) [30]. For these approaches, we have calculated the RMSE, listed in Table 5, with respect to the 121 known \(Q\)-values related to cluster emission [7; 9; 26; 29]. Table 5 establishes that the WS4 mass model provides excellent agreement, with the minimum RMSE among all the considered theoretical approaches, and hence justifies calculating the \(Q\)-values for cluster emission with the binding energies (of the daughter (d), cluster (c), and parent (p) nuclei) taken from this mass model [30]. \begin{table} \begin{tabular}{l c} \hline \hline Theory & RMSE \\ \hline WS4 & 0.43 \\ FRDM & 0.78 \\ HFB & 1.17 \\ RMF & 3.61 \\ \hline \hline \end{tabular} \end{table} Table 5: RMSE of various mass models for the \(Q\)-value data of cluster emission. Figure 2: (Colour online) Variation of the half-lives of various cluster emissions from experimentally known isotopes of trans-lead nuclei (86\(\leq\)Z\(\leq\)96) as a function of the neutron number of the daughter nuclei (for proton number \(Z_{d}\)=82). These half-lives are calculated using the MBKAG formula, with \(Q\)-values taken from the WS4 mass model [30]. After selecting the most efficacious empirical formula as well as the theoretical \(Q\)-values, we consider all parent-cluster combinations in this extensive study to find the possible clusters emitted from the \({}^{211-231}\)Rn, \({}^{213-226}\)Fr, \({}^{214-235}\)Ra, \({}^{215-233}\)Ac, \({}^{216-237}\)Th, \({}^{218-241}\)Pa, \({}^{228-243}\)U, \({}^{226-245}\)Np, \({}^{226-245}\)Pu, \({}^{227-248}\)Am, and \({}^{231-252}\)Cm isotopes, leading to the \({}^{208}\)Pb daughter (doubly magic) and neighbouring nuclei. We have plotted our results (up to T=\(10^{100}\) sec.) in Fig. 2, where the minima of log\({}_{10}\)T\({}_{1/2}\) in several panels (Ra isotopes to U isotopes) correspond to the \({}^{208}\)Pb daughter, i.e., doubly magic (Z=82, N=126), or to nuclei near it. These minima yield the most probable clusters emitted from the respective isotopes. However, cluster emission always competes with \(\alpha\)-decay, a competition quantified by the branching ratio discussed in Eqn. (20). The experimental limit of the branching ratio relative to \(\alpha\)-decay is around \(BR=-17\), as can be seen in Table 4 and as explained by Poenaru _et al._ [45]. Accordingly, cluster emission emerges as probable if \(BR\geq-17\); this is the criterion for the probable clusters listed in Table 6. These clusters are selected from Fig.
2 for the particular isotopic chain of parent trans-lead nuclei \({}^{211-231}\)Rn, \({}^{213-226}\)Fr, \({}^{214-235}\)Ra, \({}^{215-233}\)Ac, \({}^{216-237}\)Th, \({}^{218-241}\)Pa, and \({}^{228-243}\)U. Most of our results are within the experimental reach and also in close match with the recent predictions of Refs. [46; 47; 48]. On the other side, in the panels from Np-isotopes to Cm-isotopes in Fig. 2, in-spite of a clear minima, there is incessantly some probability of emission of clusters since many of the clusters own half-lives less than \(10^{30}\) sec. (experimental limit of half-lives of cluster emissions). For examples, \begin{table} \begin{tabular}{c c c c c c c c c} \hline Parent & Daughter & Emitted & \(Q\) & \(Q_{\alpha}\) & \(l\) & \(\log_{10}\)T\({}_{1/2}\)(sec.) & BR \\ nucleus & nucleus & cluster & (MeV) & (MeV) & & MBKAG & NMHF & \\ & & & & & & (Cluster) & (\(\alpha\)) & & \\ \hline \({}^{216}\)Rn & \({}^{208}\)Pb & \({}^{8}\)Be & 17.13 & 8.20 & 0 & 6.65 & -2.84 & -9.49 \\ \({}^{222}\)Fr & \({}^{207}\)Pb & \({}^{14}\)B & 21.56 & 5.85 & 0 & 20.23 & 5.24 & -14.99 \\ \({}^{221}\)Ra & \({}^{208}\)Pb & \({}^{13}\)C & 31.70 & 6.88 & 3 & 13.13 & 1.74 & -11.39 \\ \({}^{223}\)Ra & \({}^{208}\)Pb & \({}^{15}\)C & 29.22 & 5.98 & 2 & 19.15 & 5.17 & -13.98 \\ \({}^{222}\)Ac & \({}^{208}\)Pb & \({}^{14}\)N & 35.64 & 7.14 & 1 & 17.93 & 1.03 & -16.90 \\ \({}^{222}\)Ac & \({}^{207}\)Pb & \({}^{15}\)N & 39.10 & 7.14 & 1 & 14.09 & 1.03 & -13.06 \\ \({}^{224}\)Ac & \({}^{208}\)Pb & \({}^{16}\)N & 36.43 & 6.33 & 2 & 19.44 & 3.99 & -15.45 \\ \({}^{225}\)Ac & \({}^{208}\)Pb & \({}^{17}\)N & 35.64 & 5.94 & 2 & 21.68 & 5.70 & -15.98 \\ \({}^{224}\)Th & \({}^{208}\)Pb & \({}^{16}\)O & 46.63 & 7.30 & 0 & 15.11 & 0.81 & -14.30 \\ \({}^{225}\)Th & \({}^{208}\)Pb & \({}^{17}\)O & 45.02 & 6.92 & 2 & 18.39 & 2.22 & -16.17 \\ \({}^{226}\)Th & \({}^{208}\)Pb & \({}^{18}\)O & 45.88 & 6.45 & 0 & 17.98 & 3.79 & -14.19 \\ \({}^{227}\)Th & \({}^{208}\)Pb & \({}^{19}\)O & 44.36 & 6.15 & 2 & 21.19 & 5.16 & -16.03 \\ \({}^{228}\)Th & \({}^{208}\)Pb & \({}^{20}\)O & 44.87 & 5.52 & 0 & 21.12 & 7.96 & -13.16 \\ \({}^{229}\)Th & \({}^{208}\)Pb & \({}^{21}\)O & 43.41 & 5.17 & 0 & 23.84 & 9.77 & -14.37 \\ \({}^{230}\)Th & \({}^{208}\)Pb & \({}^{22}\)O & 43.48 & 4.77 & 0 & 24.73 & 11.91 & -12.82 \\ \({}^{231}\)Th & \({}^{208}\)Pb & \({}^{23}\)O & 41.08 & 4.21 & 2 & 29.26 & 15.75 & -13.51 \\ \({}^{228}\)Pa & \({}^{208}\)Pb & \({}^{20}\)F & 50.90 & 6.26 & 2 & 22.42 & 5.13 & -17.29 \\ \({}^{229}\)Pa & \({}^{208}\)Pb & \({}^{21}\)F & 51.83 & 5.84 & 0 & 21.94 & 6.74 & -15.20 \\ \({}^{231}\)Pa & \({}^{208}\)Pb & \({}^{23}\)F & 52.01 & 5.15 & 1 & 23.75 & 10.11 & -13.64 \\ \({}^{231}\)U & \({}^{208}\)Pb & \({}^{23}\)Ne & 60.99 & 5.58 & 0 & 21.55 & 8.53 & -13.02 \\ \({}^{231}\)U & \({}^{206}\)Pb & \({}^{25}\)Ne & 59.91 & 5.58 & 2 & 23.95 & 8.53 & -15.42 \\ \hline \end{tabular} \end{table} Table 6: The calculated logarithmic half-lives and branching ratios of probable clusters emitted from various isotopes of trans-lead nuclei (86\(\leq\)Z\(\leq\)96). Cluster decay and \(\alpha\)-decay half-lives are calculated by using MBKAG formula (Eqn. 10) and NMHF formula [20], respectively. Disintegration energies (\(Q\)-values) for the cluster decay and \(\alpha\)-decay are taken from WS4 mass model [30] and AME2020 [37], respectively. For the \(l\) values, spin and parity of parent, daughter, and cluster nuclei are used from NUBASE2020 [36]. 
\({}^{21}\)Na from \({}^{226-229}\)Np, \({}^{22}\)Na from \({}^{226-230}\)Np, \({}^{23}\)Na from \({}^{226-233}\)Np, \({}^{24}\)Na from \({}^{226-234}\)Np, \({}^{25,27}\)Na from \({}^{226-237}\)Np, \({}^{26}\)Na from \({}^{226-236}\)Np, and \({}^{28}\)Na from \({}^{224-236}\)Np. Similarly, some possible clusters (Mg isotopes) emitted from the various Pu isotopes (Z\({}_{p}\)=94) are \({}^{23}\)Mg from \({}^{226-231}\)Pu, \({}^{24,25}\)Mg from \({}^{226-235}\)Pu, \({}^{26}\)Mg from \({}^{226-238}\)Pu, \({}^{27}\)Mg from \({}^{226-239}\)Pu, and \({}^{28,29}\)Mg from \({}^{226-241}\)Pu. Among the Am isotopes, the potential clusters are \({}^{24}\)Al from \({}^{227-230}\)Am, \({}^{25}\)Al from \({}^{227-233}\)Am, \({}^{26}\)Al from \({}^{227-236}\)Am, \({}^{27}\)Al from \({}^{227-239}\)Am, \({}^{28}\)Al from \({}^{227-240}\)Am, \({}^{29}\)Al from \({}^{227-241}\)Am, and \({}^{30-32}\)Al from \({}^{227-242}\)Am, as well as \({}^{26-33}\)Si from the \({}^{231-252}\)Cm isotopes. In the emission of odd-mass clusters, odd-even staggering is noticeable in Fig. 2, which is usually attributed to nucleonic pairing correlations [49]. The above detailed study of favorable clusters having T\({}_{1/2}<10^{30}\) sec. is expected to provide useful input for future experiments. ## 4 Conclusions Several empirical formulas have been investigated by adding angular-momentum and isospin dependence, turning them into the modified MBKAG, MRenA, MHoroi, MNRDX, MUDL, and MUNIV formulas. Experimental data for a total of 61 nuclei have been utilized for the fitting, and all the modified formulas yield improved results compared with their earlier versions. Among these six modified formulas, after a comparison of several statistical parameters, the MBKAG formula is found to be the most precise; it is used to examine cluster decay half-lives in the trans-lead region: the \({}^{211-231}\)Rn, \({}^{213-226}\)Fr, \({}^{214-235}\)Ra, \({}^{215-233}\)Ac, \({}^{216-237}\)Th, \({}^{218-241}\)Pa, \({}^{228-243}\)U, \({}^{226-245}\)Np, \({}^{226-245}\)Pu, \({}^{227-248}\)Am, and \({}^{231-252}\)Cm isotopes, leading to the \({}^{208}\)Pb daughter (doubly magic) and neighbouring nuclei. We have found a considerable probability for the emission of various isotopes of Be, B, C, N, O, F, Ne, Na, Mg, and Si from the above-mentioned trans-lead nuclei, and many of them are found to be favorable for measurement (T\({}_{1/2}<10^{30}\) sec.). This study reveals that doubly magic daughter nuclei play a crucial role in the cluster decay process and could serve as a stimulus for experiments focusing on cluster radioactivity. ## 5 Acknowledgement AJ and GS acknowledge the support provided by SERB (DST), Govt. of India under CRG/2019/001851 and SIR/2022/000566, respectively.
The possibility of cluster emission from the trans-lead (86≤Z≤96) region of the periodic chart has been studied systematically using a few empirical formulas that include additional angular-momentum ($l$) or isospin-dependent ($I=(N-Z)/A$) terms. The modified versions of these formulas show a smaller χ² per degree of freedom and root-mean-square error, as well as smaller values of several other statistical parameters, in a comparison against the 61 experimental cluster radioactivity data. Using the modified version of the formula of Balasubramaniam et al. (PRC 70 (2004) 017301), the half-lives of several clusters, i.e. isotopes of Be, B, C, N, O, F, Ne, Na, Mg, and Si, are predicted systematically for several isotopes in the trans-lead region.
2309.15728
Line Graph Neural Networks for Link Weight Prediction
Link weight prediction is of great practical importance, since real-world networks are often weighted. Previous studies have mainly used shallow graph features for link weight prediction, which limits the prediction performance. In this paper, we propose a new link weight prediction algorithm, namely Line Graph Neural Networks for Link Weight Prediction (LGLWP), which learns deeper graph features through deep learning. In our algorithm, we first extract the enclosing subgraph around a target link, and then employ a weighted-graph labeling algorithm to label the subgraph nodes. Next, we transform the subgraph into a line graph and apply a graph convolutional neural network to learn the node embeddings in the line graph, which represent the links of the original subgraph. Finally, the link feature vectors are fed into a fully-connected neural network to predict the weight of the target link. Our algorithm directly obtains the feature vectors of the target links in the original graph, which is better than previous methods that splice node feature vectors for link weight prediction. Experimental results on six real datasets of various network sizes and types show that our algorithm achieves better prediction performance than the state-of-the-art methods, while having fewer parameters and high training efficiency.
Jinbi Liang, Cunlai Pu
2023-09-27T15:34:44
http://arxiv.org/abs/2309.15728v1
# Line Graph Neural Networks for Link Weight Prediction ###### Abstract. Link weight prediction is of great practical importance, since real-world networks are often weighted. Previous studies have mainly used shallow graph features for link weight prediction, which limits the prediction performance. In this paper, we propose a new link weight prediction algorithm, namely Line Graph Neural Networks for Link Weight Prediction (LGLWP), which learns deeper graph features through deep learning. In our algorithm, we first extract the enclosing subgraph around a target link, and then employ a weighted-graph labeling algorithm to label the subgraph nodes. Next, we transform the subgraph into a line graph and apply a graph convolutional neural network to learn the node embeddings in the line graph, which represent the links of the original subgraph. Finally, the link feature vectors are fed into a fully-connected neural network to predict the weight of the target link. Our algorithm directly obtains the feature vectors of the target links in the original graph, which is better than previous methods that splice node feature vectors for link weight prediction. Experimental results on six real datasets of various network sizes and types show that our algorithm achieves better prediction performance than the state-of-the-art methods, while having fewer parameters and high training efficiency. Link weight prediction, line graph, graph neural network, graph mining + Footnote †: journal: Computer graphics & Machine learning Link weight prediction represents a burgeoning field of research in network science, aiming to forecast the strength of connections between nodes in a network. Unlike traditional graph learning tasks such as node classification or link prediction, link weight prediction poses a more intricate regression task that has received comparatively little attention and exploration. Previous studies have attempted to address this problem through simplified approaches, primarily utilizing shallow topological features of the network to estimate connection weights (Wang et al., 2018; Wang et al., 2019; Wang et al., 2020; Wang et al., 2020). For instance, methods like WCN, WAA, and WRA (Wang et al., 2019) rely solely on basic statistical characteristics of the local graph structure to infer connection weights. However, it has been observed, both empirically and in practice, that connection weights often embody intricate interdependencies, exemplified by the strength of interactions between neurons in brain networks. Such complex relationships cannot easily be captured by shallow feature-based methodologies (Wang et al., 2019). As a result, the challenge of link weight prediction lies in devising advanced techniques and methodologies capable of accurately capturing and predicting the multifaceted strengths of connections between nodes. Accomplishing this goal necessitates thorough research and exploration to tackle the inherent challenges presented by this regression task.
Inspired by LGLP (Liang et al., 2019), we propose a line graph neural network model for link weight prediction, LGLWP. We first extract the enclosing subgraph of each target link for node labeling. Since the node labeling algorithm in LGLP is designed for unweighted graphs, we introduce the node labeling algorithm for weighted graphs from (Wang et al., 2019). We then transform the subgraphs into the corresponding line graphs and use a graph convolutional neural network (GCN) to learn the node features, obtaining the feature vector corresponding to the target link; finally, regression prediction is performed by two fully connected layers. The framework is depicted in Figure 1. In effect, by transforming the original graph into a line graph, the feature vectors of the two target nodes are transformed into the feature vector of the corresponding node in the line graph, and this feature vector is fed into the neural network for regression prediction. On the one hand, the neural network requires fewer parameters; on the other hand, this accelerates the training of the model. The contributions of our research can be summarized as follows: 1. Building on LGLP, which addresses link existence prediction, we propose link weight regression prediction with line graphs, thereby opening up a new direction for link weight prediction. 2. The subgraph node labeling algorithm in LGLP targets undirected, unweighted graphs; such works focus on simple link prediction, and their node labeling algorithms cannot handle weighted graphs, so they are not applicable to the task of this work. Here we introduce an algorithm suitable for weighted-graph node labeling. 3. On six real datasets, involving different network sizes and containing both large and small graphs, we achieve good results. We also conducted ablation experiments that demonstrate the effectiveness of the weighted-graph node labeling algorithm for link weight prediction by comparing random labeling of the subgraph nodes against the weighted-graph node labeling algorithm. ## 2. Related Work Link weight prediction is a relatively new research area within network science. Lv et al. (Lv et al., 2019) pioneered this field by exploring the role of weak connectivity in weighted networks. They introduced weighted local similarity metrics, including the Weighted Common Neighbors (WCN), Weighted Adamic-Adar (WAA), and Weighted Resource Allocation (WRA) metrics, to estimate link weights. Zhao et al. (Zhao et al., 2019) extended unweighted local similarity metrics to weighted local similarity metrics through a method called the "Reliable Routing Method." These weighted local similarity metrics are invaluable for predicting both the presence of links and their respective weights. In (Li et al., 2019), a link weight matrix is generated by perturbing the observed weighted network structure; the link weight matrix is then reconstructed by utilizing the factorized latent factors derived from the observed network.
Finally, these two matrices are combined to yield predictions for the missing link weights. In another approach, (Li et al., 2019) expanded the eigenvector space of connected edges by incorporating node similarity metrics from the original network and node centrality metrics from the line graph to perform link weight regression prediction. Additionally, (Li et al., 2019) introduced a novel computational framework called Neighborhood Estimation Weights (NEW). This method relies solely on the fundamental structural information of the network and offers the flexibility to adapt to various types of networks. However, these methods often remain at the level of basic structural network information and may have limitations when dealing with more complex network features. Graph representation learning was therefore proposed; its goal is to encode nodes into low-dimensional vectors that capture the nodes' positions in the graph as well as their local graph structure. In other words, graph representation learning projects nodes into a latent Euclidean space, in which the geometric relations correspond to the relations of the original graph or network. The obtained embedding vectors can be used for downstream tasks such as node classification and link prediction. The main technical tools are Graph Embedding (GE) and Graph Neural Networks (GNNs) (Goh et al., 2019; Li et al., 2019). Representative graph embedding models are deepwalk (Wang et al., 2019), node2vec (Wang et al., 2019), and SDNE (Wang et al., 2019), and representative deep graph learning models are GCN (Goh et al., 2019), GAE (Goh et al., 2019), and VGAE (Goh et al., 2019), but the link weight prediction task has rarely been addressed by them. It is therefore interesting to investigate how well these deep graph learning models perform on this task (Wang et al., 2019). In addition, recent literature demonstrates promising results using enclosing-subgraph extraction. Representative link prediction methods based on enclosing-subgraph extraction in recent years are WLNM (Wang et al., 2019), SEAL (Wang et al., 2019), and LGLP (Liang et al., 2019), which have achieved very good results in this task. Thus, a correct representation of the target links has been shown to be sufficient for predicting links and their weights, avoiding the need to process the whole graph. However, node labeling techniques must be provided in order to use enclosing-subgraph extraction methods: by labeling nodes consistently, the learned models can generalize to different subgraphs. Moreover, node labeling techniques must preserve the topological directionality towards the target link for optimal performance, thus providing a mechanism for the model to focus on specific nodes (Wang et al., 2017). Weisfeiler-Lehman (2017) proposed an algorithm based on the original WL algorithm for labeling unweighted graphs. The SEAL framework (Kumar et al., 2017) also proposes a novel node labeling method based on the radius of each node with respect to the target link. LGLP (Beng et al., 2017) applies a line graph on top of SEAL while retaining the node labeling algorithm of the SEAL model. These works focus on simple link prediction, i.e., they are not suitable for weights: their node labeling algorithms are unable to handle weighted nodes and hence are not applicable to tasks that require weighted-graph node labeling (Wang et al., 2017).
Therefore, in this paper, we introduce a node labeling technique suitable for weighted graphs into the LGLP (Beng et al., 2017) model, while retaining the enclosing-subgraph extraction method, and finally perform link weight prediction with good results on several datasets. ## 3. Proposed Method In this section, we begin with an overview of the link weight prediction problem. Subsequently, we introduce our novel link weight prediction method, LGLWP, which comprises the following key steps: 1. Enclosing subgraph extraction 2. Subgraph node ordering 3. Feature learning and link weight prediction via line graph neural networks A summary figure of the whole approach is shown in Figure 1. ### Problem description Let \(G(V,E,W)\) represent an undirected weighted network, where V denotes the network's nodes and E its edges. The weight matrix, denoted W, describes the network's adjacency: the weight \(w\) of a link \((i,j)\in E\) is assigned as \(W_{i,j}=w\), and \(W_{i,j}=0\) otherwise. The set of weights W can be divided randomly into two subsets, \(W_{train}\) and \(W_{test}\), where \(W_{train}\cup W_{test}=W\) and \(W_{train}\cap W_{test}=\emptyset\). The objective of network link weight prediction is to predict the weights of the test set \(W_{test}\) with maximum accuracy, using the graph \(G(V,E,W_{train})\) (Beng et al., 2017). To avoid the effect of different weight ranges on the results, we first preprocess the weights, employing the exponential transformation to normalize all edge weights \(w\) to the interval \((0,1)\), i.e.: \[w^{*}=e^{-\frac{1}{w}} \tag{1}\] ### Enclosing subgraph extraction The first step of the method is to extract the enclosing subgraph of the target link. The link weight between two nodes can be predicted based on the subgraph of the target link. In general, the larger the subgraph, the more information can be learned; however, this brings more computational cost. To find a balance between performance and computational cost, we extract only 1-hop subgraphs, defined as follows: \[G^{1}(i,j)=\{v\mid\min(d(v,i),d(v,j))\leq 1\}, \tag{2}\] where \(d(v,i)\) and \(d(v,j)\) denote the shortest-path distances between \(v\) and i, and between \(v\) and j, respectively, i.e., the lengths of the paths connecting two nodes with the fewest edges. Figure 1. Summary of the steps of the LGLWP link weight prediction framework. Since the number of nodes contained in different enclosing subgraphs varies, and considering both time complexity and performance, we cap the first-order enclosing subgraph of every target link at 10 nodes: in enclosing subgraphs with more than 10 nodes we randomly select 10 of them, and subgraphs with fewer than 10 nodes are left unprocessed. This avoids variability in the number of nodes across graphs. With the same subgraph extraction strategy, we obtain similar contextual representations of target node pairs, and the model can generalize across different graphs, nodes, and links.
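A minimal Python sketch of this preprocessing and extraction step, assuming a networkx graph with positive edge weights (the function names are ours, for illustration only):

```python
import math
import random
import networkx as nx

def normalize_weights(g):
    """Map every (positive) edge weight w to w* = exp(-1/w), Eq. (1)."""
    for _, _, data in g.edges(data=True):
        data["weight"] = math.exp(-1.0 / data["weight"])

def enclosing_subgraph(g, i, j, max_nodes=10, seed=0):
    """1-hop enclosing subgraph around the target link (i, j), Eq. (2).

    If the 1-hop neighbourhood holds more than `max_nodes` non-target
    nodes, a random subset is kept, as described in the text above.
    """
    neighbours = (set(g.neighbors(i)) | set(g.neighbors(j))) - {i, j}
    if len(neighbours) > max_nodes:
        random.seed(seed)
        neighbours = set(random.sample(list(neighbours), max_nodes))
    return g.subgraph(neighbours | {i, j}).copy()
```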
### Subgraph node ordering The second step of the approach is to order the extracted enclosing subgraph. The purpose of the ordering is to provide a consistent way of labeling nodes, such that nodes with similar topological characteristics within their subgraphs are labeled similarly: e.g., if the relative positions and structural roles of two vertices in their respective subgraphs are similar, they receive a similar ordering. Let us first briefly introduce the Weisfeiler-Lehman (WL) algorithm (Zhou et al., 2017). The WL algorithm addresses the graph isomorphism problem, which involves determining whether two graphs have the same number of nodes connected in the same manner. WL operates through iterative updates of the node labels, aggregating the labels of neighboring nodes and compressing them into new labels until convergence. Initially, all nodes are assigned the same color, typically denoted 1. For each node, a signature string is generated by concatenating its color with the sorted colors of its neighboring nodes. Nodes are then sorted by their signature strings in ascending order and assigned new colors; nodes with identical signature strings receive the same color. A crucial aspect of the WL algorithm is its ability to encode the structural roles of the vertices within the graph. Moreover, it defines a relative order of the vertices according to these structural roles: vertices with similar roles receive similar labels, and, importantly, the relative ordering of vertices remains consistent across different graphs. Ref. (Zhou et al., 2017) proposed a node labeling and ranking method for unweighted graphs based on the WL algorithm, and Ref. (Zhou et al., 2017) introduced, on top of it, a node labeling and sorting algorithm for weighted graphs, which imposes the following requirements on a graph labeling algorithm: 1. The graph labeling algorithm must provide similar labels for nodes with similar topological characteristics within an enclosing subgraph. 2. It must maintain topological directionality with respect to the target link, i.e., the ordering of the nodes must be anchored by the target nodes, and the distance to the target nodes must be reflected in the ordering. Since the WL algorithm does not satisfy the second requirement, and the node ordering is crucial for model learning, we adopt here the graph labeling approach proposed in (Zhou et al., 2017), applying the one-dimensional Weisfeiler-Lehman (WL) algorithm to a weighted graph. The goal of this algorithm is to rank the set of nodes of the extracted subgraph. Because we want to maintain topological directionality with respect to the target edge, the target nodes are always assigned the orders 1 and 2. First, initial labels are assigned to the nodes based on the sum of the shortest-path distances (computed from the edge weights) from each node to the target nodes, \(o_{x}\) and \(o_{y}\). Next, we use the Weisfeiler-Lehman algorithm to assign a label string to each node: the initial label of each node is arranged together with the initial labels of its neighboring nodes, ordered from smallest to largest, generating a unique label string. The node whose string signature is lowest in dictionary order then becomes the next node in the ordered list. We iterate this process until each node has been assigned a number. The process is defined in Algorithm 1, and a schematic of the process is shown in Figure 2.
``` Input:\(h\)-hop enclosing subgraph \(G^{h}_{(o_{1},o_{2})}\) centered on the two target nodes \(o_{1}\) and \(o_{2}\), extracted via Equation (2) Output: ordered set of nodes \(o\in G^{h}_{(o_{1},o_{2})}\)\(o_{1}=\)x \(o_{2}=\)y calculate \(d(o):=d(o,x)+d(o,y)\) for all \(o\in G^{h}_{(o_{1},o_{2})}\) get initial labels \(l(o)=f(d(o))\)\(l(o_{1})=0\)\(l(o_{2})=0\)while\(|orderList|<|V|\)do generate label string \(Agg(l(o))\) for all \(o\in G^{h}_{(o_{1},o_{2})}\) sort \(Agg(l(o))\) add lowest \((Agg(l(o)))\) to orderList endwhile returnorderList ``` **Algorithm 1** Subgraph node ordering algorithm Once the node ordering is complete and the ordered set is obtained, we extract the adjacency matrix of the subgraph, with the rows and columns of the matrix arranged according to the ordered set. Each row vector of this adjacency matrix is then used as the feature vector of the corresponding node. Before feeding the matrix into the line graph network model for prediction, we ensure that the entries \(W_{1,2}\) and \(W_{2,1}\), which represent the target link weight, are not visible to the model; both are set to -1 here (Zhou et al., 2017). ### Line graph transformation To predict link weights from a given enclosing subgraph \(G^{h}_{(o_{1},o_{2})}\), the \(h\)-hop enclosing subgraph centered on the two target nodes \(o_{1}\) and \(o_{2}\), each node in the enclosing subgraph corresponds to an ordered feature vector, and nodes with similar topological characteristics within the subgraph are labeled similarly. Since an edge between a pair of nodes in the original graph corresponds to a vertex in the line graph, processing the nodes of the line graph directly amounts to processing the edges of the original graph. This does not increase the time complexity and, in addition, requires fewer model parameters. We therefore propose to convert the enclosing subgraph into a line graph, which represents the adjacency between the edges of the original graph (Beng et al., 2019). Moreover, the features of the links to be predicted can be learned directly from the line graph representation using graph convolutional neural networks for weight prediction. In graph theory, the line graph of a graph G, denoted \(L(G)\), is a graph that reflects the adjacency of the edges of G. Briefly, \(L(G)\) abstracts each edge of G into a vertex; if two edges of the original graph share an endpoint, the corresponding vertices of the line graph are connected by an edge. Because the line graph turns the edges of the original graph into vertices, it can also be thought of as a dual of the original graph. An example of the line graph transformation process is given in Figure 3. In order to transform the attributes of a node pair in the original graph into the attribute of a node in the line graph, (Bang et al., 2017) proposed the function: \[l_{(v_{1},v_{2})}=\text{concate}(\min(f_{l}(v_{1}),f_{l}(v_{2})),\max(f_{l}(v_{1}),f_{l}(v_{2}))), \tag{3}\] where \(f_{l}(\cdot)\) is the node labeling function, \(v_{1}\) and \(v_{2}\) are the two endpoints of the edge, and \(\text{concate}(\cdot)\) denotes the concatenation of the two inputs. Since only link weight prediction on undirected weighted graphs is considered in this paper, the attributes of the edges \((v1,v2)\) and \((v2,v1)\) should be the same.
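A minimal sketch of the transformation and of Eq. (3) using networkx (here with scalar ordering labels for brevity; in the full model the labels are the row vectors of the ordered adjacency matrix, spliced the same way):

```python
import networkx as nx

def to_attributed_line_graph(g, node_label):
    """Line-graph transform with node attributes built as in Eq. (3).

    `node_label` maps each original node to its ordering label; the
    attribute of a line-graph node (v1, v2) concatenates the min and max
    of the two endpoint labels, so (v1, v2) and (v2, v1) agree.
    """
    lg = nx.line_graph(g)
    for v1, v2 in lg.nodes():
        f1, f2 = node_label[v1], node_label[v2]
        lg.nodes[(v1, v2)]["x"] = (min(f1, f2), max(f1, f2))
    return lg
```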
The above formulation ensures that the generated edge attributes (i.e., the node attributes in the line graph) are invariant under swapping of the end nodes. In addition, the structural importance of the nodes is well preserved by this function (Bang et al., 2017). Since the nodes of the original graph are represented by the row vectors of the ordered adjacency matrix, for each node pair of the original graph we splice the row vectors of the two nodes according to the above transformation to obtain the feature vector of the corresponding node in the line graph.

Figure 2. Subgraph node ordering algorithm. We want to predict the weight (w, coloured in red) of the link between the target nodes (dashed and coloured in yellow).

Figure 3. Line graph transformation procedure.

### Feature Learning by Graph Neural Networks Deep learning methods have been successfully applied in many fields, such as image processing and speech recognition, but the data in these fields live in Euclidean space. The data of real-world networks, by contrast, are non-Euclidean, so traditional deep learning methods do not work well for extracting features from graphs. Kipf et al. (Kipf et al., 2017) proposed a multilayer graph convolutional neural network that can be used directly on graph data; it aggregates the node information of the neighbors and generates new node embeddings that contain rich neighborhood information. In this work, we use a graph convolutional neural network to learn the node embeddings of the line graph, where a node of the line graph represents an edge of the original graph; the node embeddings of the line graph can therefore be used to predict the weights of the target edges in the network. Given the line graph representation \(L\left(G_{o1,o2}^{h}\right)\) of the enclosing subgraph, the embedding of node \((v_{i},v_{j})\) at the kth layer of the graph convolutional neural network is denoted \(Z_{(v_{i},v_{j})}^{(k)}\), and the embedding at the \((k+1)\)th layer is: \[Z_{(v_{i},v_{j})}^{(k+1)}=\sigma(\widetilde{D}^{-\frac{1}{2}}\widetilde{A}\widetilde{D}^{-\frac{1}{2}}Z_{(v_{i},v_{j})}^{(k)}W^{(k)}), \tag{4}\] where \(\widetilde{A}=A+I_{N}\) is the adjacency matrix of the line graph \(L\left(G_{o1,o2}^{h}\right)\) with a self-loop added to every node, \(\widetilde{D}_{ii}=\sum_{j}\widetilde{A}_{ij}\) is the corresponding degree matrix, \(W^{(k)}\) is the trainable weight matrix of the kth layer, and \(\sigma(\cdot)\) is the activation function of each layer. The input of the first graph convolution layer is set to the node attributes of the line graph, \(Z_{(v_{i},v_{j})}^{0}=l_{(o1,o2)}\) (Gendran et al., 2017). We then treat the link weight prediction task as a regression problem and train the neural network by minimizing the root-mean-square error loss over all link weights to be predicted.
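Equation (4) can be written compactly in numpy; the sketch below is ours, assumes ReLU for \(\sigma\) and dense matrices, and is meant only to make the propagation rule concrete (a library GCN layer would be used in practice):

```python
import numpy as np

def gcn_layer(a, z, w):
    """One GCN propagation step, Eq. (4): sigma(D^-1/2 A~ D^-1/2 Z W)."""
    a_tilde = a + np.eye(a.shape[0])             # add self-loops: A~ = A + I
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_tilde.sum(axis=1)))
    support = d_inv_sqrt @ a_tilde @ d_inv_sqrt  # symmetric normalization
    return np.maximum(support @ z @ w, 0.0)      # ReLU assumed as sigma
```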
* Neural network (Kang et al., 2017): The neural network of C. elegans exhibits connections between neurons, which can occur through synapses or gap junctions. The weights assigned to the edges signify the number of interactions between the neurons. * C. elegans (Kang et al., 2017): The network describing the interactions between metabolites in the roundworm Caenorhabditis elegans is an undirected graph. Its links represent connections between pairwise metabolites, and the edge weights reflect the occurrence of multiple interactions between these metabolites. * Coauthorships in network science (Kang et al., 2017): The largest component of a co-authorship network collected by M. Newman, consisting of scientists collaborating on studies in the field of network science. M. Newman calculated the edge weights based on information from co-authored papers and co-authors. * Political blogs (Newman, 2017): Adamic and Glance collected a network depicting the directed hyperlinks between political web blogs during the 2004 US election. In this study, we simplified the directed links into undirected ones and assigned weights representing the volume of multiple hyperlinks between blogs. * UC-social (Newman, 2017): The network consists of messages exchanged between users of an online student community at the University of California, Irvine. Users are represented as nodes, while directed edges indicate the flow of sent messages; the weight of each edge reflects the occurrence of multiple messages. In this analysis, the network was treated as undirected but weighted. * Condmat (Candand, 2017): This network represents collaborations among scientists who posted preprints on the condensed matter archive at www.arxiv.org between 1995 and 1999. It was compiled by M. Newman, and the weights follow the methodology outlined in the original paper. ### Evaluation metrics Much of the literature on link weight prediction proposes Pearson's correlation coefficient and root-mean-square error as evaluation metrics; however, following the observation in (Newman, 2017) that they have practically equal evaluative power, and for brevity, we choose only RMSE to measure prediction performance. The definition of RMSE is \[RMSE=\sqrt{\frac{\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2}}{n}} \tag{5}\] where \(y_{i}\) is the actual value and \(\hat{y}_{i}\) is the predicted value. Obviously, the smaller the RMSE, the smaller the difference between predicted and actual values, and the more accurate the algorithm's prediction. ### Parameter settings The model parameters used in this paper are similar to those of the original paper (Gendran et al., 2017). Three graph convolution layers are used to compute the node embeddings, with the output feature dimension of each set to 32. Link weight regression is then performed through two further fully connected layers. The number of training iterations is set per dataset: 5 training epochs on the graphs with larger network sizes (Condmat, P.blog, and UC-social), and 15 training epochs on the remaining datasets. ### Baselines In order to evaluate the predictive ability of the LGLWP model, we selected the same baseline models as in (Newman, 2017).
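Before turning to the baselines, the parameter settings above can be illustrated with a minimal PyTorch sketch of the prediction head: three graph convolution layers with 32-dimensional outputs, implementing the propagation rule of Equation (4), followed by two fully connected layers that regress a single weight. The ReLU activations, the 16-unit hidden layer, and the read-out of the line-graph node that represents the target link are assumptions, not details specified above.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One propagation step of Equation (4): Z' = sigma(D^-1/2 (A+I) D^-1/2 Z W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, A, Z):
        A_hat = A + torch.eye(A.size(0), device=A.device)  # add self-loops
        d = A_hat.sum(dim=1)                               # node degrees
        D_inv_sqrt = torch.diag(d.pow(-0.5))
        # ReLU as the layer activation sigma (an assumption).
        return torch.relu(D_inv_sqrt @ A_hat @ D_inv_sqrt @ self.lin(Z))

class LineGraphWeightRegressor(nn.Module):
    """Three 32-dim GCN layers plus two FC layers, as in the parameter settings."""
    def __init__(self, in_dim, hidden=32):
        super().__init__()
        self.g1 = GCNLayer(in_dim, hidden)
        self.g2 = GCNLayer(hidden, hidden)
        self.g3 = GCNLayer(hidden, hidden)
        self.fc = nn.Sequential(nn.Linear(hidden, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, A_line, X_line, target_idx):
        Z = self.g3(A_line, self.g2(A_line, self.g1(A_line, X_line)))
        # Read out the embedding of the line-graph node representing the target
        # link (assumed read-out) and regress its weight.
        return self.fc(Z[target_idx])
```

Training then minimizes the RMSE between the regressed and observed weights, matching the loss stated at the end of Section 3.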
The baselines include seven well-known graph representation models proposed in recent years: Deepwalk (Wang et al., 2018), Node2vec (Chen et al., 2018), Grarep (Chen et al., 2018), SDNE (Kang et al., 2017), LINE (Kang et al., 2017), GAE (Gendran et al., 2017), and VGAE (Gendran et al., 2017). Since these seven graph learning models cannot be applied directly to link weight prediction, the node embedding vectors they produce are concatenated into edge feature vectors, which are used to train a linear regression model whose link weight prediction performance is then evaluated (Newman, 2017). In addition to these baselines, we also compare against the GCN model (Gendran et al., 2017). Since GCN also performs very well in graph representation, it is often used for node classification, graph classification, and related tasks. GCN learns node embeddings by graph convolution, using a message-passing framework in which node embeddings are updated by aggregating the embeddings of their neighbors. Our proposed model performs graph convolution on the line graph, whereas GCN performs graph convolution on the full graph. With the embedding vectors obtained by GCN, we follow the method proposed by (Kumar et al., 2017) and use the inner product of the node vectors to measure the weights between nodes. In addition, seven shallow feature-based link weight prediction methods are selected for comparison, including three reliable-routing-based methods (Wang et al., 2018), three line-graph-based methods (Wang et al., 2018), and the NEW method (Kumar et al., 2018). We also compare against SEA (Kumar et al., 2018), a self-attention-enhanced graph autoencoder that improves weight prediction by learning deep graph features; it is a recent model that achieves state-of-the-art performance in link weight prediction. All parameters of the baselines are set according to the original literature.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Dataset & \(|V|\) & \(|E|\) & Range of weights & Category \\ \hline Neural & 296 & 2,137 & \([1,72]\) & biology \\ C.elegans & 453 & 2,025 & \([1,114]\) & biology \\ Netscience & 575 & 1,028 & \([0.0526,2.5]\) & coauthorship \\ P.blog & 1,224 & 16,715 & \([1,3]\) & social \\ UC-social & 1,899 & 13,838 & \([1,184]\) & social \\ Condmat & 16,264 & 47,594 & \([0.058824,22.3333]\) & coauthorship \\ \hline \hline \end{tabular} \end{table} Table 1. Basic topological features of the weighted networks.

### Results and Analysis In this paper, we follow the setup of (Wang et al., 2018; Kumar et al., 2018): we choose 90% of the link weights in the original network as the training set and the remaining 10% as the test set. To avoid errors from a single experiment, each model is run on 10 independent pairs of training and test sets, and the means and standard deviations are reported. All data were preprocessed according to the methodology proposed in (Kumar et al., 2018); e.g., all weights were normalized to the interval (0,1) using the exponential transformation method. The machine used for the experiments is a laptop configured with an i9-12900H 2.50 GHz processor, 16 GB of RAM, and an Nvidia 3070 GPU. Since GAE and VGAE are designed for attribute networks, to make them work on ordinary weighted graphs the \(i\)-th column vector of the weighted adjacency matrix is used as the initial feature vector of node \(i\); the GAE\({}^{*}\) and VGAE\({}^{*}\) methods are these modified versions of the original implementations.
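As a concrete note on the preprocessing and evaluation just described: the exponential transformation is only named above, so the sketch below uses one common choice, \(w\mapsto 1-e^{-w/\bar{w}}\), purely as an illustrative assumption, together with the RMSE of Equation (5) and a single 90/10 split.

```python
import numpy as np

def exp_normalize(weights):
    """Map positive weights into (0, 1); one plausible 'exponential
    transformation' (an assumption -- the exact formula is not given above)."""
    w = np.asarray(weights, dtype=float)
    return 1.0 - np.exp(-w / w.mean())

def rmse(y_true, y_pred):
    """Equation (5): root-mean-square error between actual and predicted weights."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# Illustrative 90/10 split over observed links, repeated 10 times in the protocol.
rng = np.random.default_rng(0)
w = exp_normalize(rng.integers(1, 100, size=1000))   # synthetic stand-in weights
idx = rng.permutation(len(w))
train, test = idx[:900], idx[900:]
print(rmse(w[test], np.full(len(test), w[train].mean())))  # trivial mean baseline
```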
Especially for line-graph-based techniques such as LG-RF, LG-GBDT, and LG-SVM, the computation of centrality-based metrics can be quite resource-intensive; consequently, these methods could not produce conclusive results for large graphs such as Condmat. As can be seen from Table 2, comparing seven link weight prediction models based on shallow graph features and eight graph representation models, our model achieves the best results, with the SEA model showing performance comparable to ours. SEA performs link weight prediction with a graph autoencoder based on a graph attention mechanism, opening the way to weight prediction based on attention mechanisms, and its results show that attention mechanisms still have considerable room for development in weight prediction. However, SEA is limited by the size of the graph: its computation must consider the global information of the whole graph, which often requires more computational resources. On some large graphs, SEA applies an effective graph compression algorithm that first compresses the graph to a smaller size and then performs link weight prediction; this algorithm also achieves good results. Contrary to graph compression, we only extract the neighbor information around the target links, and an effective node labeling algorithm allows the model to generalize between different subgraphs, thus avoiding processing the whole graph. In addition, our node labeling technique maintains the topological orientation toward the target links for optimal performance, providing a mechanism for the model to focus on specific nodes. Our proposed method can learn the features of target links directly in the line graph. To analyze the convergence speed of the two models, we run them on the different datasets and collect the loss at each epoch. The results are shown in Figure 4, where the losses of our method are marked with green lines and those of the SEA model with blue lines. As can be seen, our proposed model converges faster than SEA: we only need about 15 epochs to achieve the best performance on the Neural, C.elegans, and Netscience datasets, and about 5 epochs on the P.blog, UC-social, and Condmat datasets, while SEA has not converged after 50 epochs. According to (Kumar et al., 2018), SEA needs to be trained for 100 epochs on the Condmat dataset for optimal performance, 800 epochs on the P.blog dataset, 500 epochs on the UC-social dataset, and 300 epochs on the remaining datasets. Thus, our proposed method saves training time and requires fewer model parameters. As can be seen from Table 2, our model LGLWP outperforms SEA on the Neural and Netscience datasets. To test the robustness of LGLWP, we take 30, 40, 50, 60, 70, and 80 percent of all the links and weights in G as the training set and the rest as the test set, respectively. Our purpose is to determine whether the model consistently outperforms SEA with different proportions of weights missing. The results are shown in Figure 5: the RMSE of LGLWP is lower than that of SEA for all training set proportions, indicating that LGLWP is robust.
### Ablation study We conducted ablation experiments to understand the extent to which the graph labeling algorithm affects the model. The weighted graph labeling algorithm introduced in LGLWP is one of the main contributions of this paper: it provides consistency to the model by assigning similar labels to nodes with similar structural roles. We compare this algorithm with random labeling of the nodes, conducting experiments on all six datasets with the same experimental setup. The results are shown in Table 3. They clearly demonstrate the effectiveness of the weighted graph labeling algorithm, which indeed achieves significantly better performance than random subgraph labeling. To balance performance and computational cost, we only extract 1-hop subgraphs while keeping the number of subgraph nodes around 10; we believe this gap will become more pronounced as the subgraph size grows. The use of enclosing subgraph extraction is a promising approach for link prediction and link weight prediction. A proper representation of the target links has been shown to be sufficient to predict the links and their weights, thus avoiding processing the entire graph. However, a node labeling technique must be provided to pair with the enclosing subgraph extraction method. By labeling nodes consistently and algorithmically, the model can generalize across different subgraphs and thus make predictions. In addition, the node labeling technique must remain topologically oriented toward the target links for optimal performance, thus providing a mechanism for the model to focus on specific nodes (Wang et al., 2018).

Figure 4. Training loss comparison between our proposed LGLWP and the SEA method on the Neural, C.elegans, Netscience, P.blog, UC-social, and Condmat datasets.

Figure 5. RMSE comparison on Neural and Netscience for SEA and LGLWP using different percentages of the training set. On each dataset, we take 30, 40, 50, 60, 70, and 80 percent of all the links and weights in G as the training set.

### Discussion Since the P.blog, UC-social, and Condmat networks are larger and therefore contain more samples, we trained for only 5 epochs on them and 15 epochs on the other datasets. The main problem when using graph-based methods for prediction is how to make them independent of the size of the graph. The subgraph extraction method solves this problem by making LGLWP independent of the number of nodes in the graph: to predict the link weights, we only need a small portion of the graph. However, being unaffected by the number of nodes in the graph comes at the cost of the computational and time complexity of the weighted graph labeling algorithm. Nevertheless, the time required to run the weighted graph labeling algorithm on a given subgraph always remains the same, while a model processing the entire graph scales linearly with the number of nodes in the graph. It is worth noting that Deepwalk, Node2vec, Grarep, SDNE, LINE, GCN, GAE, and VGAE are all techniques based on graph representation learning. They all learn representations of the nodes in the graph to perform the link weight prediction task. We conclude that the model that generates the best representation for the nodes in the graph is the most successful.
As (Kang et al., 2019) points out, good node embeddings should yield good prediction accuracy, because eventually some other machine learning system should use these embeddings to make valuable predictions. For this purpose, aggregating information from the most important neighbors and nodes is crucial. Meanwhile, LGLP (Kang et al., 2019) points out that learning node embeddings by graph convolution in a line graph is more effective than performing neighbor embedding aggregation in the original graph; we therefore introduce the line graph mechanism. For these graph representation learning methods, simply converting node vectors into edge vectors may not accurately characterize the structure of the edges. Directly mapping edges to low-dimensional vectors better preserves their structural features and is more suitable for network analysis tasks where edges are the object of study. ## 5. Conclusion and Future Work Inspired by LGLP, we propose a new link weight prediction model, LGLWP. This model applies the subgraph extraction and node labeling techniques that are currently widely used in link prediction and link weight prediction. To overcome the limitations of the unweighted-graph node labeling technique in LGLP, we introduce a new weighted-graph node labeling technique while retaining the line graph and graph convolutional neural network architectures. Our model achieves the best or near-best results on each of the tested datasets. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Model & Neural & C.elegans & Netscience & P.blog & UC-social & Condmat \\ \hline rWCN & 5.78\(\pm\)0.74 & 1.79\(\pm\)0.76 & 0.43\(\pm\)0.05 & 3.02\(\pm\)0.03 & 2.4527\(\pm\)0.0942 & 0.1992\(\pm\)0.0029 \\ rWAA & 6.3\(\pm\)0.75 & 2.36\(\pm\)0.84 & 0.42\(\pm\)0.05 & 0.89\(\pm\)0.02 & 0.6617\(\pm\)0.0292 & 0.1816\(\pm\)0.0031 \\ rWRA & 6.7\(\pm\)0.76 & 2.93\(\pm\)0.91 & 0.42\(\pm\)0.05 & 1.09\(\pm\)0.01 & 0.5852\(\pm\)0.0067 & 0.1932\(\pm\)0.0027 \\ LG-RF & 0.235\(\pm\)0.006 & 0.183\(\pm\)0.003 & 0.213\(\pm\)0.005 & 0.099\(\pm\)0.003 & 0.223\(\pm\)0.002 & – \\ LG-GBDT & 0.383\(\pm\)0.004 & 0.276\(\pm\)0.005 & 0.181\(\pm\)0.006 & 0.239\(\pm\)0.003 & 0.369\(\pm\)0.003 & – \\ LG-SVM & 0.236\(\pm\)0.006 & 0.152\(\pm\)0.004 & 0.212\(\pm\)0.004 & 0.171\(\pm\)0.004 & 0.225\(\pm\)0.003 & – \\ NEW & 0.2056\(\pm\)0.0064 & 0.1421\(\pm\)0.0081 & 0.0891\(\pm\)0.0115 & 0.0797\(\pm\)0.0024 & 0.2076\(\pm\)0.0017 & 0.1953\(\pm\)0.0016 \\ \hline Deepwalk & 0.2211\(\pm\)0.0043 & 0.1421\(\pm\)0.0045 & 0.1214\(\pm\)0.0151 & 0.0816\(\pm\)0.0023 & 0.2124\(\pm\)0.0026 & 0.1943\(\pm\)0.0008 \\ Node2vec & 0.2153\(\pm\)0.0054 & 0.1413\(\pm\)0.0052 & 0.1199\(\pm\)0.0126 & 0.0817\(\pm\)0.0021 & 0.2088\(\pm\)0.0022 & 0.2032\(\pm\)0.0011 \\ Grarep & 0.2254\(\pm\)0.0092 & 0.1424\(\pm\)0.0053 & 0.1484\(\pm\)0.0378 & 0.0798\(\pm\)0.0021 & 0.2098\(\pm\)0.0012 & 0.1945\(\pm\)0.0016 \\ SDNE & 0.2060\(\pm\)0.0058 & 0.1380\(\pm\)0.0167 & 0.1386\(\pm\)0.0263 & 0.0771\(\pm\)0.0029 & 0.2056\(\pm\)0.0029 & 0.1808\(\pm\)0.0014 \\ LINE & 0.2222\(\pm\)0.0079 & 0.1390\(\pm\)0.0052 & 0.1377\(\pm\)0.0112 & 0.0809\(\pm\)0.0021 & 0.2102\(\pm\)0.0016 & 0.1927\(\pm\)0.0016 \\ GAE\({}^{*}\) & 0.2161\(\pm\)0.0082 & 0.1508\(\pm\)0.0058 & 0.4452\(\pm\)0.0052 & 0.1466\(\pm\)0.0142 & 0.2360\(\pm\)0.0041 & 0.4112\(\pm\)0.0017 \\ VGAE\({}^{*}\) & 0.2332\(\pm\)0.0089 & 0.1496\(\pm\)0.0054 & 0.4458\(\pm\)0.0052 & 0.1340\(\pm\)0.0008 & 0.2318\(\pm\)0.0043 & 0.4127\(\pm\)0.0017 \\
GCN & 0.2216\(\pm\)0.0098 & 0.1583\(\pm\)0.0139 & 0.1232\(\pm\)0.0146 & 0.2720\(\pm\)0.0770 & 0.2540\(\pm\)0.0614 & 0.2117\(\pm\)0.0036 \\ SEA & 0.2015\(\pm\)0.0052 & **0.11134\(\pm\)0.0055** & 0.0823\(\pm\)0.0094 & **0.0754\(\pm\)0.002** & **0.19764\(\pm\)0.0028** & 0.1694\(\pm\)0.0018 \\ \hline LGLWP & **0.1915\(\pm\)0.0086** & 0.1299\(\pm\)0.0061 & **0.0624\(\pm\)0.0137** & 0.0759\(\pm\)0.0019 & 0.2007\(\pm\)0.0029 & **0.1556\(\pm\)0.0024** \\ \hline \hline \end{tabular} \end{table} Table 2. Root-mean-square errors and standard deviations on all datasets. Except for LGLWP and GCN, the experimental data were obtained from (Kang et al., 2019). The best results are shown in bold; the second-best results are underlined.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Method & Neural & C.elegans & Netscience & P.blog & UC-social & Condmat \\ \hline Random labeling & 0.2115\(\pm\)0.0066 & 0.1466\(\pm\)0.0068 & 0.0762\(\pm\)0.0154 & 0.0833\(\pm\)0.0013 & 0.2096\(\pm\)0.0025 & 0.1745\(\pm\)0.0034 \\ Weighted graph labeling & 0.1915\(\pm\)0.0086 & 0.1309\(\pm\)0.0072 & 0.0698\(\pm\)0.0134 & 0.0766\(\pm\)0.0012 & 0.2017\(\pm\)0.0022 & 0.1556\(\pm\)0.0024 \\ \hline \hline \end{tabular} \end{table} Table 3. Results for each version of the algorithm applied to all datasets. We report root-mean-square errors with standard deviations over 10 trials for each version.

In network analysis tasks where edges are the object of study, simply transforming node vectors into edge vectors may not accurately characterize the structural features of the edges. Directly mapping the edges into low-dimensional vectors characterizes their structural features better, and the line graph is a very good solution for this. In our future research, we will focus on the application of line graphs to network analysis tasks where edges are the object of study. A good subgraph extraction strategy and a good subgraph node labeling algorithm are also crucial for the final prediction, and both are worthy directions for future work.
Link weight prediction is of practical importance, since real-world networks are often weighted networks. Previous studies have mostly performed link weight prediction with shallow graph features, which limits their prediction performance. In this paper, we propose a novel link weight prediction algorithm, Line Graph Neural Networks for Link Weight Prediction (LGLWP), which uses deep learning to learn deeper graph features. The algorithm first extracts the enclosing subgraph around a target link, and then labels the subgraph nodes with a weighted-graph labeling algorithm. Next, the subgraph is transformed into a line graph, and a graph convolutional neural network is used to learn the node embeddings in the line graph. This yields node embeddings that represent the links in the line graph.
2309.12047
Self-Calibrating, Fully Differentiable NLOS Inverse Rendering
Existing time-resolved non-line-of-sight (NLOS) imaging methods reconstruct hidden scenes by inverting the optical paths of indirect illumination measured at visible relay surfaces. These methods are prone to reconstruction artifacts due to inversion ambiguities and capture noise, which are typically mitigated through the manual selection of filtering functions and parameters. We introduce a fully-differentiable end-to-end NLOS inverse rendering pipeline that self-calibrates the imaging parameters during the reconstruction of hidden scenes, using as input only the measured illumination while working both in the time and frequency domains. Our pipeline extracts a geometric representation of the hidden scene from NLOS volumetric intensities and estimates the time-resolved illumination at the relay wall produced by such geometric information using differentiable transient rendering. We then use gradient descent to optimize imaging parameters by minimizing the error between our simulated time-resolved illumination and the measured illumination. Our end-to-end differentiable pipeline couples diffraction-based volumetric NLOS reconstruction with path-space light transport and a simple ray marching technique to extract detailed, dense sets of surface points and normals of hidden scenes. We demonstrate the robustness of our method to consistently reconstruct geometry and albedo, even under significant noise levels.
Kiseok Choi, Inchul Kim, Dongyoung Choi, Julio Marco, Diego Gutierrez, Min H. Kim
2023-09-21T13:15:54
http://arxiv.org/abs/2309.12047v2
# Self-Calibrating, Fully Differentiable NLOS Inverse Rendering ###### Abstract. Existing time-resolved non-line-of-sight (NLOS) imaging methods reconstruct hidden scenes by inverting the optical paths of indirect illumination measured at visible relay surfaces. These methods are prone to reconstruction artifacts due to inversion ambiguities and capture noise, which are typically mitigated through the manual selection of filtering functions and parameters. We introduce a fully-differentiable end-to-end NLOS inverse rendering pipeline that self-calibrates the imaging parameters during the reconstruction of hidden scenes, using as input only the measured illumination while working both in the time and frequency domains. Our pipeline extracts a geometric representation of the hidden scene from NLOS volumetric intensities and estimates the time-resolved illumination at the relay wall produced by such geometric information using differentiable transient rendering. We then use gradient descent to optimize imaging parameters by minimizing the error between our simulated time-resolved illumination and the measured illumination. Our end-to-end differentiable pipeline couples diffraction-based volumetric NLOS reconstruction with path-space light transport and a simple ray marching technique to extract detailed, dense sets of surface points and normals of hidden scenes. We demonstrate the robustness of our method to consistently reconstruct geometry and albedo, even under significant noise levels. Non-line-of-sight imaging, image reconstruction, computational imaging
NLOS reconstructions are prone to inversion ambiguities and capture noise from different sources that introduce undesired artifacts in the reconstructions.
Performing a filtering step over the data or the reconstructed volume is the most common solution to mitigate errors and enhance the geometric features (Arellano et al., 2017; Buttafava et al., 2015; Liu et al., 2019; O'Toole et al., 2018; Velten et al., 2012); however, this requires manual design and selection of filter parameters, as their impact on the reconstruction quality is highly dependent on the scene complexity, environment conditions, and hardware limitations. Recent physically-based methods proposed an alternative technique that avoids the issues linked to backprojection: by merging a simplified but efficient three-bounce transient rendering formula with an optimization loop, the time-resolved illumination at the relay wall computed from an optimized geometry reconstruction is compared to the measured illumination. However, the geometric representations introduced by existing works limit the detail in the reconstructions (Iseringhausen and Hullin, 2020) or fail to reproduce the boundaries of hidden objects (Tsai et al., 2019). Alternatively, the recent development of accurate transient rendering methods (Jarabo et al., 2014; Pediredla et al., 2019; Royo et al., 2022) has fostered differentiable rendering pipelines in path space (Wu et al., 2021; Yi et al., 2021), which have the potential to become key tools in optimization schemes. However, differentiable methods are currently bounded by memory limitations, since the need to compute the derivatives of time-resolved radiometric data severely limits the number of unknown parameters that can be handled. The difficulty of handling visibility changes in a differentiable manner, as well as the large number of parameters that need to be taken into account, are two limiting factors shared with steady-state differentiable rendering (Li et al., 2018; Zhao et al., 2020) that are further aggravated in the transient regime. As a result, NLOS imaging methods that rely on differentiable rendering are limited to simple operations such as tracking the motion of a single hidden object with a known shape (Yi et al., 2021). To address these problems, we propose a novel self-calibrated, fully differentiable pipeline for NLOS inverse rendering that jointly optimizes system parameters and scene information to extract surface points, normals, and albedo of the hidden geometry. To this end, we combine diffractive phasor-field imaging in the frequency domain (Liu et al., 2020, 2019) with differentiable third-bounce transient rendering in the temporal domain. We leverage the volumetric output of phasor-field NLOS imaging to estimate geometric information of the hidden scene, which we then use in a transient rendering step to simulate time-resolved illumination at the relay wall. By minimizing the error between simulated and captured illumination, we provide a fully-differentiable pipeline for self-calibrating NLOS imaging parameters in an end-to-end manner. Our optimized parameters provide accurate volumetric outputs from which we estimate surface points, normals, and albedos of hidden objects, with more geometric detail than previous surface-based methods. Our method is robust in the presence of noise, providing consistent geometric estimations under varying capture conditions. Our code is freely available for research purposes1. Footnote 1: https://github.com/KAIST-VCLAB/nlos-inverse-rendering.git ## 2. Related Work
Active-light NLOS imaging methods provide 3D reconstructions of general NLOS scenes by leveraging temporal information of light propagation by means of time-gated illumination and sensors (Faccio et al., 2020; Jarabo et al., 2017). _Scene representation._ While existing methods rely on inverting third-bounce transport, they may differ in their particular representation of scene geometry as volumetric density or surfaces. Volumetric approaches estimate geometric density by backprojecting third-bounce light paths onto a voxelized space (Ahn et al., 2019; Arellano et al., 2017; Buttafava et al., 2015; Gariepy et al., 2015; Gupta et al., 2012; La Manna et al., 2018; Velten et al., 2012). Efficiently inverting the resulting discrete light transport matrix is not trivial; many dimensionality reduction methods have been proposed (Heide et al., 2019; Lindell et al., 2019; O'Toole et al., 2018; Xin et al., 2019; Young et al., 2020), but they are often limited in spatial resolution (as low as 64\(\times\)64 in some cases) due to memory constraints. Surface methods, in contrast, rely on inverting third-bounce light transport onto explicit representations of the geometry (Iseringhausen and Hullin, 2020; Plack et al., 2023; Tsai et al., 2019), usually starting with simple blob shapes and progressively optimizing the geometry until the loss converges. In contrast, we estimate _implicit_ geometric representations of the hidden scene based on surface points and normals by ray marching the volumetric output of NLOS imaging, inspired by recent work on neural rendering (Barron et al., 2021; Mildenhall et al., 2020; Niemeyer et al., 2022). The combination of NLOS imaging with differentiable transient rendering over the estimated geometry allows us to self-calibrate imaging parameters in an end-to-end manner. For clarity, in this paper the term _explicit_ surface refers to a polygonal surface mesh, while _implicit_ surface denotes a representation based on surface points and their normals, without defining a surface mesh. Please refer to Section 4.2 for a more detailed discussion of explicit/implicit surface representations. _Learning-based approaches._ Other methods leverage neural networks instead, such as U-net (Grau Chopite et al., 2020), convolutional neural networks (Chen et al., 2020), or neural radiance fields [20]. These learning-based methods are trained on object databases such as ShapeNet [14]. However, their parameters are trained with steady-state renderings of synthetic scenes composed of a single object behind an occluder in an otherwise empty space.

Figure 2. Overview of our self-calibrated, fully differentiable NLOS inverse rendering workflow (Sections 3 and 4). (a) We perform NLOS imaging using a phasor-field diffraction method, taking an initial matrix \(H\) of transient measurements as input, and outputting volumetric intensity \(I_{\text{pf}}\). (b) We estimate \(G\), an implicit geometric representation of the hidden scene, from \(I_{\text{pf}}\). (c) We obtain the time-resolved illumination \(H_{R}\) from \(G\) using differentiable path-space transient rendering. (d) We optimize imaging parameters until the error between \(H\) and \(H_{R}\) converges with regularization terms \(\Gamma\). Geometry \(G\) is computed during the forward pass, while \(\Theta_{\text{pf}}\), \(\Theta_{\text{ls}}\), and \(\Theta_{\text{G}}\) are updated during the backward pass.
As such, their performance often degrades on real scenes, overfitting to the training dataset and becoming susceptible to noise. Our method does not rely on a pre-trained deep network to extract high-level features from synthetic steady-state rendering data; instead, we explicitly optimize virtual illumination functions and scene information by evaluating actual transient observations, without relying on neural networks. Recent works by Shen et al. [2021] and Fujimura et al. [2023] leverage transient observations similar to ours to optimize multi-layer perceptrons for imaging. However, these methods cannot be used to calibrate the filtering parameters of volumetric NLOS methods, as they do not evaluate the physical observation of the transient measurements through an NLOS imaging and light transport model. _Wave-based NLOS imaging._ Recent works have shifted the paradigm of third-bounce reconstruction approaches to the domain of wave optics [19, 20]. In particular, the phasor field framework [20] computationally transforms the data captured on the relay surface into illumination arriving at a virtual imaging aperture. This has enabled more complex imaging models (e.g., [13, 14, 15]) and boosted the efficiency of NLOS imaging to interactive and real-time reconstruction rates [16, 17, 18, 19]. However, these systems require careful calibration of all their parameters, including the definition of the phasor field and the particular characteristics of lasers and sensors, which makes using them a cumbersome process. Our fully self-calibrated system overcomes this limitation. ## 3. Time-gated NLOS Imaging Model We propose a differentiable end-to-end inverse rendering pipeline (shown in Figure 2) to improve the reconstruction quality of hidden scenes by optimizing the parameters of NLOS imaging algorithms without prior knowledge of the hidden scene. In the following, we describe our NLOS imaging model; Section 4 describes our optimization pipeline based on this model. ### Phasor-based NLOS imaging In a standard NLOS imaging setup (see Figure 3), a laser beam is emitted towards a point \(\mathbf{x}_{l}\) on a visible relay wall, which reflects light towards the hidden scene; the light is then reflected back to the wall. The hidden scene is imaged based on the time-resolved illumination captured at points \(\mathbf{x}_{s}\) on the relay wall, in the form of a measurement matrix \(\mathbf{H}(\mathbf{x}_{l},\mathbf{x}_{s},t)\). The recent diffractive phasor-based framework by Liu et al. [2020, 2019] intuitively turns the grid of measured points \(\mathbf{x}_{s}\) on the relay wall into a virtual aperture; this allows formulating the reconstruction of NLOS scenes as a virtual _line-of-sight_ (LOS) problem. We define \(\mathbf{H}\left(\mathbf{x}_{l},\mathbf{x}_{s},\Omega\right)\) as a set of phasors at the relay wall, obtained by Fourier transform of the measurement matrix \(\mathbf{H}(\mathbf{x}_{l},\mathbf{x}_{s},t)\). In practice, since this function \(\mathbf{H}\) is noisy, we apply a filtering operation as \[\mathbf{H}_{\text{pf}}\left(\mathbf{x}_{l},\mathbf{x}_{s},\Omega\right)=\mathcal{P}\left(\mathbf{x}_{l},\mathbf{x}_{s},\Omega\right)\mathbf{H}\left(\mathbf{x}_{l},\mathbf{x}_{s},\Omega\right), \tag{1}\] where \(\mathcal{P}(\mathbf{x}_{l},\mathbf{x}_{s},\Omega)\) represents a virtual illumination function that acts as a filter over \(\mathbf{H}\), typically defined as a spatially-invariant illumination function [20, 21].
The hidden scene can then be imaged as an intensity function \(I_{\text{pf}}(\mathbf{x}_{b},t)\) on a voxelized space via Rayleigh-Sommerfeld Diffraction (RSD) operators as \[I_{\text{pf}}\left(\mathbf{x}_{b},t\right)=\left|\,\int\limits_{-\infty}^{\infty}e^{i\frac{\Omega}{c}t}\iint\limits_{S,L}\frac{e^{-i\frac{\Omega}{c}(d_{bl}+d_{bs})}}{d_{bl}\,d_{bs}}\mathbf{H}_{\text{pf}}\left(\mathbf{x}_{l},\mathbf{x}_{s},\Omega\right)\,\mathrm{d}\mathbf{x}_{l}\,\mathrm{d}\mathbf{x}_{s}\,\frac{\mathrm{d}\Omega}{2\pi}\,\right|^{2}, \tag{2}\] where \(L\) and \(S\) define the illuminated and measured regions on the relay wall, respectively; \(d_{bl}=\left\|\mathbf{x}_{b}-\mathbf{x}_{l}\right\|\) and \(d_{bs}=\left\|\mathbf{x}_{b}-\mathbf{x}_{s}\right\|\) are the voxel-laser and voxel-sensor distances (see Figure 3); and \(\Omega\) represents frequency. Classic NLOS reconstruction methods reconstruct hidden geometry by evaluating \(\mathbf{H}(\mathbf{x}_{l},\mathbf{x}_{s},t)\) at the time of flight of third-bounce illumination paths between scene locations and points on the relay surface [1, 14, 15]. This is analogous to evaluating \(I_{\text{pf}}(\mathbf{x}_{b},t)\) at \(t=0\), where the RSD propagators have traversed an optical distance \(d_{bl}+d_{bs}\). We incorporate a similar third-bounce strategy in our path integral formulation, as described in the following. Due to the challenge of estimating surface albedo under diffraction effects during the NLOS imaging process [14, 15], we assume an albedo term per surface point that approximates the averaged reflectance observed from all sensor points. ### Path-space light transport in NLOS scenes To formally describe transient light transport in an efficient manner, we rely on the transient path integral formulation [15, 16]. Transient light transport \(\mathbf{H}(\mathbf{x}_{l},\mathbf{x}_{s},t)\in\mathbb{R}\) can then be expressed as \[\mathbf{H}(\mathbf{x}_{l},\mathbf{x}_{s},t)=\int_{\mathcal{T}}\int_{\psi}\mathcal{K}(\bar{\mathbf{x}},\mathbf{t})\,\mathrm{d}\mu(\bar{\mathbf{x}})\,\mathrm{d}\mu(\mathbf{t}), \tag{3}\] where \(\mathcal{K}\) is the radiometric contribution in transient path space; \(\bar{\mathbf{x}}=\mathbf{x}_{l}\dots\mathbf{x}_{s}\) is a transient light path with \(k+1\) vertices; \(\mathrm{d}\mu(\bar{\mathbf{x}})\) is the differential measure of path \(\bar{\mathbf{x}}\); \(\mathcal{T}\) represents the domain of temporal measurements; \(\mathbf{t}=t_{l}\dots t_{s}\) is the sequence of time coordinates at the path vertices, with \(\mathrm{d}\mu(\mathbf{t})\) denoting temporal integration at each vertex; and \(\psi=\cup_{k=1}^{\infty}\psi_{k}\) is the entire space of paths with any number of vertices, with \(\psi_{k}\) being the space of all paths with \(k\) vertices. For convenience and without losing generality, we ignore the fixed vertices at the laser and sensor device in our formulae.

Figure 3. NLOS imaging setup. A laser emits a pulse of light, which travels to the relay wall, then to the hidden geometry, back to the relay wall, and reaches the sensor after a travel time of \(t=t_{1}+t_{2}+t_{3}+t_{4}\). The inset shows the sensor response; the peak at \(t\) indicates the presence of a hidden object.
In practice, \(\mathbf{H}\) is obtained by the spatio-temporal integration of transient measurements during a time interval \(\tau\), which accounts for the contribution of all paths \(\bar{\mathbf{x}}\) with time of flight \[t=\mathrm{tof}(\bar{\mathbf{x}})=\sum\nolimits_{i=1}^{k}\frac{||\mathbf{x}_{i}-\mathbf{x}_{i-1}||}{c}, \tag{4}\] where \(c\) is the speed of light, \(\mathbf{x}_{0}\equiv\mathbf{x}_{l}\), and \(\mathbf{x}_{k}\equiv\mathbf{x}_{s}\). We assume no scattering delays at the vertices. Incorporating the third-bounce strategy of NLOS reconstruction methods in our path integral formulation, we can express \(\mathcal{K}\) in a closed form as \[\mathcal{K}(\bar{\mathbf{x}},\mathbf{t})=\Lambda(\mathbf{x}_{l}\to\mathbf{x}_{g},t_{l})\,\rho(\mathbf{x}_{g})\,\mathfrak{T}(\bar{\mathbf{x}},\mathbf{t})\,\Phi(\mathbf{x}_{g}\to\mathbf{x}_{s},\mathrm{tof}(\bar{\mathbf{x}})), \tag{5}\] where \(\Lambda\) is the emitted light from the laser, \(\Phi\) is the time-dependent sensor sensitivity function, \(\rho\) represents surface reflectance, and \(\mathfrak{T}(\bar{\mathbf{x}},\mathbf{t})\) is the path throughput defined by \[\mathfrak{T}(\bar{\mathbf{x}},\mathbf{t})=V(\mathbf{x}_{l},\mathbf{x}_{g})\frac{|\cos\theta_{1}||\cos\theta_{2}|}{d_{lg}^{2}}\,V(\mathbf{x}_{g},\mathbf{x}_{s})\frac{|\cos\theta_{3}||\cos\theta_{4}|}{d_{gs}^{2}}, \tag{6}\] where \(V\) is the binary visibility function between two vertices, \(d_{lg}=||\mathbf{x}_{l}-\mathbf{x}_{g}||\) and \(d_{gs}=||\mathbf{x}_{g}-\mathbf{x}_{s}||\), and \(\theta_{1-4}\) refer to the angles between the normals of both the relay wall and the surface geometry and the path segments in \(\bar{\mathbf{x}}\) (see Figure 3). Note that the three-bounce illumination is expressed in path space as \(\bar{\mathbf{x}}\equiv\mathbf{x}_{l}\to\mathbf{x}_{g}\to\mathbf{x}_{s}\). Neither the emitted light \(\Lambda\) nor the sensor sensitivity \(\Phi\) are ideal Dirac delta functions. Yi et al. (2021) and Hernandez et al. (2017) provide the following models for the laser and sensor behavior: \[\Lambda(t)=\frac{I_{l}}{\sigma_{l}\sqrt{2\pi}}e^{-t^{2}/(2\sigma_{l}^{2})}, \tag{7}\] \[\Phi(t)=\kappa_{s}e^{-\kappa_{s}t}\ast\frac{1}{\sigma_{s}\sqrt{2\pi}}e^{-(t-\mu_{s})^{2}/(2\sigma_{s}^{2})}, \tag{8}\] where \(\sigma_{l}\) is the standard deviation of the Gaussian laser pulse, \(I_{l}\) is the laser intensity, \(\kappa_{s}\) is the sensor sensitivity decay rate, \(\sigma_{s}\) is the standard deviation of the sensor jitter, and \(\mu_{s}\) is the offset of the sensor jitter. Since we are only interested in reproducing the combined effect of the laser and sensor models \(\Lambda\) and \(\Phi\) on the path throughput (Equation 6), we replace them by a single joint laser-sensor correction function as \[\Psi(t)=\Phi(t)\ast\Lambda(t)=\kappa_{s}e^{-\kappa_{s}t}\ast\frac{I_{l}}{\sigma_{ls}\sqrt{2\pi}}e^{-t^{2}/(2\sigma_{ls}^{2})}. \tag{9}\] Note that the convolution of the two Gaussian functions of Equations 7 and 8 yields a single Gaussian with a joint model parameter \(\sigma_{ls}=\sqrt{\sigma_{l}^{2}+\sigma_{s}^{2}}\). We set the sensor jitter offset to \(\mu_{s}=0\), with the assumption that a uniform distribution of shifts is equally present in all transient measurements. Please refer to the supplemental material for more details on the derivation. Our inverse rendering optimization automatically seeks the optimal parameters of this model based on physically-based transient rendering.
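To make the joint laser-sensor model of Equation (9) concrete, below is a minimal numerical sketch that builds \(\Psi\) as a discrete convolution of the exponential sensor decay with the joint Gaussian; the bin width and the parameter values are illustrative assumptions, not calibrated values.

```python
import numpy as np

def laser_sensor_kernel(kappa_s, sigma_l, sigma_s, I_l=1.0, dt=1e-12, n=2048):
    """Discrete Equation (9): Psi = (kappa_s * exp(-kappa_s t)) conv Gaussian,
    with joint width sigma_ls = sqrt(sigma_l^2 + sigma_s^2) and mu_s = 0.
    dt is the transient bin width in seconds (illustrative)."""
    t = np.arange(n) * dt
    decay = kappa_s * np.exp(-kappa_s * t)                   # sensor decay, t >= 0
    sigma_ls = np.sqrt(sigma_l**2 + sigma_s**2)              # joint Gaussian width
    tg = (np.arange(n) - n // 2) * dt                        # centered time axis
    gauss = I_l / (sigma_ls * np.sqrt(2 * np.pi)) * np.exp(-tg**2 / (2 * sigma_ls**2))
    psi = np.convolve(decay, gauss, mode="same") * dt        # discrete convolution
    return t, psi

# Illustrative parameters (assumptions): ~70 ps joint jitter, 10 GHz decay rate.
t, psi = laser_sensor_kernel(kappa_s=1e10, sigma_l=30e-12, sigma_s=64e-12)
```

In the optimization, these same quantities appear as the parameters \(\Theta_{\text{ls}}\) updated by gradient descent in the backward pass.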
## 4. Differentiable Time-Gated NLOS Inverse Rendering In the following, we describe in detail our self-calibrated, end-to-end differentiable inverse rendering pipeline, where the forward pass provides highly detailed reconstructions of the geometry \(G\), while the backward pass optimizes per-voxel surface reflectance as albedo \(\Theta_{G}\), as well as the system parameters \(\Theta_{\mathrm{pf}}\) and \(\Theta_{\mathrm{ls}}\), to improve the forward-pass reconstruction. For clarity, from here on we redefine our functions in terms of their parameters to be optimized; refer to the supplemental material for a summary of the different symbols. ### Virtual illumination for RSD propagation The inputs to our system are the known locations of the illumination \(\mathbf{x}_{l}\) and the sensor \(\mathbf{x}_{s}\), a matrix \(\mathbf{H}\) of transient measurements, and an _arbitrary_ virtual illumination function \(\mathcal{P}(\Theta_{\mathrm{pf}})\equiv\mathcal{P}(\mathbf{x}_{l},\mathbf{x}_{s},\Omega)\) (Equation 1), where \(\Theta_{\mathrm{pf}}\) represents the optimized parameter space of \(\mathcal{P}\). Based on previous works (Liu et al., 2020, 2019; Marco et al., 2021), we define \(\Theta_{\mathrm{pf}}=\{\sigma_{\mathrm{pf}},\Omega_{\mathrm{pf}}\}\) to model a central frequency with a zero-mean Gaussian envelope as \(\mathcal{P}(\Theta_{\mathrm{pf}})=e^{i\Omega_{\mathrm{pf}}t}e^{-t^{2}/(2\sigma_{\mathrm{pf}}^{2})}\), where \(\sigma_{\mathrm{pf}}\) and \(\Omega_{\mathrm{pf}}\) represent the standard deviation and the central frequency, respectively. Note that this equation is fully differentiable. In the forward pass we first compute the filtered matrix \(\mathbf{H}_{\mathrm{pf}}\) (Equation 1) using the optimized virtual illumination \(\mathcal{P}(\Theta_{\mathrm{pf}})\), having \(\mathbf{H}_{\mathrm{pf}}=P(H;\Theta_{\mathrm{pf}})\) (Figure 2(a)). We then compute a first estimation of the volumetric intensity \(I_{\mathrm{pf}}\) of the hidden scene by evaluating the RSD propagation (Equation 2) at \(t=0\), as \(I_{\mathrm{pf}}=\mathrm{RSD}(\mathbf{H}_{\mathrm{pf}})\). Next, we show how to estimate both the geometry \(G\) and the time-resolved transport \(\mathbf{H}_{R}\) at the relay wall. ### Implicit surface geometry Our next goal is to estimate an implicit surface representation \(G\) (points \(\mathbf{x}_{g}\) and normals \(\mathbf{n}_{g}\)) by means of a differentiable function \(D\) as \(G=D(I_{\mathrm{pf}})\) (Figure 2(b)) that takes our volumetric intensity function \(I_{\mathrm{pf}}\) as input. We keep an implicit representation of our hidden surface geometry \(G\), without creating a meshed (explicit) surface at any point during the optimization. The key idea is to use the volumetric data computed at each forward pass to estimate _projections_ of the geometry (i.e., points and normals) visible from the perspective of each sensor point \(\mathbf{x}_{s}\) on the relay wall, and to use those to perform path-space differentiable transient rendering at \(\mathbf{x}_{s}\). We first estimate the geometry observed by \(\mathbf{x}_{s}\) by sampling rays towards our volumetric intensity \(I_{\mathrm{pf}}\), and build an implicit representation of the closest surface along each ray. Using information from neighboring rays, we then estimate the normals required to compute the path-space throughput \(\mathfrak{T}\) (Equation 6). Using the implicit geometry computed for every sensing point \(\mathbf{x}_{s}\), we then compute time-resolved illumination at \(\mathbf{x}_{s}\), as we describe later in this subsection.
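Before detailing the geometry estimation, the virtual illumination function of Section 4.1 can be illustrated with a minimal PyTorch sketch that exposes \(\Omega_{\mathrm{pf}}\) and \(\sigma_{\mathrm{pf}}\) as learnable parameters, so that gradients from the loss of Section 4.4 can reach them; the time axis and initial values are assumptions for illustration.

```python
import torch

class PhasorKernel(torch.nn.Module):
    """P(Theta_pf) = exp(i * Omega_pf * t) * exp(-t^2 / (2 sigma_pf^2)),
    with Omega_pf and sigma_pf as learnable imaging parameters."""
    def __init__(self, omega_init, sigma_init):
        super().__init__()
        self.omega = torch.nn.Parameter(torch.tensor(omega_init))
        self.sigma = torch.nn.Parameter(torch.tensor(sigma_init))

    def forward(self, t):
        envelope = torch.exp(-t**2 / (2 * self.sigma**2))  # zero-mean Gaussian
        carrier = torch.exp(1j * self.omega * t)           # central frequency
        return carrier * envelope

# Filtering as in Equation (1) then amounts to multiplying the kernel spectrum
# with H(x_l, x_s, Omega) along the frequency axis (illustrative usage):
t = torch.linspace(-2e-9, 2e-9, 1024)                 # time axis in seconds (assumed)
kernel = PhasorKernel(omega_init=2 * torch.pi * 3e8 / 0.06, sigma_init=3e-10)
P_omega = torch.fft.fft(kernel(t))                    # kernel spectrum over Omega
```

Because both factors are smooth functions of \(\Omega_{\mathrm{pf}}\) and \(\sigma_{\mathrm{pf}}\), the filter remains fully differentiable, as noted above.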
_Points._ As Figure 4(a) shows, for each sensor point \(\mathbf{x}_{s}\) we sample rays uniformly using concentric hemispherical mapping (Shirley and Chiu 1997). We then sample points along each ray with ray marching, and estimate the intensity at each sampled point (blue in Figure 4(a)) by trilinear interpolation of the neighboring voxel intensities of \(I_{\text{pf}}\) (red). From the interpolated volumetric intensities \(I_{\text{pf}}(d_{i})\) (Figure 4(b), left), we estimate the distance \(d_{gs}\) between \(\mathbf{x}_{s}\) and the hidden surface vertex \(\mathbf{x}_{g}\) (Figure 4(b), right), assuming \(\mathbf{x}_{g}\) is located at the maximum intensity along the ray. To find \(d_{gs}\) in free space from the ray-marched intensities in a differentiable manner, we use a softmax-weighted average: \(d_{gs}=\frac{\sum_{i}\omega_{i}d_{i}}{\sum_{i}\omega_{i}}\), where \(d_{i}\) is a ray-marched distance from \(\mathbf{x}_{s}\), \(\omega_{i}=e^{\beta I_{\text{pf},i}}\) is the weight assigned to \(d_{i}\), and \(I_{\text{pf},i}\) is the volume intensity at distance \(d_{i}\) along the ray. \(\beta\) is a hyperparameter that determines the sensitivity in blending neighboring probabilities, set to 1e+3 in all our experiments. If \(I_{\text{pf}}\) falls below a threshold, we assume that no surface has been found; we set this threshold to 0.05 for synthetic scenes and 0.2 for real scenes throughout the paper. Our procedure implicitly estimates surface points \(\mathbf{x}_{g}\) at distances \(d=\left\|\mathbf{x}_{s}-\mathbf{x}_{g}\right\|\) by observing, via ray marching, the grid of phasor-field intensities \(I_{\text{pf}}\) from the perspective of the sensing points \(\mathbf{x}_{s}\). _Normals._ As shown in Figure 4(c), we estimate the normal \(\mathbf{n}_{g}\) at vertex \(\mathbf{x}_{g}\) based on the distances \(d_{N},d_{S},d_{E},d_{W}\) at neighboring ray samples in the concentric hemispherical mapping. We compute the normals of the two triangles \(\triangle d_{N}d_{E}d_{S}\) and \(\triangle d_{S}d_{W}d_{N}\) via the cross product of two edges, and compute \(\mathbf{n}_{g}\) as the normalized sum of the normals of those two triangles. _Surface albedo._ Besides points and normals (updated implicitly during each forward pass), computing the path contribution \(\mathcal{K}\) (Equation 5) at sensor points \(\mathbf{x}_{s}\) requires a per-point monochromatic albedo \(\rho\). We estimate albedos by evaluating the physical observation of the transient measurements in the backward pass. ### Differentiable transient rendering The next step during the forward pass is to obtain the time-resolved illumination \(\mathbf{H}_{R}\) at \(\mathbf{x}_{s}\) through transient rendering. In our pipeline (Figure 2(c)), we represent this step as \(\mathbf{H}_{R}=R(G;\Theta_{G},\Theta_{\text{ls}})\), where \(R()\) computes third-bounce time-resolved light transport at the sensing points \(\mathbf{x}_{s}\). We use the rays sampled from \(\mathbf{x}_{s}\) (Figure 4(b)) to compute the radiometric contribution \(\mathcal{K}(\bar{\mathbf{x}},\mathbf{t})\) of the implicit surface points \(\mathbf{x}_{g}\) estimated by those rays, following Equations 5 through 9. _Visibility._ Differentiating the binary visibility function \(V\), necessary to compute the path throughput \(\mathfrak{T}\) (Equation 6), is challenging.
However, note that we estimate an implicit surface at \(\mathbf{x}_{g}\) based on volumetric intensities, which strongly depend on the illumination from the laser reaching the surface and going back to the sensor without finding any occluder. Based on this, we avoid computing the visibility term by assuming that the volumetric intensities are a good estimator of the geometry visible from the perspective of both the laser and sensor positions on the relay wall. _Transient rendering._ The radiometric contribution \(\mathcal{K}(\bar{\mathbf{x}},\mathbf{t})\) (Equation 5) yields time-resolved transport in path space for a single path \(\bar{\mathbf{x}}\equiv\mathbf{x}_{l}\rightarrow\mathbf{x}_{g}\rightarrow\mathbf{x}_{s}\). Our goal is to obtain a set of discrete transient measurements \(\mathbf{H}_{R}\) from all paths arriving at each sensing point \(\mathbf{x}_{s}\), such that \(\mathbf{H}_{R}\) is comparable to the captured matrix \(\mathbf{H}\). To this end, we first discretize \(\left|\mathcal{K}(\bar{\mathbf{x}},\mathbf{t})\right|\) into neighboring bins \(\tau\) using a differentiable Gaussian distribution function as \(\hat{\mathcal{K}}(\bar{\mathbf{x}},\tau)=\left|\mathcal{K}(\bar{\mathbf{x}},\mathbf{t})\right|\exp\left(-\frac{(\tau-t)^{2}}{2\sigma_{t}^{2}}\right)\), where \(\tau\) is a transient bin index, \(t\) is the continuous time of \(\bar{\mathbf{x}}\) (Equation 4), and \(\sigma_{t}\) is set to 0.62 to make the FWHM of the Gaussian distribution cover a unit time bin. The time-resolved measurement \(\mathbf{H}_{r}(\mathbf{x}_{l},\mathbf{x}_{s},\tau)\) at temporal index \(\tau\) is then approximated as the sum of the discrete path contributions \(\hat{\mathcal{K}}(\bar{\mathbf{x}},\tau)\) sampled through the concentric disk mapping as \[\mathbf{H}_{r}(\mathbf{x}_{l},\mathbf{x}_{s},\tau)\approx\sum_{\bar{\mathbf{x}}\in\mathcal{X}}\hat{\mathcal{K}}(\bar{\mathbf{x}},\tau), \tag{10}\] where \(\mathcal{X}\) is the set of paths \(\bar{\mathbf{x}}\) that start at \(\mathbf{x}_{l}\) and end at \(\mathbf{x}_{s}\). After generating the rendered transient data \(\mathbf{H}_{r}\), we apply our joint laser-sensor model to it to obtain the sensed transient data \(\mathbf{H}_{R}\): \[\mathbf{H}_{R}(\mathbf{x}_{l},\mathbf{x}_{s},\tau)=\Psi(\tau)*\mathbf{H}_{r}(\mathbf{x}_{l},\mathbf{x}_{s},\tau)+\eta_{s}, \tag{11}\] where \(\eta_{s}\) is the intensity offset parameter that takes the ambient light and the dark count rate of the sensor into account. ### Optimization of system parameters Our final goal is to estimate the system parameters \(\Theta=\{\Theta_{\text{pf}},\Theta_{\text{ls}},\Theta_{G}\}\) that minimize the loss between the measured matrix \(\mathbf{H}\) and the rendered matrix \(\mathbf{H}_{R}\) (Figure 2, red). We define this as \[\min_{\Theta}\mathcal{L}(\mathbf{H},\mathbf{H}_{R}), \tag{12}\] which we minimize by gradient descent. The transient cost function \(\mathcal{L}\) consists of a data term and regularization terms as \[\mathcal{L}(\mathbf{H},\mathbf{H}_{R})=E_{\mathbf{H}}+E_{\mathbf{I}_{\text{pf}}}+E_{\rho}. \tag{13}\]

Figure 4. Geometry estimation procedure. (a) We ray-march from sensor points \(\mathbf{x}_{s}\), and estimate the intensity at each point along the ray by trilinear interpolation of \(I_{\text{pf}}\). (b) From the discrete ray-marching samplings, we obtain a continuous depth function. (c) Normals are computed based on the distances at neighboring ray samples in the concentric hemispherical mapping.
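A minimal sketch of the differentiable building blocks of Sections 4.2 and 4.3 (Figure 4) follows: the softmax-weighted depth along a ray, a normal from neighboring ray intersections, and the Gaussian binning of a path contribution from Equation (10). Ray generation, the trilinear interpolation of \(I_{\text{pf}}\), and the surrounding batching are assumed given.

```python
import torch

def softmax_depth(d, I, beta=1e3, threshold=0.2):
    """Differentiable depth along one ray (Section 4.2): d holds ray-marched
    distances, I the interpolated volume intensities; the threshold follows
    the 0.05 (synthetic) / 0.2 (real) values given in the text."""
    w = torch.softmax(beta * I, dim=0)   # equivalent to w_i = e^{beta I_i} / sum_j e^{beta I_j}
    depth = (w * d).sum()
    found = I.max() > threshold          # below threshold: no surface on this ray
    return depth, found

def normal_from_neighbors(xN, xE, xS, xW):
    """Normal at x_g as the normalized sum of the normals of the triangles
    (d_N, d_E, d_S) and (d_S, d_W, d_N), as described in Section 4.2."""
    n1 = torch.linalg.cross(xE - xN, xS - xN)
    n2 = torch.linalg.cross(xW - xS, xN - xS)
    n = n1 + n2
    return n / n.norm()

def bin_contribution(K_abs, t, num_bins, sigma_t=0.62):
    """Equation (10)'s discretization: spread |K| over neighboring transient
    bins with a Gaussian window of std sigma_t (FWHM of about one bin)."""
    tau = torch.arange(num_bins, dtype=torch.float32)
    return K_abs * torch.exp(-(tau - t) ** 2 / (2 * sigma_t ** 2))
```

All three operations are smooth in their inputs, which is what lets the loss of Equation (13) backpropagate through the rendered transients to the imaging parameters.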
The data term \(E_{H}\) in Equation (13) computes an \(l_{2}\) norm between the transient measurements \(H\) and \(H_{R}\): \[E_{H}=\frac{1}{N_{H}}\sum_{i}\left\|H_{i}-H_{R,i}\right\|_{2}^{2}, \tag{14}\] where \(N_{H}\) is the total number of elements of \(H\). The key insight of this loss term is that \(H_{R}\) is the byproduct of time-resolved illumination computed from our implicit geometry \(G\), which was itself generated from volumetric intensities \(I_{\text{pf}}\) by means of RSD propagation of the ground truth \(H\). The difference between \(H\) and \(H_{R}\) is therefore a critical measure of the accuracy of the geometry \(G\) and of \(I_{\text{pf}}\). By backpropagating the loss term through our pipeline, we optimize all system parameters, which improves the estimation of \(I_{\text{pf}}\), \(G\), and therefore \(H_{R}\). The term \(E_{I_{\text{pf}}}\) in Equation 13 is a volumetric intensity regularization term that imposes sparsity, pursuing a clean image: \[E_{I_{\text{pf}}}=\lambda_{1}\frac{1}{N_{\text{pf},x}}\sum_{j}\left|I_{\text{pf},x,j}\right|, \tag{15}\] where \(I_{\text{pf},x}\) holds the maximum intensity values of \(I_{\text{pf}}\) projected onto the \(xz\) plane, \(N_{\text{pf},x}\) is the number of pixels of \(I_{\text{pf},x}\), and \(\lambda_{1}\) is a loss-scale balance hyperparameter, set to 1e+2 in all our experiments. The term \(E_{\rho}\) in Equation 13 is a regularization term that imposes smoothness, suppressing surface reflectance noise: \[E_{\rho}=\lambda_{2}\frac{1}{N_{\text{o}}}\sum_{m}\left|\nabla_{xz}\rho(\mathbf{x}_{\text{o},m})\right|, \tag{16}\] where \(N_{\text{o}}\) is the number of voxels \(\mathbf{x}_{\text{o}}\), and \(\lambda_{2}\) is a loss-scale balance hyperparameter, set to 5e-3 in all our experiments. All terms \(E_{H}\), \(E_{I_{\text{pf}}}\), and \(E_{\rho}\) of the loss function are computed over batches of the transients and voxels at every iteration. ## 5. Results We implement our pipeline using PyTorch. Our code runs on an AMD 7763 CPU at 2.45 GHz equipped with a single NVIDIA A100 GPU. 3D geometry is obtained from points and normals using Poisson surface reconstruction (Kazhdan and Hoppe, 2013); note that we do not perform any thresholding or masking of the data prior to this step. We evaluate our method on four real confocal datasets, Bike, Resolution, SU, and 34, provided by O'Toole et al. (2018), Ahn et al. (2019), and Lindell et al. (2019); on two real non-confocal datasets, 44i and NLOS, provided by Liu et al. (2019); and on four synthetic confocal datasets, Erato, Bunny, Indonesian, and Dragon, generated with the transient renderer by Chen et al. (2020). The real datasets include all illumination bounces and different levels of noise depending on their exposure time. The synthetic datasets include up to third-bounce illumination. In specific cases, we manually add Poisson noise to the synthetic datasets to evaluate our robustness to signal degradation. ### Convergence of system parameters In Figure 5, we show the convergence of our system parameters in a full optimization of the Bike real scene, showing as well the final reconstruction of both volumetric intensity and geometry. The phasor-field kernel parameters \(\Omega_{\text{pf}}\) and \(\sigma_{\text{pf}}\) (first column) are responsible for improving the reconstruction quality by constructing a phasor kernel (fourth column, top) that yields highly detailed geometry.
## 5. Results

We implement our pipeline using PyTorch. Our code runs on an AMD 7763 CPU running at 2.45 GHz equipped with a single NVIDIA A100 GPU. 3D geometry is obtained from points and normals using Poisson surface reconstruction (Kazhdan and Hoppe, 2013). Please note that we do not perform any thresholding or masking of the data prior to this step. We evaluate our method on four real confocal datasets Bike, Resolution, SU, and 34, provided by O'Toole et al. (2018), Ahn et al. (2019), and Lindell et al. (2019); on two real non-confocal datasets 44i and NLOS, provided by Liu et al. (2019); and on four synthetic confocal datasets Erato, Bunny, Indonesian, and Dragon, generated with the transient renderer by Chen et al. (2020). The real datasets include all illumination bounces and different levels of noise depending on their exposure time. The synthetic datasets include up to third-bounce illumination. In some cases, we manually add Poisson noise to the synthetic datasets to evaluate our robustness to signal degradation.

### Convergence of system parameters

In Figure 5, we show the convergence of our system parameters in a full optimization of the Bike real scene, showing as well the final reconstruction of both volumetric intensity and geometry. Phasor-field kernel parameters \(\Omega_{\text{pf}}\) and \(\sigma_{\text{pf}}\) (first column) are responsible for improving the reconstruction quality by constructing a phasor kernel (fourth column, top) that yields highly detailed geometry. The laser and sensor parameters (second and third columns) improve the reconstruction of the transient measurements so that the transient simulation (fourth column, bottom, orange) resembles the input data (blue) as closely as possible. Refer to the supplemental material for more results of the progressive optimization.

We evaluate the impact of each component in our optimization pipeline: phasor kernel, albedo, and laser-sensor model, using a \(256\times 256\times 201\) voxel volume. As Table 1 shows, adding albedo and laser-sensor parameters improves the result over just using the phasor parameters, while including all three components yields the best results. The impact of optimizing albedo is the most significant in this experiment.

\begin{table}
\begin{tabular}{c c c c}
\hline \hline
\multicolumn{3}{c}{Component} & MSE \\
Phasor kernel & Albedo & Laser-sensor model & transient \\
\hline
✔ & \(-\) & \(-\) & 6.817e-3 \\
✔ & \(-\) & ✔ & 6.627e-3 \\
\(-\) & ✔ & \(-\) & 2.239e-3 \\
\(-\) & ✔ & ✔ & 2.217e-3 \\
✔ & ✔ & \(-\) & 2.214e-3 \\
\hline
✔ & ✔ & ✔ & 1.971e-3 \\
\hline \hline
\end{tabular}
\end{table} Table 1. Ablation study of the impact of each component. MSE transient loss comparison with different configurations on the Bunny scene with two different albedos (Figure 8).

Figure 5. Convergence of the imaging parameters optimized by our method in the Bike real scene. From left to right: Phasor kernel parameters (\(\Omega_{\text{pf}}\), \(\sigma_{\text{pf}}\)), laser-sensor joint model parameters (\(\sigma_{\text{ls}}\), \(l_{\text{ls}}\), \(\kappa_{\text{s}}\), \(\eta_{\text{s}}\)), the converged phasor kernel (purple and green for real and imaginary parts), measured transients compared to our reconstructed one, and our reconstruction results after the optimization. The yellow line indicates when the optimization converges. The converged phasor kernel yields a high-quality reconstruction, while the laser and sensor parameters provide an accurate estimation of transient illumination.

### Robustness to noise

To illustrate the robustness of our method to signal degradation, in Figure 6 we show reconstructions of the Bunny synthetic dataset under increasing levels of Poisson noise (from left to right) applied to the input transient data. The first row shows the final volumetric reconstruction after the optimization, while the second row shows the resulting surface estimation. The third row shows a comparison between the input transient illumination (blue) and our converged transient illumination at the same location that results from our estimated geometry (orange). The parameters optimized by our pipeline produce a volumetric reconstruction robust enough for our surface estimation method to obtain a reliable 3D geometry under a broad spectrum of noise levels. Note that while the volumetric outputs may show noticeable noise levels (first row), our pipeline optimizes the imaging parameters so that such volumetric outputs provide a good baseline for our geometry estimation method, which yields surface reconstructions that consistently preserve geometric details across varying noise levels (second row).

In Figure 7, we compare our method with existing volumetric approaches on two real confocal scenes, Resolution and Bike, captured under different exposure times. For each scene, the first to fourth columns illustrate the compared methods: O'Toole et al. (2018), Lindell et al. (2019), Liu et al. (2020), and ours, respectively.
The first to fourth rows show the resulting volumetric intensity images under increasing exposure times of 10, 30, 60, and 180 minutes, respectively. Our method converges to imaging parameters that produce the sharpest results while significantly removing noise even under the lowest exposure time (top row). Other methods degrade notably at lower exposure times, failing to reproduce details in the resolution chart, or yielding noisy outputs in the Bike scene. While LCT (O'Toole et al., 2018) allows one to manually select an SNR filtering parameter \(\alpha\) to improve results in low-SNR conditions, our experiments with different \(\alpha\) values from 0.001 to 1.0 at different exposure levels validate that our automated calibration approach outperforms the LCT method, reproducing detailed geometric features (see supplemental material).

### Inverse rendering

Our optimization pipeline estimates surface points, normals, and albedo using only the input transient measurements. Figure 8 illustrates our volumetric intensity, as well as surface points, normals, and albedo in the confocal synthetic scene Bunny with two different surface albedos, 1.0 (top) and 0.3 (bottom). Our method is consistent when estimating spatially-varying albedo, while not affecting the estimation of detailed surface points and normals. Figure 9 demonstrates our inverse rendering results on real scenes. As shown in a confocal scene SU (first row) and two non-confocal scenes 44i (second row) and NLOS (third row), we correctly estimate the albedo of objects with uniform reflectance properties (second column), although they undergo different attenuation factors due to being at different distances from the relay wall. The result of the NLOS non-confocal scene (third row) shows that the albedo is almost identical throughout the entire surface. Our estimation of surface points and normals (third and fourth columns) is able to accurately reproduce the structure of the hidden geometry.

In Figure 1, we illustrate the benefits of our inverse rendering optimization on the real scene Bike. The first row shows the first iteration of the optimization, which uses the volumetric output by Liu et al. (2020) with the default parameters of the illumination function. The resulting noise heavily degrades the geometry and normal estimation (top right), and the albedo is wrongly estimated at empty locations in the scene despite the lack of a surface at such locations (top center). After our optimization converges (bottom row), the albedo is estimated only at surface locations, yielding a clean reconstruction of the bike's surface points and normals.

### Geometry accuracy

In Figure 10, we compare the reconstructed geometry with surface normals in two real scenes (34 and SU) using D-LCT (Young et al., 2020), NeTF (Shen et al., 2021), a differentiable rendering approach (Plack et al., 2023), and our method. Existing methods fail to reproduce detailed surface features in both scenes, such as the subtle changes in depth of the numbers. Plack's method (fourth column) fails to reproduce the partially occluded U-shaped object and some regions of the S-shaped object in the SU scene. D-LCT (second column) succeeds in reproducing the U-shaped object but fails to reconstruct the detailed geometry of the boundary of the letters. While NeTF (Shen et al., 2021) (third column) is capable of reproducing the U-shaped object, their methodology, based on positional encoding and neural rendering, suppresses geometric details significantly, producing a coarse geometry.
Plack's method faces similar challenges in reproducing geometric details due to the constraints imposed by the resolution of the explicit proxy geometry. Previous optimization-based methods that also rely on explicit geometry (Iseringhausen and Hullin, 2020; Tsai et al., 2019) share similar limitations. Our method, based on implicit surface representations, is able to handle partial occlusions while reproducing detailed features of the surfaces, such as the depth changes on the numbers and the narrow segments of the letters.

Figure 6. Evaluation of our surface reconstruction under increasing levels of Poisson noise (left to right). From top to bottom: intensity volume, reconstructed geometry, and measured vs. optimized transport. Our method reconstructs geometry reliably across a broad spectrum of noise levels. A lower signal-to-noise ratio (SNR) value indicates a higher level of noise, with an exponential increase in noise.

In Figure 11, we provide quantitative comparisons between our estimated geometry and the geometry obtained from D-LCT (Young et al., 2020), NeTF (Shen et al., 2021), and Plack et al. (2023) for three synthetic scenes, Dragon, Erato, and Indonesian, using the Hausdorff distance map as an objective metric. In terms of geometric accuracy, we outperform all three methods in Erato and Dragon, as shown in the RMSE table. Our improvements are especially noticeable in self-occluded regions and in the reproduction of detailed features. While Plack et al. (2023) yields a lower RMSE in the Indonesian scene, note that it fails to reproduce large regions on the sides of the geometry. Thus, RMSE is only computed on the reconstructed regions and may not fully represent the overall accuracy of the reconstruction.

## 6. Discussion and Future Work

We have presented an efficient and fully-differentiable end-to-end NLOS inverse rendering pipeline, which self-calibrates the imaging parameters using only the input measured transient illumination. Our method is robust in the presence of noise while achieving enhanced scene reconstruction accuracy. Even though forward automatic differentiation (AD) is known to be memory efficient, we implemented our pipeline using reverse AD, as we found it to be 20 times faster, to perform better when optimizing a large number of parameters (such as per-voxel albedo), and to support the wider set of differentiable functions required in our context. Phasor-field NLOS imaging can be performed analogously using temporal- or frequency-domain operators (Liu et al., 2020, 2019). However, operating in the temporal domain introduces large memory constraints that are impractical in a differentiable pipeline. Our pipeline therefore operates in the frequency domain to perform NLOS imaging, which provides a practical implementation of convolutions with complex-valued phasor-field kernels within GPU memory constraints. While we based volumetric NLOS imaging on phasor-based operators and kernels, an interesting avenue of future work may be optimizing alternative kernel parameterizations or implementing other differentiable NLOS imaging approaches.
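To make the frequency-domain choice concrete, the convolution of the captured transients with a complex phasor-field kernel reduces to a per-frequency product. The following is a minimal sketch of that idea in PyTorch under our own naming; it is an illustration of the convolution theorem, not the authors' implementation.

```python
import torch

def apply_phasor_kernel(H, kernel_freq):
    """Convolve transients with a complex phasor-field kernel along the
    time axis via a frequency-wise product (convolution theorem).

    H           : (n_laser, n_sensor, n_bins) real captured transients
    kernel_freq : (n_bins,) complex spectrum of the phasor kernel
    """
    H_f = torch.fft.fft(H.to(torch.complex64), dim=-1)   # to frequency domain
    out_f = H_f * kernel_freq                            # O(n) per transient
    return torch.fft.ifft(out_f, dim=-1)                 # complex phasor response
```

Compared with an explicit temporal convolution, this avoids materializing large temporal kernels and intermediate buffers, which is consistent with the memory argument above.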
###### Acknowledgements.

We want to thank the anonymous reviewers for their time and insightful comments. Min H. Kim acknowledges the main support of the Samsung Research Funding Center (SRFC-LT2001-04), in addition to the support of the MSIT/IITP of Korea (RS-2022-00155620, 2022-0-00058, and 2017-0-00072), Samsung Electronics, and the NIRCH of Korea (2021A02P02-001). This work was also partially funded by the Gobierno de Aragón (Departamento de Ciencia, Universidad y Sociedad del Conocimiento) through project BLIND-SIGHT (ref. LMP30_21), and by MCIN/AEI/10.13039/501100011033 through Project PID2019-105004GB-I00.
Existing time-resolved non-line-of-sight (NLOS) imaging methods reconstruct hidden scenes by inverting the paths of indirect illumination. These methods typically suffer from reconstruction artifacts caused by inversion ambiguities and capture noise, which are usually mitigated through the manual selection of filtering functions and parameters. We introduce a fully differentiable end-to-end NLOS inverse rendering pipeline that uses only the measured illumination as input, operates in both the time and frequency domains, and self-calibrates its imaging parameters during the reconstruction of the hidden scene. We extract a geometric representation of the hidden scene and use differentiable transient rendering to estimate time-resolved illumination measurements from it. Then, gradient ...
2309.12721
Metrology of Rydberg states of the hydrogen atom
We present a method to precisely measure the frequencies of transitions to high-$n$ Rydberg states of the hydrogen atom which are not subject to uncontrolled systematic shifts caused by stray electric fields. The method consists in recording Stark spectra of the field-insensitive $k=0$ Stark states and the field-sensitive $k=\pm2$ Stark states, which are used to calibrate the electric field strength. We illustrate this method with measurements of transitions from the $2\,\text{s}(f=0\text{ and } 1)$ hyperfine levels in the presence of intentionally applied electric fields with strengths in the range between $0.4$ and $1.6\,$V cm$^{-1}$. The slightly field-dependent $k=0$ level energies are corrected with a precisely calculated shift to obtain the corresponding Bohr energies $\left(-cR_{\mathrm{H}}/n^2\right)$. The energy difference between $n=20$ and $n=24$ obtained with our method agrees with Bohr's formula within the $10\,$kHz experimental uncertainty. We also determined the hyperfine splitting of the $2\,\text{s}$ state by taking the difference between transition frequencies from the $2\,\text{s}(f=0 \text{ and }1)$ levels to the $n=20,k=0$ Stark states. Our results demonstrate the possibility of carrying out precision measurements in high-$n$ hydrogenic quantum states.
Simon Scheidegger, Josef A. Agner, Hansjürg Schmutz, Frédéric Merkt
2023-09-22T09:11:55
http://arxiv.org/abs/2309.12721v1
# Metrology of Rydberg states of the hydrogen atom

###### Abstract

We present a method to precisely measure the frequencies of transitions to high-\(n\) Rydberg states of the hydrogen atom which are not subject to uncontrolled systematic shifts caused by stray electric fields. The method consists in recording Stark spectra of the field-insensitive \(k=0\) Stark states and the field-sensitive \(k=\pm 2\) Stark states, which are used to calibrate the electric field strength. We illustrate this method with measurements of transitions from the \(2\mathrm{s}(f=0\) and \(1)\) hyperfine levels in the presence of intentionally applied electric fields with strengths in the range between \(0.4\) and \(1.6\,\mathrm{V}\,\mathrm{cm}^{-1}\). The slightly field-dependent \(k=0\) level energies are corrected with a precisely calculated shift to obtain the corresponding Bohr energies \(\left(-cR_{\mathrm{H}}/n^{2}\right)\). The energy difference between \(n=20\) and \(n=24\) obtained with our method agrees with Bohr's formula within the \(10\,\mathrm{kHz}\) experimental uncertainty. We also determined the hyperfine splitting of the \(2\mathrm{s}\) state by taking the difference between transition frequencies from the \(2\mathrm{s}(f=0\) and \(1)\) levels to the \(n=20,k=0\) Stark states. Our results demonstrate the possibility of carrying out precision measurements in high-\(n\) hydrogenic quantum states.

## I Introduction

The hydrogen atom is a fundamental two-body quantum system. Studies of its spectrum by experiment and theory have played a key role in the development of the quantum theory [1; 2; 3; 4; 5] and of quantum electrodynamics [6; 7; 8; 9]. Spectroscopic measurements of energy intervals between the quantum states of the hydrogen atom have reached exceptional precision and the results can be exactly explained by first-principles calculations and accurately known physical constants such as the Rydberg constant \(R_{\infty}\), the fine-structure constant \(\alpha\) and the proton charge radius \(r_{\rm p}\). The theoretical treatment of the H atom by relativistic quantum mechanics and quantum electrodynamics is indeed so accurate that the comparison with the results of precision measurements in the H atom can serve to determine the values of these constants [10]. In the past years, a significant revision of the values of \(R_{\infty}\) and \(r_{\rm p}\) became necessary after a new measurement of the Lamb shift in muonic hydrogen [11; 12; 13] challenged earlier results from H-atom spectroscopy, a challenge that was referred to as the proton-radius puzzle. This challenge essentially results from the correlation between the \(R_{\infty}\) and \(r_{\rm p}\) values, which necessitates the combination of at least two transition frequencies in the H atom to determine these constants. The latest CODATA values of \(R_{\infty}\) and \(r_{\rm p}\) are based on a combination of multiple results, in which the 1s-2s interval in H [14; 15] and the Lamb shift in muonic hydrogen [11] play a central role. Several recent precision measurements in H confirmed the revised values [16; 17] whereas others cover the range between the old and the new values of \(R_{\infty}\) and \(r_{\rm p}\) [18; 19; 20]. Measurements of quantities that are only sensitive to either \(r_{\rm p}\), such as electron-scattering measurements [21; 22; 23; 24; 25; 26; 27; 28; 29], or \(R_{\infty}\), such as measurements in non-penetrating Rydberg series of H, have regained interest.
Early, remarkable experiments designed to determine \(R_{\infty}\) from transition frequencies between circular states, _i.e._, states with orbital-angular-momentum quantum number \(\ell=n-1\) and magnetic quantum number \(m_{\ell}=\pm\ell\), of high principal quantum numbers in the H atom were carried out in the group of D. Kleppner at MIT [30; 31; 32], giving values of \(R_{\infty}\) compatible with the recommended CODATA values available at the time [33]. In that work, the frequencies of \(\Delta n=1\) transitions between circular states of H were measured with 2-3 Hz accuracy at \(n\) values around 30. These transition frequencies scale as \(2R_{\infty}/n^{3}\) and are completely insensitive to the proton size because the Rydberg electron does not penetrate into the core region. The \(2/n^{3}\) sensitivity factor to \(R_{\infty}\) of these measurements is only \(\approx 1\times 10^{-4}\) for the transition between the \(n=27,\ \ell=26,m_{\ell}=26\) and \(n=28,\ \ell=27,m_{\ell}=27\) states, but this disadvantage could be compensated by the fact that circular states are not sensitive to stray electric fields to first order, and through the exceptional control of all aspects of the millimeter-wave-spectroscopic experiments by the MIT team. An \(R_{\infty}\) value with an absolute uncertainty of 69 kHz and a relative uncertainty of \(2.1\times 10^{-11}\) was determined [32], close to the relative \(R_{\infty}\) uncertainty of \(7.6\times 10^{-12}\) of the 1998 CODATA adjustment. Since this pioneering work, circular Rydberg states of Rb have been proposed as an alternative system to determine \(R_{\infty}\) [34]. The properties of circular Rydberg states of any atom or molecule are indeed ideally suited to metrology, as illustrated by the use of such states as ultrasensitive electric-field sensors [35]. If circular Rydberg states are excepted, high Rydberg states are usually not considered to be suitable for precision measurements because of their high sensitivity to stray electric fields (see discussion in, e.g., Refs. [36; 37]). In the context of metrology in the H atom, this sensitivity has implied that almost all precision experiments involving Rydberg states of H with \(n\geq 3\) have targeted states with \(n\) values below 12 [16; 38; 18] and that the measurements required a careful evaluation of the Stark effect on the level structure induced by stray electric fields. We introduce here an alternative method to determine \(R_{\infty}\) which relies on measuring the spectra of \(|m_{\ell}|=1\) Rydberg states of the H atom in the presence of intentionally applied electric fields. Stark states of the H atom exhibit shifts of \(\approx 1.5a_{0}ekn\mathcal{F}\) that are linear in the field strength \(\mathcal{F}\) at low fields and proportional to the integer difference \(k=n_{1}-n_{2}\) between the quantum numbers \(n_{1}\) and \(n_{2}\) that arise in the solution of the Schrödinger equation in parabolic coordinates (\(k=0,\pm 1,\pm 2,\ldots,\pm(n-1-|m_{\ell}|)\), where \(m_{\ell}\) is the magnetic quantum number associated with the electron orbital motion) [9; 39]. Consequently, even-\(n\), \(k=0,|m_{\ell}|=1\) states are to first order field insensitive, like circular Rydberg states. Their magnetic moments are, however, much smaller than for circular states, which makes them less sensitive to Zeeman shifts by magnetic stray fields.
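To put the \(\approx 1.5a_{0}ekn\mathcal{F}\) scaling in perspective, the following short script (our own illustration, using CODATA constants from `scipy`) evaluates the first-order shift as a frequency for conditions typical of the measurements described below.

```python
import scipy.constants as sc

a0 = sc.physical_constants["Bohr radius"][0]       # m

def linear_stark_shift_hz(n, k, field_v_per_cm):
    """First-order Stark shift E = (3/2) n k e a0 F of hydrogenic Stark
    states, returned as a frequency in Hz."""
    F = field_v_per_cm * 100.0                     # V/cm -> V/m
    return 1.5 * n * k * sc.e * a0 * F / sc.h

# k = 2 Stark state of the n = 20 manifold in a 0.8 V/cm field:
print(f"{linear_stark_shift_hz(20, 2, 0.8) / 1e6:.1f} MHz")   # ~61 MHz
```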
\(|m_{\ell}|=1\) Stark states do not possess any s character and their \(\ell\)-mixed wavefunctions are dominated by nonpenetrating high-\(\ell\) components; consequently, their spectral positions are also insensitive to the proton size. Experimentally, we measure the frequencies of transitions from the 2s(\(f=0\) and 1) states to \(n=20,k=0,\pm 2,|m_{\ell}|=1\) Stark states and use the separation between the \(k=\pm 2\) states to precisely determine the value of the applied field. We then extract the position of the \(k=0\) state to determine the Bohr energy \(\left(-hcR_{\rm H}n^{-2}\right)\) after correcting for the quadratic Stark shifts. To obtain a value of \(R_{\infty}\) without having to consider its correlation with \(r_{\rm p}\), the positions of the 2s levels and the \(n=20,k=0,\pm 2,|m_{\ell}|=1\) Stark states can be related to the position of the 2p levels using the 2s Lamb shift determined by Bezginov _et al._ [17]. The sensitivity factor of the measurement to \(R_{\infty}\) is thus 1/4, _i.e._, more than 3000 times higher than for the measurement based on circular states at \(n\approx 30\). Consequently, an accuracy of about 20 kHz would make this measurement competitive with the MIT measurements, and we believe that this is achievable. The price to pay for this advantage is that the transition frequencies are in the UV range of the electromagnetic spectrum rather than in the millimeter-wave range and, therefore, compensation of the Doppler effect becomes much more critical. In this article, we present several of the key aspects of this method of determining \(R_{\infty}\). We are still in the middle of the data-acquisition process, and use subsets of the data to discuss systematic uncertainties in the measurements of \(nkm_{\ell}\leftarrow\) 2s transition frequencies originating from the Stark effect. We also present the determination of (i) the \(f=0-f=1\) hyperfine interval in the 2s state, which we obtain by combining two sets of measurements, from 2s(\(f=0\)) and 2s(\(f=1\)) to \(n=20\) Stark states, and (ii) the difference between the \(n=20\) and \(n=24\) Bohr energies by combining measurements from the 2s(\(f=1\)) hyperfine state to \(n=20\) and 24 Stark states. The article is structured as follows: Section II describes the experimental setup and provides details on the laser systems used to prepare H atoms selectively in the 2s(\(f=0\) and 1) hyperfine states and to record spectra of the \(nkm_{\ell}\leftarrow\) 2s(\(f\)) transitions, as well as the detection system and the procedure we follow to cancel the Doppler shifts. Section III describes how we calculate the energies of the Stark states of H and draws attention to the aspects that are most relevant for the determination of the Bohr energies. Section IV illustrates the current status of our measurements by using small data sets to compare spectra recorded at different electric fields from the two hyperfine components of the 2s state and to \(n=20\) and 24 Stark states. The results we present here only concern small energy intervals (\(\sim 177\,\)MHz for the 2s \(f=1\leftarrow f=0\) interval and \(2.51\,\)THz for the difference between the Bohr energies at \(n=20\) and 24) obtained by building differences of (currently still blinded) UV laser frequencies. Absolute transition frequencies will be reported when the analysis of the systematic errors related to the Doppler effect is completed. In the last section, we draw several conclusions concerning our new approach.
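As a sanity check on the \(n=20\)–\(n=24\) interval discussed below, the Bohr-energy difference \(cR_{\mathrm{H}}\left(1/20^{2}-1/24^{2}\right)\) can be evaluated directly from CODATA constants. This minimal script (our own, using `scipy`) reproduces the value quoted in Section IV.

```python
import scipy.constants as sc

# Rydberg constant of hydrogen: R_inf scaled by the reduced-mass factor.
c_R_inf = sc.physical_constants["Rydberg constant times c in Hz"][0]
me_over_mp = sc.physical_constants["electron-proton mass ratio"][0]
c_R_H = c_R_inf / (1.0 + me_over_mp)

delta_nu = c_R_H * (1.0 / 20**2 - 1.0 / 24**2)    # Bohr-energy difference
print(f"{delta_nu / 1e6:,.3f} MHz")                # ~2,511,705.802 MHz
```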
## II Experimental setup

The experimental setup is presented schematically in Fig. 1. It consists of (i) a differentially pumped set of vacuum chambers in which the H atoms are produced and entrained in a pulsed supersonic beam and subsequently photoexcited to Rydberg states via the metastable 2s state within a double-layer mu-metal magnetic shield; (ii) a pulsed near-Fourier-transform-limited laser system delivering radiation at 243 nm to drive the 2s \(\leftarrow\) 1s transition; and (iii) an SI-traceable single-mode continuous-wave (cw) UV laser to further excite the H atoms to Rydberg states. The experiment is run in a pulsed mode at a repetition rate of 25 Hz. The hydrogen-atom source has been described in Ref. [40], to which we refer for details. The hydrogen atoms are produced by dissociating molecular hydrogen in a dielectric-barrier discharge near the orifice of a pulsed cryogenic valve and are entrained in a supersonic beam of H\({}_{2}\). The temperature (\(T_{0}\)) of the valve can be adjusted between 45 K and 160 K to vary the forward velocity of the supersonic expansion between 970 m s\({}^{-1}\) and 1800 m s\({}^{-1}\). The final longitudinal temperature of the supersonic beam (\(\approx\)12 mK) and its forward velocity \(\left(v_{x}\approx\sqrt{2k_{\mathrm{B}}T_{0}\gamma/\left[m_{\mathrm{H}_{2}}(\gamma-1)\right]}\right)\) can be well approximated using the model of an adiabatic expansion [41]. At valve temperatures below the characteristic rotational temperature of the carrier gas H\({}_{2}\) (\(\theta_{\mathrm{rot}}\approx\) 90 K), the heat-capacity ratio \(\gamma\) can be approximated by that of a monatomic gas, _i.e._, \(\gamma=\nicefrac{{5}}{{3}}\). The central part of the supersonic beam is selected by two skimmers with diameters of 2 mm and 3 mm placed at distances of 45 cm and 135 cm from the nozzle orifice, respectively. The skimmed supersonic beam enters a magnetically shielded chamber in which the H atoms are excited to Rydberg states in a sequential three-photon absorption process. The 2s \(\leftarrow\) 1s transition is first induced between two copper plates kept at the same electric potential of \(4V_{\mathrm{DC}}\) by the third-harmonic (\(\lambda=243\) nm) beam of a pulse-amplified near-Fourier-transform-limited Ti:Sa laser [42] which crosses the supersonic beam at right angles. The molecular beam then traverses a region with a weak homogeneous electric field \(\mathcal{F}_{\mathrm{DC}}=V_{\mathrm{DC}}/(1\,\mathrm{cm})\), where it intersects a single-mode cw UV laser (\(\lambda\approx 368\) nm) used to excite the metastable H(2s) atoms to specific Rydberg-Stark states. These states are field ionized by a large pulsed electric field (up to 6 kV cm\({}^{-1}\)) and the resulting protons are accelerated towards a microchannel-plate (MCP) detector. The different components are discussed in more detail in the following subsections. Spectra of Rydberg-Stark states are recorded by monitoring the H\({}^{+}\) field-ionization yield as a function of the UV laser frequency.
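The adiabatic-expansion estimate of the forward velocity quoted above is easy to verify numerically; the following short script (our own illustration with `scipy` constants) reproduces the stated 970–1800 m s\(^{-1}\) range for valve temperatures of 45–160 K.

```python
import scipy.constants as sc

def forward_velocity(T0, gamma=5.0 / 3.0):
    """Terminal velocity v_x = sqrt(2 k_B T0 gamma / (m_H2 (gamma - 1)))
    of an adiabatic supersonic expansion of H2 from a valve at T0 (K)."""
    m_h2 = 2.016 * sc.atomic_mass              # approximate H2 mass in kg
    return (2 * sc.k * T0 * gamma / (m_h2 * (gamma - 1))) ** 0.5

print(f"{forward_velocity(45):.0f} m/s")       # ~960 m/s
print(f"{forward_velocity(160):.0f} m/s")      # ~1820 m/s
```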
### Laser system for the 2s \(\leftarrow\) 1s transition

The 243-nm radiation used to excite the H atoms to the 2s state by nonresonant two-photon excitation is generated by amplification of the 120-ns-long chopped output of a titanium-sapphire (Ti:Sa) seed laser at 729 nm using a Nd:YAG-pumped Ti:Sa multipass amplifier, as described in Ref. [42]. The output pulses, with pulse energies of \(\approx 15\) mJ, are frequency tripled in two successive \(\beta\)-barium-borate (BBO) crystals, resulting in 40-ns-long pulses at 243 nm with typical pulse energies of 800 µJ. The 243-nm laser beam is focused slightly beyond the supersonic beam using a 30-cm-focal-length lens. The use of two skimmers reduces the Doppler width of the 2s \(\leftarrow\) 1s transition and enables the full resolution of the \(f=0\gets f=0\) and \(f=1\gets f=1\) hyperfine components. Because the 243-nm laser beam propagates along the \(x\) axis (see Fig. 1), perpendicularly to both the supersonic beam and the cw UV laser, the focus selects a narrow cylinder (diameter of 0.1 mm) of H atoms with a reduced velocity distribution along the \(y\) axis (see axis system in Fig. 1). This selection narrows down the Doppler width of the Rydberg-excitation spectra from the 2s level. The photoexcitation only excites H(1s) atoms in a very restricted longitudinal phase-space volume. Consequently, the H(2s)-atom cloud remains compact and hardly expands as the beam propagates through the 4-cm-long distance separating the 2s \(\leftarrow\) 1s excitation region from the \(nkm\leftarrow\) 2s excitation region. However, the spatial and velocity selection can lead to a nonthermal velocity distribution, potentially resulting in asymmetric Doppler profiles in the Rydberg-excitation spectra. The 243-nm laser unavoidably ionizes a significant fraction of the H(2s) atoms [43]. To avoid stray fields from the generated protons, they are accelerated out of the H(2s) cloud by the electric field \(\mathcal{F}_{\mathrm{DC}}\) resulting from the potentials applied between the different electrodes within the mu-metal magnetic shield (see Fig. 1). To eliminate line broadening caused by interactions between closely spaced Rydberg atoms in the sample volume, the measurements are carried out in a regime where at most one Rydberg atom is in the excitation volume and on average much less than one field-ionization event is detected per experimental cycle.

### Laser system for the \(nkm\leftarrow\) 2s excitation

The primary laser used for the precision spectroscopy of the \(nkm\leftarrow\) 2s transition is a commercial continuous-wave (cw) Ti:Sa ring laser (Coherent, 899-21) pumped by a 12 W solid-state laser (Coherent, Verdi V-12). The Ti:Sa ring laser is operated in the range 729\(-\)736 nm and provides 1 W of output power. In addition to the standard actuators of the ring laser, an intra-cavity electro-optic modulator (EOM) (QUBIG, PS3D-BC) is used as a fast actuator to maintain a phase lock to an ultrastable reference laser, as discussed below. Around 98 % of the optical power is sent to a home-built second-harmonic-generation enhancement cavity (SHG) equipped with a 12-mm-long lithium triborate (LBO) crystal cut at Brewster's angle. The SHG cavity is stabilized using a Hänsch-Couillaud scheme [44]. The typical conversion efficiency to the second harmonic is 20 %.

Figure 1: Schematic representation of the experimental setup. Upper part: laser system and geometry of the photoexcitation from the metastable 2s state of H to Rydberg states. Lower part: vacuum chambers in which the supersonic beam of H atoms is generated, these atoms are photoexcited to Rydberg states, and the Rydberg states are detected by pulsed field ionization. Top right inset: Configuration of laser and supersonic beams used for the determination of Doppler-free frequencies. See text for details.
The 368-nm output of the SHG cavity is coupled into an optical fiber and guided to an actively stabilized retroreflector (AFR) setup for Doppler-shift compensation (see below). The forward-propagating and retroreflected laser beams cross the molecular beam at right angles 4 cm downstream of the 2s \(\leftarrow\) 1s excitation spot. The remaining 2 % of the fundamental laser power is used for the frequency calibration and stabilization. The light is tightly collimated and sent through an acousto-optic modulator (AOM) (Isomet, M1260-T350L). The first-order diffraction is retro-reflected and its polarization turned by 90\({}^{\circ}\), as illustrated in the upper left part of Fig. 1. The double-pass configuration induces a shift of the fundamental frequency \(\nu_{\mathrm{L}}\) by \(2\nu_{\mathrm{aom}}\), which can be adjusted up to 320 MHz. A polarizing beam splitter then deflects the frequency-shifted radiation and sends it through an optical fiber to an amplified, spectrally broadened, and frequency-doubled optically stabilized ultra-low-noise frequency comb (MenloSystems, FC1500-ULN & M-VIS). The repetition rate of the frequency comb is locked to an ultrastable laser, the frequency of which is referenced to an SI-traceable frequency standard, as characterized in Ref. [45]. The output of the spectrally broadened frequency comb is dispersed with a reflective grating and the spectral components around \(\nu_{\mathrm{L}}\) are selected and spatially overlapped with the laser. The beat, with frequency \[\nu_{b}=\nu_{c}-\nu_{\mathrm{L}^{\prime}} \tag{1}\] between the shifted laser frequency \(\nu_{\mathrm{L}^{\prime}}=\nu_{\mathrm{L}}+2\nu_{\mathrm{aom}}\) and the spectrally closest frequency-comb tooth \(\nu_{c}\) is recorded using a balanced photodiode (Thorlabs, PDB425A-AC) and processed using the electronic circuit depicted in Fig. 2. A bandpass filter centered at 60 MHz is used to suppress beat frequencies originating from neighboring comb teeth. The RF beat signal is amplified with an automatic-gain-control (AGC) amplifier and sent to a frequency counter (K+K Messtechnik, FXM50). A fraction of the RF signal is used to establish a phase lock of the Ti:Sa laser to the frequency comb. To this end, the beat signal is amplified again and fed to a phase-frequency detector (PFD) (Analog Devices, HMC403), where \(\nu_{b}\) is compared to a 60 MHz local oscillator. The error signal is transmitted to the control box of the ring laser [46] via an isolation amplifier (IA). The frequency components in the range \(0-20\) MHz are isolated with a diplexer, pre-amplified and distributed to an inverting bipolar high-voltage amplifier (APEX Microtechnology, PA90) and an amplifier (Comlinear, CLC103). The amplified signals are applied to the intracavity EOM as shown in Fig. 2. This frequency-offset-locking scheme provides a phase lock of the Ti:Sa ring laser to the ultra-low-noise frequency comb and makes \(\nu_{\mathrm{L}}\) SI traceable.
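Equation 1 fixes the laser frequency once the comb tooth is identified. A minimal sketch of this bookkeeping is given below; the tooth model \(\nu_{c}=Nf_{\mathrm{rep}}+f_{0}\) and the sign conventions are our assumptions for illustration and must be established experimentally in practice.

```python
def laser_frequency(N, f_rep, f_0, nu_b, nu_aom):
    """Recover the fundamental Ti:Sa frequency nu_L from the comb lock.

    Assumes nu_c = N * f_rep + f_0 for the nearest comb tooth, with the
    sign conventions nu_b = nu_c - nu_L' (Equation 1) and
    nu_L' = nu_L + 2 * nu_aom. All quantities in Hz.
    """
    nu_c = N * f_rep + f_0
    return nu_c - nu_b - 2.0 * nu_aom
```

With \(f_{\mathrm{rep}}\) and \(f_{0}\) counted against the SI-traceable reference, \(\nu_{\mathrm{L}}\) inherits the traceability of the comb.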
### Detection of the \(nkm\leftarrow\) 2s transition

The \(nkm\leftarrow\) 2s excitation is carried out in the center of two electropolished stainless-steel plates separated by \(\approx 2.1\) cm and designed for the application of homogeneous electric fields. A ring electrode consisting of four segments is inserted between the two plates to eliminate all line-of-sight trajectories of charged particles to insulators. This measure effectively prevents accumulation of charges near the excitation volume and is crucial to reduce stray electric fields. The segmented geometry enables one to apply transverse electric fields for stray-field compensation. A short plate distance of 2.1 cm between the ion-repeller plate and the grounded extraction plate was chosen to be able to generate electric fields up to 6 kV cm\({}^{-1}\) in less than 29 ns with a home-built 12.5 kV low-noise high-voltage switch. With such fields, Rydberg states with principal quantum numbers as low as 20 can be efficiently field ionized (see Fig. 7 below). The electronic circuit was conceived to combine the high-voltage pulse with low-noise DC potentials (\(2V_{\mathrm{DC}}\)) on the repeller plate using a 20-bit digital-to-analogue low-noise voltage source. This enabled us to either minimize stray-electric-field components or to apply well-defined electric fields in the \(z\) direction. The only openings in the electrode structure surrounding the photoexcitation region are 5-mm-diameter holes along the molecular-beam axis and 9-mm-diameter holes for the UV laser beam.

### Doppler-shift cancellation

The inset of Fig. 1 schematically describes the photoexcitation geometry, where \(\vec{v}\) is the H(2s)-atom velocity and \(\vec{k}\) the wavevector of the forward-propagating (blue) and reflected (red) UV radiation. Any deviation \(\delta\alpha\) from 90\({}^{\circ}\) of the angle between the laser beam and the supersonic beam leads to a first-order Doppler shift. To cancel this shift, we choose \(\delta\alpha\) to be large enough so that the spectral lines from the forward-propagating and reflected UV laser beams do not overlap. In addition, a 180\({}^{\circ}\) reflection angle is enforced through an active-stabilization feedback system, based on a design introduced in Refs. [37; 47; 48]. This procedure results in a mirror-symmetric double-line profile centered at the first-order Doppler-free frequency [49]. Choosing \(\delta\alpha\) as close to zero as possible, as advocated in Refs. [47; 48], turned out not to be practical in our case because the nonthermal nature of the H(2s)-atom velocity distribution made it challenging to extract the central frequency from the lineshapes under conditions where the fine structure is not fully resolved. An aberration-free set of four antireflection-coated lenses [50] with an effective focal length of 21.35 mm is used to collimate the diverging beam emerging from a pure-silica-core, polarization-maintaining, single-mode optical fiber (mode-field diameter 2.3 µm), resulting in a parallel beam with an M\({}^{2}\) value of \(\approx 1.02\). The focus of the resulting Gaussian beam is located \(\approx 20\) mm beyond the chamber. Consequently, the reflected beam almost exactly retraces the incoming beam and the change of wavefront curvature is negligible. The active stabilization of the alignment of the 180\({}^{\circ}\) reflecting mirror is achieved by dithering its tip and tilt angles by applying sinusoidal electric potentials to piezo-electric elements installed at the back of the mirror holder (see Fig. 1). The dithering leads to a modulation of the incoupling efficiency of the reflected beam into the silica-core fiber beyond the lens system. These modulations are detected with an auto-balanced photodiode (PD). The dithering frequencies are selected to minimize cross talk between the motions of the tip and tilt axes. The error signal used to correct the mirror position is produced by lock-in amplifiers (LIA) (Femto, LIA-MVD-200L) connected to a proportional-integral (PI) controller. To compensate slow drifts, the time constant of the feedback loop was chosen to be 0.1 s.
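For orientation, the size of the residual first-order Doppler shift for a small misalignment \(\delta\alpha\) follows from \(\nu_{\mathrm{D}}=v\sin(\delta\alpha)/\lambda\); the short script below (our own illustration) reproduces the few-MHz shifts reported in Section IV.

```python
import math

def doppler_shift_hz(wavelength_m, v_beam, delta_alpha_rad):
    """First-order Doppler shift v sin(delta_alpha) / lambda of one UV beam
    for a deviation delta_alpha from orthogonal excitation."""
    return v_beam * math.sin(delta_alpha_rad) / wavelength_m

# 368 nm laser, 1060 m/s beam, 1.1 mrad deviation:
print(f"{doppler_shift_hz(368e-9, 1060, 1.1e-3) / 1e6:.2f} MHz")  # ~3.17 MHz
```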
## III Theoretical description of Rydberg states of the H atom in electric fields

The energy levels of the H atom in a static homogeneous electric field \(\vec{\mathcal{F}}=(0,0,\mathcal{F})\) are eigenvalues of the Hamiltonian \[\hat{\mathcal{H}}=\hat{\mathcal{H}}_{0}+e\mathcal{F}\hat{z}, \tag{2}\] where \(\hat{\mathcal{H}}_{0}\) is a diagonal matrix containing the field-free energies of the \(|nljfm_{f}\rangle\) states with principal quantum number \(n\), orbital angular momentum quantum number \(l\), total angular momentum quantum number without nuclear spin \(j\), total angular momentum quantum number \(f\), and associated magnetic quantum number \(m_{f}\).

Figure 2: Schematic electric-circuit diagram of the laser-stabilization electronics (see text for details). Color-shaded inset: Spectral density (SD) of the in-loop beat note \(\nu_{b}\) recorded with a bandwidth of 3 kHz with (black) and without (gray) active stabilization using the intracavity EOM.

The field-free hyperfine-centroid energies, including terms arising from relativistic, quantum-electrodynamics (QED) and finite-nuclear-size corrections, can be accurately calculated using Eqs. \(7-41\) of Ref. [10] and the latest recommended physical constants (2018 CODATA, see Ref. [51]). To obtain the field-free energy-level structure at high \(n\) values, we used Bethe logarithms tabulated in Ref. [52] and included the hyperfine splittings using the analytical expressions provided in Ref. [53]. The calculated structure of the \(m_{l}=0\) levels at \(n=20\) is depicted in the inset of Fig. 3b). The operator \(e\mathcal{F}\hat{z}\) in Eq. 2 describes the effect of the external field. The perturbation can be treated in excellent approximation in a nonrelativistic framework, and relativistic corrections to the Stark effect as discussed in Ref. [54] become negligible as \(n\) increases. \(e\mathcal{F}\hat{z}\) only contributes off-diagonal elements connecting zero-field states differing in \(l\) by \(\pm 1\). These matrix elements can be expressed in analytic form using standard angular-momentum algebra (see, e.g., Refs. [55; 56]) as \[\left\langle n^{\prime}l^{\prime}j^{\prime}f^{\prime}m_{f}^{ \prime}\right|\hat{z}\left|nljfm_{f}\right\rangle=(-1)^{\Delta f+\Delta j+ \Delta l-m_{f}^{\prime}+I+S}\times\] \[\left(\begin{matrix}l^{\prime}&1&l\\ 0&0&0\end{matrix}\right)\left(\begin{matrix}f^{\prime}&1&f\\ -m_{f}^{\prime}&0&m_{f}\end{matrix}\right)\left\{\begin{matrix}j^{\prime}&f^ {\prime}&I\\ f&j&1\end{matrix}\right\}\left\{\begin{matrix}l^{\prime}&j^{\prime}&S\\ j&l&1\end{matrix}\right\}\times\] \[\sqrt{\Theta(f^{\prime})\Theta(f)\Theta(j^{\prime})\Theta(j) \Theta(l^{\prime})\Theta(l)}\left\langle n^{\prime}l^{\prime}\right|r\left| nl\right\rangle, \tag{3}\] where the expressions in parentheses and curly parentheses are Wigner 3j and 6j symbols, respectively, \(\Theta(x)=2x+1\), \(\Delta x=x^{\prime}-x\), and \(\left\langle n^{\prime}l^{\prime}\right|r\left|nl\right\rangle\) are radial integrals connecting the \(r\)-dependent parts of the solutions of the Schrödinger equation of the H atom (see Eqs. 63.2 and 63.5 of Ref. [9]).

Figure 3: Stark effect in the \(n=20,\,m_{f}=0\) manifold of the H atom. a) Field dependence of the \(k=0\) state revealing a quadratic shift below \(50\,\mathrm{mV}\,\mathrm{cm}^{-1}\) caused by the intramanifold mixing of different orbital-angular-momentum components, and a smaller quadratic shift at larger fields arising from the interaction between different \(n\) manifolds. b) Overview of the field dependence of all \(m_{l}=0\) Stark states, which is essentially linear. c) Calculated spectra for different electric-field strengths and electric-field vectors \(\vec{\mathcal{F}}\) pointing parallel or perpendicular to the laser polarization \(\vec{\epsilon}_{\mathrm{p}}\).

Restricting the calculations of the Stark effect to a single \(n\) value, one obtains an intra-manifold quadratic Stark effect at low fields and a linear Stark effect at intermediate fields, as depicted in Fig. 3. The Stark states are commonly labeled by the parabolic quantum numbers \(n_{1}\) and \(n_{2}\) or by their difference \(k=n_{1}-n_{2}\) [9; 57]. At intermediate field strengths, the states can approximately be described by their \(k\) and \(m_{l}\) values. States of a given value of \(k\) form near-degenerate groups with \(m_{l}\) values ranging from \(-(n-\left|k\right|-1)\) to \((n-\left|k\right|-1)\) in steps of 2. The \(k=0\) states, highlighted in red in Fig. 3, are the only states retaining almost pure parity \(\left[(-1)^{n-1}\right]\). They have a zero electric dipole moment and are insensitive to the field over a large range of fields, which makes them attractive for precision measurements, except at fields very close to zero. All other states exhibit a dipole moment in the field. At intermediate to high field strengths, the coupling between states of different \(n\) values induced by the field becomes significant and the states start exhibiting an inter-manifold quadratic Stark effect. This behavior is displayed on an enlarged vertical scale for \(m_{f}=0\) in Fig. 3a). To reliably calculate Stark shifts in this field range, it is necessary to include basis states of neighboring \(n\) values until convergence with the size of the basis set is reached. Figure 4 presents the decomposition of the \(n=20\), \(k=0\) Stark states with \(m_{f}=0-2\) in the \(\left|ljfm_{f}\right\rangle\) basis. For each \(m_{f}\) value, the eigenstates possess contributions from up to four hyperfine-structure components, as indicated by the color labels. The intensity of transitions from the 2s level corresponds to the coherent squared sum of the p characters in the evaluation of electric-dipole-moment matrix elements. Figure 3c depicts calculated intensity distributions in spectra of the \(n=20\leftarrow 2\)s transitions at field strengths below \(1\,\mathrm{V}\,\mathrm{cm}^{-1}\) and for laser polarizations parallel and perpendicular to the DC electric field. At fields below \(20\,\mathrm{mV}\,\mathrm{cm}^{-1}\), corresponding to typical stray fields, the center of gravity of the distribution depends on the polarization and varies strongly and nonlinearly with the field strength, making precision measurements prone to systematic uncertainties. This behavior explains why high-\(n\) Rydberg states are usually avoided in precision measurements. However, in the linear regime of the Stark effect, _i.e._, above \(0.2\,\mathrm{V}\,\mathrm{cm}^{-1}\) at \(n=20\), the spectra regain a regular intensity pattern and the spacings between the Stark states encode the field strength.
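The angular factor of Eq. 3 can be evaluated directly with the Wigner symbols implemented in `sympy`. The sketch below is our own illustration of that algebra (radial integrals excluded); in particular, the placement of the nuclear spin \(I\) in the first 6j symbol follows our reading of Eq. 3.

```python
from sympy import sqrt, Rational
from sympy.physics.wigner import wigner_3j, wigner_6j

I_N, S = Rational(1, 2), Rational(1, 2)     # proton and electron spin of H

def z_angular(lp, jp, fp, mfp, l, j, f, mf):
    """Angular factor of the Stark matrix element of Equation 3; multiply
    by the radial integral <n'l'|r|nl> to obtain the full matrix element."""
    phase = (-1) ** int((fp - f) + (jp - j) + (lp - l) - mfp + I_N + S)
    return phase * sqrt((2*fp + 1) * (2*f + 1) * (2*jp + 1) * (2*j + 1)
                        * (2*lp + 1) * (2*l + 1)) \
        * wigner_3j(lp, 1, l, 0, 0, 0) \
        * wigner_3j(fp, 1, f, -mfp, 0, mf) \
        * wigner_6j(jp, fp, I_N, f, j, 1) \
        * wigner_6j(lp, jp, S, j, l, 1)

# e.g. s(j=1/2, f=0) coupled to p(j=1/2, f=1) within one n manifold:
print(z_angular(1, Rational(1, 2), 1, 0, 0, Rational(1, 2), 0, 0))
```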
When the polarization is parallel to the field (\(\pi\) transitions), the intensity is strongest at the outer edges of the manifold and vanishes at \(k=0\): for even \(n\) values because \(k=0,m_{l}=0\) states do not exist, and for odd \(n\) values because \(k=0\) states have vanishing p character. When the polarization is perpendicular to the field, the opposite behavior is observed (see right panel of Fig. 3c).

Figure 4: Expansion coefficients of the \(k=0\), \(\left|m_{f}\right|=0,1\) and 2 Rydberg-Stark wavefunctions in the \(\left|ljfm_{f}\right\rangle\) angular-momentum basis as labeled in the figure. Only basis states with odd orbital angular momentum quantum number make significant contributions.

Consideration of Fig. 3 leads to the following conclusions concerning precision spectroscopy in high-\(n\) states of hydrogen-like systems:

* Because of the nontrivial field dependence of the line profiles, precision measurements are not attractive in the region of the intra-manifold quadratic Stark effect.
* In the linear regime of the Stark effect, regular spectral patterns are restored and the states with \(|k|>0\) form pairs of levels with Stark shifts of opposite sign. The positions of the \(k\neq 0\) states can be used for the electric-field calibration, as will be demonstrated in Section IV (see also the sketch following this list).
* If an easily calculable shift from the Bohr energy \(\left(-hcR_{\text{H}}n^{-2}\right)\) arising from the quadratic Stark effect is disregarded, the \(k=0\) Stark states are essentially field-independent. Consequently, spectra of \(k=0\) Stark states in the linear regime are not subject to broadening by inhomogeneous fields and their positions can be converted into the Bohr energy by adding the calculated Stark shift (see red curves in Fig. 3a)).
* The linear Stark manifold is thus perfectly suited for metrological purposes, in particular for the precise determination of the Bohr energy. It has previously been used to determine the binding energy of Rydberg states of H\({}_{2}\) [58].

The wavefunctions of the Stark states can be used to estimate their magnetic moments and systematic shifts arising from the Zeeman effect caused by residual magnetic fields, as illustrated in Fig. 5 with the example of the \(k=0,|m_{l}|=1\) Stark states. In this case, the electric field splits the structure into two \(m_{f}=0\), two \(m_{f}=1\) and one \(m_{f}=2\) components and a total of eight states. The magnetic moments are given by the relative orientations of the electron orbital angular momentum, electron spin, and nuclear spin vectors. A magnetic field parallel to the electric field further splits these components according to their magnetic moments, as displayed schematically on the right-hand side of Fig. 5. Because the Zeeman shifts are symmetric and extremely small in a magnetically shielded environment (less than 2.4 kHz for \(\mu=2\mu_{\text{B}}\) and \(|\text{B}|\leq\)100 nT), we conclude that the Zeeman effect in low-\(m_{l}\) states can be neglected in metrological applications relying on Stark states in the linear regime. This is also the case for perpendicular magnetic-field components because the corresponding Zeeman effect couples states with \(\Delta m_{l}=\pm 1\) which are located in different \(k\) manifolds and thus energetically too distant for significant mixing to occur. As explained in Section II, the maximal electric-field strength we apply to record Stark spectra is 2 V cm\({}^{-1}\).
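As a numerical companion to the calibration idea above: in the linear regime the separation between the \(k=+2\) and \(k=-2\) lines is \(6na_{0}e\mathcal{F}/h\), which can be inverted for the field. This is our own illustration, not code from the analysis pipeline.

```python
import scipy.constants as sc

a0 = sc.physical_constants["Bohr radius"][0]

def field_from_k2_splitting(n, delta_nu_hz):
    """Electric field (V/cm) from the measured splitting of the k = +2 and
    k = -2 Stark lines, using E_k = (3/2) n k e a0 F (Delta k = 4)."""
    F = sc.h * delta_nu_hz / (6.0 * n * sc.e * a0)    # field in V/m
    return F / 100.0

# a ~123 MHz k = +/-2 splitting at n = 20 corresponds to ~0.8 V/cm:
print(f"{field_from_k2_splitting(20, 122.8e6):.3f} V/cm")
```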
The applied fields also induce shifts of the 2s level energies, which need to be considered when extracting the absolute positions of the Rydberg-Stark states. The Stark shifts of the 2s levels can be calculated in the same manner as explained above for higher \(n\) values. The calculated shifts are displayed in Fig. 6. They are positive and quadratic for small electric fields because the dominant interactions are with the 2p\({}_{\nicefrac{{1}}{{2}}}\) states, which are located energetically just below the 2s states. When determining the absolute positions of the \(nkm\) Rydberg-Stark states from spectra of the \(nkm\leftarrow\) 2s transitions, the 2s Stark shifts must be added to the measured transition frequencies.

Figure 5: Energy level structure of the eight \(n=20,\,k=0\) Stark states with \(m_{l}=1\) character, calculated at an electric field strength \(\mathcal{F}=0.8\) V cm\({}^{-1}\). These states split into two groups of four states each separated by \(\approx 600\) kHz. The Zeeman effect induced by a magnetic field pointing along the quantization axis is schematically illustrated on the right side and lifts all remaining degeneracies.

## IV Results

Figure 7 displays pulsed-field-ionization (PFI) spectra of the \(n=20\) Stark manifold recorded from the 2s(\(f=1\)) hyperfine level using laser radiation polarized linearly in the direction orthogonal to the applied DC electric field \(\mathcal{F}_{\mathrm{DC}}\). The upper (lower) trace was recorded by field ionizing the Rydberg states with a pulsed field \(\mathcal{F}_{\mathrm{PFI}}\) pointing in the same (opposite) direction as the DC field \(\left[\mathcal{F}_{\mathrm{PFI}}=5.7\,\mathrm{kV}\,\mathrm{cm}^{-1},\mathcal{F}_{\mathrm{DC}}=0.2\,\mathrm{V}\,\mathrm{cm}^{-1}\,(-0.2\,\mathrm{V}\,\mathrm{cm}^{-1})\right]\). The orthogonal laser-polarization arrangement led to the observation of dominant transitions to Stark states of even \(k\) values, as assigned at the top of the figure. The intensity distributions in both spectra are very similar, except at the edges of the manifold. Whereas the intensities of the transitions to the highest \(k\) states (\(k\geq 14\)) are strongly depleted in the upper spectrum, the lowest \(k\) states (\(k\leq-14\)) are depleted in the lower spectrum. The reason for the disappearance of the intensities at the edges of the Stark manifold is twofold: First, the transition dipole moment gradually decreases with increasing \(|k|\) value. Second, the ionization rates of the Stark states that are shifted to higher energies by the pulsed field rapidly decrease with increasing \(k\) value. In the case of the upper spectrum, these states are those observed at the highest frequencies. For the lower spectrum, they are observed at the lowest frequencies because of the reversal of the sign of \(k\) when the field polarity changes upon application of the pulsed field, which diabatically inverts the Stark manifold, as schematically illustrated in the inset. This interpretation is fully supported by calculations of the spectral intensities, as depicted in the red and blue stick spectra in Fig. 7. These intensities were obtained by multiplying the squared transition dipole moments calculated as explained in Section III with the field-ionization probabilities over the 80-ns-long detection window calculated using the analytical expressions reported by Damburg and Kolosov [59]. Before recording the lower spectrum in Fig. 7, the transverse stray fields were carefully compensated.
Consequently, the laser polarization was almost perfectly perpendicular to the DC field. Under these conditions, transitions to Stark states of odd \(k\) values have zero intensity. In the case of the upper spectrum, a weak transverse stray field made the Stark states with odd \(k\) values optically accessible. Transitions to these states are strongest at the edges of the manifold and weakest at the center. The calculated intensities of transitions to odd \(k\) states in the presence of the transverse stray field (\(\sim 10\,\mathrm{mV}\,\mathrm{cm}^{-1}\)) are depicted as gray sticks in Fig. 7. They are only observable at the low-frequency edge of the Stark manifold because the Stark states at the high-frequency edge are not efficiently ionized by the pulsed field, as explained above. The good agreement between measured and calculated intensity distributions enables us to conclude that the Rydberg-Stark states located near the center of the \(n=20\) manifold are fully ionized by the \(5.7\,\mathrm{kV}\,\mathrm{cm}^{-1}\) pulsed field used in the experiments.

Figure 6: Stark shifts of the metastable 2s levels of the H atom calculated for electric fields in the range between 0 and \(2\,\mathrm{V}\,\mathrm{cm}^{-1}\).

Figure 8 displays a typical spectrum of transitions to the \(k=0,\pm 2\) Stark states of the \(n=20\) manifold recorded from the H(2s,\(f=1\)) state using laser radiation with linear polarization orthogonal to the 0.8 V cm\({}^{-1}\) DC field. The spectrum was recorded at an angle deviation \(\delta\alpha=1.1\) mrad from exact orthogonality between the H-atom beam and the laser beam, leading to two Doppler components per \(k\) state, separated by 6.28 MHz. The two Doppler components are slightly asymmetric with mirror-symmetric lineshapes (opposite sign of \(\gamma\) in Eq. 4 below). To optimize the data-acquisition rate when recording the Stark spectra, the frequency was scanned in steps of 400 kHz within the line profiles and of 2 MHz between the lines. In addition, the data points within the spectral lines were obtained by averaging over 500 experimental cycles (_i.e._, over 20 s) whereas only 100 cycles were averaged for data points between the lines. The central frequency, the electric field strength and additional parameters were determined in a least-squares fit to the experimental data (black dots) based on the following line profile for each \(k\) value \[g_{k}(\nu)=\sum_{i=1}^{2}\sum_{m_{f}=-2}^{2}\mathrm{I}^{i}\mathrm{I}^{m_{f}}( \mathcal{F})\exp\left\{\frac{-\left[\nu-\nu_{0}^{i,m_{f}}(\mathcal{F},\gamma )\right]^{2}}{2\left(\sigma_{\mathrm{D}}^{2}+|k|\sigma_{\mathrm{S}}^{2}\right) }\right\}\times\left[1+\mathrm{erf}\left((-1)^{i}\gamma\frac{\left(\nu-\nu_{ 0}^{i,m_{f}}(\mathcal{F},\gamma)\right)}{\sqrt{2}\sigma_{\mathrm{D}}}\right) \right], \tag{4}\] with \[\nu_{0}^{i,m_{f}}(\mathcal{F},\gamma)=\nu_{0}+\nu_{\mathrm{S}}^{m_{f}}( \mathcal{F})+(-1)^{i}\left\{\nu_{\mathrm{D}}-\delta\nu(\gamma)\right\}. \tag{5}\] In Eqs. 4 and 5, \(i\,(=1,2)\) is an index specifying the Doppler component, \(\nu_{0}\) is the transition frequency to the reference position (\(-cR_{\mathrm{H}}/n^{2}\)) of the calculated Stark map of the \(n=20\) levels (see Fig.
3), \(\nu_{\mathrm{S}}^{m_{f}}(\mathcal{F})\) is the field-dependent Stark shift of the \(m_{f}\) level, \(\nu_{\mathrm{D}}\) is the Doppler shift arising from the angle deviation \(\delta\alpha\), and \(\delta\nu(\gamma)\) is a frequency offset used to compensate the shift of the intensity maximum of the asymmetric line profiles from the centers of the hypothetical symmetric profiles. This shift is introduced to reduce the correlation between the asymmetry parameter \(\gamma\) and \(\nu_{\mathrm{D}}\) in the least-squares fit. \(\sigma_{\mathrm{D}}\) is the Doppler width and \(\sigma_{\mathrm{S}}\) accounts for the broadening of the \(|k|=2\) lines arising from weak field inhomogeneities in the photoexcitation volume. As mentioned in Section II.1, the asymmetry of the line profiles originates from the nonthermal velocity distribution caused by the 2s \(\leftarrow\) 1s excitation.

Figure 7: PFI spectra of the \(n=20\) Rydberg-Stark states of H recorded from the 2s\((f=1)\) hyperfine component in an electric field \(\mathcal{F}_{\mathrm{DC}}\approx 200\) mV cm\({}^{-1}\). The direction of the strong pulsed electric field (\(\mathcal{F}_{\mathrm{PFI}}=5.7\) kV cm\({}^{-1}\)) used for ionization was set parallel to \(\mathcal{F}_{\mathrm{DC}}^{\uparrow\uparrow}\) to record the upper spectrum and antiparallel \(\mathcal{F}_{\mathrm{DC}}^{\uparrow\downarrow}\) to record the lower, inverted spectrum. The red and blue stick spectra represent the calculated intensity distributions. Inset: The alignment of the two fields leads to ionization without change of the field polarity (red) or to ionization after a diabatic state inversion upon reversal of the field polarity.

The fit of the line profiles depicted in Fig. 8 resulted in the parameters listed in Table 1. These parameters are helpful in characterizing the experimental conditions. For instance, the homogeneous component of the field is found to correspond closely to the 0.8 V cm\({}^{-1}\) applied experimentally, with an uncertainty of only 0.4‰ or 300 µV cm\({}^{-1}\). The electric-field inhomogeneity leads to a broadening of the \(k=\pm 2\) Stark components and is well represented by a field gradient of 12(3) mV cm\({}^{-2}\), which corresponds to a field change of 2.4(6) mV cm\({}^{-1}\) over the 2 mm diameter of the UV laser. The Doppler shift \(\nu_{\mathrm{D}}\) reflects the deviation angle \(\delta\alpha\) which, in this case, is 1.1 mrad. \(\sigma_{\mathrm{D}}\) is a measure of the transversal velocity distribution, which in the present case corresponds to a temperature of 40 µK and is the result of the geometric constraints along the supersonic beam imposed by the skimmers and the 2s \(\leftarrow\) 1s excitation. The asymmetry parameter is alignment-specific and typically varied between -2 and 4. The central frequency was arbitrarily set to zero because the absolute frequency determination is still in a blinded phase. The weights used for the least-squares fits are determined in an iterative procedure to approach a normal distribution of the residuals. The overall data set collected so far involves more than 500 individual spectra of transitions recorded from the 2s(\(f=1\)) hyperfine state and 113 from the 2s(\(f=0\)) hyperfine state to \(n=20\) Rydberg states, and 35 spectra from the 2s(\(f=1\)) state to \(n=24\) Rydberg states.
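For reference, the skewed double-Gaussian of Eqs. 4 and 5 is straightforward to implement. The sketch below (our own simplification) collapses the \(m_{f}\) substructure and the field-dependent intensities \(\mathrm{I}^{m_{f}}(\mathcal{F})\) into a single amplitude per Doppler component.

```python
import numpy as np
from scipy.special import erf

def g_k(nu, nu0, amp, nu_d, dnu, gamma, sigma_d, sigma_s, k):
    """Simplified line profile of Equations 4 and 5 for one k state:
    two Doppler components with mirror-symmetric skewed-Gaussian shapes."""
    var = sigma_d**2 + abs(k) * sigma_s**2
    out = np.zeros_like(nu)
    for i in (1, 2):                                  # the two Doppler components
        center = nu0 + (-1)**i * (nu_d - dnu)
        gauss = np.exp(-(nu - center)**2 / (2 * var))
        skew = 1 + erf((-1)**i * gamma * (nu - center) / (np.sqrt(2) * sigma_d))
        out += amp * gauss * skew
    return out
```

A least-squares fit of this profile (e.g. with `scipy.optimize.curve_fit`) then yields \(\nu_{0}\), \(\mathcal{F}\), and the width parameters of Table 1.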
These spectra were recorded for different valve temperatures, electric-field strengths and deviation angles \(\delta\alpha\) to investigate possible sources of systematic uncertainties. The main objective of the study presented here was to verify that the central frequencies extracted from the spectra do not depend on the strength of the applied electric field. A typical set of four measurements recorded at nominal field strengths of 0.4, 0.8, 1.2 and 1.6 V cm\({}^{-1}\) under otherwise identical experimental conditions (beam velocity of 1060 m s\({}^{-1}\) and deviation angle \(\delta\alpha\) of 1.1 mrad) is presented in Fig. 9.

Figure 8: a) Typical experimental (dots) and fitted (blue) spectra of the three (\(k=0,\pm 2\)) Rydberg-Stark states near the center of the \(n=20\) Stark manifold of H, each exhibiting two Doppler components. b) Weighted residuals (see text for details).

Table 1: Fit results obtained in the least-squares fit of the line profiles based on Equations 4 and 5.

| parameter | value |
| --- | --- |
| \(\nu_{0}/\mathrm{kHz}\) | 0(26) (blinded) |
| \(\mathcal{F}/\mathrm{V\,cm^{-1}}\) | 0.8076(3) |
| \(\nu_{\mathrm{D}}/\mathrm{MHz}\) | 3.16(5) |
| \(\sigma_{\mathrm{D}}/\mathrm{MHz}\) | 1.56(7) |
| \(\sigma_{\mathrm{S}}/\mathrm{MHz}\) | 0.27(6) |
| \(\gamma\) | 0.65(18) |

Figure 9: Spectra of the \(n=20\), \(k=0,\pm 2\), \(|m_{l}|=1\leftarrow 2\mathrm{s}(f=1)\) transitions measured when applying nominal electric fields of 0.4, 0.8, 1.2 and 1.6 V cm\({}^{-1}\), respectively. Each spectrum represents the sum of three independent scans as described in Section II. Right: Relative positions of the line center \(\nu_{0}\) with respect to the line center measured at a nominal field strength of 0.4 V cm\({}^{-1}\). The error bars represent 1\(\sigma\) uncertainties.

At the scale of the figure, the Stark effect appears essentially linear. Table 2 summarizes the relevant lineshape parameters (see Eqs. 4 and 5) extracted from the fits of the lineshapes to the experimental data. The central frequencies corrected for the Stark shift of the 2s state agree within the combined uncertainties and do not reveal any systematic dependence on the field strength within the 20 kHz accuracy of the measurements. The field strength corresponds to the applied electric potential within the expected uncertainties resulting from the geometry of the electrode plates and the electronic circuits used to apply the potentials. The field-dependent line broadening does not reveal a significant dependence on the applied field strength, which suggests that the applied field distribution does not contribute to the observed field inhomogeneity. The slight variations in the values of \(\nu_{\mathrm{D}}\) and \(\sigma_{\mathrm{D}}\) reflect small changes in the day-to-day alignments of the beams and in the supersonic-beam properties.

Table 2: Lineshape parameters extracted from fits to the measured spectra of the \(n=20\), \(k=0,\pm 2\), \(|m_{l}|=1\leftarrow 2\mathrm{s}(f=1)\) transitions measured when applying a nominal electric field of 0.4, 0.8, 1.2 and 1.6 V cm\({}^{-1}\), respectively.

| parameter | 0.4 V cm\({}^{-1}\) | 0.8 V cm\({}^{-1}\) | 1.2 V cm\({}^{-1}\) | 1.6 V cm\({}^{-1}\) |
| --- | --- | --- | --- | --- |
| \(\nu_{0}/\mathrm{kHz}\) | 0(21) | 21(18) | \(-20(21)\) | \(-0(20)\) |
| \(\mathcal{F}/\mathrm{V\,cm^{-1}}\) | 0.4012(4) | 0.7990(3) | 1.1882(3) | 1.5794(3) |
| \(\sigma_{\mathrm{S}}/\mathrm{MHz}\) | 0.31(10) | 0.22(9) | 0.17(14) | 0.23(6) |
| \(\nu_{\mathrm{D}}/\mathrm{MHz}\) | 4.26(5) | 4.61(4) | 5.20(5) | 5.18(3) |
| \(\sigma_{\mathrm{D}}/\mathrm{MHz}\) | 2.02(10) | 1.78(5) | 2.12(5) | 1.81(5) |

The data set collected so far was used to determine the hyperfine splitting of the 2s level as well as the difference between the Bohr energies of the \(n=20\) and \(n=24\) Rydberg states. Figure 10 presents spectra of the transitions to the \(n=20\), \(k=0,\pm 2\) Stark states recorded from the 2s(\(f=0\)) (red) and 2s(\(f=1\)) (blue) states as an illustration.
Taking the difference in the central frequencies \(\nu_{0}\) (see Eq. 5) for the two sets of data (197 spectra and 50 spectra for \(f=1\) and \(f=0\), respectively) yields a value of \(177.546(11)\,\mathrm{MHz}\) for the 2s hyperfine splitting, which agrees within the \(1\sigma\) uncertainty with the much more precise value of \(177.55683887(85)\,\mathrm{MHz}\) determined by Ramsey spectroscopy in the \(n=2\) manifold [60].

The difference in the Bohr energies of the \(n=20\) and \(n=24\) Rydberg states was determined in an analogous manner from spectra of the \(n=20\) and \(n=24\) Stark states recorded from the 2s(\(f=1\)) state, as illustrated in Fig. 11. The difference of the two \(\nu_{0}\) values is \(2\,511\,705.793(10)\,\mathrm{MHz}\), which also agrees within the experimental uncertainty with the value \(cR_{\mathrm{H}}\left(\frac{1}{20^{2}}-\frac{1}{24^{2}}\right)=2\,511\,705.802\,\mathrm{MHz}\). The uncertainty of \(10\,\mathrm{kHz}\) results from the addition in quadrature of the \(7\,\mathrm{kHz}\) uncertainties of the blinded \(\nu_{0}\) values extracted from the experimental data.
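The reference value quoted above can be reproduced from tabulated constants. The following minimal check (with approximate CODATA 2018 constants hard-coded purely for illustration) recovers the \(2\,511\,705.8\,\mathrm{MHz}\) interval.

```python
# Cross-check of the n = 20 -> n = 24 Bohr-energy interval quoted above,
# using approximate CODATA 2018 constants (illustrative only).
c = 299_792_458.0                      # speed of light, m/s (exact)
R_inf = 10_973_731.568160              # Rydberg constant, m^-1
me_over_mp = 1 / 1836.15267343         # electron-to-proton mass ratio

cR_H = c * R_inf / (1 + me_over_mp)    # reduced-mass-corrected cR_H, in Hz
interval = cR_H * (1 / 20**2 - 1 / 24**2)
print(f"{interval / 1e6:,.3f} MHz")    # ~2,511,705.8 MHz
```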
## V Conclusion

In this article, we have outlined an experimental approach to determine \(R_{\infty}\) from \(k=0\), \(\pm 2\), \(|m_{l}|=1\) Rydberg-Stark spectra of H. We have demonstrated that systematic errors resulting from the Stark effect are insignificant within the \(\sim 11\,\mathrm{kHz}\) precision of the four data sets used as illustrations (see Fig. 9). We have also demonstrated that the differences between the Bohr energy at \(n=20\) and the positions of the \(f=0\) and 1 hyperfine components of the 2s state are consistent, within the \(11\,\mathrm{kHz}\) statistical uncertainty of the present determination, with the more precise value of the 2s \((f=0)-(f=1)\) interval determined recently by Ramsey microwave spectroscopy [60]. Finally, we have determined the difference between the Bohr energies at \(n=20\) and 24 and found the result to agree with Bohr's formula using the CODATA 2018 recommended value for \(R_{\mathrm{H}}\) [10].

The data presented in this article were collected over a period of several months with frequent realignment of the optical system and supersonic beam. We did not observe inconsistencies in any of the relative frequencies determined for this article over this time. The \(2\mathrm{s}(f=0)-(f=1)\) and \(\nu_{0}(n=24)-\nu_{0}(n=20)\) intervals presented in this article correspond to differences of large frequencies, and systematic errors largely cancel when these differences are taken. The main potential source of systematic errors in our method originates from the Doppler effect and a possible imperfect cancellation of the Doppler shifts. To characterize such uncertainties, measurements of absolute frequencies are underway, in which we systematically vary the velocity of the supersonic beam and the deviation angle \(\delta\alpha\). Absolute transition frequencies will be reported when these measurements are completed.

Figure 10: Spectra of the \(n=20\), \(k=0,\pm 2\), \(|m_{l}|=1\leftarrow 2\mathrm{s}(f=1)\) (blue) and \(n=20\), \(k=0,\pm 2\), \(|m_{l}|=1\leftarrow 2\mathrm{s}(f=0)\) (red) transitions. The difference of the two central frequencies \(\nu_{0}\) corresponds to the hyperfine interval of the 2s state.

## Acknowledgments

We thank Dominik Husmann (METAS, Bern) for his help in maintaining the SI-traceable frequency-dissemination network and Gloria Clausen for helpful discussions. We also thank Prof. Klaus Ensslin and Peter Marki for the low-noise DC voltage source used in the measurements of the Stark spectra. This work was supported by the Swiss National Science Foundation through the Sinergia program (Grant No. CRSII5-183579) and a single-investigator grant (Grant No. 200020B-200478).
We present a method for measuring the frequencies of transitions to high-n Rydberg states of the hydrogen atom with high precision, free of uncontrolled systematic shifts caused by stray electric fields. The method consists of recording Stark spectra of the field-insensitive k=0 Stark states and of the field-sensitive k=±2 Stark states, the latter being used to calibrate the electric-field strength. The method is demonstrated by measurements of transitions from the 2s(f=0 and 1) hyperfine levels in intentionally applied electric fields with strengths in the range between 0.4 and 1.6 V cm<sup>-1</sup>. The weakly field-dependent energies of the k=0 levels are corrected using precisely calculated shifts, yielding the corresponding Bohr energies (-cR<sub>H</sub>
2310.00246
A hybrid quantum-classical conditional generative adversarial network algorithm for human-centered paradigm in cloud
As an emerging field that aims to bridge the gap between human activities and computing systems, human-centered computing (HCC) in cloud, edge and fog has had a huge impact on artificial intelligence algorithms. The quantum generative adversarial network (QGAN) is considered one of the quantum machine learning algorithms with great application prospects, and it too should be improved to conform to the human-centered paradigm. The generation process of QGAN is relatively random and the generated model does not conform to the human-centered concept, so it is not well suited to real scenarios. In order to solve these problems, a hybrid quantum-classical conditional generative adversarial network (QCGAN) algorithm is proposed, which is a knowledge-driven human-computer interaction computing mode that can be implemented in the cloud. The goals of stabilizing the generation process and realizing the interaction between humans and the computing process are achieved by inputting artificial conditional information into the generator and discriminator. The generator uses a parameterized quantum circuit with an all-to-all connected topology, which facilitates the tuning of the network parameters during training. The discriminator uses a classical neural network, which effectively avoids the "input bottleneck" of quantum machine learning. Finally, the BAS training set is selected to conduct experiments on a quantum cloud computing platform. The results show that the QCGAN algorithm converges to the Nash equilibrium point after training and performs human-centered classification-generation tasks.
Wenjie Liu, Ying Zhang, Zhiliang Deng, Jiaojiao Zhao, Lian Tong
2023-09-30T04:31:23
http://arxiv.org/abs/2310.00246v1
A hybrid quantum-classical conditional generative adversarial network algorithm for human-centered paradigm in cloud

###### Abstract

As an emerging field that aims to bridge the gap between human activities and computing systems, human-centered computing (HCC) in cloud, edge and fog has had a huge impact on artificial intelligence algorithms. The quantum generative adversarial network (QGAN) is considered one of the quantum machine learning algorithms with great application prospects, and it too should be improved to conform to the human-centered paradigm. The generation process of QGAN is relatively random and the generated model does not conform to the human-centered concept, so it is not well suited to real scenarios. In order to solve these problems, a hybrid quantum-classical conditional generative adversarial network (QCGAN) algorithm is proposed, which is a knowledge-driven human-computer interaction computing mode in the cloud. The goals of stabilizing the generation process and realizing the interaction between humans and the computing process are achieved by inputting artificial conditional information into the generator and discriminator. The generator uses a parameterized quantum circuit with an all-to-all connected topology, which facilitates the tuning of the network parameters during training. The discriminator uses a classical neural network, which effectively avoids the "input bottleneck" of quantum machine learning. Finally, the BAS training set is selected to conduct experiments on a quantum cloud computing platform. The results show that the QCGAN algorithm converges to the Nash equilibrium point after training and performs human-centered classification-generation tasks.

Quantum generative adversarial network; Conditional generative adversarial network; Human-centered computing; Cloud computing; Parameterized quantum circuits

## 1 Introduction

With the development of wireless communications and networking, human-centered computing (HCC) in cloud, edge and fog attempts to effectively integrate the various computing elements related to humans [1, 2], and it has become a common focus of attention in academia and industry. Unlike conventional computing, HCC pays more attention to the role of humans in computing technology and to the interaction of humans with cyberspace and the physical world [3]. Therefore, the design of HCC systems and algorithms needs to take into account the individual's abilities and subjective initiative [4, 5]. Among the available paradigms, cloud computing uses super-large-scale distributed computing to meet the large number of instances and the complex computational requirements of current artificial intelligence (AI) algorithms, and it has become a commonly sought computing method [6; 7]. Against the background of HCC and big data, many interesting and practical applications are emerging [8; 9; 10]. Privacy is also an important norm that computing models must respect, especially with regard to privacy perception and privacy protection [11; 12; 13]. Quantum cloud computing allows users to test and develop their quantum programs on local personal computers and run them on actual quantum devices, thereby reducing the distance between humans and the mysterious quantum world [14]. Under the influence of the AI wave, many technology companies are committed to establishing quantum cloud computing platforms that enable users to implement quantum machine learning algorithms.
Compared with the two major models of machine learning, the generative model and the discriminative model, the generative model is more capable of exerting human subjective initiative, so it has the potential to be developed into the HCC paradigm. Therefore, we consider the very creative quantum generative adversarial network model as a breakthrough for HCC computing in the cloud.

Generative adversarial network (GAN) [15] evaluates generative models through a pair of adversarial neural networks and has been a hot topic in generative machine learning in recent years. The GAN algorithm is based on a game-theoretic scenario: the generator aims to learn the mapping from a simple input distribution to the complex training-sample space by competing with the discriminator. As the adversary, the discriminator should judge as accurately as possible whether the input data come from the training set or from the generator. Both participants of the game try to minimize their own loss, so that the adversarial network framework finally reaches a Nash equilibrium [16]. In recent years, GAN has been successfully used in the processing of images, audio, natural language, etc., to achieve functions such as sharp image generation [17; 18], video prediction [19], text summarization [20], and semantic image generation [21]. In practice, however, it is difficult to ensure stable training of GAN. Researchers have used results from deep learning to improve GAN, including designing new network structures [22], adding regularization constraints [23], ensemble learning [24], and improving optimization algorithms [25]. However, the improved algorithms above are not human-centered, because the rules learned by the GAN algorithm are implicit. It is difficult to generate data that meet specific requirements by changing the structure or input of a trained generator.

In 2014, Mirza et al. proposed the conditional generative adversarial network (CGAN) [26]. This method guides GAN to learn to sample from the conditional distribution by adding conditional constraints to the hidden variables of the input layer, so that the generated data can be steered by the conditional inputs, thereby expanding the application scenarios of the GAN algorithm. In this construction, the setting of conditional constraints allows human subjective initiative to play a role, so CGAN can be regarded as an HCC algorithm. Based on the CGAN algorithm, many human-centered applications have been constructed, such as object detection [27] and medical-image processing and synthesis [28; 29].

Quantum generative adversarial network (QGAN) is a data-driven quantum-circuit machine learning algorithm that combines the classical GAN with quantum computing [30]. In 2018, Lloyd proposed the concept of QGAN [31], analyzed the effectiveness of three different QGAN frameworks from a theoretical perspective, and demonstrated that quantum adversarial learning can also reach the Nash equilibrium point when the generative distribution can fit the real distribution. In the same year, Pierre's team discussed QGAN in more detail, giving the general structure of the parameterized quantum circuit (PQC) used as a generator and the estimation method for the parameter gradients when training the network [32]. In 2019, Hu et al. used superconducting-circuit experiments to demonstrate the feasibility of QGAN on current noisy intermediate-scale quantum (NISQ) devices [33].
Additionally, the optimization of the quantum generator structure is one of the research priorities. For example, matrix product states [34] and tree tensor networks [35] have been used to construct PQCs serving as the generator and discriminator of a GAN, respectively, and the convergence and robustness to noise of these methods have been verified through experiments on quantum hardware. In terms of generating quantum data, quantum supremacy means that classical information processors or neural networks sometimes cannot fit the data generated by quantum systems, and only a quantum generator can complete such tasks. For the generation of classical data, the output of a quantum generator always satisfies the differentiability constraint, and by sampling the output of the quantum generator, classical discrete data can be obtained. In contrast, classical GAN cannot directly generate discrete data because of the differentiability constraint. Therefore, as a complement to the classical GAN, QGAN, with its ability to generate discrete data, and the combination of other known GAN variants with quantum-mechanical mechanisms are of great research value.

Similar to classical GAN, QGAN also suffers from an uncontrollable training process and random generative output. In practical applications, however, obtaining an intended output by changing the input is the more common situation, so plain QGAN is less practical. In order to solve the problem that the QGAN algorithm lacks human-oriented design, this paper proposes a hybrid quantum-classical scheme based on conditional generative adversarial networks. Conditional constraints are added to the QGAN algorithm to guide the training process. This method combines the controllability of CGAN with the discrete-data generation capability of QGAN. By analyzing the performance of the different GANs, it is shown that the algorithm outperforms the classical CGAN in terms of time complexity and functionality. Through modeling and training experiments in the cloud on a classical data-generation problem, the convergence of the model and the accuracy of the generated data verify the feasibility of applying quantum computing to the CGAN structure.

The rest of the paper is organized as follows. Section 2 describes the preliminaries of classical GAN and QGAN. Section 3 presents the design of QCGAN, including the method of designing the PQCs and estimating the parameter gradients. The performance analysis of QCGAN and the comparison with other related algorithms are given in Section 4. In Section 5, experiments are performed on a quantum cloud computing platform to verify the feasibility of the proposed QCGAN algorithm. Section 6 summarizes our findings and the prospects for future research.

## 2 Principles of generative adversarial network algorithm

### Generative adversarial network

The core idea of the classical GAN is to construct a zero-sum game between a generator and a discriminator. Through the adversarial learning strategy, generator and discriminator are trained alternately to obtain a better generative model. The structure and algorithm flowchart of GAN are shown in Fig. 1. Specifically, the first step is to provide training samples as the generation target, assuming that the real data come from a fixed but unknown distribution \(p_{real}\left(x\right)\). The generator is a neural network that can map a low-dimensional distribution to a high-dimensional space, and the discriminator is a neural network with a classification function.
The parameters of generator and discriminator are denoted as \(\overrightarrow{\theta}_{G}\) and \(\overrightarrow{\theta}_{D}\), respectively. The input of the generator is a noise vector \(z\), which is generally sampled from a normal distribution or a uniform distribution; \(x=G\left(\overrightarrow{\theta}_{G},z\right)\) is the output of the generator, which is transformed from the noise vector and constitutes the generative distribution \(p_{G}\left(x\right)\). In the case of ideal adversarial training, the discriminator will not be able to distinguish whether the input comes from the real distribution \(p_{real}\left(x\right)\) or the generative distribution \(p_{G}\left(x\right)\). Therefore, the goal of training the generator is to make the discriminator classify the output of the generator as real data as often as possible. On the other hand, when training the discriminator, its input contains real data \(x\sim p_{real}\left(x\right)\) and the output of the generator \(x\sim p_{G}\left(x\right)\), and the training goal is then to judge the two categories of input data accurately. Combining these two aspects, the optimization of GAN can be described as the following minimax game problem

\[\min_{G}\max_{D}V\left(D,G\right)=E_{x\sim p_{real}}\left[\log D\left(x\right)\right]+E_{x\sim p_{G}}\left[\log\left(1-D\left(x\right)\right)\right]. \tag{1}\]

Figure 1: Schematic diagram of classical generative adversarial network.

### Conditional generative adversarial network

In view of the uncontrollable training process of GAN, the CGAN algorithm adds conditional variables to the inputs of both generator and discriminator to constrain and guide the training. The structure and flowchart of the CGAN algorithm are shown in Fig. 2. The condition variables \(y\) are generally known information with specific semantics, such as feature labels. Under the CGAN framework, the generator pays more attention to sample features that are closely related to the conditional constraints and ignores other, less relevant local features. Therefore, the addition of condition variables allows the training process to be controlled so as to generate higher-quality data. The output of the generator can be regarded as a sample from the conditional distribution \(p_{G}\left(x\left|y\right.\right)\), so the objective function of CGAN can be rewritten on the basis of the original GAN as

\[\min_{G}\max_{D}V\left(D,G\right)=E_{x\sim p_{real}}\left[\log D\left(x\left|y\right.\right)\right]+E_{x\sim p_{G}}\left[\log\left(1-D\left(x\left|y\right.\right)\right)\right]. \tag{2}\]

CGAN needs to sample from the noise vector and the condition variable at the same time, so the choice of reasonable condition variables for the generation target plays a crucial role in the generator's ability to fit the real distribution. The most common method is to extract the conditional variables directly from the training data, so that generator and discriminator receive some prior knowledge about the training set together with their input. For example, the category label is used as a conditional variable and attached to the input layer of the adversarial network [26]. At this point, CGAN can be regarded as turning the unsupervised GAN model into a weakly supervised or supervised model.
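To make the objective in Eq. 2 concrete, the sketch below estimates the two expectation values from finite samples. It is purely illustrative: the `discriminator` callable and the toy data are placeholders, not part of the algorithm described in this paper.

```python
import numpy as np

def cgan_value(discriminator, real_pairs, fake_pairs, eps=1e-12):
    """Monte-Carlo estimate of V(D, G) in Eq. 2.

    real_pairs / fake_pairs are arrays of concatenated (x, y) samples;
    discriminator maps each row to a probability in (0, 1). The
    discriminator maximizes this value; the generator minimizes it.
    """
    d_real = discriminator(real_pairs)
    d_fake = discriminator(fake_pairs)
    return (np.mean(np.log(d_real + eps))
            + np.mean(np.log(1.0 - d_fake + eps)))

# Toy usage with a hypothetical logistic "discriminator":
rng = np.random.default_rng(0)
disc = lambda v: 1.0 / (1.0 + np.exp(-v.sum(axis=1)))
print(cgan_value(disc, rng.normal(1, 1, (40, 7)), rng.normal(-1, 1, (40, 7))))
```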
### Quantum generative adversarial network

The QGAN is also, in principle, a zero-sum game constructed between a generator and a discriminator. If one or more of the real data, the generator and the discriminator obey quantum mechanics, the constructed scheme belongs to the QGAN concept. In general, a quantum data set is expressed in the form of a density matrix, which corresponds to the covariance matrix of a classical data set. Quantum generator and discriminator are composed of PQCs. The selection, arrangement, and depth of the quantum gates of a PQC affect its performance, so they are also the parts that can be optimized. When QGAN is used for classical data-generation tasks, if the goal of the generator is to reproduce high-dimensional statistics, a QGAN with a quantum generator has the potential to converge exponentially faster to the Nash equilibrium [31]. Using a classical neural network as the discriminator in adversarial learning can avoid the input bottleneck of quantum machine learning, because it reduces the computation and resource consumption of quantum-state encoding when discriminating real classical data. Combining these two aspects, the QCGAN algorithm proposed in this paper adopts a quantum generator and a classical discriminator to generate classical data. The structure and algorithm flowchart of this kind of QGAN algorithm are shown in Fig. 3.

Figure 2: Schematic diagram of classical conditional generative adversarial network.

Figure 3: Schematic diagram of quantum generative adversarial network.

## 3 Quantum conditional generative adversarial network algorithm

The QCGAN algorithm proposed in this paper is a generative adversarial network model suitable for fitting classical data distributions, with a controllable generation process. The generator of QCGAN is constructed as a parameterized quantum circuit, and the discriminator uses a classical neural network to complete the classification task. Different from the unconstrained QGAN algorithm, the QCGAN algorithm adds conditional variables to the inputs of both generator and discriminator to guide the training process. The basic flow of the algorithm can be summarized as follows (see Fig. 4): the first step is to prepare classical samples and introduce appropriate conditional constraints according to the data characteristics and the goal of the generation task. These two parts are combined to form the training data set of the network. The classical conditional constraints, which reflect the statistical characteristics of the training data set, are encoded into an entangled quantum state through a well-designed quantum circuit. The next step is to construct the PQC of the generator and the classical neural network of the discriminator. Finally, the generative distribution and the real distribution are sampled separately, these data are input to the discriminator for classification, and an adversarial strategy is formulated for training. If the objective function converges, the best quantum generator has been found. The output of the generator can be sampled to obtain a set of classical data that not only fits the target distribution but also meets the constraints.

### Entangled state coding of conditional information and circuit design

For the quantum scheme of CGAN, an important question is how to input the classical conditional variables into the quantum generator, which involves the quantum-state encoding of the conditional variables and the circuit design for preparing this quantum state. In this paper, taking the representative category labels as an example, the method of encoding the conditional information into an entangled state and designing the corresponding circuit is explained in detail.
In this paper, taking the representative category labels in the conditional variables as an example, the method of coding the entangled state of conditional information and designing corresponding circuit are explained in detail. Figure 3: Schematic diagram of quantum generative adversarial network. As shown in Fig. 4, the real data input to the discriminator are the data pairs \(\left(x,y\right)\) sampled from the classical training set, where \(y\) represents the conditional variable. The generator obtains the representation method of the conditional variables and the probability distribution of various samples in the training set through \(\left|\mathrm{y}\right\rangle\). Therefore, \(\left|\mathrm{y}\right\rangle\) is a quantum state entangled by \(m\)-categories conditional variables according to the probability distribution of real samples \[\left|y\right\rangle=\sum\limits_{j=1}^{m}\frac{1}{\alpha_{j}}\left|y_{j} \right\rangle, \tag{3}\] where \(1/\alpha_{j}=\left(p\left(x\left|y_{j}\right.\right)\right)^{-1/2}\), and \(1/\alpha_{j}\) meets the normalization conditions: \(\sum\limits_{j=1}^{n}\left|1/\alpha_{j}\right|^{2}=1\). The category labels of classical data samples used for machine learning tasks are generally coded by one-hot method. Assuming that three categories of data are generated, and the classical binary representations of three labels are: \(001,010,100\). Since the classical discriminator will perform classification processing on the generative distribution and the real distribution, it is most reasonable to use the same one-hot method to encode \(\left|\mathrm{y}_{j}\right\rangle\). It also happens to be similar in form to the quantum three-particle \(W\) state, \(\left|W\right\rangle_{3}=1/3\left(\left|001\right\rangle+\left|010\right\rangle +\left|100\right\rangle\right)\). When designing a quantum circuit to prepare \(\left|\mathrm{y}\right\rangle\), the quantum circuit of preparing a multi-particle \(W\) state can be used as a template, which reduces the complexity of circuit design to a certain extent. Taking \(\left|y\right\rangle=\left|W\right\rangle_{3}\) as an example, where \(m=3\), \(\alpha_{j}=\sqrt{3}\left(j=1,2,3\right)\), which means that the training set contains three categories of uniformly distributed data. The specific preparation process of \(\left|W\right\rangle_{3}\) can be divided into two steps, and the corresponding quantum circuit is shown in Fig. 5. The first step is to use a combination of single qubit rotation gates and CNOT gate. By adjusting the rotation angle, the qubits are prepared into a special state containing only three terms, i.e., \[\left|Q_{b}Q_{c}\right\rangle:\left|00\right\rangle\rightarrow\frac{1}{\sqrt{ 3}}\left(\left|00\right\rangle+\left|01\right\rangle+\left|10\right\rangle \right). \tag{4}\] According to the calculation rule of quantum circuit cascade, there is a equation \[EDCBA[1,0,0,0]^{\mathrm{T}}=\frac{1}{\sqrt{3}}[1,1,1,0]^{\mathrm{T}}\:. \tag{5}\] Figure 4: Schematic diagram of quantum conditional generative adversarial network. By solving this equation, the parameters \(\theta_{1}=\theta_{3}=0.55357436,\theta_{2}=-0.36486383\) in the quantum circuit can be obtained. The second step is to select the quantum gates without parameters to design circuit. 
The second step is to select quantum gates without parameters to complete the circuit. First, NOT gates (i.e., Pauli-X gates) are applied to \(\left|Q_{b}\right\rangle\) and \(\left|Q_{c}\right\rangle\); then a Toffoli gate sets \(\left|Q_{a}\right\rangle\) to \(\left|1\right\rangle\) when both \(\left|Q_{b}\right\rangle\) and \(\left|Q_{c}\right\rangle\) equal \(\left|1\right\rangle\). Finally, NOT gates are applied again to \(\left|Q_{b}\right\rangle\) and \(\left|Q_{c}\right\rangle\) to restore the state reached at the end of the first step. After the above operations, the initial state \(\left|000\right\rangle\) has evolved into \(\left|W\right\rangle_{3}\).

Figure 5: The quantum circuit for the preparation of the three-particle \(W\) state.

Using the one-hot method to encode the conditional information in the quantum state requires relatively many quantum resources, but it reduces the workload of converting the data into other encodings during classical post-processing. When designing the circuit that prepares the quantum state of the conditional information, one only needs to follow the fixed template: the parameter values are obtained by changing the probability amplitudes on the right-hand side of Eq. 5, so multi-class label information obeying any probability distribution can be represented.

### Circuit design of quantum generator

Quantum computing forms a quantum circuit through the arrangement and combination of wires and basic quantum gates, which act on the quantum state to realize the evolution of the system. A parameterized quantum circuit is a circuit composed of parameterized quantum rotation gates combined with other quantum logic gates. Single-qubit gates realize qubit rotations, while multi-qubit gates mainly realize entanglement between qubits. Representing quantum states as vectors and quantum gates as unitary matrices, the mathematical essence of a quantum gate operation is a linear transformation, similar to classical machine learning; in this sense, the role of the parameters in PQCs and in classical neural networks is consistent. Due to the unitary constraints of quantum gates, generating \(N_{d}\) bits of data requires \(N=N_{d}+N_{c}\) qubits, where the \(N_{d}\) channels process sample data and the \(N_{c}\) channels receive conditional information. For the quantum generator, the input \(\left|0\right\rangle^{\otimes N_{d}}\left|y\right\rangle\) is converted into the final state \(\left|x\right\rangle_{G}\left|y\right\rangle\) after \(L_{G}\) layers of unitary operations, where \(\left|x\right\rangle_{G}\) represents the generative distribution. Sampling the final state of the generator collapses the quantum state to classical data.

The quantum generator is realized by a PQC based on the quantum gate computing mechanism, composed of alternating rotation layers and entanglement layers. Due to the unitary nature of the quantum gate set, if rotation and entanglement layers alternate and form a sufficiently long layer sequence, any unitary transformation can in theory be performed on the initial state. According to the decomposition theorem for single-qubit unitary operations, a single rotation layer is composed of two \(R_{z}\) gates and one \(R_{x}\) gate arranged at intervals, that is, \(\prod\limits_{i=1}^{N}R_{z}\left(\theta_{l,3}^{i}\right)R_{x}\left(\theta_{l,2}^{i}\right)R_{z}\left(\theta_{l,1}^{i}\right)\).
The superscript \(i\) indicates that the quantum gate acts on the \(i\)-th qubit, and the subscript \(l\) indicates that the operation is performed in the \(l\)-th layer. The matrix representations of the \(R_{x}\) and \(R_{z}\) gates are

\[R_{x}\left(\theta\right)=\left[\begin{array}{cc}\cos\left(\theta/2\right)&-i\sin\left(\theta/2\right)\\ -i\sin\left(\theta/2\right)&\cos\left(\theta/2\right)\end{array}\right],\qquad R_{z}\left(\theta\right)=\left[\begin{array}{cc}e^{-i\theta/2}&0\\ 0&e^{i\theta/2}\end{array}\right].\]

A single entanglement layer generally uses two-qubit controlled rotation gates (such as CRX, CRY and CRZ gates) and general two-qubit logic gates (such as the CNOT gate) in permutation and combination. The arrangement of the quantum gates is related to the connectivity among the qubits and thus affects the expressive power and entanglement capability of the PQC. There are three common connection topologies among qubits: circle, star, and all-to-all connectivity [36, 37]. For circle or star connectivity, entanglement between certain qubits cannot be created within a single layer, which means that more layers are required to fit complex target distributions; this undoubtedly increases the difficulty of parameter optimization. All-to-all connectivity is an ideal topology among qubits: although the number of parameters in a single layer exceeds that of the other two arrangements, a shallow all-to-all connected quantum circuit can achieve better generative results, and the computational overhead of the algorithm is lower. When designing the PQC of the quantum generator, it is therefore necessary to ensure that the qubits are fully connected.

According to the above rules, the quantum generator circuit of QCGAN is shown in Fig. 6. The "XX" in Fig. 6 represents an operation involving two qubits, where either one is the control qubit and the other is the target qubit. When the control qubit is \(\left|1\right\rangle\) or \(\left|0\right\rangle\) (specified by the operation), the target qubit is operated on accordingly. The \(N_{c}\) qubits are only responsible for transmitting the conditional information to the other \(N_{d}\) qubits and for passing the conditional information on to the discriminator in post-processing. Therefore, no rotation operation is performed on them; they are only used as control qubits that affect the data-generating part of the circuit.

### Adversarial training strategy

The training of QCGAN is a parameter-optimization quantum algorithm with a feedback loop. The parameters of the quantum generator and the classical discriminator are denoted by \(\theta\) and \(\phi\), respectively. Similar to the classical CGAN, the objective function of QCGAN is

\[\min_{G_{\theta}}\max_{D_{\phi}}V\left(D,G\right)=E_{x\sim p_{real}}\left[\log D\left(x\left|y\right.\right)\right]+E_{x\sim p_{\theta}}\left[\log\left(1-D\left(x_{G}\left|y\right.\right)\right)\right]. \tag{6}\]

At the beginning of training, all parameters in the quantum circuit and in the binary-classification neural network are given random initial values. During the adversarial training process, the parameters of generator and discriminator are optimized alternately. The parameters of the quantum generator circuit are fixed first in order to optimize the parameters of the discriminator. The discriminator simultaneously judges randomly sampled batches of training data and data sampled from the quantum generator.
The output value of the discriminator represents the probability that the corresponding input comes from the real distribution, and the gradient is calculated in the direction that maximizes the discriminator objective in order to optimize the parameters \(\phi\). The parameters of the discriminator are modified and the above optimization is repeated, so that the discriminator not only learns the characteristics of the real data distribution but also acquires the ability to discriminate data drawn from the generative distribution. Then the parameters of the discriminator are fixed, and the input of the discriminator consists only of samples from the generator. The larger the output of the discriminator, the smaller the gap between the generative distribution and the previously learned real distribution. Accordingly, the gradient is calculated in the direction that maximizes the generator objective in order to optimize the parameters \(\theta\). The ability of the generator to fit the true distribution is continuously improved by modifying the parameters and re-executing the circuit on the quantum computing device. The alternate optimization of generator and discriminator parameters is iterated until the generator can reconstruct the state distribution of the training set. Following this adversarial-training scheme, Eq. 6 is decomposed into the non-saturating maximization objectives obeyed by discriminator and generator, respectively,

\[\left\{\begin{array}{l}\max V_{D_{\phi}}=E_{x\sim p_{real}}\left[\log D\left(x\left|y\right.\right)\right]+E_{x\sim p_{\theta}}\left[\log(1-D\left(x_{G}\left|y\right.\right))\right]\\ \max V_{G_{\theta}}=E_{x\sim p_{\theta}}\left[\log\left(D\left(x_{G}\left|y\right.\right)\right)\right]\end{array}\right.. \tag{7}\]

Figure 6: The template of the quantum generator circuit.

During the training process, the gradient-descent method is used to optimize the parameters, which requires the gradient information \(\nabla_{\theta}V_{G_{\theta}}\) and \(\nabla_{\phi}V_{D_{\phi}}\). For classical neural networks, backpropagation can be used directly to calculate the gradient of the objective function efficiently. For quantum devices, however, only measurement results can be obtained, so the output probability of the discriminator cannot be accessed directly. Therefore, the gradient estimation of a parameterized quantum circuit follows the theorem: for a circuit containing parameterized unitary gates \(U\left(\eta\right)=e^{-i\frac{\eta}{2}\Sigma}\), the gradient of the expectation value of an observable \(B\) with respect to the parameter \(\eta\) reads

\[\frac{\partial\langle B\rangle_{\eta}}{\partial\eta}=\frac{1}{2}\left(\langle B\rangle_{\eta^{+}}-\langle B\rangle_{\eta^{-}}\right). \tag{8}\]

Here \(\langle\rangle_{\eta^{\pm}}\) represents the expectation value of the observable with respect to the output quantum wave function generated by the same circuit with parameter \(\eta^{\pm}=\eta\pm\frac{\pi}{2}\) [38]. This is an unbiased estimation method for the gradient of a PQC.
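As a quick numerical illustration of Eq. 8 (a minimal sketch, independent of the QCGAN circuit): for a single qubit rotated by \(R_{x}(\eta)\) and measured in the \(Z\) basis, the two shifted evaluations recover the exact gradient \(-\sin\eta\).

```python
import numpy as np

def expval_Z(eta):
    """<Z> after Rx(eta) applied to |0>; analytically equals cos(eta)."""
    Rx = np.array([[np.cos(eta / 2), -1j * np.sin(eta / 2)],
                   [-1j * np.sin(eta / 2), np.cos(eta / 2)]])
    psi = Rx @ np.array([1.0, 0.0])
    Z = np.diag([1.0, -1.0])
    return np.real(psi.conj() @ Z @ psi)

eta = 0.7
shift_grad = 0.5 * (expval_Z(eta + np.pi / 2) - expval_Z(eta - np.pi / 2))
print(shift_grad, -np.sin(eta))   # both ~ -0.644218, as Eq. 8 predicts
```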
According to this theorem, the gradient of the output of the discriminator with respect to the parameters \(\theta\) can be calculated as

\[\frac{\partial V_{G_{\theta}}}{\partial\theta_{i}}=\frac{1}{2}E_{x\sim p_{\theta^{+}}}\left[\log D\left(x\left|y\right.\right)\right]-\frac{1}{2}E_{x\sim p_{\theta^{-}}}\left[\log D\left(x\left|y\right.\right)\right], \tag{9}\]

where \(\theta^{\pm}=\theta\pm\frac{\pi}{2}e^{i}\) and \(e^{i}\) represents the \(i\)-th unit vector in the parameter space, i.e., \(\theta_{i}\leftarrow\theta_{i}\pm\frac{\pi}{2}\). To estimate the gradient of every parameter, the circuit has to be modified and executed repeatedly for each single parameter. In small-scale numerical simulations, the wave function can be used to calculate the expectation value directly. Another method is to calculate the probability distribution from the wave function and then estimate the gradient by sampling [39].

## 4 Performance evaluation

In order to evaluate the performance of the algorithm proposed in this paper, the classical GAN [15] and CGAN [26], QGAN [31] and QCGAN are compared from the perspectives of time complexity and functionality. The performance comparison of the four generative adversarial algorithms is shown in Table 1.

In the classical CGAN algorithm, the optimization of the generator parameters can be seen as performing gradient descent in the convex set of the normalized covariance matrices of the data set in order to fit the real distribution. Therefore, the time complexity of generating data that fit an \(N\)-dimensional classical distribution is \(O(N^{2})\). In contrast, the time complexity of a quantum information processor performing a linear transformation on an \(N\)-dimensional vector is \(O(N)\). Even though optimizing each parameter requires modifying and executing the PQC twice, the computational time complexity of QCGAN is still lower than that of CGAN when the same parameter-optimization strategy is adopted (neglecting the time cost of preparing classical data as quantum states). On the other hand, the classical CGAN algorithm cannot directly generate discrete data because of the differentiability constraint in the parameter optimization, while QGAN can directly generate discrete data and also has the ability to generate continuous distributions [40].

In addition, the QCGAN algorithm proposed in this paper encodes classical data directly in a quantum state, so its resource consumption is \(N_{d}+N_{c}\), the same as for classical CGAN (where \(N_{d}\) is the resource consumption for generating the target data and \(N_{c}\) is the resource consumption for the conditional information). The resource consumption of the unsupervised GAN and QGAN algorithms is \(N\), equal to the size of the generation target. Compared with unconstrained QGAN, the input of conditional information brings prior knowledge about the training set to the model, turning unsupervised QGAN into a weakly supervised or supervised adversarial learning model and thereby making the data-generation process controllable. The learning results of unconstrained QGAN tend to present the average state of all data in the training set, whereas, thanks to the added conditional information, QCGAN shows an advantage in how well the generated results fit the real distribution. Moreover, a generator trained by QGAN still generates purposelessly: it can only guarantee the authenticity of the generated data but cannot provide further functionality.
QCGAN, by contrast, can perform generation tasks with different purposes by introducing different conditional information, which fully reflects human subjective initiative and realizes the interaction between people and algorithms. QCGAN can therefore be considered a human-centered algorithm. From a functional perspective, the generators trained by QCGAN thus have broader application scenarios and higher efficiency.

## 5 Experimental

In this paper, the synthetic BAS(2,2) (Bars and Stripes) data set is used for the experiments and analyses of the classical-data classification-generation task. TensorFlow Quantum (TFQ), an open-source quantum cloud computing platform for the rapid prototyping of hybrid quantum-classical models for classical or quantum data [41], is used to realize the simulation experiments.

### BAS data set

The BAS\((m,n)\) data are composite images containing only horizontal bars or vertical stripes on a two-dimensional grid. For \(m\times n\)-pixel images, there are only \(2^{m}+2^{n}-2\) valid BAS images among all \(2^{m\times n}\) cases. This defines the target probability distribution, in which the probabilities of valid images are specified constants and the probabilities of invalid images are zero. The generation goal of the experiment is the classical BAS(2,2) data, which intuitively seems an insufficient challenge for quantum computers. However, the effective quantum states represented by the BAS(2,2) data set have a minimum entanglement entropy of \(S_{BAS(2,2)}=1.25163\) and a maximum achievable entropy of \(S_{BAS(2,2)}=1.79248\), the known maximum entanglement entropy for the set of four-qubit states [42]. Therefore, the data have rich entanglement properties and are very suitable as a generation target for quantum adversarial training.

Table 1: Performance comparison of 4 generative adversarial network algorithms.

| Algorithm name | GAN | CGAN | QGAN | QCGAN |
| --- | --- | --- | --- | --- |
| Time complexity | \(O(N^{2})\) | \(O(N^{2})\) | \(O(N)\) | \(O(N)\) |
| Generator resource consumption | \(N\) bits | \(N_{d}+N_{c}\) bits | \(N\) qubits | \(N_{d}+N_{c}\) qubits |
| Generated data type | Continuous | Continuous | Continuous & Discrete | Continuous & Discrete |
| Human-centered algorithm | No | Yes | No | Yes |

The BAS(2,2) images in the training set are divided into three categories: the horizontal-bar images form one category, the vertical-stripe images form another, and the images with pixel values all 0 or all 1 form the third. The valid BAS images follow a uniform distribution. According to this classification standard, the category labels are one-hot encoded and added to the basic data set as the conditional information. Hence the generator requires 7 qubits: processing the pixel information of the BAS data requires 4 qubits, and receiving the conditional information requires 3 qubits. The valid patterns behind this target distribution can be enumerated directly, as sketched below.
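A minimal enumeration of the valid BAS(2,2) patterns (illustrative only; it simply reproduces the counting rule \(2^{m}+2^{n}-2=6\) stated above):

```python
import itertools
import numpy as np

# Enumerate the valid 2x2 bars-and-stripes images: a pattern is valid
# iff all its rows are constant (bars) or all its columns are constant
# (stripes); the all-0 and all-1 images satisfy both conditions.
valid = []
for bits in itertools.product([0, 1], repeat=4):
    img = np.array(bits).reshape(2, 2)
    bars = (img[:, 0] == img[:, 1]).all()      # every row is constant
    stripes = (img[0] == img[1]).all()         # every column is constant
    if bars or stripes:
        valid.append(bits)

print(len(valid), "valid patterns:", valid)    # 6 = 2**2 + 2**2 - 2
```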
### Experimental setup

Our code synthesizes 6000 samples to form the training set, comprising the three categories of BAS data (a total of 6 valid images) that meet the above requirements, together with their category labels. During training, all data are first shuffled and then drawn batch by batch. For the pre-training on the BAS data set, discriminator and generator are trained alternately, once each, in every iteration. The batch size of each training step is 40, and there are 100 epochs of iterative training in total. In each epoch the network is trained 150 times, so that the discriminator can traverse the entire training set. Since an improperly set learning rate can make the network gradients vanish or explode, the learning rate is reduced by a factor of 10 every 10 epochs of training. The Adam (Adaptive Moment Estimation) optimizer provided by the open-source library is used for both generator and discriminator, and the initial learning rate is set to 0.001.

After each epoch of training, the output of the generator is sampled to inspect the quality of the current generative distribution. The inspection covers three points: (1) whether the generated pixel data constitute a valid BAS image; (2) whether the generated pixel data match the conditional information; (3) whether the generated data as a whole follow the uniform distribution. Since the training process of an adversarial network is relatively unstable, the training is terminated early if the combined accuracy of the above three criteria reaches the preset threshold of 95%. If the threshold is never reached, 100 epochs of alternating training are performed according to the preset schedule, and the convergence of the objective function over the whole training process is then analyzed. After that, the adversarial network can be trained again after reasonable adjustments to the training strategy and hyperparameters, guided by a summary of the reasons for the unsatisfactory training results.

## 6 Results and discussion

In the simulation, a series of comparative experiments on the performance of generators using circle, star, and all-to-all connected quantum circuits was conducted first. The results verified the superiority of the all-to-all connected topology designed for the quantum generator in this scheme. Based on this comparison, the PQC structure shown in Fig. 7 is used as the generator of QCGAN. The input \(\left|y\right\rangle\) of the generator is \(\left|W\right\rangle_{3}\), prepared in advance with the circuit shown in Fig. 5. The discriminator is classical, so it is implemented using the classical deep-learning framework TensorFlow, which can form a hybrid quantum-classical model with TFQ. The discriminator has an input layer with dimension \(N_{\mathrm{d}}+N_{c}=7\), one hidden layer made up of 4 neurons, and one output neuron. Since the discriminator directly judges the expectation values output by the generator, the hidden layer uses the ReLU activation function. A Keras-style sketch of this discriminator is given below.
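The following Keras snippet sketches the discriminator just described. The sigmoid output activation and the binary cross-entropy loss are our assumptions for illustration; the paper specifies only the layer sizes and the ReLU hidden layer.

```python
import tensorflow as tf

# Sketch of the 7-4-1 classical discriminator described above.
discriminator = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(7,)),  # 4 data + 3 condition inputs
    tf.keras.layers.Dense(1, activation="sigmoid"),  # assumed output activation
])
discriminator.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                      loss="binary_crossentropy")    # assumed loss
discriminator.summary()
```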
As shown in Fig. 8, the overall trend is that the loss of the discriminator gradually decreases while the loss of the generator gradually increases. After training, the losses of both generator and discriminator converge near the expected equilibrium point. As the number of training epochs increases, the model gradually stabilizes and the competition between generator and discriminator becomes more intense, which is why Fig. 8 still shows sizeable oscillations around the expectation value after convergence. This behavior is also related to the influence of noise on the quantum systems accessed through the cloud platform. After the pre-training on the BAS data set is completed, the output of the quantum generator is sampled 10,000 times to analyze the generative distribution.

The probability distribution of the generated data is shown in Fig. 9(a). Most of the generated data fall on the six valid BAS-mode images, and the three categories of BAS images essentially follow the uniform distribution, with 97.15% accuracy. Fig. 9(b) visualizes the first 100 generated samples as pixel maps at epochs 1, 70 and 100, showing that the quantum generator gradually acquires the ability to generate BAS(2,2) images during pre-training.

The parameters of the quantum gates of the optimal generator are extracted after pre-training, and the generator circuit shown in Fig. 7 is then used to perform the task of generating images of a chosen class. The parameters of the PQC in Fig. 5 are adjusted to set the input \(\ket{y}\) to \(\ket{001}\), and the output \(\ket{x}_{G}\) of the generator is sampled. The result shows that the two kinds of horizontal-stripe images follow the uniform distribution, which means that, guided by the conditional information, the quantum generator can generate data of the requested categories that meet the conditional constraints.

Figure 7: The quantum generator circuit diagram in this QCGAN experiment.

Figure 8: The discriminator (orange) and generator (blue) loss as a function of the training iterations.

Figure 9: \(2\times 2\) Bars-and-Stripes samples generated by QCGAN. (a) The final probability distribution of the generated BAS data. (b) BAS samples generated by QCGAN at different epochs (for illustrative purposes, only 10 samples are shown for each situation).

## 7 Conclusion

Combining the classical CGAN algorithm with quantum computing ideas, this paper proposes a quantum conditional generative adversarial network algorithm for the human-centered paradigm, a general scheme suitable for fitting classical data distributions. We give a detailed account of our design, including the configuration of the PQC used as generator, the parameter-gradient estimation method of the adversarial training strategy, and the specific steps of the algorithm's cloud-computing implementation. The effect of the QCGAN algorithm is that adding conditional constraints related to the training data set in the input layer effectively guides the network to generate data that meet specific requirements. This step increases the controllability of the generation process and is also more in line with the current human-centered requirements on machine learning algorithms. Compared with classical CGAN, the time complexity of the QCGAN algorithm proposed in this paper is lower, and it better matches the needs of practical application scenarios. Experiments on the quantum cloud computing platform show that QCGAN can generate the BAS data distribution effectively and that the generator of QCGAN outputs correct data in the cloud when guided by the conditional constraint.

Given that QGAN has the ability to generate discrete data and the potential to uncover data distributions that cannot be summarized efficiently by classical computation, QGAN and classical GAN are functionally complementary. Many known GAN variants can generate very realistic images, audio and video, so the combination of these algorithms with quantum mechanics is undoubtedly the icing on the cake. Our future work will focus on quantum schemes for classical GAN variants and on constructing quantum machine learning algorithms that conform to the HCC paradigm, together with the corresponding cloud-computing implementations.
## Abbreviations

QGAN: Quantum generative adversarial network; QCGAN: Quantum conditional generative adversarial network; NISQ: Noisy intermediate-scale quantum; CGAN: Conditional generative adversarial network; HCC: Human-centered computing; GAN: Generative adversarial network; PQC: Parameterized quantum circuit; TFQ: TensorFlow Quantum; BAS: Bars and stripes

## Acknowledgements

This work is supported by the National Natural Science Foundation of China (Grant Nos. 62071240 and 61802002); the Natural Science Foundation of Jiangsu Province (Grant No. BK20171458); the Graduate Research and Practice Innovation Program of Jiangsu Province (Grant No. KYCX20,0969); the Natural Science Foundation of Jiangsu Higher Education Institutions of China under Grant No. 19RXBS20028; and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).
The important role of human-centered computing (HCC) in cloud, edge and fog has had a great impact on artificial intelligence algorithms. The quantum generative adversarial network (QGAN), regarded as one of the quantum machine learning algorithms with great application prospects, should be improved to conform to the human-centered paradigm. The generation process of QGAN is relatively random and does not conform to the human-centered concept, making it unsuitable for practical scenarios. To address these problems, a hybrid quantum-classical conditional generative adversarial network (QCGAN) algorithm is proposed, a knowledge-driven computing mode for human-computer interaction that can be implemented in the cloud. The goals of stabilizing the generation process and realizing the interaction between humans and the computing process are achieved by inputting artificial conditional information into the generator and discriminator. The generator uses a parameterized quantum circuit with an all-to-all connected
2310.20196
Further Development of Event-Based Analysis of X-ray Polarization Data
An event-based maximum likelihood method for handling X-ray polarimetry data is extended to include the effects of background and nonuniform sampling of the possible position angle space. While nonuniform sampling in position angle space generally introduces cross terms in the uncertainties of polarization parameters that could create degeneracies, there are interesting cases that engender no bias or parameter covariance. When including background in the Poisson-based likelihood formulation, the formula for the minimum detectable polarization (MDP) has nearly the same form as for the case of Gaussian statistics derived by Elsner et al. (2012) in the limiting case of an unpolarized signal. A polarized background is also considered, which demonstrably increases uncertainties in source polarization measurements. In addition, a Kolmogorov-style test of the event position angle distribution is proposed that can provide an unbinned test of models where the polarization angle in Stokes space depends on event characteristics such as time or energy.
Herman L. Marshall
2023-10-31T05:43:43
http://arxiv.org/abs/2310.20196v2
# Further Development of Event-Based Analysis of X-Ray Polarization Data ###### Abstract An event-based maximum likelihood method for handling X-ray polarimetry data is extended to include the effects of background and nonuniform sampling of the possible position angle space. While nonuniform sampling in position angle space generally introduces cross terms in the uncertainties of polarization parameters that could create degeneracies, there are interesting cases that engender no bias or parameter covariance. When including background in the Poisson-based likelihood formulation, the formula for the minimum detectable polarization (MDP) has nearly the same form as for the case of Gaussian statistics derived by Elsner et al. (2012) in the limiting case of an unpolarized signal. A polarized background is also considered, which demonstrably increases uncertainties in source polarization measurements. In addition, a Kolmogorov-style test of the event position angle distribution is proposed that can provide an unbinned test of models where the polarization angle in Stokes space depends on event characteristics such as time or energy. Keywords: polarimetry, methods Herman L. Marshall ## 1 Introduction The goal of this paper is to extend the maximum likelihood formulation developed earlier for analysis of unbinned X-ray polarimetry data (Marshall, 2021) to circumstances that were not considered there. The method was developed specifically for application to data from the Imaging X-ray Polarization Explorer (IXPE, Weisskopf et al., 2022) but can be applied generally to instruments that yield events with associated polarization information, such as a soft X-ray polarimeter (Marshall et al., 2018) that is now in development, or instruments that must be rotated to obtain polarization information. In the case of IXPE, there is an angle \(\psi\) associated with every event based on the track produced by the photoelectron ejected by the incident X-ray. For the soft X-ray polarimeter, each event is associated with a "channel" according to the position angle of its Bragg reflector relative to the sky. By design, the gas pixel detectors on _IXPE_ (Rankin et al., 2023) and PolarLight (Feng et al., 2019) have uniform sensitivity in \(\psi\). This is not generally true for systems based on Bragg reflection (e.g. OSO-8, Weisskopf et al., 1976), Thomson scattering (e.g. POLIX on XPoSat, Paul, 2022), or Compton scattering (e.g. X-Calibur, Beilicke et al., 2014). Such instruments usually require rotation to obtain uniform azimuthal exposure. See the review of instruments based on Compton scattering by Del Monte et al. (2022). Thus, in section 2, exposure nonuniformities are examined and characterized by two observation-based parameters that can be used to determine the impact of such asymmetries. Every instrument has a background signal, so in section 3, a background term is added to the unbinned likelihood model. The basic case of an unpolarized background is covered in section 3.1 and augmented to include the impact of a polarized background in section 3.2. Given a model with its best fit parameters, it is necessary to test it. A Kolmogorov test of the counts with time or energy would not be sensitive to the polarization model. Previous tests of polarization models generally examined only the significances of the estimates of the polarization fraction for a full observation (e.g. 
Liodakis et al., 2022) or perhaps when binned by energy or pulse phase (e.g. Taverna et al., 2022). In section 4, a new test is proposed that is specifically designed to be sensitive to whether the distribution of the event \(\psi\) values matches the model. This sort of test can be used to examine the validity of a pulsar rotating vector model, such as that fit by the unbinned method developed by Gonzalez-Caniulef et al. (2023). This test method can also be useful in cases where the electric vector position angle (EVPA) rotates with time, as in two observations of the BL Lac object Mk 421 (Di Gesu et al., 2023), in order to test whether the rotation occurs at a uniform rate without binning EVPA measurements in time. A short review of the maximum likelihood formalism is in order, following Marshall (2021). For this analysis, consider a simple case of a fixed energy band over which the polarization is constant so that the data consist of counts in \(\psi\) space. At energy \(E\), the modulation factor of the instrument is \(\mu_{E}\), the instrument effective area is \(A_{E}\), and the intrinsic source photon flux is \(f_{E}\) based on the spectral model of the source. Both \(\mu_{E}\) and \(A_{E}\) are assumed to be known _a priori_. The event density in a differential energy-phase element \(dEd\psi\) about \((E,\psi)\) is \[\lambda(E,\psi)=\frac{1}{2\pi}[1+\mu_{E}(q\cos 2\psi+u\sin 2\psi)]f_{E}A_{E}T \tag{1}\] where \(T\) is the exposure time and the (normalized) Stokes parameters are \(q\equiv Q/I\) and \(u\equiv U/I\) for Stokes fluxes \(I\), \(Q\), and \(U\). (Circular polarization, \(V\), is ignored here, as there is currently no practical way to measure it in the X-ray band.) Assuming that there are \(N\) events, with energies and instrument angles \((E_{i},\psi_{i})\), then the log-likelihood for a Poisson probability distribution of events, \(S=-2\ln L\), is \[S = -2\sum_{i}^{N}\ln\lambda(E_{i},\psi_{i})+\frac{T}{\pi}\int f_{E}A _{E}dE\int_{0}^{2\pi}[1+\mu_{E}(q\cos 2\psi+u\sin 2\psi)]d\psi \tag{2}\] \[= -2\sum_{i}^{N}\ln f_{i}-2\sum_{i}^{N}\ln(1+q\mu_{i}\cos 2\psi_{i} +u\mu_{i}\sin 2\psi_{i})+2T\int f_{E}A_{E}dE \tag{3}\] where \(f_{i}\equiv f(E_{i})\) and \(\mu_{i}\equiv\mu(E_{i})\), after dropping terms independent of \(q\), \(u\), and \(f\). In this case, the log-likelihood for the polarization parameters alone (such as when the polarization is independent of \(E\)) is relatively simple: \[S(q,u)=-2\sum_{i}^{N}\ln(1+q\mu_{i}\cos 2\psi_{i}+u\mu_{i}\sin 2\psi_{i})=-2\sum_{ i}^{N}\ln(1+qc_{i}+us_{i}) \tag{4}\] where \(c_{i}=\mu_{i}\cos 2\psi_{i}\) and \(s_{i}=\mu_{i}\sin 2\psi_{i}\). For a weakly polarized source, the best estimates of \(q\) and \(u\) are well approximated as \(\sum_{i}c_{i}/\sum_{i}c_{i}^{2}\) and \(\sum_{i}s_{i}/\sum_{i}s_{i}^{2}\), respectively. See Marshall (2021) for details. ## 2 Nonuniform Exposure Now, consider the case of a nonuniform exposure in an observation of an unvarying source. The exposure function, \(w(\psi)\), with units of radians\({}^{-1}\), can be defined as the fraction of the exposure spent with sensitivity to phase angle \(\psi\). If the total exposure is \(T\), then the exposure function can be normalized such that it integrates to unity for \(0\leq\psi<2\pi\). 
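Before treating nonuniform exposure, the weak-polarization estimators quoted below Eq. 4 are easy to verify numerically. The following is a quick Monte Carlo check (an illustration with assumed parameter values, not code from the paper): events are drawn from the density of Eq. 1 with a constant modulation factor, and the approximate estimators recover the input Stokes parameters.

```python
# Numerical check of q_hat = sum(c_i)/sum(c_i^2), u_hat = sum(s_i)/sum(s_i^2)
# for a weakly polarized source with constant modulation factor mu.
import numpy as np

rng = np.random.default_rng(1)
q_true, u_true, mu = 0.05, -0.03, 0.3   # assumed values for the check

# draw psi from p(psi) = [1 + mu(q cos2psi + u sin2psi)]/(2pi) by rejection
n = 200_000
psi = rng.uniform(0.0, 2.0 * np.pi, size=4 * n)
p = 1.0 + mu * (q_true * np.cos(2 * psi) + u_true * np.sin(2 * psi))
psi = psi[rng.uniform(0.0, 1.0 + mu, size=psi.size) < p][:n]

c = mu * np.cos(2 * psi)                # c_i = mu_i cos 2psi_i
s = mu * np.sin(2 * psi)                # s_i = mu_i sin 2psi_i
q_hat = c.sum() / (c ** 2).sum()
u_hat = s.sum() / (s ** 2).sum()
print(q_hat, u_hat)                     # ~0.05 and ~-0.03 up to statistical noise
```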
Returning to the nonuniform-exposure case, the event density is \[\lambda(E,\psi)=[1+\mu_{E}(q\cos 2\psi+u\sin 2\psi)]f_{E}A_{E}T\,w(\psi) \tag{5}\] and the log-likelihood for a Poisson probability distribution of events, \(S=-2\ln L\), is \[S = -2\sum_{i}\ln\lambda(E_{i},\psi_{i})+2T\int f_{E}A_{E}dE\int_{0}^{2 \pi}[1+\mu_{E}(q\cos 2\psi+u\sin 2\psi)]w(\psi)d\psi \tag{6}\] To simplify some results, now assume that the spectrum has a spectral shape with uninteresting spectral shape parameters \(\xi\) that are not related to the polarization so that \(f_{E}=f_{0}\eta(E;\xi)\) and define \(K=T\int\eta(E;\xi)A_{E}dE\) and \(K_{\mu}=T\int\eta(E;\xi)A_{E}\mu_{E}dE\) as conversion constants (from flux units to counts or modulated counts), giving \[\begin{split} S(f_{0},q,u)=&-2N\ln f_{0}-2\sum_{i} \ln(1+q\mu_{i}w_{i}\cos 2\psi_{i}+u\mu_{i}w_{i}\sin 2\psi_{i})\\ &+2Kf_{0}+2K_{\mu}f_{0}q\int_{0}^{2\pi}w(\psi)\cos 2\psi d\psi+2K_{ \mu}f_{0}u\int_{0}^{2\pi}w(\psi)\sin 2\psi d\psi\end{split} \tag{7}\] (dropping terms independent of \(f_{0}\), \(q\), or \(u\)). Note that when \(\mu\) is independent of \(E\), \(K_{\mu}=\mu K\). Redefining the weights with trigonometric factors, we can simplify Eq. 7: \[S(f_{0},q,u) = -2N\ln f_{0}-2\sum_{i}\ln(1+q\mu_{i}\alpha_{i}+u\mu_{i}\beta_{i} )+2Kf_{0}+2K_{\mu}f_{0}Aq+2K_{\mu}f_{0}Bu \tag{8}\] where \(\alpha(\psi)\equiv w(\psi)\cos 2\psi\) and \(\beta(\psi)\equiv w(\psi)\sin 2\psi\), so \(\alpha_{i}=\alpha(\psi_{i})\) and \(\beta_{i}=\beta(\psi_{i})\), and the integrals of \(\alpha\) and \(\beta\) over \(\psi\) are \(A\) and \(B\), respectively. The quantities \(A\) and \(B\) are unitless, with absolute values less than or of order unity. Note that \(f_{0}\) is covariant with \(u\) and \(q\) via the exposure weighting terms \(A\) and \(B\). These quantities are both zero when \(w(\psi)\) is constant over \([0,\pi]\) or \([0,2\pi]\) but either or both can be nonzero otherwise. The best estimate of \(f_{0}\) is readily determined by setting \(\partial S/\partial f_{0}\) to zero and solving for \(f_{0}\), giving \[\hat{f}_{0}=\frac{N}{K+K_{\mu}(Aq+Bu)}\ \ . \tag{9}\] When \(A\) and \(B\) are zero or the polarization \(\Pi\equiv(q^{2}+u^{2})^{1/2}\) is zero, then \(f_{0}\) is just \(N/K\), as expected. Setting \(\partial S/\partial u=0\) and \(\partial S/\partial q=0\) to find the best estimates of \(q\) and \(u\) gives \[AK_{\mu}\hat{f}_{0} = \sum_{i}\frac{\mu_{i}\alpha_{i}}{1+\hat{q}\mu_{i}\alpha_{i}+\hat{ u}\mu_{i}\beta_{i}}=\sum_{i}W_{i}\mu_{i}\alpha_{i} \tag{10}\] \[BK_{\mu}\hat{f}_{0} = \sum_{i}\frac{\mu_{i}\beta_{i}}{1+\hat{q}\mu_{i}\alpha_{i}+\hat{ u}\mu_{i}\beta_{i}}=\sum_{i}W_{i}\mu_{i}\beta_{i} \tag{11}\] where \(W_{i}\equiv(1+\hat{q}\mu_{i}\alpha_{i}+\hat{u}\mu_{i}\beta_{i})^{-1}\). As before, these two equations apply under quite general circumstances but require numerical solution. However, as in Marshall (2021), for \(\hat{q}\ll 1\) and \(\hat{u}\ll 1\), a simple approximate solution may be found, noting that \(A\) and \(B\) are generally of order unity, so \[\hat{q} \approx \frac{\sum_{i}\mu_{i}\alpha_{i}-ANK_{\mu}/K}{\sum_{i}\mu_{i}^{2} \alpha_{i}^{2}} \tag{12}\] \[\hat{u} \approx \frac{\sum_{i}\mu_{i}\beta_{i}-BNK_{\mu}/K}{\sum_{i}\mu_{i}^{2} \beta_{i}^{2}}\ . \tag{13}\] At this point, the uncertainties in \(q\) and \(u\) can be derived. All second derivatives of Eq. 8 are nonzero: 
\[\frac{\partial^{2}S}{\partial f_{0}^{2}} = \frac{2N}{f_{0}^{2}} \tag{14}\] \[\frac{\partial^{2}S}{\partial f_{0}\partial q} = 2K_{\mu}A \tag{15}\] \[\frac{\partial^{2}S}{\partial f_{0}\partial u} = 2K_{\mu}B \tag{16}\] \[\frac{\partial^{2}S}{\partial q^{2}} = \sum_{i}W_{i}^{2}\mu_{i}^{2}\alpha_{i}^{2}\approx\sum_{i}\mu_{i}^ {2}\alpha_{i}^{2} \tag{17}\] \[\frac{\partial^{2}S}{\partial u^{2}} = \sum_{i}W_{i}^{2}\mu_{i}^{2}\beta_{i}^{2}\approx\sum_{i}\mu_{i}^ {2}\beta_{i}^{2} \tag{18}\] \[\frac{\partial^{2}S}{\partial q\partial u} = \sum_{i}W_{i}^{2}\mu_{i}^{2}\beta_{i}\alpha_{i}\approx\sum_{i}\mu _{i}^{2}\alpha_{i}\beta_{i} \tag{20}\] where, again, the approximations hold for \(\hat{q}\ll 1\) and \(\hat{u}\ll 1\). We are most interested in the uncertainty in the polarization, \(\Pi\). We can make the coordinate transformation from \((q,u)\) to \((\Pi,\varphi)\), where \(\varphi\) = \(\frac{1}{2}\tan^{-1}(u/q)\), and determine \(S(f_{0},\Pi,\varphi)\): \[S(\hat{f}_{0},\Pi,\varphi) = 2N\ln[K+K_{\mu}\Pi(A\cos 2\varphi+B\sin 2\varphi)]-2\sum_{i} \ln[1+\Pi w_{i}\mu_{i}\cos(2\psi_{i}-2\varphi)] \tag{22}\] for which the second derivative with respect to \(\Pi\) is \[\frac{\partial^{2}S}{\partial\Pi^{2}} = \frac{-2NK_{\mu}^{2}(A\cos 2\varphi+B\sin 2\varphi)^{2}}{[K+K_{ \mu}\Pi(A\cos 2\varphi+B\sin 2\varphi)]^{2}}+2\sum_{i}\frac{w_{i}^{2}\mu_{i}^{2} \cos^{2}(2\psi_{i}-2\varphi)}{[1+\Pi w_{i}\mu_{i}\cos(2\psi_{i}-2\varphi)]^{2}} \tag{23}\] with a limit as \(\Pi\longrightarrow 0\) giving \[\frac{1}{\sigma_{\Pi}^{2}}\approx\sum_{i}w_{i}^{2}\mu_{i}^{2}\cos^{2}(2\psi_{ i}-2\varphi)-NK_{\mu}^{2}(A\cos 2\varphi+B\sin 2\varphi)^{2}/K^{2} \tag{24}\] The first term on the right hand side is the "normal", expected term that depends on the modulation factor and the cosines of the phase angles. The second term, however, is of great concern because it is negative definite, causing the uncertainty in \(\Pi\) to increase arbitrarily, and because it depends on the true but unknown phase. If either \(A\) or \(B\) is nonzero, then the uncertainty in \(\Pi\) depends upon this phase in a way that can render statistical uncertainties difficult to compute and irregular. Thus, an important goal in designing a good polarimeter is to make \(A\) and \(B\) as close to zero as possible. As stated in the introduction, the gas pixel detectors on _IXPE_ (Rankin et al., 2023) have uniform sensitivity to phase angle for the entire exposure, so \(A\) = \(B\) = 0. The case of a set of Bragg reflectors is worth examining. A single reflector has an ideal angular response that is a delta function in \(\psi\): \(w(\psi)=\delta(\psi-\psi_{0})\). If there are \(n_{B}\) reflectors, then \(w(\psi)=1/n_{B}\sum_{i}^{n_{B}}\delta(\psi-\psi_{i})\). It can be shown that when \(\psi_{i}=\psi_{0}+\pi i/n_{B}\), then \(A\) and \(B\) are identically zero for arbitrary \(\psi_{0}\) when \(n_{B}>2\) and the solution to Eqs. 9 to 11 is not degenerate.1 For the broad-band soft X-ray polarimeter with 3 Bragg reflectors at 120\({}^{\circ}\) to each other (Marshall et al., 2018), \(A\) = \(B\) = 0 if all three channels are operated for the same time period. Footnote 1: For \(n_{B}\) = 2, \(A\) = \(B\) = 0 also, but then the system of equations becomes degenerate and no unique solution is possible. For example, Eq. 11 is 0 = 0 for \(\psi_{0}\) = 0. ## 3 Adding a background term There are two cases to consider. The easier case is when the background is unpolarized. 
This case helps set the stage for the case of a polarized background, which is important for situations such as when measuring a pulsar inside a pulsar wind nebula or a source in the wings of a brighter, polarized source. Regardless of whether the background is polarized, a background region of solid angle \(\Omega\) is chosen that is source free and the source region covers a solid angle \(\zeta\Omega\) that is presumed to have the same background characteristics. There are \(N\) events in the source region labeled with index \(i\) and \(N_{B}\) events in the background region labeled with index \(j\). This case is similar to that considered by Elsner et al. (2012) for the case of Gaussian counting statistics. To compare to their analysis more directly, we expect \(C_{B}\equiv\zeta N_{B}\) counts in the source region to be due to background, giving \(N-C_{B}\equiv C_{S}\)_net_ counts in the source region. In this analysis, the exposure is uniform over \(\psi\). ### Unpolarized Background If the background is unpolarized, the event density is relatively simple: \[\lambda_{S}(\psi)=\frac{1}{2\pi}\{N_{0}[1+\mu(q\cos 2\psi+u\sin 2\psi)]+\zeta B\} \tag{25}\] for the source region and \(\lambda_{B}(\psi)=\frac{B}{2\pi}\) for the background region. Here, the notation is simplified by defining \(N_{0}=f_{0}\,T\int\eta(E;\xi)A_{E}dE\), which is just the expected number of counts from the source under some spectral model \(f_{0}\eta(E;\xi)\). Then, the log-likelihood for a Poisson probability distribution of source and background events, \(S=-2\ln L\), is \[S = -2\sum_{i=1}^{N}\ln\lambda_{S}(\psi_{i})+\frac{1}{\pi}\int_{0}^ {2\pi}[N_{0}(1+\mu q\cos 2\psi+\mu u\sin 2\psi)+\zeta B]d\psi-2\sum_{j=1}^{N_{B}} \ln B+2B \tag{26}\] \[= -2\sum_{i=1}^{N}\ln[N_{0}(1+qc_{i}+us_{i})+\zeta B]+2N_{0}+2B(1+ \zeta)-2N_{B}\ln B \tag{27}\] (dropping terms independent of \(B,N_{0},\,q,\) or \(u\)). Setting partial derivatives to zero gives \[\hat{N_{0}} = \sum_{i=1}^{N}\frac{1+\hat{q}c_{i}+\hat{u}s_{i}}{1+\hat{q}c_{i}+ \hat{u}s_{i}+\frac{\zeta\hat{B}}{\hat{N_{0}}}}=\sum w_{i}+\hat{q}\sum w_{i}c_{ i}+\hat{u}\sum w_{i}s_{i}=\sum w_{i} \tag{28}\] \[\hat{B} = \frac{N_{B}}{1+\zeta(1-\frac{\sum w_{i}}{\hat{N_{0}}})}=N_{B}\] (29) \[0 = \sum w_{i}c_{i}\] (30) \[0 = \sum w_{i}s_{i} \tag{31}\] for \(N_{0}\neq 0\) and defining \(w_{i}=[1+\hat{q}c_{i}+\hat{u}s_{i}+\zeta\hat{B}/\hat{N_{0}}]^{-1}\). Eqs. 30 and 31 have been used to simplify Eq. 28, and Eq. 28 is used to simplify Eq. 29. Substituting \(N_{B}\) for \(B\) in Eq. 28 and transforming from \((q,u)\) to \((\Pi,\varphi)\) gives \[\hat{N_{0}}=\sum_{i=1}^{N}[1+\hat{\Pi}\mu_{i}\cos(2\psi_{i}+2\hat{\varphi})+ \frac{\zeta N_{B}}{\hat{N_{0}}}]^{-1}, \tag{32}\] which can be solved for \(\hat{N_{0}}\) for trial values of \(\hat{\Pi}\) and \(\hat{\varphi}\) to make minimizing \(S\) simpler by substituting \(\hat{N_{0}}\) and \(\hat{B}=N_{B}\) into Eq. 27. As \(\hat{\Pi}\longrightarrow 0\), \(\hat{N_{0}}\longrightarrow N-\zeta N_{B}=C_{S}\), as expected, providing a good starting point for estimating \(\hat{N_{0}}\). 
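Eq. 32 lends itself to a simple fixed-point iteration. The sketch below (illustrative Python, not the paper's implementation; argument names are assumptions) starts from the \(\hat{\Pi}\rightarrow 0\) limit \(N-\zeta N_{B}\) and iterates until convergence:

```python
# Fixed-point solution of Eq. 32 for N0_hat at trial (Pi_hat, phi_hat).
import numpy as np

def n0_hat(psi, mu, zeta, n_bkg, pi_hat, phi_hat, tol=1e-8, itmax=200):
    cosfac = mu * np.cos(2 * psi + 2 * phi_hat)   # mu_i cos(2 psi_i + 2 phi)
    n0 = len(psi) - zeta * n_bkg                  # C_S, the Pi -> 0 limit
    for _ in range(itmax):
        new = np.sum(1.0 / (1.0 + pi_hat * cosfac + zeta * n_bkg / n0))
        if abs(new - n0) < tol:
            break
        n0 = new
    return n0
```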
The minimum detectable polarization (MDP) for this case can be estimated by computing the uncertainty in \(\Pi\), \(\sigma_{\Pi}\), by \[\frac{\partial^{2}S}{\partial\Pi^{2}}=\frac{2}{\sigma_{\Pi}^{2}}=2\sum_{i=1}^{ N}w_{i}^{2}\mu_{i}^{2}\cos^{2}(2\psi_{i}+2\hat{\varphi}) \tag{33}\] Then, as \(\hat{\Pi}\longrightarrow 0\), \(w_{i}\longrightarrow[1+\zeta N_{B}/\hat{N_{0}}]^{-1}\), so \[\sigma_{\Pi}\longrightarrow\frac{1+\zeta N_{B}/\hat{N_{0}}}{[\sum_{i=1}^{N} \mu_{i}^{2}\cos^{2}(2\psi_{i}+2\hat{\varphi})]^{1/2}}=\frac{\sqrt{2}(1+\zeta N_ {B}/\hat{N_{0}})}{[N\langle\mu_{i}^{2}\rangle]^{1/2}}=\frac{\sqrt{2N}}{(N- \zeta N_{B})\sqrt{\langle\mu_{i}^{2}\rangle}}=\frac{\sqrt{2(C_{S}+C_{B})}}{C_{S }\sqrt{\langle\mu_{i}^{2}\rangle}} \tag{34}\] where the first step follows as \(\mu_{i}\) and \(\psi_{i}\) are uncorrelated and the second step follows from the asymptotic value of \(\hat{N_{0}}\). Finally, the MDP at 99% confidence is \[\mathrm{MDP}_{99}=3.03\sigma_{\Pi}=\frac{4.29\sqrt{C_{S}+C_{B}}}{C_{S}\sqrt{ \langle\mu_{i}^{2}\rangle}}\ \, \tag{35}\] just as found by Elsner et al. (2012) for Gaussian statistics with the exception of the substitution of the rms of \(\mu_{i}\) for \(\mu\). ### Polarized Background It is more likely that the X-ray background is partially polarized, as it often contains some fraction of the source as well (due to the extent of the telescope's point spread function). The background is assumed to be primarily due to photons, essentially indistinguishable from source events and subject to the same modulation factor as source events. If the background is polarized, the event density has added terms giving the normalized \(u\) and \(q\) of the background, denoted by \(q_{b}\) and \(u_{b}\): \[\lambda_{S}(\psi) = \frac{1}{2\pi}\left\{N_{0}[1+\mu(q\cos 2\psi+u\sin 2\psi)]+ \zeta B[1+\mu(q_{b}\cos 2\psi+u_{b}\sin 2\psi)]\right\} \tag{36}\] \[\lambda_{B}(\psi) = \frac{B}{2\pi}\left[1+\mu(q_{b}\cos 2\psi+u_{b}\sin 2\psi)\right] \tag{37}\] for the source and background regions, respectively. Then, \[S = -2\sum_{i=1}^{N}\ln\lambda_{S}(\psi_{i})+2\int_{0}^{2\pi}\lambda_ {S}(\psi)d\psi-2\sum_{j=1}^{N_{B}}\ln\lambda_{B}(\psi_{j})+2\int_{0}^{2\pi} \lambda_{B}(\psi)d\psi \tag{38}\] \[= -2\sum_{i=1}^{N}\ln[N_{0}(1+qc_{i}+us_{i})+\zeta B(1+q_{b}c_{i}+u _{b}s_{i})]+2N_{0}+2B(1+\zeta)-2N_{B}\ln B-2\sum_{j=1}^{N_{B}}\ln(1+q_{b}c_{j }+u_{b}s_{j}) \tag{39}\] (dropping terms independent of \(B\), \(N_{0}\), \(q\), \(u\), \(q_{b}\), or \(u_{b}\)) and again defining \(c_{i}=\mu_{i}\cos 2\psi_{i}\) and \(s_{i}=\mu_{i}\sin 2\psi_{i}\). Setting partial derivatives to zero gives \[\hat{N}_{0} = \sum_{i=1}^{N}\frac{1+\hat{q}c_{i}+\hat{u}s_{i}}{1+\hat{q}c_{i}+ \hat{u}s_{i}+\frac{\zeta\hat{B}}{\hat{N}_{0}}(1+\hat{q}_{b}c_{i}+\hat{u}_{b}s_{i})}= \sum_{i}^{N}W_{i} \tag{40}\] \[\hat{B} = N_{B}\] (41) \[0 = \sum W_{i}c_{i}\] (42) \[0 = \sum W_{i}s_{i}\] (43) \[0 = \sum V_{j}c_{j}\] (44) \[0 = \sum V_{j}s_{j}\ \ \, \tag{45}\] defining \(W_{i}=[1+\hat{q}c_{i}+\hat{u}s_{i}+\zeta\hat{B}(1+\hat{q}_{b}c_{i}+\hat{u}_{b }s_{i})/\hat{N}_{0}]^{-1}\) and now \(V_{j}=[1+\hat{q}_{b}c_{j}+\hat{u}_{b}s_{j}]^{-1}\). As before, Eqs. 42, 43, and 40 have been used to derive Eq. 41. Eqs. 44 and 45 can be solved for \(\hat{q}_{b}\) and \(\hat{u}_{b}\) as in Marshall (2021), giving \[\hat{q}_{b} \approx \sum_{j}c_{j}/\sum_{j}c_{j}^{2} \tag{46}\] \[\hat{u}_{b} \approx \sum_{j}s_{j}/\sum_{j}s_{j}^{2} \tag{47}\] when the background is weakly polarized. 
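For reference, Eq. 35 and the weak-polarization background estimates of Eqs. 46 and 47 translate directly into code. This is a minimal sketch assuming a constant modulation factor; the helper names are invented for illustration:

```python
# Eq. 35: MDP at 99% confidence with unpolarized background, plus the
# background Stokes estimates of Eqs. 46-47 (constant mu for brevity).
import numpy as np

def mdp99(c_src, c_bkg, mu_rms):
    """c_src = C_S (net source counts), c_bkg = C_B, mu_rms = sqrt(<mu_i^2>)."""
    return 4.29 * np.sqrt(c_src + c_bkg) / (c_src * mu_rms)

def background_stokes(psi_bkg, mu):
    """Weak-polarization (q_b, u_b) from background-region event angles."""
    c = mu * np.cos(2 * psi_bkg)
    s = mu * np.sin(2 * psi_bkg)
    return c.sum() / (c ** 2).sum(), s.sum() / (s ** 2).sum()
```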
Not surprisingly, the optimal Stokes parameters for the background are derived from the background region alone. Now the background Stokes parameters can be used in Eq. 40 (via the definition of \(W_{i}\)) to derive an equation involving the source Stokes parameters similar to Eq. 32 that can be solved iteratively for \(\hat{N}_{0}\) for trial values of \(\hat{\Pi}\) and \(\hat{\varphi}\). Finally, Eq. 33 is modified to be \[\frac{2}{\sigma_{\Pi}^{2}}=2\sum_{i=1}^{N}W_{i}^{2}\mu_{i}^{2}\cos^{2}(2\psi_{ i}+2\hat{\varphi})\ \ . \tag{48}\] and taking the limiting case as \(\Pi\longrightarrow 0\) gives \[\sigma_{\Pi}^{2}\longrightarrow\frac{(1+\zeta B/\hat{N}_{0})^{2}}{\langle\mu^{2} \rangle\sum_{i}\frac{\cos^{2}(2\psi_{i}+2\hat{\varphi})}{(1+\zeta B\Pi_{B} \cos(2\psi_{i}+2\varphi_{B})/N)^{2}}}=\frac{(C_{S}+C_{B})^{2}}{C_{S}^{2} \langle\mu^{2}\rangle\sum_{i}\frac{\cos^{2}(2\psi_{i}+2\hat{\varphi})}{(1 +C_{B}\Pi_{B}\cos(2\psi_{i}+2\varphi_{B})/(C_{S}+C_{B}))^{2}}} \tag{49}\] after transforming from \(q_{b},u_{b}\) to \(\Pi_{B},\varphi_{B}\). Without the term in the denominator in the sum, the sum would average to \(N/2\) = \((C_{S}+C_{B})/2\), matching Eq. 34. Because the extra term is positive definite, it will reduce the sum, thereby increasing \(\sigma_{\Pi}\), making the estimate of \(\Pi\) more uncertain when there is polarized background, as expected. The magnitude of the increase in the uncertainty depends on the ratio of the expected polarized counts to the total counts in the source region but also on the correlation between the source and background polarization phases. ## 4 An Unbinned Model Test Consider a Kolmogorov test of conditional probabilities for a model where \(q\) and \(u\) depend on \(\xi\), representing time, spatial location, or energy. For example, a model where the polarization fraction is constant with time while the EVPA rotates uniformly with rate \(\omega\) could be specified as \[q(t) = \Pi\cos 2(\phi_{0}+\omega t) \tag{50}\] \[u(t) = \Pi\sin 2(\phi_{0}+\omega t) \tag{51}\] where \(\phi_{0}\) and \(\omega\) are (fitted) parameters of the model to be tested, \(\xi\) = \(t\), and each event has a specified value of \(t\) given by \(t_{i}\). This model was applied to _IXPE_ data from Mk 421, finding rotation rates of \(\omega\) = \(80\pm 9^{\circ}\)/d in one observation and \(\omega\) = \(91\pm 8^{\circ}\)/d in another (Di Gesu et al., 2023). Generally, using the source region event density given by Eq. 25, the conditional probability that \(\psi\leq\psi_{i}\) for event \(i\) given that \(\xi\) = \(\xi_{i}\) is \[C(\leq\psi_{i}\mid q[\xi_{i}],\;u[\xi_{i}],\;\hat{N}_{0},\;\hat{B}) = \frac{\int_{0}^{\psi_{i}}\lambda(\psi;\xi_{i})d\psi}{\int_{0}^{ 2\pi}\lambda(\psi;\xi_{i})d\psi} \tag{52}\] \[= \frac{\psi_{i}(1+\zeta N_{B}/\hat{N}_{0})+\mu_{i}([q_{i}\sin 2\psi_{ i}]/2+u_{i}\sin^{2}\psi_{i})}{2\pi(1+\zeta N_{B}/\hat{N}_{0})} \tag{53}\] where \(q(\xi_{i})\equiv q_{i}\) and \(u(\xi_{i})\equiv u_{i}\). As \(\Pi\longrightarrow 0\), \(C(\leq\psi_{i})\) approaches the uniform distribution, as expected. Under the hypothesis that the model is correct, though, we expect Eq. 53 to give values that are uniformly distributed between 0 and 1 even if \(\Pi\) is non-zero. Thus, a Kolmogorov test of the cumulative distribution of \(C(\leq\psi_{i})\) values should provide a valid unbinned test of the event angles. This test was implemented in Interactive Data Language (IDL) and applied to several different data sets from _IXPE_. 
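A Python transcription of the test is straightforward (the paper's implementation was in IDL; the function below is a sketch assuming a constant modulation factor and the rotating-EVPA model of Eqs. 50 and 51):

```python
# Unbinned Kolmogorov-style test: map each event angle through the conditional
# CDF of Eq. 53, then test the mapped values against the uniform distribution.
import numpy as np
from scipy.stats import kstest

def ks_polarization_test(psi, t, mu, Pi, phi0, omega, zeta, n_bkg, n0):
    """psi, t: event angles and times; omega in radians per unit of t."""
    q = Pi * np.cos(2 * (phi0 + omega * t))       # Eq. 50
    u = Pi * np.sin(2 * (phi0 + omega * t))       # Eq. 51
    bg = 1.0 + zeta * n_bkg / n0
    # Eq. 53; uniform on [0, 1] if the model is correct
    c_vals = (psi * bg + mu * (q * np.sin(2 * psi) / 2 + u * np.sin(psi) ** 2)) \
             / (2 * np.pi * bg)
    return kstest(c_vals, 'uniform')              # (statistic, p-value)
```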
In each case, events in the 2-8 keV band were used, the source region was 60\({}^{\prime\prime}\) in radius, and the background was taken from an annulus 200\({}^{\prime\prime}\) to 300\({}^{\prime\prime}\) from the point source. The first source, Mk 501 (_IXPE_ data set 01004501), was found to be 10 \(\pm\) 2% polarized (Liodakis et al., 2022). For the null hypothesis that Mk 501 is unpolarized, the distribution of \(C(\leq\psi_{i})\) deviated from the uniform distribution by 0.0085 with a set of 85,388 events in the source region; thus, the null hypothesis is rejected, the chance probability of so large a deviation being less than \(8\times 10^{-6}\). A likelihood ratio test rejects the null hypothesis with a chance probability of \(7\times 10^{-7}\) in this case, providing a somewhat better result for a simple test that the source is polarized. Under the hypothesis that the source is polarized, with parameters determined using the maximum likelihood method in Section 3, the deviation dropped to 0.00196, for a K-S probability of 0.90; thus, the constant polarized model with fixed \(\Pi\) and \(\varphi\) is acceptable, a conclusion that was not available to Liodakis et al. (2022). Similarly, constant rotation models for the second and third _IXPE_ observations of Mk 421 (data sets 01003801 and 01003901, reported by Di Gesu et al. (2023)) are accepted with probabilities of 0.97 and 0.78, respectively. Finally, the test was run on data from Cen A (_IXPE_ data set 01004301), for which no polarization was detected; the upper limit to the polarization was 6.5% at 99% confidence (Ehlert et al., 2022). For Cen A, the null hypothesis (that the source is unpolarized) is not rejected, giving a maximum deviation of 0.0039 with 28,078 events and a K-S probability of 0.79. In summary, while an analysis may provide parameters of a polarization model, this test can be used on unbinned data to test the validity of the model, providing the user a diagnostic that could indicate whether the model is inadequate.

## 5 Summary

The unbinned likelihood method for X-ray polarimetry data analysis has been extended in several ways:

1. Because many X-ray polarimeters must be rotated in order to be sensitive to arbitrary polarization position angles, an exposure weighting approach was added. A simple diagnostic term was developed that can inform the user when polarization measurements may be deleteriously affected.

2. A way of accounting for background has been added to the basic formalism. The background can be unpolarized, but it may be more common to have a polarized background, such as when observing a point source in a polarized nebula or near a brighter polarized source.

3. An unbinned test using event phase angles was proposed that can be used to determine whether a time- or energy-dependent model may be rejected. The test was applied successfully to several _IXPE_ data sets.

Funding for this work was provided in part by contract 80MSFC17C0012 from the MSFC to MIT in support of the _IXPE_ project. This research used data products provided by the IXPE Team (MSFC, SSDC, INAF, and INFN) and distributed with additional software tools by the High-Energy Astrophysics Science Archive Research Center (HEASARC), at NASA Goddard Space Flight Center (GSFC). Support for this work was provided in part by the National Aeronautics and Space Administration (NASA) through the Smithsonian Astrophysical Observatory (SAO) contract SV3-73016 to MIT for support of the Chandra X-Ray Center (CXC), which is operated by SAO for and on behalf of NASA under contract NAS8-03060. 
_Facilities:_ IXPE. _Software:_ Interactive Data Language (IDL)
An event-based maximum likelihood method for handling X-ray polarimetry data is extended to cover the effects of background and nonuniform sampling of position angle space. Nonuniform sampling in position angle space generally introduces cross terms in the uncertainties of the polarization parameters, which can create biases or covariances between them. When background is included in the Poisson-based likelihood formulation, the formula for the minimum detectable polarization (MDP) takes nearly the same form as in the Gaussian-statistics case of Elsner et al. (2012). A polarized background is also considered and is shown to increase the uncertainties of source polarization measurements. In addition, a Kolmogorov-style test of the event position angle distribution is proposed, providing an unbinned test of models in which the polarization angle in Stokes space depends on event characteristics such as time or energy.
2309.14350
Training neural mapping schemes for satellite altimetry with simulation data
Satellite altimetry combined with data assimilation and optimal interpolation schemes has deeply renewed our ability to monitor sea surface dynamics. Recently, deep learning (DL) schemes have emerged as appealing solutions to address space-time interpolation problems. The scarcity of real altimetry datasets, in terms of space-time coverage of the sea surface, however impedes the training of state-of-the-art neural schemes on real-world case-studies. Here, we leverage both simulations of ocean dynamics and satellite altimeters to train simulation-based neural mapping schemes for the sea surface height and demonstrate their performance for real altimetry datasets. We analyze further how the ocean simulation dataset used during the training phase impacts this performance. This experimental analysis covers the resolution (from eddy-present configurations to eddy-rich ones), forced simulations vs. reanalyses using data assimilation, and tide-free vs. tide-resolving simulations. Our benchmarking framework focuses on a Gulf Stream region for a realistic 5-altimeter constellation using NEMO ocean simulations and 4DVarNet mapping schemes. All simulation-based 4DVarNets outperform the operational observation-driven and reanalysis products, namely DUACS and GLORYS. The more realistic the ocean simulation dataset used during the training phase, the better the mapping. The best 4DVarNet mapping was trained from an eddy-rich and tide-free simulation dataset. It improves the resolved longitudinal scale from 151 kilometers for DUACS and 241 kilometers for GLORYS to 98 kilometers and reduces the root mean squared error (RMSE) by 23% and 61%. These results open research avenues for new synergies between ocean modelling and ocean observation using learning-based approaches.
Quentin Febvre, Julien Le Sommer, Clément Ubelmann, Ronan Fablet
2023-09-19T14:32:25
http://arxiv.org/abs/2309.14350v1
# Training neural mapping schemes for satellite altimetry with simulation data **Key Points:** * We propose to train neural mapping schemes for real altimeter data from ocean simulation data. * The trained neural schemes significantly outperform the operational mapping of real altimetry data for a Gulf Stream case-study. * More realistic simulation datasets improve the performance of the trained neural mapping with a 20% improvement in the spatial scales. Corresponding author: Quentin Febvre, quentin.febvre@imt-atlantique.fr ###### Abstract Satellite altimetry combined with data assimilation and optimal interpolation schemes has deeply renewed our ability to monitor sea surface dynamics. Recently, deep learning (DL) schemes have emerged as appealing solutions to address space-time interpolation problems. The scarcity of real altimetry datasets, in terms of space-time coverage of the sea surface, however impedes the training of state-of-the-art neural schemes on real-world case-studies. Here, we leverage both simulations of ocean dynamics and satellite altimeters to train simulation-based neural mapping schemes for the sea surface height and demonstrate their performance for real altimetry datasets. We analyze further how the ocean simulation dataset used during the training phase impacts this performance. This experimental analysis covers the resolution (from eddy-present configurations to eddy-rich ones), forced simulations vs. reanalyses using data assimilation, and tide-free vs. tide-resolving simulations. Our benchmarking framework focuses on a Gulf Stream region for a realistic 5-altimeter constellation using NEMO ocean simulations and 4DVarNet mapping schemes. All simulation-based 4DVarNets outperform the operational observation-driven and reanalysis products, namely DUACS and GLORYS. The more realistic the ocean simulation dataset used during the training phase, the better the mapping. The best 4DVarNet mapping was trained from an eddy-rich and tide-free simulation dataset. It improves the resolved longitudinal scale from 151 kilometers for DUACS and 241 kilometers for GLORYS to 98 kilometers and reduces the root mean squared error (RMSE) by 23% and 61%. These results open research avenues for new synergies between ocean modelling and ocean observation using learning-based approaches. ## Plain Language Summary For an artificial intelligence (AI) to learn, one needs to describe a task using data and an evaluation procedure. Here we aim at constructing images related to the ocean surface currents. The satellite data we use provide images of the ocean surface with a lot of missing data (around 95% of missing pixels for a given day), and we aim at finding the values of the missing pixels. Because we don't know the full image, it is challenging to train an AI on this task using only the satellite data. However, today's physical knowledge makes it possible to numerically simulate oceans on big computers. For these simulated oceans, we have access to the gap-free image, so we can train AI models by first hiding some pixels and checking whether the model fills the gaps with the correct values. Here, we explore under which conditions AIs trained on simulated oceans are useful for the real ocean. 
We show that today's simulated oceans work well for training an AI on this task and that training on more realistic simulated oceans improves the performance of the AI! ## 1 Introduction Satellite altimeters have brought a great leap forward in the observation of sea surface height on a global scale since the 1980s. Altimetry data have greatly contributed to the monitoring and understanding of key processes such as the sea-level rise and the role of mesoscale dynamics. The retrieval of mesoscale-to-submesoscale sea surface dynamics for horizontal scales smaller than 150 km however remains a challenge for operational systems based on optimal interpolation (Taburet et al., 2019) and data assimilation (Lellouche et al., 2021) schemes. This has motivated a wealth of research to develop novel mapping schemes (Ballarotta et al., 2020; Ubelmann et al., 2021; Guillou et al., 2021). In this context, data-driven and learning-based approaches (Alvera Azcarate et al., 2005; Barth et al., 2022; Lguensat et al., 2017; Fablet, Amar, et al., 2021; Martin et al., 2023) appear as appealing alternatives to make the most of the available observation and simulation datasets; the scarce and irregular sampling of the measurements, however, presents a challenge for training deep neural networks. In particular, Observing System Simulation Experiments (OSSEs) have stressed the potential of neural schemes trained through supervised learning for the mapping of satellite-derived altimetry data (Fablet, Amar, et al., 2021; Beauchamp et al., 2023). Their applicability to real datasets has yet to be assessed, and recent studies have rather explored learning strategies from real gappy multi-year altimetry datasets (Martin et al., 2023). Despite promising results, schemes trained with unsupervised strategies do not reach the relative improvement over the operational processing suggested by OSSE-based studies. Here, we go beyond using OSSEs as benchmarking-only testbeds. We explore their use for the training of neural mapping schemes and address the space-time interpolation of real satellite altimetry observations. Through numerical experiments on a Gulf Stream case-study with a 5-nadir altimeter constellation, our main contributions are three-fold. We demonstrate the relevance of the simulation-based learning of neural mapping schemes and their generalization performance for real nadir altimetry data. We benchmark the proposed approach with state-of-the-art operational products as well as neural schemes trained from real altimetry datasets. We also assess how the characteristics of the training datasets, especially in terms of resolved ocean processes, drive the mapping performance. To ensure the reproducibility of our results, our code is made available through an open source license along with the considered datasets and the trained models (Febvre, 2023). The content of this paper is organized as follows. Section 2 offers background information on related work, Section 3 presents our method, Section 4 reports our numerical experiments, and Section 5 elaborates on our main contributions. ## 2 Background ### Gridded satellite altimetry products The ability to produce gridded maps from scattered along-track nadir altimeter measurements of sea surface height is key to the exploitation of altimeter data in operational services and science studies (Abdalla et al., 2021). 
As detailed below, we can distinguish three categories of approaches to produce such maps: reanalysis products (Lellouche et al., 2021) using data assimilation schemes, observation-based products (Taburet et al., 2019) and learning-based approaches (Fablet, Amar, et al., 2021). Reanalysis products such as the GLORYS12 reanalysis (Lellouche et al., 2021) leverage the full expressiveness of state-of-the-art ocean models. They aim at retrieving ocean state trajectories close to observed quantities through data assimilation methods including among others Kalman filters and variational schemes (Carrassi et al., 2018). Such reanalyses usually exploit satellite-derived and in situ data sources. For instance, the GLORYS12 reanalysis assimilates satellite altimetry data, but also satellite-derived observations of the sea surface temperature and sea-ice concentration as well as in situ ARGO data (Wong et al., 2020). The second category involves observation-based products. In contrast to reanalyses, they only rely on altimetry data and address a space-time interpolation problem. They usually rely on simplifying assumptions on sea surface dynamics. In this category, the optimal-interpolation-based product DUACS (Data Unification and Altimeter Combination System) (Taburet et al., 2019) exploits a covariance-based prior, while recent studies involve quasi-geostrophic dynamics to guide the interpolation scheme (Guillou et al., 2021; Ballarotta et al., 2020). Data-driven and learning-based approaches form a third category of SSH mapping schemes. Similarly to observation-based methods, they are framed as interpolation schemes. Deep learning schemes in particular have gained attention. Recent studies have explored different neural architectures both for real and OSSE altimetry datasets (Archambault et al., 2023; Beauchamp et al., 2021; Martin et al., 2023). These studies investigate both different training strategies as well as different neural architectures, from off-the-shelf computer vision ones such as convolutional LSTMs and UNets (Ronneberger et al., 2015) to data-assimilation-inspired ones (Beauchamp et al., 2021; Fablet, Chapron, et al., 2021). ### Ocean Modeling and OSSE Advances in modeling and simulating ocean physics have largely contributed to a better understanding of the processes involved in the Earth system and to the development of operational oceanography (Barnier et al., 2006; Ajayi et al., 2020). High-resolution simulations used in Observing System Simulation Experiments (OSSE) also provide a great test-bed for the design and evaluation of new ocean observation systems (Benkiran et al., 2021). The availability of numerical model outputs enables the computation of interpretable metrics directly on the quantities of interest. This avoids challenges met when working solely with observation data that may be incomplete, noisy or indirectly related to the desired quantity. For example, in the case of the recently launched SWOT mission, OSSEs combined ocean and instrument simulations to address calibration issues and interpolation performance for SWOT altimetry data (Dibarboure et al., 2022). Such OSSEs have also promoted novel developments for the interpolation of satellite altimetry such as the BFN-QG and 4DVarNet schemes (Guillou et al., 2021; Beauchamp et al., 2023). In OSSE settings, we can train learning-based mapping schemes in a supervised manner using model outputs as the "ground truth" during the training phase. 
Nonetheless, these training methods cannot be straightforwardly applied to Observing System Experiments (OSEs) due to a lack of comprehensive groundtruthed observation datasets. Applied machine learning practitioners often grapple with an insufficient amount of labelled data during the training of supervised learning schemes, as the collection of large annotated datasets for a specific task can be costly or unattainable. Proposed solutions include the exploitation of large existing datasets (such as ImageNet, Deng et al. (2009)) to train general-purpose models (like He et al. (2016)). Another approach involves the generation of synthetic datasets to facilitate the creation of groundtruthed samples (Gomez Gonzalez et al., 2017; Dosovitskiy et al., 2015). OSSEs, which combine ocean model outputs and observing system simulators (Boukabara et al., 2018), can deliver such large synthetic groundtruthed datasets. We propose to investigate how OSSE-based training strategies apply to the analysis of real satellite altimetry datasets. Recent results of an SSH super-resolution model trained on simulation datasets and evaluated on real ones (Buongiorno Nardelli et al., 2022) support the relevance of such strategies. ### Physics-aware deep-learning In the last decades, DL advances combined with the rise in computational resources and amount of data have shown the power of extracting knowledge from data in domains ranging from computer vision to language processing (LeCun et al., 2015). Yet, despite the universality of DL architectures (Hornik et al., 1989), a central challenge persists in learning from data: the generalization performance beyond the distribution of the training data. To tackle this problem, the literature includes a variety of strategies such as data augmentation (Shorten and Khoshgoftaar, 2019) and regularization techniques, including dropout layers (Srivastava et al., 2014) and weight decay schemes (Krizhevsky et al., 2012). This is of critical importance for physical systems, where models trained on past data will be challenged when the system evolves and reaches dynamics absent from the training data. We can see evidence of this shortcoming in the instability challenges faced by neural closures for climate models (Brenowitz et al., 2020). There have been a variety of approaches to harness physical priors within learning schemes to address this issue. Some inject trainable components into classical integration schemes of physical models, as in Yin et al. (2021). Others leverage physical priors within their learning setups, which can be used in the training objective (Raissi et al., 2019; Greydanus et al., 2019), as well as in the architecture (Li et al., 2020; Wang et al., 2020). However, most of these works have focused on relatively simple physical models and it remains challenging to combine current state-of-the-art ocean models with such methods. Obstacles include the complexity and cost of running the physical models, the differences in programming tools and the computing infrastructures used in each domain, as well as the availability of automatic differentiation tools for state-of-the-art ocean models. The proposed simulation-based training strategy offers another way to benefit from the advances in high-resolution ocean modeling in the design of deep neural models for ocean reanalysis problems. 
## 3 Method ### Overview We designate our approach as "simulation-based": it consists of leveraging ocean models and simulations of observing systems to design supervised training environments. In this section, we describe the proposed method for assessing the potential of simulation-based neural schemes for the mapping of real altimetry tracks. We describe the architecture considered in our study, as well as the different datasets used for training purposes. We also detail our simulation-based training setup and the proposed evaluation framework on real altimetry. Figure 1: **Overview of the experimental setup**. On the left side we display the simulation-based training strategy based on an ocean simulation which will be used for 1) generating synthetic observations and 2) computing the training objective of the neural mapping scheme. On the right side we show the evaluation principle of splitting the available satellite observations to evaluate the method on data that were not used for the inference. ### Neural mapping scheme The neural mapping scheme considered for this study is the 4DVarNet framework (Fablet et al., 2021). We choose this scheme due to its demonstrated performance in the OSSE setup. As reported in Beauchamp et al. (2023), it significantly outperforms the DUACS product (Taburet et al., 2019) in the targeted Gulf Stream region. 4DVarNet relies on a variational data assimilation formulation. The reconstruction results from the minimization of a variational cost. This cost encapsulates a data fidelity term and a regularization term. It exploits a prior on the space-time dynamics through a convolutional neural network inspired by Fablet et al. (2018), and an iterative gradient-based minimization based on a recurrent neural network as introduced by Andrychowicz et al. (2016). The overall architecture and components are similar to those presented in Beauchamp et al. (2023). We adapt some implementation details based on cross-validation experiments to improve the performance and reduce the training time. We refer the reader to the code for more details (Febvre, 2023). ### SSH Data We use numerical simulations of ocean general circulation models (OGCMs) to build our reference SSH datasets. Such simulations involve a multitude of decisions that affect the resulting simulated SSH. Here we consider NEMO (Nucleus for European Modelling of the Ocean) (Gurvan et al., 2022), which is among the state-of-the-art OGCMs in operational oceanography (Ajayi et al., 2020) as well as in climate studies (Voldoire et al., 2013). The selected SSH datasets reported in Table 1 focus on three main aspects: the added value of high-resolution eddy-rich simulations, the impact of reanalysis datasets and the relevance of tide-resolving simulations. In order to evaluate the impact of eddy-rich simulations, we consider the NATL60, GLORYS12-f and ORCA025 free runs, respectively with a horizontal grid resolution of \(1/60^{\circ}\), \(1/12^{\circ}\), and \(1/4^{\circ}\). Finer grids allow for more processes to be simulated. We therefore expect higher-resolution simulations to exhibit structures closer to the real ocean and the associated trained deep learning model to perform better. Regarding the impact of reanalysis data, we compare numerical experiments with the GLORYS12-r reanalysis and the associated free run GLORYS12-f. This reanalysis dataset relies on the assimilation of temperature, sea level and sea ice concentration observations. 
Besides, the recent eNATL60 twin simulations eNATL60-t and eNATL60-0 allow us to evaluate the impact of tide-resolving simulations. We summarize in Table 1 the characteristics of the different datasets. \begin{table} \begin{tabular}{l l||c c c c} \hline \hline & & Resolution & Reanalysis & Tide & DAC \\ \hline NATL60 & (Ajayi et al., 2020) & \(1/60^{\circ}\) & No & No & No \\ eNATL60-t & (Brodeau et al., 2020) & \(1/60^{\circ}\) & No & Yes & Yes \\ eNATL60-0 & (Brodeau et al., 2020) & \(1/60^{\circ}\) & No & No & Yes \\ GLORYS12-r & (Lellouche et al., 2021) & \(1/12^{\circ}\) & Yes & No & No \\ GLORYS12-f & (Lellouche et al., 2021) & \(1/12^{\circ}\) & No & No & No \\ ORCA025 & (Barnier et al., 2006) & \(1/4^{\circ}\) & No & No & No \\ \hline \hline \end{tabular} \end{table} Table 1: **Summary table of the different synthetic SSH fields used for training**. The last column indicates whether the Dynamic Atmospheric Correction was applied to the synthetic SSH. This justifies the presence of both eNATL60-0 and NATL60, which isolates the impacts of resolution and tide. ### OSSE-based training setup We sketch the proposed OSSE-based training setup on the left side of Figure 1. In order to fairly evaluate the datasets' quality as a training resource, we standardize the training procedure. We regrid all simulations to the same resolution (\(1/20^{\circ}\)) and we use daily-averaged SSH fields as training targets. We generate noise-free pseudo-observations by sampling values of the daily-averaged fields corresponding to realistic orbits of a 5-altimeter constellation. We train all models from a one-year dataset in a Gulf Stream domain from (66\({}^{\circ}\)W, 32\({}^{\circ}\)N) to (54\({}^{\circ}\)W, 44\({}^{\circ}\)N) in which we keep the same two months for validation. The hyper-parameters of the model and training procedure, such as the number of epochs and the learning-rate scheduler, are the same for all the experiments. The detailed configuration can be found by the reader in the available implementation. As training objective, we combine the mean square errors for the SSH fields and the amplitude of the gradients as well as a regularization loss for the prior model. ### OSE-based evaluation setup As sketched on the right side of Figure 1, the evaluation setup relies on real altimetry data from the constellation of 6 satellites from 2017 (SARAL/AltiKa, Jason 2, Jason 3, Sentinel 3A, Haiyang-2A and Cryosat-2). We apply the standardized setup presented in a data-challenge [https://github.com/ocean-data-challenges/2021a_SSH_mapping_OSE](https://github.com/ocean-data-challenges/2021a_SSH_mapping_OSE). We use the data from the first five satellites as inputs for the mapping and the last one (Cryosat-2) for computing the performance metrics. We compute these metrics in the along-track geometry. The evaluation domain spans from (65\({}^{\circ}\)W, 33\({}^{\circ}\)N) to (55\({}^{\circ}\)W, 43\({}^{\circ}\)N) and the evaluation period from January 1\({}^{st}\) to December 31\({}^{st}\) 2017. 
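The observation-splitting principle above mirrors the pseudo-observation step of Section 3.4: both amount to masking SSH fields with satellite ground tracks. A minimal sketch of the training-pair construction (array names and shapes are assumptions for illustration, not the released code):

```python
# Build a supervised training pair: a gappy input obtained by sampling the
# daily-averaged simulated SSH along altimeter tracks, and the gap-free target.
import numpy as np

def make_pseudo_obs(ssh_daily, track_mask):
    """ssh_daily: (time, lat, lon) simulated SSH; track_mask: same shape,
    True where one of the 5 altimeters samples the grid cell that day."""
    obs = np.full_like(ssh_daily, np.nan)   # ~95% of pixels stay missing
    obs[track_mask] = ssh_daily[track_mask]
    return obs, ssh_daily                   # (network input, training target)
```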
Given \(\eta_{c2}\) and \(\hat{\eta}\), the measured SSH and the reconstructed SSH respectively, we compute the following two metrics: * \(\mu_{ssh}\) is a score based on the normalized root mean squared error (nRMSE), computed as \(1-\dfrac{RMS(\hat{\eta}-\eta_{c2})}{RMS(\eta_{c2})}\) * \(\lambda_{x}\) is the wavelength at which the power spectrum density (PSD) score \(1-\dfrac{PSD(\hat{\eta}-\eta_{c2})}{PSD(\eta_{c2})}\) crosses the 0.5 threshold, which characterizes the scales resolved by the reconstruction (the error below that wavelength makes up more than half of the total signal) In Table 3, we also consider the root mean square error (RMSE) as well as the nRMSE score of the sea level anomaly \(\mu_{sla}\) obtained by subtracting the mean dynamic topography from the SSH. Lastly, we assess the performance degradation resulting from the transition from simulated to real data by quantifying the improvement relative to DUACS in the resolved scale \(\lambda_{x}\) on our OSE setup as well as on the OSSE benchmarking setup proposed in Guillou et al. (2021). This benchmarking setup relies on the NATL60-CJM165 OSSE dataset. We refer the reader to [https://github.com/ocean-data-challenges/2020a_SSH_mapping](https://github.com/ocean-data-challenges/2020a_SSH_mapping)\_NATL60 for a detailed description of this experimental setup. Figure 2: **Sample kinetic energy and relative vorticity of the training and reconstruction data on January 6\({}^{th}\)**. The reconstructed year is 2017 while the training year varies depending on the simulation. The first two columns (a) and (b) show the training data while columns (c) and (d) show the associated 4DVarNet reconstruction. The kinetic energy is displayed in columns ((a) and (c)) and the relative vorticity normalized by the local Coriolis parameter in columns ((b) and (d)). Each row shows the experiment using respectively: ORCA025 (I), GLORYS12-f (II), GLORYS12-r (III), NATL60 (IV), eNATL60-t (V) and eNATL60-0 (VI) ## 4 Results This section details our numerical experiments for the considered real altimetry case-study for a Gulf Stream region as described in Section 3.5. We first report the benchmarking experiments to assess the performance of the proposed learning-based strategy with respect to (w.r.t.) state-of-the-art mapping schemes. We then analyse how the characteristics of the training datasets drive the mapping performance. ### Benchmarking against the state of the art We report in Table 2 the performance metrics of state-of-the-art approaches including both operational observation products (Taburet et al., 2019; Ubelmann et al., 2021), deep-learning-based schemes trained on observation data (Archambault et al., 2023; Martin et al., 2023) as well as methods explicitly using a model-based prior on sea surface dynamics (Guillou et al., 2021; Ballarotta et al., 2020; Lellouche et al., 2021). We compare those methods with a 4DVarNet trained on the eNATL60-0 OSSE dataset. The latter outperforms all other methods on the two metrics considered (22% improvement in RMSE w.r.t. the DUACS product and 33% improvement in the resolved scale). We report a significantly worse performance for the GLORYS12 reanalysis. This illustrates the challenge of combining large ocean general circulation models and observation data for the mapping of the SSH. The last column indicates that the 4DVarNet scheme leads to the best mapping scores for both the OSE and OSSE setups. For the latter, the reported improvement of 47% is twice as large as the second best at 22%. 
The performance of the 4DVarNet drops by 11% when considering the former. By contrast, other methods do not show such differences between the OSE and OSSE case-studies. This suggests that the finer-scale structures that are well reconstructed in the OSSE setup are not as beneficial in the OSE setup. While one could question the representativeness of the OSSE datasets for the fine-scale patterns in the true ocean, real nadir altimetry data may also involve multiple processes which could impede the reconstruction and evaluation of horizontal scales below 100 km. Figure 3: **Space-time spectral densities of the training datasets (first row) and of their associated reconstruction (second row).** Darker blue in the lower left corner indicates higher energy at larger wavelengths and periods. The different SSH fields exhibit different energy cascades when moving to finer temporal (upward) or spatial (rightward) scales. \begin{table} \begin{tabular}{l||c c c c|c c c c} \hline \hline & SSH & Deep & Calibrated on & Physical & RMSE & \(\mu_{ssh}\) & \(\lambda_{x}\) & \(1-\frac{\lambda_{x}}{\lambda_{x,ref}}\) \\ & Only & Learning & data from & Model & (cm) & () & (km) & (\% OSE, OSSE) \\ \hline (a) **4DVarNet** & Yes & Yes & Simulation & – & **5.9** & **0.91** & **100** & **33, 47** \\ (b) MUSTI & No & Yes & Satellite & – & 6.3 & 0.90 & 112 & 26, 22 \\ (c) ConvLstm-SST & No & Yes & Satellite & – & 6.7 & 0.90 & 108 & 28, – \\ (d) ConvLstm & Yes & Yes & Satellite & – & 7.2 & 0.89 & 113 & 25, – \\ (e) DYMOST & Yes & No & Satellite & QG & 6.7 & 0.90 & 131 & 13, 11 \\ (f) MIOST & Yes & No & Satellite & – & 6.8 & 0.90 & 135 & 11, 10 \\ (g) BFN-QG & Yes & No & Satellite & QG & 7.6 & 0.89 & 122 & 19, 21 \\ (h) DUACS & Yes & No & Satellite & – & 7.7 & 0.88 & 151 & 0, 0 \\ (i) GLORYS12 & No & No & Satellite & NEMO & 15.1 & 0.77 & 241 & -60, – \\ \hline \hline \end{tabular} \end{table} Table 2: **SSH reconstruction performance of the benchmarked methods: (a) 4DVarNet from this study trained on eNATL60-0, (b) MUSTI from Archambault et al. (2023), (c and d) ConvLstm-SST and ConvLstm from Martin et al. (2023), (e) DYMOST from Ballarotta et al. (2020), (f) MIOST from Ubelmann et al. (2021), (g) BFN-QG from Guillou et al. (2021), (h) DUACS from Taburet et al. (2019), (i) GLORYS12 from Lellouche et al. (2021). The columns indicate from left to right: whether the mapping schemes rely only on SSH data or also exploit additional data such as gap-free SST products; if the method uses deep learning architectures; the data used to calibrate (or train) the mapping scheme; the numerical model of the ocean used for the mapping if any (QG stands for quasi-geostrophic); \(\mu\) and \(\lambda_{x}\) are the metrics as described in Section 3.5** \begin{table} \begin{tabular}{l||c c c c c} \hline \hline Training Data & RMSE & \(\mu_{ssh}\) & \(\mu_{sla}\) & \(\lambda_{x}\) & \(1-\frac{\lambda_{x}}{\lambda_{x,ref}}\) \\ & (cm) & & & (km) & (\% OSE, OSSE) \\ \hline NATL60 & **5.9** & **0.91** & **0.80** & **98** & **(35, –)** \\ eNATL60-t & **5.9** & **0.91** & **0.80** & 100 & (33, 48) \\ eNATL60-0 & **5.9** & **0.91** & **0.80** & 100 & (33, 47) \\ GLORYS12-r & 6.3 & 0.90 & 0.78 & 106 & (30, 28) \\ GLORYS12-f & 6.7 & 0.90 & 0.77 & 119 & (21, 23) \\ ORCA025 & 7.1 & 0.89 & 0.76 & 126 & (17, 17) \\ \hline \hline \end{tabular} \end{table} Table 3: **Performance of 4DVarNet mapping schemes trained on different simulated datasets**. 
The first column shows the source of the training dataset as described in Table 1; the subsequent columns indicate the reconstruction metrics described in Section 3.5. Note that NATL60 could not be evaluated on the OSSE setup since the evaluation data were used for validation during the training stage.

### Eddy-present datasets versus eddy-rich ones

We analyse here in more detail the impact of the spatial resolution of the training dataset on the reconstruction performance. In Table 3, as expected, the higher-resolution grid of the ocean simulation leads to a better mapping, with a 22% improvement in \(\lambda_{x}\) and a 17% improvement in the RMSE score between the experiments with the coarsest (ORCA025) and finest (NATL60) resolutions. We also observe qualitative differences in the relative vorticity fields in Figure 2. Residual artifacts due to the altimetry tracks appear (60\({}^{\circ}\)W, 39\({}^{\circ}\)N) for the two lower-resolution training datasets. They are greatly diminished when considering the NATL60 dataset. Despite these differences, the reconstructed vorticity and kinetic energy fields in Figure 2 look very similar for the different 4DVarNet schemes, whatever the training dataset. By contrast, the vorticity and kinetic energy fields in the training datasets clearly depict fewer fine-scale structures and weaker gradients for the lower-resolution simulation datasets, namely ORCA025 and GLORYS12-f. These results support the generalization skills of 4DVarNet schemes for mapping real altimetry tracks despite being trained on SSH fields appreciably different from the reconstructed ones. We draw similar conclusions from the analysis of the spectral densities shown in Figure 4. The differences in the energy distribution of the training data are significantly reduced in the reconstructions. 4DVarNet schemes trained on higher-resolution datasets nevertheless yield more faithful reconstructions at all scales. The patterns observed for the temporal PSD in Figure 3 are slightly different. We do not observe the same homogenization as for the spatial PSD. Lower-resolution training datasets involve a significant drop, of an order of magnitude, for periods greater than 10 days and wavelengths greater than 200 km.

### Forced simulation datasets versus reanalysis ones

We now look more specifically at the effect of ocean reanalysis by comparing the two experiments GLORYS12-f and GLORYS12-r. We first note the impact of observation-data assimilation in Figure 3, where the power spectrum of the reanalysis is significantly raised compared to the free run; it is closer to the spectra of the higher-resolution simulations. Visually, we also clearly see stronger gradients in the kinetic energy in Figure 2.

Figure 4: **Spectral analysis of the training and reconstructed SSH datasets**. We display the PSD of the training dataset (left plot) and reconstructed SSH field (center plot), as well as the associated PSD score (right plot).

We observe a similar behavior as in Section 4.2 in Figure 5, with the gap in spectral density between the training and reconstruction data being reduced, and the PSD score indicating a lower energy of the error at all scales for the reanalysis-based experiment. Quantitatively, in Table 3 we see an improvement of 11% in both the RMSE and the resolved scale; moreover, training on a reanalysis increases the relative gain w.r.t. DUACS significantly more on real data (+9%) than on simulated data (+5%), as shown in the rightmost column.
This suggests that the reanalysis dataset conveys information on real-world observations which improves the generalization performance.

### Tide-free datasets versus tide-resolving ones

We assess here the impact of using a tide-resolving simulation as training data. We use the twin eNATL60 runs eNATL60-t and eNATL60-0. Contrary to the other runs, these simulations include barometric and wind forcing; we therefore remove the Dynamic Atmospheric Correction (Carrere et al., 2016) from the SSH fields. Additionally, since the barotropic tide signals are removed from real altimetry tracks prior to interpolation, we also remove this signal from the training data by subtracting the spatial mean over the training domain from each hourly snapshot before computing the daily averages (this preprocessing is sketched in the short code example below). Given these processing steps, the two training datasets exhibit very similar wavenumber spectra, as shown in Figure 3. We also find that training on these two datasets produces little difference in the reconstructions, both quantitatively (see Table 3) and qualitatively (Fig. 2). The resulting performance is comparable to that of the NATL60 experiment. We identify two hypotheses to explain why tide-resolving simulations do not lead to better mapping schemes:

* The preprocessing applied to the training fields removes the main tide signals; we therefore effectively measure the impact of tide modeling on other ocean processes, which may be less significant;
* The evaluation procedure, applied to altimetry tracks from which the barotropic tide has been filtered, may not be informative enough to measure the reconstruction of residual tide signals. New instruments like KaRIn, deployed in the SWOT mission, may provide new ways to better quantify these effects.

Figure 5: **Spectral impact of model reanalysis**. We display the PSD of the training dataset (left plot) and reconstructed SSH field (center plot), as well as the associated PSD score (right plot).

These findings provide motivation for carefully considering the purpose of the learning-based model when making decisions about the training data. In our case, explicitly modeling tide processes that are removed from the observations in the evaluation setup added overhead both in the computational cost of running the simulation and in the preprocessing of the training data. Additionally, given the considered evaluation data and metrics, we were not able to quantify any significant differences between the two trained mapping schemes.

## 5 Discussion

This study has been greatly facilitated by the standardized tasks and evaluation setups proposed in data challenges ([https://ocean-data-challenges.github.io/](https://ocean-data-challenges.github.io/)). Data challenges specify a targeted problem of interest to domain experts through datasets and relevant evaluation metrics. This preliminary work has been instrumental in constituting the comprehensive benchmark and in combining methods from different teams and institutions around the world. Additionally, it constitutes a strong basis for a trans-disciplinary collaboration between the ocean and machine learning research communities. Moreover, the results presented in this study introduce a use of ocean simulations for developing altimetry products. This opens new ways for ocean physicists, modelers and operational oceanographers to collaborate.
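As referenced in Section 4.4, the tide-related preprocessing of the eNATL60 training fields can be sketched as follows. This is a minimal sketch: the variable names, the xarray-based layout and the exact DAC handling are our assumptions, not the released pipeline.

```python
import xarray as xr

def preprocess_tidal_run(ssh_hourly: xr.DataArray, dac: xr.DataArray) -> xr.DataArray:
    """Preprocessing of a tide-resolving SSH run, as described in Section 4.4:
    (1) remove the Dynamic Atmospheric Correction (DAC),
    (2) remove the barotropic tide proxy by subtracting the spatial mean of
        each hourly snapshot over the training domain,
    (3) average to daily fields, matching the other training datasets.
    """
    ssh = ssh_hourly - dac                                # step (1)
    ssh = ssh - ssh.mean(dim=("lat", "lon"))              # step (2), per snapshot
    return ssh.resample(time="1D").mean()                 # step (3)
```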
In order to assess the range of these new synergies, it would be interesting to explore whether the approach proposed here, training neural schemes on simulation data, generalizes to other tasks such as forecasting or sensor calibration, and to other quantities like surface temperature, currents, salinity or biogeochemical tracers. If the simulation-based training approach introduced here is successfully extended to other ocean problems, one could envision training large foundation deep learning models (Brown et al., n.d.) capturing the inner structure of high-resolution ocean simulations, which could then be used in many downstream applications. This could be a way to capitalize on all the advances in ocean modeling without having to run an OGCM numerical simulation for each downstream product. Furthermore, we would like to highlight the cost considerations when running numerical simulations intended for training learning-based schemes. Indeed, given that the eNATL60 run took 2700x the CPU hours and 350x the memory of the ORCA025 run for a smaller domain, a trade-off arises between generating multiple "cheap" trajectories and generating a single more realistic trajectory. To conclude, we have shown in this study that training machine learning models on simulation datasets leads to good performance on real altimetry data mapping and outperforms current state-of-the-art approaches. The model trained on NATL60 reduces the RMSE by 18% compared to neural schemes trained on observation data and improves the resolved scales by 33% compared to the DUACS operational product. Even the coarsest simulation considered, ORCA025, provides results competitive with current operational methods. We have shown that using more realistic SSH fields, from a reanalysis or from higher-resolution simulations, increases the performance of the trained model. This is an exciting result that shows the potential for training operational products from ocean simulations and how advances in ocean modeling and operational oceanography can be mutually beneficial. The results shown here are limited to the interpolation problem on a regional domain, but the robustness of the performance is encouraging for extending these results to a larger domain. ## Open Research Section The authors provide the training data, source code, reconstructed maps and trained model for each experiment of the manuscript at [https://doi.org/10.5281/zenodo.8064114](https://doi.org/10.5281/zenodo.8064114). This work was supported by ANR projects Melody and OceaniX and by CNES. It benefited from HPC and GPU resources from GENCI-IDRIS (Grant 2020-101030) and Ifremer.
Satellite altimetry, data assimilation and optimal interpolation schemes have greatly improved our ability to monitor sea surface dynamics. Recently, deep learning (DL) schemes have gained attention as solutions to the space-time interpolation problem. However, because of the limited space-time coverage of the sea surface, the scarcity of real altimetry datasets poses a challenge for training state-of-the-art neural schemes. Here, we train simulation-based neural mapping schemes for sea surface height using simulations of ocean dynamics and satellite altimetry, and verify their performance on real altimetry datasets. We further analyse how ocean simulation data are exploited in the training process. This analysis contrasts simulations differing in model resolution and in the flow regimes they represent, as well as data assimilation
2309.04143
On several problems in p-Bergman theory
In this paper, we first answer Chen-Zhang's problem on $p$-Bergman metric proposed in \cite{CZ22}. Second, we prove the off-diagonal p-Bergman kernel function $K_p(z,w)$ is H\"older continuous of order (1-$\varepsilon$) about the second component when $p>1$ for any $\varepsilon>0$, which improves the corresponding result of Chen-Zhang. Moreover, we prove the asymptotic behavior of the maximizer of $p$-Bergman kernel as $p\rightarrow 1^-$. Finally, we give a characterization of a class of holomorphic functions on $\mathbb{B}^1$ to be $L^p$-integrable.
Yinji Li
2023-09-08T05:55:10
http://arxiv.org/abs/2309.04143v1
# On several problems in p-Bergman theory

###### Abstract.

In this paper, we first answer Chen-Zhang's problem on the \(p\)-Bergman metric proposed in [2]. Second, we prove that the off-diagonal \(p\)-Bergman kernel function \(K_{p}(z,w)\) is Hölder continuous of order \(1-\varepsilon\) in the second component for \(p>1\) and any \(\varepsilon>0\), which improves the corresponding result of Chen-Zhang. Moreover, we prove the asymptotic behavior of the maximizers of the \(p\)-Bergman kernel as \(p\to 1^{-}\). Finally, we give a characterization of a class of holomorphic functions on \(\mathbb{B}^{1}\) to be \(L^{p}\)-integrable.

###### Contents

* 1 Introduction
* 2 Chen-Zhang's problem
* 3 Hölder continuity of \(m_{p}(z,\cdot)\)
* 3.1 The case of \(1<p\leq 2\)
* 3.2 The case of \(p>2\)
* 4 Asymptotic Behavior of Maximizers of \(K_{p}(z)\) as \(p\to 1^{-}\)
* 5 Characterization of \(L^{p}\)-integrability of a class of holomorphic functions on \(\mathbb{B}^{1}\)

## 1. Introduction

The \(L^{2}\) Bergman theory, established by Stefan Bergman in the 1920s, is one of the fundamental theories in several complex variables and complex geometry. The \(L^{2}\) Bergman space on a domain in \(\mathbb{C}^{n}\) is the space of \(L^{2}\) holomorphic functions on that domain, which can easily be shown to be a Hilbert space using the theory of normal families. The \(L^{2}\) Bergman kernel, as the integral kernel of the evaluation functional on the \(L^{2}\) Bergman space, enjoys good properties such as real analyticity and the reproducing property. The \(L^{2}\) Bergman kernel function is obtained by restricting the kernel to the diagonal. On a bounded domain in \(\mathbb{C}^{n}\), the \(L^{2}\) Bergman kernel function is smooth, strictly plurisubharmonic and non-vanishing, and thus induces an invariant Kähler metric on that domain, known as the \(L^{2}\) Bergman metric. The \(L^{2}\) Bergman metric plays an important role in the study of bounded domains. The \(L^{2}\) Bergman theory can be extended to the framework of Hermitian holomorphic vector bundles over complex manifolds, and has important applications to various important problems in complex geometry and algebraic geometry. In comparison with the \(L^{2}\) Bergman theory, the \(L^{p}\) Bergman theory has not been as well studied. In [16], Ning-Zhang-Zhou initiated a systematic study of the \(L^{p}\) Bergman theory and obtained a deep result: a bounded domain is pseudoconvex if and only if the \(L^{p}\) Bergman kernel is exhaustive for some \(p\in(0,2)\). Recently, Deng-Wang-Zhang-Zhou [14] proved the following fundamental result: two bounded hyperconvex domains in \(\mathbb{C}^{n}\) are biholomorphically equivalent if and only if the normed \(L^{p}\) Bergman spaces associated to them are linearly isometric for some \(p\in(0,2)\). This shows that the \(L^{p}\) Bergman space is a complete biholomorphic linear isometric invariant of bounded hyperconvex domains in \(\mathbb{C}^{n}\). However, it is well known that the \(L^{2}\) Bergman space cannot determine the complex structure of bounded hyperconvex domains (consider the punctured disc, for example). Thus the result of Deng-Wang-Zhang-Zhou indicates that the \(L^{p}\) Bergman space is a very important research object, and the \(L^{p}\) Bergman theory deserves further development. However, unlike the \(L^{2}\) Bergman space, \(L^{p}\) spaces are generally not Hilbert spaces, which poses essential difficulties for research.
A basic problem such as computing the \(L^{p}\) Bergman kernel is highly challenging; even the \(L^{p}\) Bergman kernel of the punctured disk in the complex plane has not been computed so far. Therefore, new methods and tools need to be developed. For a bounded domain \(\Omega\subset\mathbb{C}^{n}\), we define \(A^{p}(\Omega)\) to be the \(p\)-Bergman space of \(L^{p}\) holomorphic functions on \(\Omega\) (throughout this paper the integrals are with respect to Lebesgue measure). As introduced in [16], the \(p\)-Bergman kernel \(K_{p}(z)\) is defined as \[K_{p}(z)=\sup_{f\in A^{p}(\Omega)\setminus\{0\}}\frac{|f(z)|^{p}}{\|f\|_{p}^{p}},\] where \(\|f\|_{p}=(\int_{\Omega}|f|^{p})^{1/p}\). The \(p\)-Bergman kernel can also be defined via a minimizing problem, which was first introduced by Bergman himself in the case \(p=2\): \[m_{p}(z):=\inf\{\|f\|_{p}:f\in A^{p}(\Omega),\ f(z)=1\}.\] By a normal family argument, we know that there exists at least one minimizer for \(p>0\) and exactly one minimizer \(m_{p}(\cdot,z)\) for \(p\geq 1\). It is easy to see that \(K_{p}(z)=m_{p}(z)^{-p}\) for \(p>0\). The off-diagonal \(p\)-Bergman kernel is defined as \(K_{p}(z,w):=m_{p}(z,w)K_{p}(w)\) for \(p\geq 1\). Recently, Chen-Zhang [10] explored further fundamental aspects of the \(L^{p}\) Bergman theory using variational methods. They derived a reproducing formula for \(L^{p}\) Bergman kernels and showed that the off-diagonal \(L^{p}\) Bergman kernel (\(p\)-Bergman kernel for short) \(K_{p}(z,\cdot)\) is Hölder continuous of order \(\frac{1}{2}\) for \(p>1\) and of order \(\frac{1}{2(n+2)}\) for \(p=1\). They also defined the \(p\)-Bergman metric \(B_{p}(z;X)\) and showed that \(B_{p}(z;X)\) tends to the Carathéodory metric \(C(z;X)\) as \(p\to\infty\), and that the generalized Levi form \(i\partial\bar{\partial}\log K_{p}(z;X)\) is no less than \(B_{p}(z;X)^{2}\) for \(p\geq 2\) and no less than \(C(z;X)^{2}\) for \(p\leq 2\). Since it is well known that \(i\partial\bar{\partial}\log K_{p}(z;X)=B_{p}(z;X)^{2}\) for \(p=2\), Chen-Zhang raised the following

**Problem 1.1** ([14, Problem 8]).: Is it possible to conclude that \(i\partial\bar{\partial}\log K_{p}(z;X)=B_{p}(z;X)^{2}\) for \(2<p<+\infty\)?

In this paper, we first answer Problem 1.1 by establishing the following

**Theorem 1.1**.: Let \(\Omega\) be a complete circular and bounded homogeneous domain in \(\mathbb{C}^{n}\). Then for \(X\neq 0\), \[i\partial\bar{\partial}\log K_{p}(0;X)>B_{p}(0;X)^{2},\quad p>2,\] \[i\partial\bar{\partial}\log K_{p}(0;X)<B_{p}(0;X)^{2},\quad p<2.\]

Second, by introducing a new iteration technique, we are able to improve the regularity of the off-diagonal \(p\)-Bergman kernels; namely, we improve the order of Hölder continuity from \(\frac{1}{2}\) to \(1-\varepsilon\) for any \(\varepsilon>0\) and \(p>1\).

**Theorem 1.2**.: Let \(p>1\), \(\varepsilon>0\) and \(S\Subset\Omega\). There exists \(C=C(\varepsilon,S)\) such that for \(z^{\prime},z,w\in S\), \[|m_{p}(z^{\prime},z)-m_{p}(z^{\prime},w)|\leq C|z-w|^{1-\varepsilon}.\] Moreover, the off-diagonal \(p\)-Bergman kernel \(K_{p}(z,\cdot)\) is Hölder continuous of order \(1-\varepsilon\).

It is proved in [14, Proposition 2.4, Proposition 2.5] that for \(p\geq 1\) the maximizer \(f\) of \(K_{p}(z)\) is unique under the condition \(f(z)=1\); it is precisely \(m_{p}(\cdot,z)\). But the uniqueness of the maximizer of \(K_{p}(z)\) for \(0<p<1\) is not known.
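To make these definitions concrete, here is a short worked example of our own (it is not taken from [10] or [16]), evaluating \(m_p\) and \(K_p\) at the origin of the unit disc:

```latex
% Worked example on \Omega = \mathbb{B}^1 (area \pi), at the point z = 0.
% For f \in A^p(\mathbb{B}^1) with f(0) = 1, the sub-mean value property of the
% subharmonic function |f|^p on each circle |z| = r gives
%     1 = |f(0)|^p \le \frac{1}{2\pi} \int_0^{2\pi} |f(re^{i\theta})|^p \, d\theta ,
% and integrating against r \, dr over 0 < r < 1 yields
%     \|f\|_p^p = \int_{\mathbb{B}^1} |f|^p \ge \pi .
% Equality holds for f \equiv 1, so for every p > 0
%     m_p(0) = \pi^{1/p}, \qquad K_p(0) = m_p(0)^{-p} = \frac{1}{\pi},
% independent of p, consistent with the identity K_p = K_2 on complete circular
% domains invoked in Section 2 below.
```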
We study the asymptotic behavior of the maximizers of \(K_{p}(z)\) as \(p\to 1^{-}\) and obtain the following

**Theorem 1.3**.: Let \(p<1\). We define the metric \(d(f,g):=\int_{\Omega}|f-g|^{p}\) on \(A^{p}(\Omega)\). Denote \(d_{p}(z):=\sup\{d(f_{p},g_{p})\}\), where the sup is taken over all pairs of maximizers \(f_{p},g_{p}\) of \(K_{p}(z)\) satisfying \(f_{p}(z)=g_{p}(z)=1\). Then it holds that \[\forall z\in\Omega,\quad\lim_{p\to 1^{-}}d_{p}(z)=0.\]

Finally, we study the \(L^{p}\) Bergman space \(A^{p}(\mathbb{B}^{1})\) on the unit disk \(\mathbb{B}^{1}\). A characterization of a class of holomorphic functions on \(\mathbb{B}^{1}\) to be \(L^{p}\)-integrable is established as follows.

**Theorem 1.4**.: Let \(p>0\). There exists \(C=C(p,A)\) such that if \(f\in\mathcal{O}(\mathbb{B}^{1})\), \(f(z)=\sum_{k=1}^{\infty}a_{\lambda_{k}}z^{\lambda_{k}}\) for some lacunary sequence \(\{\lambda_{k}\}\) with constant \(A\), then \[C(p,A)^{-1}\int_{0}^{1}\Big(\sum_{k=1}^{\infty}|a_{\lambda_{k}}|^{2}r^{2\lambda_{k}}\Big)^{\frac{p}{2}}dr\leq\int_{\mathbb{B}^{1}}|f|^{p}\leq C(p,A)\int_{0}^{1}\Big(\sum_{k=1}^{\infty}|a_{\lambda_{k}}|^{2}r^{2\lambda_{k}}\Big)^{\frac{p}{2}}dr.\] In particular, a holomorphic function \(f(z)=\sum_{k=1}^{\infty}a_{\lambda_{k}}z^{\lambda_{k}}\), with \(\{\lambda_{k}\}\) lacunary of constant \(A\), is \(L^{p}\)-integrable if and only if the integral \(\int_{0}^{1}(\sum_{k=1}^{\infty}|a_{\lambda_{k}}|^{2}r^{2\lambda_{k}})^{\frac{p}{2}}dr\) is finite.

_Remark 1.1_.: Theorem 1.4 can also be used to give a similar characterization of a class of holomorphic functions on the punctured disk to be \(L^{p}\)-integrable, by considering Laurent expansions.

The structure of this paper is organized as follows. In §2, we answer the open problem raised by Chen-Zhang and prove Theorem 1.1. In §3, we prove that the off-diagonal \(p\)-Bergman kernel is Hölder continuous of order \(1-\varepsilon\), i.e. Theorem 1.2. In §4, we study the asymptotic behavior of the maximizers of the \(p\)-Bergman kernel as \(p\to 1^{-}\), i.e. Theorem 1.3. Finally, in §5, we give a characterization of a class of holomorphic functions on the unit disk to be \(L^{p}\)-integrable, i.e. Theorem 1.4.

**Acknowledgements**. The author would like to express his sincere gratitude to Professor Zhiwei Wang and Professor Xiangyu Zhou for their guidance and encouragement. This research is supported by the National Key R&D Program of China (No. 2021YFA1002600).

## 2. Chen-Zhang's problem

In this section, we answer Problem 1.1 raised by Chen-Zhang.

**Definition 2.1**.: A domain \(\Omega\subseteq\mathbb{C}^{n}\) is said to be complete circular if for all \(z\in\Omega\) and \(t\in\mathbb{C}\) with \(|t|\leq 1\) we have \(tz\in\Omega\).

We restate Theorem 1.1 as follows.

**Theorem 2.1**.: Let \(\Omega\) be a complete circular and bounded homogeneous domain in \(\mathbb{C}^{n}\). Then for \(X\neq 0\), \[i\partial\bar{\partial}\log K_{p}(0;X)>B_{p}(0;X)^{2},\quad p>2,\] \[i\partial\bar{\partial}\log K_{p}(0;X)<B_{p}(0;X)^{2},\quad p<2.\]

Proof.: It follows from [16, Theorem 2.3, Remark 2.1] that on \(\Omega\) we have \(K_{p}(\cdot)=K_{2}(\cdot)\) for all \(p>0\). In particular, \(K_{p}(0)=K_{2}(0)=\frac{1}{\operatorname{vol}(\Omega)}\).
It is clear that \[i\partial\bar{\partial}\log K_{p}(z;X)=i\partial\bar{\partial}\log K_{2}(z;X).\] In the following, we prove that \[B_{p}(z;X)<B_{2}(z;X),\quad p>2,\] \[B_{p}(z;X)>B_{2}(z;X),\quad p<2.\] Recall the definition \(B_{p}(z;X):=K_{p}(z)^{-\frac{1}{p}}\cdot\sup_{f\in A^{p}(\Omega),f(z)=0,\|f\|_{p}>0}\frac{|Xf(z)|}{\|f\|_{p}}\). By a normal family argument, we know that there exists a maximizer of \(B_{p}(0;X)\); denote it by \(f_{p}\). It follows from Hölder's inequality that \[\|f_{p}\|_{p}^{2}\cdot\|1\|_{p}^{p-2}\geq\|f_{p}\|_{2}^{2},\quad p>2.\] The inequality is strict, since equality in Hölder's inequality would force \(|f_{p}|\) to be constant, while \(f_{p}(0)=0\) and \(f_{p}\not\equiv 0\). Thus we get \[B_{p}(0;X)^{2}=K_{p}(0)^{-\frac{2}{p}}\cdot\frac{|Xf_{p}(0)|^{2}}{\|f_{p}\|_{p}^{2}}<K_{2}(0)^{-1}\cdot\frac{|Xf_{p}(0)|^{2}}{\|f_{p}\|_{2}^{2}}\leq B_{2}(0;X)^{2}.\] The case \(p<2\) can be proved by the same method.

## 3. Hölder continuity of \(m_{p}(z,\cdot)\)

In this section, we prove that the off-diagonal \(p\)-Bergman kernel is Hölder continuous of order \(1-\varepsilon\) for any \(\varepsilon>0\). More precisely, we prove the following

**Theorem 3.1**.: Let \(p>1\), \(\varepsilon>0\) and \(S\Subset\Omega\). There exists \(C=C(\varepsilon,S)\) such that for \(z^{\prime},z,w\in S\), \[|m_{p}(z^{\prime},z)-m_{p}(z^{\prime},w)|\leq C|z-w|^{1-\varepsilon}.\]

Let us introduce an important function as follows: \[H_{p}(z,w):=K_{p}(z)+K_{p}(w)-\mathrm{Re}\{K_{p}(z,w)+K_{p}(w,z)\}.\]

### The case of \(1<p\leq 2\)

In this section, we assume \(1<p\leq 2\) and prove Theorem 3.1.

Proof.: It follows from the proof of [2, Lemma 4.5] that \[\int_{\Omega}|m_{p}(\cdot,z)-m_{p}(\cdot,w)|^{p}\leq\frac{C_{p}}{K_{p}(z)K_{p}(w)}[K_{p}(z)+K_{p}(w)]^{1-\frac{p}{2}}H_{p}(z,w)^{\frac{p}{2}}.\] This leads to \(\|m_{p}(\cdot,z)-m_{p}(\cdot,w)\|_{p}\leq C(p,S)H_{p}(z,w)^{\frac{1}{2}}\). Next, we establish an estimate for \(H_{p}(z,w)\). Since \(m_{p}(z,z)=m_{p}(w,w)=1\), we have \[\Big|\frac{H_{p}(z,w)}{z-w}\Big|=\frac{\mathrm{Re}\{K_{p}(z)[m_{p}(w,w)-m_{p}(w,z)]+K_{p}(w)[m_{p}(z,z)-m_{p}(z,w)]\}}{|z-w|}\] \[\leq K_{p}(z)\frac{|[m_{p}(w,w)-m_{p}(w,z)]-[m_{p}(z,w)-m_{p}(z,z)]|}{|z-w|}+|m_{p}(z,z)-m_{p}(z,w)|\frac{|K_{p}(z)-K_{p}(w)|}{|z-w|}.\] Since \(K_{p}(\cdot)\) is locally Lipschitz by [2, Proposition 2.11], we know that \(\frac{|K_{p}(z)-K_{p}(w)|}{|z-w|}\leq C(S)\). It follows from the sub-mean value property of plurisubharmonic functions that \(|m_{p}(z,z)-m_{p}(z,w)|\leq C\|m_{p}(\cdot,z)-m_{p}(\cdot,w)\|_{p}\) for some \(C=C(S)\). In view of the Cauchy integral formula, the derivative of \(m_{p}(\cdot,z)-m_{p}(\cdot,w)\) is controlled by its \(L^{1}\) norm on a slightly larger set; therefore we get \[\frac{|[m_{p}(w,w)-m_{p}(w,z)]-[m_{p}(z,w)-m_{p}(z,z)]|}{|z-w|}\leq C\|m_{p}(\cdot,z)-m_{p}(\cdot,w)\|_{1}\leq C\|m_{p}(\cdot,z)-m_{p}(\cdot,w)\|_{p}.\] All the facts above imply that, for some \(C=C(S)\), \[H_{p}(z,w)\leq C\|m_{p}(\cdot,z)-m_{p}(\cdot,w)\|_{p}\cdot|z-w|.\] Combining this with the fact that \[\|m_{p}(\cdot,z)-m_{p}(\cdot,w)\|_{p}\leq C(p,S)H_{p}(z,w)^{\frac{1}{2}},\] we get, for any number \(\delta\) less than \(\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\cdots=1\), \[H_{p}(z,w)=o(|z-w|^{1+\delta}),\] \[\|m_{p}(\cdot,z)-m_{p}(\cdot,w)\|_{p}=o(|z-w|^{\delta}).\] The desired result follows from \(|m_{p}(z^{\prime},z)-m_{p}(z^{\prime},w)|\leq C\|m_{p}(\cdot,z)-m_{p}(\cdot,w)\|_{p}\leq C|z-w|^{\delta}\).

### The case of \(p>2\)

In this section, we assume \(p>2\) and prove Theorem 3.1.
Proof.: It follows from the proof of [2, Theorem 4.7] that there exist an open set \(U\) with \(S\Subset U\Subset\Omega\) and constants \(\alpha=\alpha(p,S,U)\), \(C=C(p,S,U)\) such that \[\int_{U}|m_{p}(\cdot,z)-m_{p}(\cdot,w)|^{\alpha}\leq CH_{p}(z,w)^{\frac{\alpha}{2}}.\] This leads to \(\|m_{p}(\cdot,z)-m_{p}(\cdot,w)\|_{L^{\alpha}(U)}\leq CH_{p}(z,w)^{\frac{1}{2}}\). The rest of the proof is similar to the case \(1<p\leq 2\). We get that for all \(\delta<1\) there exists \(C=C(\delta,S,U)\) such that \[\|m_{p}(\cdot,z)-m_{p}(\cdot,w)\|_{L^{\alpha}(U)}\leq C|z-w|^{\delta},\] \[H_{p}(z,w)\leq C\|m_{p}(\cdot,z)-m_{p}(\cdot,w)\|_{L^{\alpha}(U)}|z-w|.\] The desired result follows from \(|m_{p}(z^{\prime},z)-m_{p}(z^{\prime},w)|\leq C\|m_{p}(\cdot,z)-m_{p}(\cdot,w)\|_{L^{\alpha}(U)}\leq C|z-w|^{\delta}\).

## 4. Asymptotic Behavior of Maximizers of \(K_{p}(z)\) as \(p\to 1^{-}\)

We know that when \(p\geq 1\), the maximizer \(f\) of \(K_{p}(z)\) is unique under the condition \(f(z)=1\); it is precisely the minimizer \(m_{p}(\cdot,z)\) of \(m_{p}(z)\). However, the uniqueness of the maximizer is not known for \(p<1\). Nevertheless, we can prove the following:

**Theorem 4.1**.: Let \(p<1\); then \(A^{p}(\Omega)\) is a metric space with the metric \(d(f,g):=\int_{\Omega}|f-g|^{p}\). Define \(d_{p}(z):=\sup\{d(f_{p},g_{p})\}\), where the \(\sup\) is taken over all pairs of maximizers \(f_{p},g_{p}\) of \(K_{p}(z)\) satisfying \(f_{p}(z)=g_{p}(z)=1\). Then it holds that \[\forall z\in\Omega,\quad\lim_{p\to 1^{-}}d_{p}(z)=0.\]

Proof.: We have \(K_{p}(z)^{-1}=\int_{\Omega}|f_{p}|^{p}\leq\int_{\Omega}1=|\Omega|\), since \(K_{p}(z)\geq 1/|\Omega|\) by testing with \(f\equiv 1\). Therefore, for every \(p_{0}<1\), \(\{f_{p}\}_{p_{0}<p<1}\) is a normal family. Thus there exists a subsequence \(\{f_{p_{n}}\}\) that converges uniformly on compact subsets of \(\Omega\) to some \(f\). For any \(p_{0}<s<1\), by Fatou's lemma, Hölder's inequality and [1, Proposition 6.1(1)], we get \[\int|f|^{s}\leq\liminf_{n\to\infty}\int|f_{p_{n}}|^{s}\leq\lim_{n\to\infty}\Big(\int|f_{p_{n}}|^{p_{n}}\Big)^{\frac{s}{p_{n}}}|\Omega|^{1-\frac{s}{p_{n}}}=\lim_{n\to\infty}K_{p_{n}}(z)^{-\frac{s}{p_{n}}}|\Omega|^{1-\frac{s}{p_{n}}}=K_{1}(z)^{-s}|\Omega|^{1-s}.\] It follows that \(\int|f|=\lim_{s\to 1}\int|f|^{s}\leq\lim_{s\to 1}K_{1}(z)^{-s}|\Omega|^{1-s}=K_{1}(z)^{-1}\). Since \(f(z)=1\), this implies that \(f\) is a maximizer of \(K_{1}(z)\) at \(z\). Next, we prove \(\lim_{n\to\infty}\int_{\Omega}|f_{p_{n}}-f|^{p_{n}}=0\). For any \(\varepsilon>0\), there exists \(U\subset\subset\Omega\) such that \(\int_{U}|f|>K_{1}(z)^{-1}-\varepsilon\). This means \(\int_{\Omega\setminus U}|f|<\varepsilon\). Moreover, for sufficiently large \(n\), since \(f_{p_{n}}\) converges uniformly to \(f\) on any compact subset of \(\Omega\), we know that \(\int_{U}|f_{p_{n}}-f|^{p_{n}}<\varepsilon\). On the other hand, by \(|f_{p_{n}}-f|^{p_{n}}\leq|f_{p_{n}}|^{p_{n}}+|f|^{p_{n}}\), we can see that \[\int_{\Omega\setminus U}|f_{p_{n}}-f|^{p_{n}}\leq\int_{\Omega\setminus U}(|f_{p_{n}}|^{p_{n}}+|f|^{p_{n}})\leq K_{p_{n}}(z)^{-1}-\int_{U}|f_{p_{n}}|^{p_{n}}+\Big(\int_{\Omega\setminus U}|f|\Big)^{p_{n}}|\Omega|^{1-p_{n}}\leq K_{p_{n}}(z)^{-1}-\Big(\int_{U}|f|^{p_{n}}-\varepsilon\Big)+\varepsilon^{p_{n}}|\Omega|^{1-p_{n}}.\] Notice that \(\lim_{n\to\infty}\int_{U}|f|^{p_{n}}=\int_{U}|f|>K_{1}(z)^{-1}-\varepsilon\) and \(\lim_{n\to\infty}K_{p_{n}}(z)=K_{1}(z)\) ([1, Proposition 6.1(1)]).
Therefore, we can conclude that \(\limsup_{n\to\infty}\int_{\Omega\setminus U}|f_{p_{n}}-f|^{p_{n}}\leq 3\varepsilon\). Since \(\varepsilon\) is arbitrary, it follows that \(\lim_{n\to\infty}\int_{\Omega}|f_{p_{n}}-f|^{p_{n}}=0\). Below, we prove the theorem by contradiction. Suppose there exist \(\delta>0\) and a sequence \(\{p_{n}\}\) converging to \(1\) such that \(\int_{\Omega}|f_{p_{n}}-g_{p_{n}}|^{p_{n}}=d(f_{p_{n}},g_{p_{n}})>\delta\). By taking subsequences twice, we may assume that \(f_{p_{n}}\) and \(g_{p_{n}}\) both converge to the (unique) maximizer \(m_{1}(\cdot,z)\) of \(K_{1}(z)\), as described above. However, this leads to \(\int_{\Omega}|f_{p_{n}}-g_{p_{n}}|^{p_{n}}\leq\int_{\Omega}|f_{p_{n}}-m_{1}(\cdot,z)|^{p_{n}}+\int_{\Omega}|g_{p_{n}}-m_{1}(\cdot,z)|^{p_{n}}\to 0\), which is a contradiction.

## 5. Characterization of \(L^{p}\)-integrability of a class of holomorphic functions on \(\mathbb{B}^{1}\)

Let \(\Omega=\mathbb{B}^{1}=\{z\in\mathbb{C}:|z|<1\}\). In this section, we give a characterization of holomorphic functions \(f\in\mathcal{O}(\mathbb{B}^{1})\) to be \(L^{p}\)-integrable.

**Definition 5.1**.: A sequence \(\{\lambda_{k}\}_{k\in\mathbb{N}^{*}}\) is called lacunary with constant \(A\) if there exists \(A>1\) such that \(\lambda_{k+1}\geq A\lambda_{k}\).

The main theorem of this section is the following

**Theorem 5.1**.: Let \(p>0\). There exists \(C=C(p,A)\) such that if \(f\in\mathcal{O}(\mathbb{B}^{1})\), \(f(z)=\sum_{k=1}^{\infty}a_{\lambda_{k}}z^{\lambda_{k}}\) for some lacunary sequence \(\{\lambda_{k}\}\) with constant \(A>1\), then \[C(p,A)^{-1}\int_{0}^{1}\Big(\sum_{k=1}^{\infty}|a_{\lambda_{k}}|^{2}r^{2\lambda_{k}}\Big)^{\frac{p}{2}}dr\leq\int_{\mathbb{B}^{1}}|f|^{p}\leq C(p,A)\int_{0}^{1}\Big(\sum_{k=1}^{\infty}|a_{\lambda_{k}}|^{2}r^{2\lambda_{k}}\Big)^{\frac{p}{2}}dr.\]

We need the following lemma ([1, Theorem 3.6.4]).

**Lemma 5.2**.: Let \(T=[0,1]\) and let \(1\leq\lambda_{1}<\lambda_{2}<\cdots\) be a lacunary sequence with constant \(A>1\). Set \(\Gamma=\{\lambda_{k}:k\in\mathbb{N}^{*}\}\). Then for all \(1\leq p<\infty\), there exists a constant \(C_{p}(A)\) such that for all \(f\in L^{1}(T)\) with \(\hat{f}(k)=0\) for \(k\in\mathbb{N}^{*}\setminus\Gamma\), we have \[\|f\|_{L^{p}(T)}\leq C_{p}(A)\|f\|_{L^{1}(T)}.\] Moreover, the converse inequality is also valid; hence all \(L^{p}\) norms of lacunary Fourier series are equivalent for \(1\leq p<\infty\).

Proof.: We write \(z\in\mathbb{B}^{1}\) as \(z=re^{2\pi it}\), \(0\leq r<1\), \(t\in T\). For a given \(0\leq r<1\), \(f(z)=f(re^{2\pi it})=\sum_{k=1}^{\infty}a_{\lambda_{k}}r^{\lambda_{k}}e^{2\pi i\lambda_{k}t}\). Since \(f\) is continuous with respect to \(t\in T\), it is \(L^{p}\)-integrable over \(T\) for all \(p>0\). From Lemma 5.2 above, we know that the \(L^{p}(T)\) norms of \(f|_{\{|z|=r\}}\) are equivalent for all \(p\geq 1\). Moreover, for any \(q<1\), by Hölder's inequality we obtain \[\Big(\int_{T}|f|^{q}\Big)^{\frac{1}{2}}\Big(\int_{T}|f|^{\alpha}\Big)^{\frac{1}{2}}\geq\int_{T}|f|,\] where \(\alpha=2-q>1\). Therefore, all the \(L^{p}(T)\) norms of \(f|_{\{|z|=r\}}\) are equivalent. This allows us to calculate the \(L^{p}\) norm of \(f\) using its \(L^{2}\) norm as follows: \[\int_{\mathbb{B}^{1}}|f|^{p}=\int_{0}^{1}\|f|_{\{|z|=r\}}\|_{p}^{p}\,dr\approx\int_{0}^{1}\|f|_{\{|z|=r\}}\|_{2}^{p}\,dr=\int_{0}^{1}\Big(\sum_{k=1}^{\infty}|a_{\lambda_{k}}|^{2}r^{2\lambda_{k}}\Big)^{\frac{p}{2}}dr.\] This completes the proof.
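As a concrete illustration of Theorem 5.1 (our own example, not taken from the references), consider the lacunary series with all coefficients equal to one:

```latex
% Example: f(z) = \sum_{k \ge 1} z^{2^k} (all a_{\lambda_k} = 1, lacunary constant A = 2).
% Theorem 5.1 reduces the L^p-integrability of f to the radial integral
%     \int_0^1 \Big( \sum_{k=1}^\infty r^{2^{k+1}} \Big)^{p/2} dr .
% As r \to 1^-, the terms with 2^{k+1}(1-r) \lesssim 1 are bounded below, and there
% are about \log_2 \frac{1}{1-r} of them, so
%     \sum_{k=1}^\infty r^{2^{k+1}} \asymp \log \frac{1}{1-r} .
% The integrand therefore grows like (\log \frac{1}{1-r})^{p/2}, which is integrable
% on (0,1) for every p > 0. Hence f \in A^p(\mathbb{B}^1) for all p > 0, even though
% \sum_k |a_{\lambda_k}|^2 = \infty.
```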
Now we fix a lacunary sequence \(\{\lambda_{k}\}\), say \(\{2^{k}\}\), and consider the subspace of \(A^{p}(\mathbb{B}^{1})\): \(A^{p}_{c}(\mathbb{B}^{1}):=\{f\in A^{p}(\mathbb{B}^{1}):f(z)=\sum_{k=1}^{\infty}a_{k}z^{2^{k}}\}\). We can prove the following

**Theorem 5.3**.: \(A^{p}_{c}(\mathbb{B}^{1})\) is a closed subspace of \(A^{p}(\mathbb{B}^{1})\).

Proof.: Let \(\{f_{n}\}_{n=1}^{\infty}\) be any Cauchy sequence in \(A^{p}_{c}(\mathbb{B}^{1})\) with respect to the distance function of \(A^{p}(\mathbb{B}^{1})\), where \(f_{n}(z)=\sum_{k=1}^{\infty}a_{n,k}z^{2^{k}}\). From the above theorem, it is easy to see that for every \(k\), the sequence \(\{a_{n,k}\}_{n}\) converges to a complex number \(a_{k}\). We will now prove that \(f(z):=\sum_{k=1}^{\infty}a_{k}z^{2^{k}}\in A^{p}(\mathbb{B}^{1})\) and that it is the limit of the sequence \(\{f_{n}\}\) in \(A^{p}(\mathbb{B}^{1})\). From the above theorem, we have \[\int|f|^{p}\approx\int_{0}^{1}\Big(\sum_{k=1}^{\infty}|a_{k}|^{2}r^{2^{k+1}}\Big)^{\frac{p}{2}}dr=\lim_{N\to\infty}\int_{0}^{1}\Big(\sum_{k=1}^{N}|a_{k}|^{2}r^{2^{k+1}}\Big)^{\frac{p}{2}}dr=\lim_{N\to\infty}\Big[\lim_{n\to\infty}\int_{0}^{1}\Big(\sum_{k=1}^{N}|a_{n,k}|^{2}r^{2^{k+1}}\Big)^{\frac{p}{2}}dr\Big]\leq\lim_{n\to\infty}\int_{0}^{1}\Big(\sum_{k=1}^{\infty}|a_{n,k}|^{2}r^{2^{k+1}}\Big)^{\frac{p}{2}}dr\approx\lim_{n\to\infty}\int|f_{n}|^{p}.\] Therefore, \(f\in A^{p}(\mathbb{B}^{1})\). Next, we prove that in \(A^{p}(\mathbb{B}^{1})\), uniformly in \(n\), \[\sum_{k=1}^{N}a_{n,k}z^{2^{k}}\to\sum_{k=1}^{\infty}a_{n,k}z^{2^{k}}=f_{n}\quad(N\to\infty).\] It is sufficient to prove that for any \(\varepsilon>0\), \(\int_{0}^{1}(\sum_{k=N}^{\infty}|a_{n,k}|^{2}r^{2^{k+1}})^{\frac{p}{2}}dr<\varepsilon\) for all \(n\geq N=N(\varepsilon)\). In fact, since \(\{f_{n}\}\) is a Cauchy sequence, there exists \(M_{0}\) such that when \(n\geq M_{0}\), \[\int_{0}^{1}\Big(\sum_{k=1}^{\infty}|a_{n,k}-a_{M_{0},k}|^{2}r^{2^{k+1}}\Big)^{\frac{p}{2}}dr<\varepsilon.\] Also note that there exists \(N_{0}\) such that \[\int_{0}^{1}\Big(\sum_{k=N_{0}}^{\infty}|a_{M_{0},k}|^{2}r^{2^{k+1}}\Big)^{\frac{p}{2}}dr<\varepsilon.\] Combining these two facts, we can conclude that when \(n\geq M_{0}\), \[\int_{0}^{1}\Big(\sum_{k=N_{0}}^{\infty}|a_{n,k}|^{2}r^{2^{k+1}}\Big)^{\frac{p}{2}}dr\leq C\varepsilon,\] where \(C\) is a positive constant depending only on \(p\); this implies the claimed uniform convergence. Finally, we prove that \(f_{n}\to f\) in \(A^{p}(\mathbb{B}^{1})\). It suffices to prove that for any \(\varepsilon>0\), there exists \(N\) such that for \(n>N\), \[\int_{0}^{1}\Big(\sum_{k=N}^{\infty}|a_{n,k}-a_{k}|^{2}r^{2^{k+1}}\Big)^{\frac{p}{2}}dr\leq\varepsilon.\] We notice that \[\int_{0}^{1}\Big(\sum_{k=N}^{\infty}|a_{n,k}-a_{k}|^{2}r^{2^{k+1}}\Big)^{\frac{p}{2}}dr\leq 2^{\frac{p}{2}}\Big[\int_{0}^{1}\Big(\sum_{k=N}^{\infty}|a_{n,k}|^{2}r^{2^{k+1}}\Big)^{\frac{p}{2}}dr+\int_{0}^{1}\Big(\sum_{k=N}^{\infty}|a_{k}|^{2}r^{2^{k+1}}\Big)^{\frac{p}{2}}dr\Big],\ \text{if}\ p\leq 2,\] \[\int_{0}^{1}\Big(\sum_{k=N}^{\infty}|a_{n,k}-a_{k}|^{2}r^{2^{k+1}}\Big)^{\frac{p}{2}}dr\leq 2^{p-1}\Big[\int_{0}^{1}\Big(\sum_{k=N}^{\infty}|a_{n,k}|^{2}r^{2^{k+1}}\Big)^{\frac{p}{2}}dr+\int_{0}^{1}\Big(\sum_{k=N}^{\infty}|a_{k}|^{2}r^{2^{k+1}}\Big)^{\frac{p}{2}}dr\Big],\ \text{if}\ p\geq 2.\] This, together with the uniform tail estimate above, yields the desired result.

_Remark 5.1_.: Theorem 5.1 can also be used to give a similar characterization of a class of holomorphic functions on the punctured disk to be \(L^{p}\)-integrable, by considering Laurent expansions.
In this paper, we first answer Chen-Zhang's problem on the $p$-Bergman metric proposed in \cite{CZ22}. Second, we prove that the off-diagonal $p$-Bergman kernel function is H\"older continuous of order (1-$\varepsilon$) in its second argument for $p>1$, which improves Chen-Zhang's result. Moreover, we prove the asymptotic behavior of the maximizers of the $p$-Bergman kernel as $p\rightarrow 1^-$. Finally, we give a characterization of a class of holomorphic functions on $\mathbb{B}^1$ to be $L^p$-integrable.
2309.10136
Efficient Low-Rank GNN Defense Against Structural Attacks
Graph Neural Networks (GNNs) have been shown to possess strong representation abilities over graph data. However, GNNs are vulnerable to adversarial attacks, and even minor perturbations to the graph structure can significantly degrade their performance. Existing methods either are ineffective against sophisticated attacks or require the optimization of dense adjacency matrices, which is time-consuming and prone to local minima. To remedy this problem, we propose an Efficient Low-Rank Graph Neural Network (ELR-GNN) defense method, which aims to learn low-rank and sparse graph structures for defending against adversarial attacks, ensuring effective defense with greater efficiency. Specifically, ELR-GNN consists of two modules: a Coarse Low-Rank Estimation Module and a Fine-Grained Estimation Module. The first module adopts the truncated Singular Value Decomposition (SVD) to initialize the low-rank adjacency matrix estimation, which serves as a starting point for optimizing the low-rank matrix. In the second module, the initial estimate is refined by jointly learning a low-rank sparse graph structure with the GNN model. Sparsity is incorporated into the learned low-rank adjacency matrix by pruning weak connections, which can reduce redundant data while maintaining valuable information. As a result, instead of using the dense adjacency matrix directly, ELR-GNN can learn a low-rank and sparse estimate of it in a simple, efficient and easy to optimize manner. The experimental results demonstrate that ELR-GNN outperforms the state-of-the-art GNN defense methods in the literature, in addition to being very efficient and easy to train.
Abdullah Alchihabi, Qing En, Yuhong Guo
2023-09-18T20:22:27
http://arxiv.org/abs/2309.10136v1
# Efficient Low-Rank GNN Defense Against Structural Attacks ###### Abstract Graph Neural Networks (GNNs) have been shown to possess strong representation abilities over graph data. However, GNNs are vulnerable to adversarial attacks, and even minor perturbations to the graph structure can significantly degrade their performance. Existing methods either are ineffective against sophisticated attacks or require the optimization of dense adjacency matrices, which is time-consuming and prone to local minima. To remedy this problem, we propose an Efficient Low-Rank Graph Neural Network (ELR-GNN) defense method, which aims to learn low-rank and sparse graph structures for defending against adversarial attacks, ensuring effective defense with greater efficiency. Specifically, ELR-GNN consists of two modules: a coarse low-rank estimation module and a fine-grained estimation module. The first module adopts the truncated Singular Value Decomposition (SVD) to initialize a low-rank estimate of the adjacency matrix, which serves as the starting point for optimizing the low-rank matrix. In the second module, the initial estimate is refined by jointly learning a low-rank sparse graph structure together with the GNN model. Sparsity is enforced on the learned low-rank adjacency matrix by pruning weak connections, which can reduce redundant data while maintaining valuable information. As a result, instead of using the dense adjacency matrix directly, ELR-GNN can learn a low-rank and sparse estimate of it in a simple, efficient, and easy to optimize manner. The experimental results demonstrate that ELR-GNN outperforms the state-of-the-art GNN defense methods in the literature, in addition to being very efficient and easy to train. Graph Neural Networks, Adversarial Attacks, Low-rank Estimation ## I Introduction Graphs are ubiquitous data structures that can represent complex relationships between instances in various domains, such as social networks [1] and biological sciences [2]. Due to the widespread use of graphs, learning to effectively represent them is vital yet challenging. Given their non-linear nature and capability of aggregating information from neighbouring nodes, Graph Neural Networks (GNNs) have been widely adopted as a state-of-the-art architecture for learning with graph data and solving the node classification task, which is one fundamental and critical task in graph analysis. Despite their great performance, GNNs are vulnerable to adversarial attacks. Small changes in the features of a few nodes or their corresponding connections in the graph may cause dramatic degradation in GNN performance [3, 4]. This implies that imperceptible perturbations to the graph can significantly impact GNN performance. In such cases, GNN models that lack robustness may present significant challenges to real-world privacy and security, particularly in sectors like healthcare, communication networks, or finance. Therefore, it is important to develop GNN models that are both efficient and resilient to state-of-the-art adversarial attacks. Some works in the literature attempt to defend against adversarial attacks on graph structures by assigning larger weights to the edges connecting similar nodes and smaller weights to the edges connecting dissimilar nodes [5, 6]. Several contemporary methods pre-process the graph structures to satisfy certain desirable properties that are assumed to exist in clean graphs [7, 8]. 
However, the aforementioned approaches only address a relatively simple type of attack and might confront significant challenges in defending against state-of-the-art global attacks [9]. In contrast, this work considers poisoning training-time attacks on graph structures, which represent the most formidable challenges to defend against [10]. In this setting, the graph structure is tampered by some state-of-the-art attack methods, and the GNN model is trained on the poisoned graph structure, thereby introducing significant difficulties. The fundamental challenges for developing effective methods to defend against adversarial attacks lie in the following two aspects. (1) As GNNs employ message passing to propagate information across the graph, attacks on local edges or nodes can be propagated to a large portion of the graph, making it more difficult to defend against the state-of-the-art attacks. (2) It is challenging to obtain GNN models that are both robust and efficient, as effective defense methods often require extensive computations over dense adjacency matrices. In the meantime, it has been noted that challenging adversarial attacks (nettack, mettack) on a graph structure target the high-ranked (low-valued) singular components of its adjacency matrix, increasing the rank of the adjacency matrix while leaving the low-ranked singular components unperturbed [7, 9]. Based on this observation, Entezari et al. recently introduced a simple pre-processing method for defending against adversarial attacks by obtaining a low-rank estimate of the adjacency matrix before training the GNN model [7]. However, by separating the graph cleaning step and the GNN learning step, this method leads to sub-optimal results and lacks sufficient robustness against sophisticated global attacks. In contrast, Pro-GNN [9] jointly learns a low-rank sparse graph structure with the GNN model. Nevertheless, this approach still requires optimizing a dense adjacency matrix and minimizing its nuclear norm to guarantee the low-rank property, which involves difficult and computationally expensive optimization processes. In this paper, we propose a novel Efficient Low-Rank Graph Neural Network (ELR-GNN) method, which is fast and efficient, for improving the robustness of GNNs in the face of adversarial attacks. ELR-GNN focuses on learning a low-rank sparse estimate of the graph structure by estimating a low-rank adjacency matrix as the product of low-dimensional matrices instead of learning the dense adjacency matrix directly. Specifically, by computing the largest singular values and their corresponding singular vectors using the truncated Singular Value Decomposition (SVD), we first design a coarse low-rank estimation module to obtain elements mostly unaffected by the adversarial attacks, which provides a starting point to optimize the singular vector matrix. Next, we propose a fine-grained estimation module, where the low-rank estimate of the adjacency matrix is refined by jointly optimizing the singular vectors with the GNN model while keeping the obtained singular values fixed. We incorporate sparsity into the learned low-rank adjacency matrix by pruning weak connections with low edge weights to remove redundant information while retaining important information. Additionally, we adopt the Frobenius norm to regularize the adjacency matrix for maintaining reasonable values. By combining weak edge pruning and Frobenius norm regularization, we can efficiently and quickly sparsify the adjacency matrix estimate. 
Consequently, our proposed ELR-GNN can learn a low-rank and sparse estimate of the graph structure in an efficient and easy-to-optimize manner. Comprehensive experiments on various benchmark datasets and against multiple attack methods demonstrate that our proposed method is simple and fast, outperforming state-of-the-art defense methods. ## II Related Works ### _Adversarial Attacks on GNNs_ Graph Neural Networks can be adversarially attacked through their node features, graph structure [3, 11] or both [4, 8, 12]. These attacks on graphs can be grouped into two different types with different goals: non-targeted global attacks that aim to reduce the overall performance of GNNs and targeted attacks that aim to misclassify specific nodes in the graph. Meanwhile, adversarial attacks on GNNs can also be categorized based on the time of the attacks: poisoning attacks that take place prior to the training of the GNN models and evasion attacks that take place at test-time after the GNN model has been trained. Zugner et al. proposed a targeted attack method, which iteratively perturbs the graph to maximize the degradation of the performance of a surrogate GNN model [4]. Wu et al. introduced a method to attack both the graph structure and node features based on integrated gradients [8]. They also developed a defense approach based on these attack methods called GCN-Jaccard, where Jaccard similarity is used to pre-process the graph structure by deleting edges that connect nodes with similarity below some pre-defined threshold. Zugner and Gunnemann proposed a poisoning non-targeted adversarial attack method (mettack) to attack the graph structure based on meta-gradients where the graph structure is perturbed during training time by solving a min-max problem [12]. Xu et al. proposed two gradient-based attack methods to attack the graph structure, named as projected gradient descent topology attack and min-max topology attack [11]. However, not all attacks are equally effective. Wu et al. have shown that attacks on node features are significantly less effective in terms of degrading the GNNs performance than attacks on the graph structure [8]. Zhu et al. have also demonstrated that poisoning training-time attacks are more severe and harder to defend against compared to evasion test-time attacks [10]. Therefore, in this work, we focus on defending against poisoning training-time attacks on the graph structure (mettack and nettack) which are the most effective and challenging attacks to defend against. ### _Adversarial Defense on GNNs_ Many approaches to defend GNNs against adversarial attacks have been proposed. Some works utilize pre-processing methods to filter the perturbed graph structure prior to the training stage [7, 8]. Other works use adversarial training to defend against adversarial attacks [11, 12]. Zhu et al. proposed RGCN, in which the hidden node embeddings are represented as Gaussian distributions to absorb the effect of adversarial attacks in the covariance of the distribution and attention is used with the covariance matrix to aggregate messages from neighbouring nodes [6]. Simp-GCN employs a novel adaptive message aggregation mechanism and self-supervised learning to preserve node similarities during GNN training [13]. Elastic GNN (E-GNN) employs a novel \(\ell_{1}\)-based graph smoothing message aggregation function to improve the robustness of GNNs to adversarial attacks [14]. 
In order to develop robust defense methods, the properties of the perturbed graph structures have also been investigated. Several works have shown that attack methods tend to connect dissimilar nodes more often than disconnecting similar nodes, as adding edges between dissimilar nodes hurts the performance more than deleting edges between similar nodes [3, 4, 8]. This has led to the development of homophily-based defense approaches that prune edges between dissimilar nodes [5, 8]. Zhang et al. introduced GNNGuard, which aims to connect similar nodes and disconnect dissimilar nodes using two components: a neighbour importance estimation component and a layer-wise graph memory component [5]. Several works have demonstrated that adversarial attacks on the graph structure mainly affect the high-ranked (low-valued) singular components of the adjacency matrix, leaving the low-ranked singular components unaffected, thus causing an increase in the rank of the adjacency matrix [7, 9]. This has inspired methods to defend against adversarial attacks by learning/estimating low-rank adjacency matrices to filter the impacts of the adversarial attacks on the graph structure. For instance, GCN-SVD pre-processes the input graph to obtain a low-rank estimate of the pre-perturbed graph structure using truncated SVD [7]. However, due to the separation of the graph pre-processing step from the GNN training step, this approach lacks sufficient capacity in defending against sophisticated global attacks [9]. In contrast, Pro-GNN learns a clean graph structure jointly with the GNN model by optimizing the low-rank, sparsity and homophily properties of the estimated graph structure [9]. Nevertheless, Pro-GNN is computationally expensive as it requires minimizing the nuclear norm of the adjacency matrix at every training iteration. ## III Method ### _Problem Setup_ We tackle the semi-supervised node classification task, which aims to predict the labels of unlabeled nodes given a small number of labeled nodes. We define the input data as a graph \(G=(V,E)\), where \(V\) is the set of nodes of the graph with \(|V|=N\) and \(E\) is the set of edges of the graph. Each node has a corresponding feature vector of size \(D\), and the feature vectors of all the nodes in the graph form a matrix \(X\in\mathbb{R}^{N\times D}\). The set of edges \(E\) is represented as an adjacency matrix \(A\) of size \(N\times N\), which we assume to be symmetric (i.e., an undirected graph), with real-valued weights or binary values. In addition, the adjacency matrix \(A\) can be either clean or perturbed by an adversarial attack method prior to the training of the GNN (i.e. poisoning attack). The nodes in the graph \(V\) are split into two subsets: the set of labeled nodes \(V_{\ell}\) and the set of unlabeled nodes \(V_{u}\). Each labeled node has a corresponding label indicator vector of size \(C\), where \(C\) is the number of classes. The label vectors of all the labeled nodes constitute a label indicator matrix \(Y^{\ell}\in\{0,1\}^{N_{\ell}\times C}\), where \(N_{\ell}\) is the number of labeled nodes in the graph. A GNN can be deployed on the graph data to predict the node labels as follows: \[P=f_{\Theta}(X,A), \tag{1}\] where \(f\) denotes the prediction function produced by the GNN parametrized with \(\Theta\), which takes the initial node features \(X\) and adjacency matrix \(A\) as input and outputs the class prediction probabilities \(P\) over all the nodes. 
Typically, the GNN is trained to minimize the classification loss on the labeled nodes (cross-entropy loss): \[\mathcal{L}_{CE}=\sum_{i\in V_{\ell}}\ell_{CE}(P_{i},Y_{i}^{\ell}), \tag{2}\] where \(\ell_{CE}\) denotes the cross-entropy loss function, \(P_{i}\), and \(Y_{i}^{\ell}\) denote the predicted class probability vector and true label indicator vector for node \(i\) respectively. ### _The Proposed Method_ In this section, we present the proposed method ELR-GNN, which aims to defend GNNs against poisoning structural attacks. ELR-GNN learns a low-rank sparse estimate of the adjacency matrix as the product of two low-dimensional matrices: the low-dimensional singular value matrix and the low-dimensional singular vector matrix. The overall architecture of ELR-GNN is illustrated in Figure 1. It has two modules: a coarse low-rank estimation module and a fine-grained estimation module. In the coarse low-rank estimation module, we utilize the truncated Singular Value Decomposition (SVD) [15] to calculate the largest \(d\) singular values of the adjacency matrix and their corresponding singular vectors. Truncated SVD has the nice property of being able to scale up with large sparse datasets. The low-dimensional singular value matrix is then formed with these largest \(d\) singular values, while the low-dimensional singular vector matrix is initialized with the corresponding singular vectors. In this manner, we acquire a starting point for optimizing a low-dimensional singular vector matrix, which will be used with the singular value matrix to estimate a low-rank adjacency matrix efficiently. In the fine-grained estimation module, we jointly learn the low-dimensional singular vector matrix with the GNN model to further improve the low-rank estimate of the adjacency matrix. In order to improve the robustness and reduce noise, we promote sparsity within the learned adjacency matrix by deleting the weak connections with weights below a pre-defined threshold. In doing so, we anticipate obtaining a robust sparse low-rank estimate of the adjacency matrix that are effective against sophisticated adversarial attacks. Below, we elaborate on the two modules. #### Iii-B1 Coarse Low-Rank Estimation Module The coarse low-rank estimation module aims to compute the singular value matrix and initialize the low-dimensional singular vector matrix, which allows us to estimate a coarse low-rank adjacency matrix from the perturbed adjacency matrix. To achieve that, singular value decomposition (SVD) can be deployed to decompose the adjacency matrix \(A\) as follows: \[A=USV^{\top}, \tag{3}\] where \(S\in\mathbb{R}^{N\times N}\) is the diagonal matrix of the singular values of \(A\), and \(U,V\in\mathbb{R}^{N\times N}\) are orthogonal matrices whose columns are the left singular vectors and right singular vectors, respectively. For undirected graphs with symmetric adjacency matrices, we have \(U=V\). Meanwhile, it has been demonstrated in previous works that adversarial attacks on the graph structure are typically high-rank attacks [7, 9], which means these attacks mainly affect the singular components corresponding to lower singular values while leaving the ones associated with higher singular values unaffected. As such, we only need the largest \(d\) singular values and their corresponding singular vectors for our low-rank estimation of the adjacency matrix, aiming to mitigate the impact of the attacks. 
Specifically, we define \(S_{d}\in\mathbb{R}^{d\times d}\) and \(U_{d}\in\mathbb{R}^{N\times d}\) as the singular value matrix and low-dimensional singular vector matrix, respectively, that correspond to the largest \(d\) singular values of the adjacency matrix \(A\). That is, \(S_{d}\) is a diagonal matrix of the largest \(d\) singular values of \(A\), and \(U_{d}\) contains the corresponding singular vectors as its columns, where \(d\) is a predefined hyper-parameter that is selected using cross-validation. Moreover, \(S_{d}\) and \(U_{d}\) can actually be obtained using an efficient truncated SVD algorithm [15] without full singular value decomposition. The singular value matrix \(S_{d}\) and the singular vector matrix \(U_{d}\) will then be used to estimate a low-rank adjacency matrix. In particular, we construct a \(d\)-rank estimate of \(A\) by using \(U_{d}\) and \(S_{d}\) as follows: \[A_{d}=\Lambda\Lambda^{\top},\quad\text{where }\Lambda=U_{d}S_{d}^{1/2}. \tag{4}\] This pre-training estimate however can only serve as an initial point for effectively defending against sophisticated adversarial structural attacks [9]. We will further improve the estimate using a fine-grained estimation module by jointly learning the low-dimensional matrix \(U_{d}\) and the GNN model. #### Iii-B2 Fine-Grained Estimation Module The fine-grained estimation module aims to learn a general low-dimensional matrix \(U_{d}\) during GNN training to approximate the singular vector matrix and provide a fine-grained low-rank estimation for the adjacency matrix \(A\). Specifically, during the joint training stage we have \(S_{d}\) fixed since the inconspicuous adversarial structural attacks do not alter the larger singular values of the adjacency matrix. We only update the GNN parameters \(\Theta\) and the low-rank matrix \(U_{d}\), which consequently leads to the update on the low-rank estimate of the adjacency matrix and affects GNN. Moreover, motivated by the fact that most real-world graphs are sparse in addition to being low rank, we further propose to sparsify the low-rank estimate of the adjacency matrix, which can significantly reduce computational overhead [9, 16]. To this end, we delete the weak connections in the estimated adjacency matrix, whose weights are smaller than a pre-defined threshold \(\epsilon\): \[A_{d}(i,j)=\begin{cases}A_{d}(i,j),&\text{if }\;A_{d}(i,j)\geq\epsilon\\ 0,&\text{otherwise}\end{cases} \tag{5}\] where the threshold hyper-parameter \(\epsilon\) controls the sparsity level of our low-rank estimate and is determined using cross-validation. Following the sparsification of \(A_{d}\), we further normalize this estimated sparse and low-rank adjacency matrix as follows: \[\tilde{A}_{d}=D^{-\frac{1}{2}}A_{d}D^{-\frac{1}{2}}, \tag{6}\] where \(D\) is the diagonal degree matrix computed from \(A_{d}\), such that \(D_{ii}=\sum_{j}A_{ij}\). This normalized adjacency matrix \(\tilde{A}_{d}\) is then used as input for the GNN model. As the adversarial attacks typically only perturb a minimal number of edges to degrade the performance of the GNN model, it is reasonable to make the learned sparse low-rank estimation of the adjacency matrix, \(\tilde{A}_{d}\), to be similar to the original input adjacency matrix, \(A\). This can be achieved by deploying the following similarity regularization term based on the Frobenius distance when learning the matrix \(U_{d}\): \[\mathcal{L}_{Sim}=\|A-\tilde{A}_{d}\|_{F}^{2}, \tag{7}\] where \(\|.\|_{F}\) denotes the Frobenius norm. 
Moreover, we also utilize a Frobenius norm regularization term over \(\Lambda=U_{d}S_{d}^{1/2}\) during learning: \[\mathcal{L}_{Fr}=\|\Lambda\|_{F}^{2}. \tag{8}\] This regularization term works in tandem with the pruning of weak edges to regularize the weight distribution of the estimated adjacency matrix in an efficient manner. In the end, we use the following overall loss function to jointly learn \(U_{d}\) and \(\Theta\) for the proposed ELR-GNN: \[\min_{\Theta,U_{d}}\;\mathcal{L}=\mathcal{L}_{CE}+\lambda_{sim}\,\mathcal{L}_{Sim}+\lambda_{Fr}\,\mathcal{L}_{Fr}. \tag{9}\] We solve this joint minimization problem using a simple alternating optimization procedure, which updates one variable matrix while keeping the other one fixed. Specifically, when updating \(\Theta\) for GNN learning, we keep the current \(U_{d}\) fixed and only minimize the cross-entropy loss: \[\min_{\Theta}\;\mathcal{L}_{CE}. \tag{10}\] When updating \(U_{d}\), we hold \(\Theta\) fixed and minimize the overall loss function: \[\min_{U_{d}}\;\mathcal{L}=\mathcal{L}_{CE}+\lambda_{sim}\,\mathcal{L}_{Sim}+\lambda_{Fr}\,\mathcal{L}_{Fr}. \tag{11}\] Fig. 1: An illustration of the proposed ELR-GNN defense method. The framework is made up of two modules: the Coarse Low-Rank Estimation Module on the left side and the Fine-Grained Estimation Module on the right side. ## IV Experiments We evaluated the proposed method against training-time structural attacks and conducted experiments under non-targeted global attacks (mettack), targeted attacks (nettack) and random attacks. ### _Experiment Settings_ #### IV-A1 Datasets & Baselines Three challenging datasets are used to evaluate our proposed ELR-GNN: two citation datasets (Cora, CiteSeer) [17] and one blog dataset (PolBlogs) [18]. For all three datasets, we utilized the same train/validation/test node split adopted by Jin et al. [9], where 10% of the nodes are randomly assigned to the labeled train set, 10% of the nodes are randomly assigned to the validation set, and the remaining 80% of the nodes are assigned to the test set. In the case of the PolBlogs dataset, nodes are not associated with any features; therefore, we used an identity matrix as the node feature matrix. We compared the proposed ELR-GNN method with the following baselines: Graph Convolution Networks (GCN) [19], Graph Attention Networks (GAT) [20], Robust Graph Convolution Networks (RGCN) [6], GCN-Jaccard [8], GCN-SVD [7], Pro-GNN [9], Pro-GNN-fs [9], SimP-GCN [13] and Elastic-GNN (E-GNN) [14]. Among these baselines, GCN and GAT are plain GNN models, while the others have all adopted some defense strategy or mechanism. #### IV-A2 Implementation Details For GCN, GAT, RGCN, GCN-Jaccard, GCN-SVD, Pro-GNN-fs and Pro-GNN, we use the results reported in [9]. For E-GNN, we use the results reported in [14]. As for our proposed ELR-GNN, GCN is adopted as the GNN model, which is made up of two message passing layers with ReLU as the activation function. We trained our proposed model for 1000 epochs, utilizing an Adam optimizer for the GNN model with a learning rate of 1e-2 and weight decay of 5e-4, and employing a Stochastic Gradient Descent (SGD) optimizer for learning \(U_{d}\) with a momentum of 0.9. The learning rate of the SGD optimizer and the values of \(d\), \(\epsilon\), \(\lambda_{sim}\) and \(\lambda_{Fr}\) are all determined using cross-validation. For each experiment, we report the mean and standard deviation across five runs.
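For concreteness, the alternating updates of Eqs. (10)-(11) can be organized as below. This is a hedged sketch reusing the `fine_grained_estimate` helper from above; `gnn(X, A)` stands for any GNN whose forward pass takes node features and a normalized adjacency matrix, and the hyper-parameter values shown are placeholders for the cross-validated ones.

```python
import torch
import torch.nn.functional as F

def train_elr_gnn(gnn, U_d, S_d, A, X, y, train_mask, *,
                  epochs=1000, eps=0.1, lam_sim=1.0, lam_fr=1e-4):
    """Alternating optimization of Theta and U_d (Eqs. 9-11), sketched.

    U_d must be a leaf tensor created with requires_grad=True.
    """
    opt_gnn = torch.optim.Adam(gnn.parameters(), lr=1e-2, weight_decay=5e-4)
    opt_u = torch.optim.SGD([U_d], lr=1e-2, momentum=0.9)
    for _ in range(epochs):
        # Step 1: update GNN parameters Theta with U_d fixed (Eq. 10).
        A_tilde, _, _ = fine_grained_estimate(U_d.detach(), S_d, A, eps)
        loss_ce = F.cross_entropy(gnn(X, A_tilde)[train_mask], y[train_mask])
        opt_gnn.zero_grad(); loss_ce.backward(); opt_gnn.step()

        # Step 2: update U_d with Theta fixed (Eq. 11).
        A_tilde, loss_sim, Lam = fine_grained_estimate(U_d, S_d, A, eps)
        loss_ce = F.cross_entropy(gnn(X, A_tilde)[train_mask], y[train_mask])
        loss = loss_ce + lam_sim * loss_sim + lam_fr * (Lam ** 2).sum()
        opt_u.zero_grad(); loss.backward(); opt_u.step()
```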
### _Defense Against Global Attacks_ We first evaluate the robustness of our proposed method against non-targeted global attacks, which aim to degrade the overall performance of the attacked GNNs. For this purpose we adopt the mettack global attack [12] with perturbation rates of \(\{0,0.05,0.10,0.15,0.20,0.25\}\). We employ the same parameters for mettack as [9] for a fair comparison, where the Meta-Self variant of mettack is used, which is one of the most challenging attack variants to defend against. Under this setup, we compare our proposed method, ELR-GNN, with the baselines mentioned in the previous section. On each dataset, the adjacency matrix is first poisoned using mettack, then ELR-GNN and the baselines are trained using the poisoned adjacency matrix. The classification results on the test nodes are reported in Table I, where the top part of the table shows the results on the Cora dataset, the middle part shows the results on the CiteSeer dataset, and the bottom part shows the results on the PolBlogs dataset. We can observe that the performance of all the comparison methods degrades in general as the perturbation rate increases. GCN has the worst performance degradation as it depends on the graph structure to perform message propagation with no defense or filtering mechanism against adversarial attacks on the graph structure. GAT performs better than GCN and RGCN on Cora and CiteSeer as its self-attention mechanism helps in learning importance weights for the edges during message passing. TABLE I: Mean classification accuracy (standard deviation) of ELR-GNN and all the baseline methods on Cora (top part), CiteSeer (middle part) and PolBlogs (bottom part) under the non-targeted global training-time attack (mettack) with perturbation rates of 0%, 5%, 10%, 15%, 20% and 25%.
On PolBlogs, where there are no node features, GAT suffers large performance drops due to its self-attention mechanism's dependence on node features. Although GCN-SVD and GCN-Jaccard outperform GCN, GAT and RGCN, they perform much worse than the remaining defense methods on the citation datasets. This suggests that these preprocessing-based defense methods lack sufficient capacity to defend against state-of-the-art global attacks. In contrast, both ELR-GNN and the Pro-GNN variants learn the graph structure jointly with the GNN model, which enables these methods to defend effectively against such complex attacks. The proposed ELR-GNN method outperforms all the other baselines on the CiteSeer dataset across almost all perturbation rates, with greater performance improvements at larger perturbation rates. In particular, at a 25% perturbation rate, ELR-GNN outperforms GCN by over 16% and outperforms the second-best performing baseline (Pro-GNN) by over 4%. On the Cora dataset, a similar pattern of ELR-GNN outperforming all other baselines at high perturbation rates persists, where ELR-GNN yields even larger performance improvements of around 29% and 6% compared to GCN and the closest-performing baseline (SimP-GCN), respectively, at the 25% perturbation rate. On the PolBlogs dataset, our proposed method substantially outperforms all the other baselines with large margins at the high perturbation rate of 25%, yielding a particularly remarkable increase in test accuracy (over 13%) over the second-best baseline (Pro-GNN-fs). ### _Defense Against Targeted Attacks_ In this section, we investigate the robustness of the proposed ELR-GNN against targeted poisoning training-time attacks on the graph structure, which target specific nodes with the aim of deceiving GNNs into misclassifying the target nodes. For this purpose, we employ nettack [4] with different numbers of perturbations allowed for each target node, ranging over \(\{1,2,3,4,5\}\) perturbations per target node. Nettack perturbs the graph structure around the target nodes by iteratively generating sets of candidate perturbations and applying the perturbation that would degrade the performance of a surrogate GNN model the most. This process is repeated until the perturbation budget has been reached. The target test nodes are selected similarly to [9] for a fair comparison. As presented in Figure 2, the performance of all the comparison methods degrades as the number of perturbations on the target nodes increases. The proposed ELR-GNN outperforms all the other methods on Cora and CiteSeer with larger
perturbation numbers, and the improvement in performance increases as the number of perturbations per node grows, achieving a notable performance increase of around 4% and 6% over the second-best method on Cora and CiteSeer, respectively, with 5 perturbations per target node. SimP-GCN and the two Pro-GNN variants are among the second-best performing methods after our ELR-GNN, significantly outperforming all the other methods on the citation datasets. This validates the ability of the proposed ELR-GNN to defend against sophisticated targeted adversarial attacks such as nettack. On the PolBlogs dataset, ELR-GNN, Pro-GNN and GCN-SVD obtain similar results while the other methods perform poorly. These results demonstrate the robustness of our ELR-GNN against targeted training-time attacks. Fig. 2: Mean classification accuracy on Cora (left part), CiteSeer (middle part) and PolBlogs (right part) under targeted training-time attack (nettack) with different numbers of perturbations on the target nodes: (1, 2, 3, 4, 5). ### _Defense Against Random Attacks_ In this section, we evaluate the robustness of our proposed ELR-GNN under random attacks that inject random edges into the graph structure with different edge perturbation rates: 20%, 40%, 60%, 80%, and 100%. According to the results reported in Figure 3, ELR-GNN, SimP-GCN and the two variants of Pro-GNN obtain relatively stable performance across all perturbation rates on Cora and CiteSeer. On the other hand, the performance of the other methods degrades significantly as the perturbation rate increases. ELR-GNN produces the best results on CiteSeer across all the perturbation rates. On the Cora dataset, ELR-GNN obtains a performance that is within 1-2% of the best-performing Pro-GNN across all perturbation rates. On PolBlogs, ELR-GNN, Pro-GNN and GCN-SVD obtain similar results across all the perturbation rates and significantly outperform the remaining methods. Among the three methods, ELR-GNN slightly outperforms Pro-GNN and GCN-SVD at three out of the five perturbation rates. Fig. 3: Mean classification accuracy on Cora (left part), CiteSeer (middle part) and PolBlogs (right part) under random training-time attack with different perturbation rates (20%, 40%, 60%, 80%, 100%). ### _Efficiency Analysis_ In order to demonstrate the efficiency of the proposed ELR-GNN, we summarize the training time, total time (pre-processing time + training time) and accuracy of all three low-rank based defense methods: ELR-GNN, Pro-GNN and GCN-SVD. We conduct experiments on Cora, CiteSeer and PolBlogs under global training-time attacks (mettack) with a perturbation rate of 25%. We run all the experiments on an NVIDIA GeForce RTX 2080 Ti and train each method for 1000 epochs. For Pro-GNN, we use the optimal hyper-parameters reported in [9]. For GCN-SVD, we use the same \(d\) value employed for ELR-GNN on each dataset for a fair comparison. The corresponding results are reported in Table II. It is clear that ELR-GNN not only outperforms Pro-GNN in terms of classification accuracy but is also significantly more efficient. ELR-GNN is 560, 500 and 150 times faster than Pro-GNN on Cora, CiteSeer and PolBlogs, respectively. Pro-GNN requires long training times mainly due to optimizing the nuclear norm of the learned adjacency matrix in each iteration. Additionally, ELR-GNN significantly outperforms GCN-SVD by 26%, 9% and 24% on Cora, CiteSeer and PolBlogs, respectively, in terms of classification accuracy, while being at most slightly more than twice as slow.
This clearly demonstrates the efficiency and robustness of ELR-GNN in defending GNNs against global adversarial attacks. \begin{table} \begin{tabular}{l|l l l|l l l|l l l} \hline & \multicolumn{3}{l|}{GCN-SVD} & \multicolumn{3}{l|}{Pro-GNN} & \multicolumn{3}{l}{ELR-GNN} \\ \hline & Acc & Tr-T (s) & Tot-T (s) & Acc & Tr-T (s) & Tot-T (s) & Acc & Tr-T (s) & Tot-T (s) \\ \hline Cora & 49.7 & 58.9 & 65.0 & 69.7 & 67,049.5 & 67,049.5 & **76.7** & 112.1 & 118.6 \\ \hline Citeseer & 64.8 & 58.2 & 62.4 & 68.9 & 43,894.6 & 43,894.6 & **73.2** & 85.4 & 89.4 \\ \hline PolBlogs & 52.0 & 21.6 & 22.4 & 63.1 & 7,602.0 & 7,602.0 & **76.7** & 51.1 & 52.0 \\ \hline \end{tabular} \end{table} TABLE II: Mean classification accuracy (Acc), training time (Tr-T) and total time (Tot-T) of the low-rank defense methods on Cora (top row), CiteSeer (middle row) and PolBlogs (bottom row) under the non-targeted global training-time attack (mettack) with a 25% perturbation rate. ### _Ablation Study_ We conduct an ablation study to investigate the effect of each component of the proposed ELR-GNN. Specifically, we examine the following variants of ELR-GNN: (1) "w/o \(\mathcal{L}_{Sim}\)", where the similarity regularization term \(\mathcal{L}_{Sim}\) is dropped. (2) "w/o \(\mathcal{L}_{Fr}\)", where the Frobenius norm regularization term \(\mathcal{L}_{Fr}\) is dropped. (3) "\(\epsilon=0\)", where the sparsification is dropped by setting the adjacency matrix sparsity threshold to zero. (4) "Rand. Init.", where \(U_{d}\) is initialized randomly using the Xavier normal initialization instead of the truncated SVD. (5) "Joint Update", where the GNN model and \(U_{d}\) are updated simultaneously rather than alternately at each training iteration. We compare the performance of these variants with ELR-GNN and the GCN baseline, where the GCN baseline can be treated as the variant that drops the entire proposed defense method. We report the performance of GCN, ELR-GNN and all the variants on the PolBlogs dataset under the mettack-based poisoning attacks with perturbation rates of \(\{0,0.05,0.10,0.15,0.20,0.25\}\) in Table III. From the table, it is clear that the "Rand. Init." variant performs very poorly across all the perturbation rates, which highlights the importance of using the truncated SVD to initialize the singular vector matrix. All the other variants perform very similarly to ELR-GNN but significantly outperform GCN under the low perturbation rates (0%, 5%, and 10%). This indicates that the low-rank estimate of the adjacency matrix obtained using SVD is only modified in a minor fashion during the training stage in such cases, which is consistent with the fact that low perturbation rates only lead to very limited disturbances of the graph structure. At the higher perturbation rates (15%, 20%, and 25%), the performance of all the variants drops notably from the full ELR-GNN. In particular, the
The "Joint Update" variant also demonstrates noticeable performance declines from the full ELR-GNN, which highlights the challenging nature of the optimization problem at high perturbation rates, where simultaneous updates are not as effective as alternating updates. Overall, the results in Table III demonstrate the contribution of each component of the proposed ELR-GNN for learning robust GNN models against adversarial attacks on graph structures. ## V Conclusion In this paper, we proposed a novel ELR-GNN method to defend GNNs against sophisticated adversarial attacks on the graph structure. The proposed framework learns a low-rank sparse estimate of the adjacency matrix as the product of low-dimensional matrices, and is made up of two modules: a coarse low-rank estimation module and a fine-grained estimation module. The coarse low-rank estimation module employs the truncated SVD to calculate the singular value matrix and initialize the low-dimensional singular vector matrix. Then the fine-grained estimation module learns a robust low-rank and sparse adjacency matrix by jointly optimizing the singular vector matrix and the GNN model. The weak edges in the estimated adjacency matrix are pruned to sparsify the matrix. We conducted comprehensive experiments under three different training-time attacks on the graph structure. The experimental results demonstrated that ELR-GNN is more robust to adversarial attacks than other existing GNN defense methods and can be trained in an efficient manner.
Graph neural networks (GNNs) have been shown to possess strong representation power for graph data. However, GNNs are vulnerable to adversarial attacks: even slight perturbations of the graph structure can severely degrade their performance. Existing methods are either ineffective against sophisticated attacks or require optimizing a dense adjacency matrix, which is time-consuming and prone to getting stuck in local minima. To address these problems, we propose the Efficient Low-Rank Graph Neural Network (ELR-GNN) defense method, which aims to learn a low-rank and sparse graph structure to defend against adversarial attacks. ELR-GNN learns low-rank matrices to guarantee an efficient defense. Specifically, ELR-GNN consists of two modules: the Coarse Low-Rank Estimation Module and the Fine-
2309.16547
Controlling spin polarization of gapless states in defected trilayer graphene with a gate voltage
Trilayer graphene exhibits valley-protected gapless states when the stacking order changes from ABC to CBA and a gate voltage is applied to the outer layers. Some of these states survive strong distortions of the trilayer. For example, they persist when the outer layers are partially removed, yielding a system of two trilayers of different stacking order connected by a strip of a single graphene layer. Here we investigate how these states respond to another perturbation, i.e., the presence of magnetic defects, which we model as pi-vacancies. We show that the gap states hybridize with the defect states and strongly spin-split. More importantly, it is demonstrated that by changing the gate voltage value one can change the spin density of the gap states and the corresponding currents at the Fermi level.
Wlodzimierz Jaskolski
2023-09-28T15:58:58
http://arxiv.org/abs/2309.16547v1
# Controlling spin polarization of gapless states in defected trilayer graphene ###### Abstract Trilayer graphene exhibits valley-protected gapless states when the stacking order changes from ABC to CBA and a gate voltage is applied to the outer layers. Some of these states survive strong distortions of the trilayer. For example, they persist when the outer layers are partially removed, yielding a system of two trilayers of different stacking order connected by a strip of a single graphene layer. Here we investigate how these states respond to another perturbation, i.e., the presence of magnetic defects, which we model as \(\pi\)-vacancies. We show that the gap states hybridize with the defect states and strongly spin-split. More importantly, it is demonstrated that by changing the gate voltage value one can change the spin density of the gap states and the corresponding currents at the Fermi level. trilayer graphene; topological states; defects in graphene ## I Introduction Multilayer graphene is still attracting attention due to the strongly correlated states and superconductivity reported both in systems with twisted layers [1; 2; 3; 4; 5; 6] and, more recently, in non-twisted Bernal-stacked bilayer and rhombohedral trilayer graphene under special conditions [7; 8; 9]. Multilayers also attract interest for electronic applications due to the opening of a tunable energy gap when the systems are gated [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22]. Another interesting property of gated Bernal-stacked bilayer or rhombohedral trilayer graphene is the appearance of valley-protected gap states of topological character when the stacking order changes from AB to BA in the bilayer or from ABC to CBA in the trilayer [23; 24; 25; 26; 27]. The stacking order change usually occurs when one of the layers is stretched, corrugated, or delaminated [28; 29; 30; 31]. The gapless states are important since they provide one-dimensional conducting channels at the Fermi level (\(E_{F}\)) along the stacking domain walls. An important feature of these states is their robustness against structural deformations of multilayers. They largely survive in the presence of atomic-scale defects [32; 33], which introduce defect states into the energy gap and thus may disrupt the topological states. Some of them persist even when the multilayer is partially stripped of one or two layers [34; 35; 36]. In this work, we consider strongly defected trilayer graphene, i.e., devoid of the outer layers in the region of the stacking domain wall. This system was recently studied in Ref. [36], but here we add another perturbation, i.e., \(\pi\)-vacancy defects. Since vacancies in graphene lead to the appearance of localized states and magnetic moments, we use them here as simple models of magnetic defects [33; 37; 38; 39; 40]. Our aim is to investigate how such defects influence the gapless states, in particular how they remove the spin degeneracy of these states, which may be important for applications in spintronic devices. We find that the spin polarization and spin density of the gap states, and the corresponding one-dimensional currents at the Fermi level, depend strongly on the value of the gate voltage applied to the outer layers of the trilayer. ## II System description and method of calculation The system under investigation is schematically shown in Fig. 1. It consists of two graphene trilayers connected by a single-layer strip. The stacking order of the trilayers on the left and right sides is ABC and CBA, respectively.
Therefore, the system can also be seen as trilayer graphene with an ABC/CBA stacking domain wall whose outer layers are removed in the region of the domain wall. It is worth noticing that because both outer layers are torn and pulled apart, the stacking domain wall area extends into the central region, i.e., into the single-layer strip. The system is infinite in both the \(x\) (armchair) and the \(y\) (zigzag) directions, but is fully periodic only in the zigzag (\(y\)) direction. Figure 1: Schematic representation of the investigated system. The left and right trilayers have ABC and CBA arrangement of layers, respectively. They are connected by a strip of a single graphene layer, i.e., the middle layer of the trilayers. The system extends to infinity in the \(x\) (armchair) and \(y\) (zigzag) directions but is fully periodic only in the \(y\) direction. A single vacancy representing a magnetic impurity, marked as a red dot and arrow, is located periodically along the \(y\) direction in the region of the single graphene layer. The width of the system unit cell in the periodic (\(y\)) direction is \(W_{y}=4\), measured as the number of graphene unit cells along this direction. The width of the central region along the \(x\) (armchair) direction, i.e., the width of the single graphene strip connecting the two trilayers, is taken as \(W_{C}=4\), measured in the same units. Each unit cell of the system contains a single vacancy (as shown in Fig. 1), which in bipartite systems introduces a magnetic moment and can thus model a magnetic defect [40]. It is important to note that although we study a model with a uniform distortion of the trilayer graphene (i.e., the single-layer strip has a constant width and the vacancies are periodically distributed), the robustness of the gapless states to different perturbations allows us to assume that the obtained results and conclusions also apply to less uniformly perturbed systems. We use in the calculations a one-orbital \(\pi\)-electron tight-binding approximation (TB). This approach has proved to properly model the electronic properties of graphene systems around the Fermi energy. The electron-electron interaction is taken into account by including a Hubbard term, which is adequate for the description of spin and magnetic effects in graphene within the TB model [40]. The Hubbard Hamiltonian in a mean-field approximation is \[H=t_{i/e}\sum_{\langle i,j\rangle,\sigma}c_{i\sigma}^{\dagger}c_{j\sigma}+H.c.+U\sum_{i}(n_{i\uparrow}\langle n_{i\downarrow}\rangle+\langle n_{i\uparrow}\rangle n_{i\downarrow}),\] where \(c_{i\sigma}^{\dagger}\) (\(c_{i\sigma}\)) are the creation (annihilation) operators for electrons with spin \(\sigma\) at site \(i\); the index \(i\) runs over all the nodes in the unit cell; the summation \(\langle i,j\rangle\) is restricted to nearest neighbors; the arrows indicate spin-up and spin-down \(\sigma\) states; and \(\langle n_{i\sigma}\rangle=\langle c_{i\sigma}^{\dagger}c_{i\sigma}\rangle\) is the spin-resolved density at site \(i\). The first term in \(H\) is the TB Hamiltonian, while the last one represents the on-site Coulomb repulsion. Intra-layer and inter-layer hopping parameters \(t_{i}=2.7\) eV and \(t_{e}=0.27\) eV are used, respectively [10; 11], and the on-site Coulomb repulsion parameter \(U\) is set equal to 2.8 eV [33; 41; 42]. To calculate the local density of states (LDOS) we use the Green function matching technique [43].
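As a rough illustration of the self-consistency step, the sketch below iterates the mean-field Hubbard Hamiltonian for a small finite cluster at half filling; all names are ours, and the paper instead treats the extended, \(k\)-dependent system and obtains the LDOS from the surface Green function rather than from direct diagonalization.

```python
import numpy as np

def mean_field_hubbard(h0, U=2.8, n_iter=200, mix=0.3, seed=0):
    """Self-consistent mean-field Hubbard loop on a finite cluster.

    h0: (n, n) spin-independent tight-binding Hamiltonian in eV.
    Returns the spin-resolved site densities <n_up>, <n_down>
    at half filling (one electron per site).
    """
    n = h0.shape[0]
    rng = np.random.default_rng(seed)
    n_up = rng.random(n)                 # symmetry-broken starting guess
    n_dn = 1.0 - n_up
    for _ in range(n_iter):
        new = []
        for occ_other in (n_dn, n_up):   # spin-up feels U<n_down>, and vice versa
            h = h0 + np.diag(U * occ_other)
            _, psi = np.linalg.eigh(h)
            new.append((np.abs(psi[:, : n // 2]) ** 2).sum(axis=1))
        n_up = mix * new[0] + (1 - mix) * n_up   # linear mixing for stability
        n_dn = mix * new[1] + (1 - mix) * n_dn
    return n_up, n_dn

def ldos(h, energies, eta=0.02):
    """LDOS(E) = -(1/pi) Im Tr[(E + i*eta - H)^(-1)] for a finite model."""
    n = h.shape[0]
    return np.array(
        [-np.trace(np.linalg.inv((e + 1j * eta) * np.eye(n) - h)).imag / np.pi
         for e in energies])
```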
The Hamiltonians \(H_{C}\), \(H_{L}\) and \(H_{R}\) of the central region (i.e., the single-layer square [\(W_{C}\times W_{y}\)] shown in Fig. 1) and of the left and right trilayers are calculated self-consistently, since the densities \(\langle n_{i\sigma}\rangle\) depend on the eigenvalues of the Hamiltonians. Knowing the \(H_{L/R/C}\) Hamiltonians, the transfer matrix technique [44] is employed to find the Green function \(G_{C}\) of the central region, and the corresponding LDOS is calculated as LDOS \(=-\frac{1}{\pi}\,\mathrm{Im}\,\mathrm{Tr}\,G_{C}\) [45]. Since the system is periodic in the \(y\) (zigzag) direction, the LDOS is \(k\)-dependent, where \(k\) is the wave vector corresponding to this periodicity. Therefore, the entire procedure for finding \(H_{L/R/C}\), \(G_{C}\) and the LDOS has to be performed for each \(k\) value in the Brillouin zone, i.e., from \(k=0\) to \(k=\pi/a\), where \(a=W_{y}\). ## III Results and discussion We consider two values of the gate voltage \(\pm V\) applied to the outer layers, namely \(V=0.1\) eV and \(V=0.4\) eV. As shown in Ref. [36], different values of \(V\), larger or smaller than \(t_{e}\), lead to a different number and behavior of the gap states in trilayer graphene partially devoid of the outer layers. This is visualized in Fig. 2 (a) and (b), where the results for the case with no vacancies are presented. The LDOS is calculated in the central part of the system, and only the LDOS close to the energy cone, i.e., close to \(k=\frac{2}{3}\pi\) and the Fermi energy (\(E=0\)), is visualized. Although the LDOS is calculated in the region of the single graphene layer, one can clearly identify the gap states characteristic of multilayer graphene with a stacking order change. The LDOS also shows some traces of the electronic structure of the neighboring gated trilayers, i.e., the band continua and the energy gap. For \(V=0.1\) eV, two states of similar and monotonic behavior of \(E(k)\) are present in the energy gap. As shown in Ref. [36], there are in fact three gap states, since the right one is doubly degenerate in energy. This right, degenerate pair of gap states couples to a pair of degenerate zigzag edge states localized in the lower half-layers (blue in Fig. 1) of the left and right trilayers [46]. For \(V=0.4\) eV, one of the gap states changes the slope of \(E(k)\) twice, but as explained in Ref. [36], the rightmost part of this state overlaps with the third gap state. Figure 2: LDOS visualized close to the energy cone, i.e., near the Fermi level (\(E=0\)) and for \(k\) around \(\frac{2}{3}\pi\). (a) and (c) \(V=0.1\) eV, (b) and (d) \(V=0.4\) eV. Upper panels: LDOS calculated for the case without vacancies. Lower panels: LDOS calculated for the system with vacancies, but without Coulomb repulsion, i.e., setting \(U=0\). The pink solid line marks the position of the defect states for the case of a gated trilayer without a stacking order change. We now analyze the influence of the vacancy defects. When the Coulomb interaction is not included, i.e., when we set \(U=0\) in the Hubbard Hamiltonian, the vacancies introduce a defect state at \(E=0\) (no gate is applied to the middle layer), which strongly interacts and hybridizes with the gap states. This is visualized in Fig. 2 (c) and (d) for \(V=0.1\) eV and \(V=0.4\) eV, respectively. All these states are spin-degenerate, so when the Coulomb interaction is switched on they strongly spin-split. Figs. 3 (a) and (b) show the spin-down and spin-up gap states, respectively, calculated for the case of \(V=0.1\) eV.
Two gap states connecting the valence and conduction band areas are clearly visible for both spin polarizations. The spin-splitting of the left gap state is larger than that of the right one because, as demonstrated in Ref. [36], this state is localized mainly in the single-layer region and is therefore more affected by the vacancies, which are also located in this layer. The spin-down and spin-up states for the case of \(V=0.4\) eV are shown in panels (c) and (d) of Fig. 3, respectively. Figure 3: Spin-resolved LDOS calculated for the case with vacancies present in the central region of the system. (a) and (b) \(V=0.1\) eV, (c) and (d) \(V=0.4\) eV. (a) and (c) spin down, (b) and (d) spin up. The Fermi level is marked by a dashed line. Of the two spin-down states, the right one follows the behavior of the right state of the vacancy-free case, while both spin-up states show a monotonic dependence of their energies on the wave vector, \(E(k)\), almost in the entire energy gap. The picture of the spin-splitting of the gap states is more complex than in the \(V=0.1\) eV case: the right spin-down state changes the slope of \(E(k)\) twice and thus crosses the Fermi level three times. This means that the density of the occupied spin-down gap states at \(E_{F}\) is much higher than the density of the spin-up states. This is visualized in Fig. 4 (b), where the spin-down and spin-up LDOS at the Fermi level are presented. For comparison, the LDOS at \(E_{F}\) for the \(V=0.1\) eV case is shown in panel (a) of this Figure. In this case, the spin-down and spin-up densities are almost the same. Figure 4: Spin-resolved LDOS at the Fermi level. (a) \(V=0.1\) eV, (b) \(V=0.4\) eV. Spin-down and spin-up LDOS are marked in red and blue, respectively. The gap states at \(E_{F}\) can carry one-dimensional and spin-polarized currents along the \(y\) direction when the system is additionally biased in this direction. The presented results show that by changing the value of the gate voltage one can change the density of spin-polarized gap states and the corresponding currents at the Fermi level. This is the main message of this work: a change of the gate voltage from \(0.1\) eV to \(0.4\) eV can serve as a switch from a spin-unpolarized current to a polarized one. The behavior of the gap states away from the cone is governed by the defect state, which for those values of \(k\) strongly splits into spin-down and spin-up states with energies below and above the cone, respectively. Since most of the vacancy-hybridized spin-up band lies above the Fermi level and is unoccupied, the magnetic moment (estimated from Fig. 4) of the central region is about \(0.9\)\(\mu_{B}\) and \(0.6\)\(\mu_{B}\) for \(V=0.1\) eV and \(V=0.4\) eV, respectively. A comment is required about the barely visible gap state that appears on the right side of the energy cone in all panels of Fig. 3. This is the above-mentioned third gap state of the right degenerate pair of the vacancy-free case. This state is localized almost exclusively in the lower layers and on the sublattice defined by the zigzag edge nodes of the lower left half-layer. This sublattice does not couple to the vacancy-defined sublattice of the middle layer (see Ref. [46]). For this reason its LDOS in the middle layer is very small; it does not hybridize with the vacancy state and almost does not spin-split. ## IV Conclusions We have studied the electronic structure of defected gated trilayer graphene with a stacking order change of the layers from ABC to CBA. The defect comes down to the partial removal of the outer layers in the region of the stacking domain wall and the inclusion of vacancies,
which mimic the presence of magnetic defects. We have investigated the role of vacancies in the spin-splitting of the gapless states. In particular, we have checked how this splitting, and thus the spin-resolved density of gapless states at the Fermi level, depends on the value of the voltage applied to the outer layers. The calculations have been performed within the tight-binding approximation and the Hubbard model. The surface Green function matching technique has been used to calculate the local density of states in the defected region. We have shown that the gapless states present in the trilayer system due to the stacking order change are strongly affected by the vacancy defects. The interaction of the vacancy state with the gapless states and their spin-splitting depend strongly on the value of the gate voltage. When the applied voltage is lower than the interlayer hopping energy \(t_{e}\), the resulting pairs of spin-down and spin-up gap states have similar and uniform slopes of \(E(k)\), yielding zero net spin density at the Fermi level. In contrast, when the gate voltage is higher than \(t_{e}\), one of the spin-down states has a more complex curvature of \(E(k)\) than its spin-up counterpart. As a result, one spin density of the gap states dominates at the Fermi level. Therefore, the one-dimensional currents corresponding to the gap states are also spin-polarized, an effect with potential applications in spintronics based on multilayer graphene systems.
Trilayer graphene exhibits valley-protected gapless states when the stacking order changes from ABC to CBA and a gate voltage is applied to the outer layers. These states can withstand strong distortions of the trilayer. For example, even when the outer layers are partially missing, a system of two trilayers of different stacking order connected by a strip of a single graphene layer can be formed. Here we investigate how these states respond to another perturbation, namely the presence of magnetic defects, modeled as pi-vacancies. These states hybridize with the defect states and strongly spin-split. More importantly, it is shown that by changing the value of the gate voltage one can change the spin density of the gap states and the corresponding currents at the Fermi level.
2309.11228
Towards Robust Few-shot Point Cloud Semantic Segmentation
Few-shot point cloud semantic segmentation aims to train a model to quickly adapt to new unseen classes with only a handful of support set samples. However, the noise-free assumption in the support set can be easily violated in many practical real-world settings. In this paper, we focus on improving the robustness of few-shot point cloud segmentation under the detrimental influence of noisy support sets during testing time. To this end, we first propose a Component-level Clean Noise Separation (CCNS) representation learning to learn discriminative feature representations that separate the clean samples of the target classes from the noisy samples. Leveraging the well-separated clean and noisy support samples from our CCNS, we further propose a Multi-scale Degree-based Noise Suppression (MDNS) scheme to remove the noisy shots from the support set. We conduct extensive experiments on various noise settings on two benchmark datasets. Our results show that the combination of CCNS and MDNS significantly improves the performance. Our code is available at https://github.com/Pixie8888/R3DFSSeg.
Yating Xu, Na Zhao, Gim Hee Lee
2023-09-20T11:40:10
http://arxiv.org/abs/2309.11228v1
# Towards Robust Few-shot Point Cloud Semantic Segmentation ###### Abstract Few-shot point cloud semantic segmentation aims to train a model to quickly adapt to new unseen classes with only a handful of support set samples. However, the noise-free assumption in the support set can be easily violated in many practical real-world settings. In this paper, we focus on improving the robustness of few-shot point cloud segmentation under the detrimental influence of noisy support sets during testing time. To this end, we first propose a Component-level Clean Noise Separation (CCNS) representation learning to learn discriminative feature representations that separate the clean samples of the target classes from the noisy samples. Leveraging the well-separated clean and noisy support samples from our CCNS, we further propose a Multi-scale Degree-based Noise Suppression (MDNS) scheme to remove the noisy shots from the support set. We conduct extensive experiments on various noise settings on two benchmark datasets. Our results show that the combination of CCNS and MDNS significantly improves the performance. Our code is available at [https://github.com/Pixie8888/R3DFSSeg](https://github.com/Pixie8888/R3DFSSeg). ## 1 Introduction Few-shot point cloud semantic segmentation (3DFSSeg) [2, 3] is a pragmatic direction as it is able to segment novel classes during the testing stage with only few labeled samples. In contrast to fully-supervised methods [2, 3, 4], which only work for a closed set of classes, 3DFSSeg has better generalization ability. However, it assumes that the learning samples of the novel classes are correctly labeled during online testing time. Unfortunately, the assumption of completely clean data can be violated in practice for a variety of reasons. First, human labeling is error-prone. The irregular data structure, low resolution, and subtle inter-class geometric differences make it hard even for human annotators to correctly recognize objects [2]. Crowdsourced labeling further strains the annotation quality [2]. As a consequence, ScanNet [3] still contains annotation mistakes [2] after manual refinement over an extended period of time. Second, the industry is actively seeking cheaper and more efficient annotation systems to replace human labeling, _e.g._ semi-automatic labeling [2], [3] and fully automatic annotation [3, 4, 5]. This further challenges the curation of high-quality data. As shown in Fig. 1, we can refine the noisy annotations of the static base class dataset offline by either manual checking or a data-driven algorithm [] given enough time and budget. However, it is impossible to invest the same amount of human supervision to guarantee that every support set is noise-free after the model is deployed, because the number of new classes in the real world is _infinite_ [, ]. Neither can we use a data-driven algorithm [] to automatically clean the noise, due to severe overfitting to the small number of training samples per new class (_cf._ Tab. 1). To this end, we tackle the problem of noisy labels in the testing stage of 3DFSSeg, which is challenging but of high practical value. In 3DFSSeg, a few support point clouds are provided as learning samples for each new class during meta-testing. Each support sample (_i.e._ shot) is provided with a binary mask indicating the presence of the corresponding class. Based on the given support set, the model segments the new class in any unlabeled (_i.e._ query) point clouds.
As pointed out by [], instance-level noise is the most common annotation noise: objects of other classes are wrongly annotated as the target class and collected in the support set. We define shots with an incorrectly labeled foreground object as noisy shots. Thus, the goal of robust few-shot point cloud semantic segmentation (R3DFSSeg) is to learn a robust few-shot segmentor that is less influenced by the noisy shots. Figure 1: Comparison between the noisy base and novel class datasets of 3DFSSeg. (a) The base class dataset is static with finite samples. (b) The novel class dataset is non-stationary as new classes are continuously collected in the online testing stage. An example where a sofa and a curtain are wrongly annotated in support sets 1 and 2, respectively. In this paper, we first propose a Component-level Clean Noise Separation (CCNS) representation learning to learn robust representations that are discriminative between the features of clean and noisy points. Inspired by [], we adopt the meta-learning paradigm for few-shot point cloud segmentation. During meta-training, we randomly inject noise into the support set by sampling point clouds containing foreground objects from other classes to mimic the noisy meta-testing environments. We introduce a class-wise supervised contrastive learning on the noisy support set to separate the clean samples of the target classes from the noisy samples. To obtain more fine-grained and diverse contrastive features, we further propose the use of farthest point sampling to decompose the masked points in the feature space into multiple components. Intuitively, our CCNS is designed to encourage features from different classes to be well separated, such that the clean shots in the support set form the largest cluster in the feature space when learning converges. We further propose a Multi-scale Degree-based Noise Suppression (MDNS) scheme to remove the noisy shots from the support set during the testing stage. Our MDNS separates clean from noisy samples by checking the degree of each sample in a fully connected pair-wise similarity graph. Clean samples tend to form well-defined clusters with higher degrees in the pair-wise similarity graph. In contrast, noisy samples are relatively scattered, with lower degrees of connectivity in the feature space. Our **main contributions** can be summarized as follows: **1)** To the best of our knowledge, we are the first to study the problem of robust few-shot point cloud semantic segmentation, which is important in real-world applications since noisy labels are inevitable in practice. **2)** We propose a component-level clean noise separation method for representation learning to enhance the class-level discrimination in the embedding space. **3)** We propose a multi-scale degree-based noise suppression scheme that is able to effectively remove noisy samples from the small support set of each new class during testing. **4)** We conduct extensive experiments on two benchmark datasets (_i.e._ S3DIS and ScanNet) with various noise settings and show superior results over the baselines. ## 2 Related Work Few-shot Learning. Few-shot learning aims to transfer knowledge learned from the abundant samples of the seen classes to a set of unseen classes with only few labeled samples. One of the dominant approaches is the metric-based methods [11, 53], which meta-learn a transferable feature embedding that coincides with a fixed metric.
The pioneering work ProtoNet [10] predicts query labels by finding the nearest class prototype under the Euclidean distance. The key to the metric-based methods is a discriminative feature embedding with compact class clusters [1, 2, 12, 13]. Ye _et al_. [11] apply a contrastive objective to align the training instances close to their own class centers after the embedding adaptation. Although we also use contrastive learning in the episodic training, we adopt a fine-grained contrastive objective (_i.e._ over feature components) to better capture the diverse intra-class distribution of point clouds. Few-shot Semantic Segmentation. Few-shot semantic segmentation segments semantic objects in an image [11, 53, 61] or a point cloud [1, 2, 3, 11] with only few annotated samples. The 2D image semantic segmentation methods can be categorized into relation-based methods [11, 53, 61, 60] and prototype-based methods [11, 53, 60]. Zhao _et al_. [11] propose the first work on 3D few-shot point cloud semantic segmentation. They generate multi-prototypes via farthest point sampling to better capture the complex data distribution of the point cloud. Transductive inference is conducted between the multi-prototypes and the query points to infer the label of each query point. However, all these works assume that the annotations in the given support set are accurate during testing time. In practice, this is a very strong assumption, given that pixel-level and point-level annotations are extremely tedious and error-prone. In view of this limitation, this paper studies the problem of robust few-shot point cloud semantic segmentation and proposes an effective model that can better adapt to real-world applications. Learning with Noisy Labels. Learning with noisy labels is gaining increasing attention as deep neural networks are shown to be extremely vulnerable to noisy labels [1, 2, 11]. There are three major approaches: label correction, which uses the prediction of the model as the new label [11, 12, 13]; sample selection, which uses the small-loss criterion to selectively update the model [1, 11, 12]; and learning robust representations [11, 12, 13]. Tra-NFS [] uses a module inside the Transformer [11] to weigh down the noisy shots. Compared to 2D classification, 3D point cloud segmentation is more challenging as it requires per-point classification and point clouds have a much larger intra-class variance. Thus, the 2D methods, which only generate one robust prototype per class, fail on R3DFSSeg. ## 3 Our Method Problem Formulation. The few-shot point cloud segmentation task involves two datasets: \(\mathcal{T}_{base}\) and \(\mathcal{T}_{novel}\), sampled from the disjoint classes \(\mathcal{C}_{base}\) and \(\mathcal{C}_{novel}\), respectively. The goal is to learn a model from \(\mathcal{C}_{base}\) that generalizes to \(\mathcal{C}_{novel}\). Following previous work [11], we adopt episodic training on \(\mathcal{C}_{base}\) to emulate the few-shot setting during testing. In each \(N\)-way \(K\)-shot episode, \(N\) is the number of classes to be learned, and \(K\) is the number of labeled samples per class. The labeled samples are termed the support set: \(S=\left\{\left(P_{k}^{1},M_{k}^{1}\right)_{k=1}^{K},\ldots,\left(P_{k}^{N},M_{k}^{N}\right)_{k=1}^{K}\right\}\). Each point cloud \(P_{k}^{n}\in\mathbb{R}^{m\times f_{0}}\) contains \(m\) points with input feature dimension \(f_{0}\), and \(M_{k}^{n}\in\mathbb{R}^{m\times 1}\) is the corresponding binary mask indicating the presence of class \(n\).
We are also given a set of \(T\) unlabeled point clouds, termed the query set: \(Q=\left\{\left(R_{i},L_{i}\right)\right\}_{i=1}^{T}\). Each query point cloud \(R_{i}\in\mathbb{R}^{m\times f_{0}}\) is associated with a ground truth label \(L_{i}\in\mathbb{R}^{m\times 1}\) that is only available in the training stage. During testing, \(M_{k}^{n}\) can wrongly assign an object of another class to class \(n\) due to instance-level labeling errors [11]. We denote the noisy mask \(\tilde{M}_{k}^{n}\) and the corresponding point cloud \(\tilde{P}_{k}^{n}\) as a noisy sample, and its correct class assignment as \(Y_{k}\). Consequently, the support set \(S\) becomes a mixture of clean and noisy shots. The goal of robust few-shot point cloud semantic segmentation is to correctly predict the query labels by learning from the noisy support set \(S\). Framework Overview. Fig. 2 illustrates our proposed framework. We choose AttMPTI [11] as our few-shot segmentor since it achieves state-of-the-art performance in few-shot point cloud segmentation. In addition, AttMPTI is potentially robust to the noise when a good feature embedding is guaranteed (Sec. 3.1). In view of this, we propose the Component-level Clean Noise Separation (CCNS) representation learning during meta-training to enhance the discrimination and generalization of the feature embedding for AttMPTI (Sec. 3.2). We further propose the Multi-scale Degree-based Noise Suppression (MDNS) to remove the noisy shots during meta-testing based on their similarity graph (Sec. 3.3). Figure 2: **The architecture of our framework**. 'S' represents the support point cloud and 'Q' represents the query point cloud. The left figure shows the pipeline during meta-training, where we conduct component-level clean noise separation representation learning for each episode class. Components of different classes are pushed away from each other. The right figure shows the pipeline during meta-testing, where we perform multi-scale degree-based noise suppression to remove the noisy shots.
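Since the meta-training pipeline of Fig. 2 relies on synthesized noisy episodes (detailed in Sec. 3.2), we sketch the noise-injection step below. All names are illustrative and not taken from the authors' released code; only the replacement strategy follows the description in the text.

```python
import random

def make_noisy_support(support, pool_by_class, target_class, noise_ratio,
                       seed=0):
    """Synthesize a noisy K-shot support set for one way of an episode.

    support:       list of K (point_cloud, mask) shots of `target_class`.
    pool_by_class: dict mapping class names to candidate (cloud, mask) shots.
    Returns the corrupted support set and the indices of the noisy shots.
    """
    rng = random.Random(seed)
    k = len(support)
    noisy_idx = rng.sample(range(k), round(noise_ratio * k))
    other_classes = [c for c in pool_by_class if c != target_class]
    support = list(support)
    for i in noisy_idx:
        wrong = rng.choice(other_classes)
        # a foreground object of another class is mislabeled as target_class
        support[i] = rng.choice(pool_by_class[wrong])
    return support, noisy_idx
```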
### Why Choose AttMPTI? AttMPTI [] is the state-of-the-art few-shot point cloud segmentation method. It consists of a feature extractor that embeds the support and query point clouds into the same metric space, a multi-prototype generation module that generates prototypes from the support set, and a label propagation module that infers the query labels. Compared to ProtoNet [], AttMPTI has several unique components that give it the potential to be robust, in addition to its superior performance. **First**, AttMPTI generates multi-prototypes via FPS [], while ProtoNet uses a mean aggregation of all the relevant class features. The seed points sampled via FPS are able to represent the diversity of the feature space, and each local prototype is generated by clustering each point to the nearest seed point based on the Euclidean distance in the feature space. In this way, the multi-prototypes can inherently separate the clean and noisy points at the prototype level. As shown in Fig. 3, the clean ratio of local prototypes is either 1 (100% clean) or 0 (100% noise), but it seldom produces a half-clean prototype. In comparison, the global prototype used in ProtoNet leads to a clean-noise compound. Figure 3: Comparison of prototype cleanness from different methods on a 5-shot with 40% out-episode noise setting. '1' means the prototype only contains clean-labeled points, and '0' means the prototype only contains points that are incorrectly labeled as the target class. Values between 0-1 represent the portion of clean-labeled points in the prototype. **Second**, AttMPTI infers query labels via label propagation [] in a transductive fashion, while ProtoNet infers each query point independently with the set of class prototypes. The label propagation is based on manifold smoothness, _i.e._ nearby samples in the feature space share the same label, and it has the ability to correct noisy labels []. In contrast, ProtoNet independently and identically predicts the label of each query point based on the global prototypes that are potentially noisy. The lack of reasoning about the relationships among the support and query samples prevents the model from being able to correct the support noise. Although the design of AttMPTI shows a better potential than ProtoNet in resisting the noise existing in the support set, the performance of both the multi-prototype generation and the label propagation is subject to the discriminability of the feature embeddings. To enhance the representation learning, we propose to perform component-level clean-noise separation. ### Component-level Clean Noise Separation Our component-level clean noise separation (CCNS) representation learning aims to enhance the class-wise discrimination in the feature space. We randomly replace some of the \(K\) support shots with shots sampled from other classes during episodic training and induce the model to differentiate clean and noisy shots in the feature space. With these synthesized support sets with noisy labels, we perform clean-noise separation representation learning for each way (_i.e._ class) by optimizing the model with the following class-wise contrastive loss over the \(K\) support shots: \[\mathcal{L}_{\text{CNS}}=\frac{1}{K}\sum_{k=1}^{K}\left(\frac{-1}{|A(z_{k})|}\sum_{z_{g}\in A(z_{k})}\log\frac{\exp\left(z_{k}\cdot z_{g}/\tau\right)}{\sum_{h\neq k}\exp\left(z_{k}\cdot z_{h}/\tau\right)}\right), \tag{1}\]
Consequently, the component-level clean noise separation \(\mathcal{L}_{\text{CCNS}}\) is formulated as: \[\mathcal{L}_{\text{CCNS}}=\frac{1}{KR}\sum_{k=1}^{K}\sum_{i=1}^{R}\left(\frac{ -1}{|A(z_{k}^{i})|}\sum_{z_{k}^{i}\in A(z_{k}^{i})}\log\frac{\exp\left(z_{k}^{ i}\cdot z_{g}^{j}/\tau\right)}{\sum\limits_{h,b\setminus(k,i)}\exp\left(z_{k}^{i} \cdot z_{h}^{b}/\tau\right)}\right), \tag{2}\] where the \(A(z_{k}^{i})=\left\{z_{g}^{j}\mid Y_{g}=Y_{k}\right\}\) is the set of positive samples with the same semantic label \(Y_{g}\) as \(Y_{k}\), and the \(|A(z_{k}^{i})|\) is the cardinality. As shown in Fig. 4, each component represents a different aspect of its corresponding shot in the feature space. Essentially, it forms a multi-view self-supervised contrastive learning for each shot, where the 'view' is a local component in the feature space. Correspondingly, the components at the boarder of the class distribution automatically serve as the hard negative samples to other classes and hard positive samples to its own class, which are the key to a successful contrastive learning [10, 12]. The final optimization objective during the training stage is given by: \[\mathcal{L}=\mathcal{L}_{\text{CE}}+\lambda\mathcal{L}_{\text{CCNS}}, \tag{3}\] where \(\lambda\) is a hyper-parameter to weigh the contribution of \(\mathcal{L}_{\text{CCNS}}\). \(\mathcal{L}_{CE}\) is the original cross-entropy loss in AttMPTI. ### Multi-scale Degree-based Noise Suppression Although the clean and noisy points can separate under the well-learned embedding space, the prototype generation and label propagation module are still exposed to the mislabeled shots during testing time. To reduce their negative influence during testing, we design a degree-based noise suppression scheme to automatically remove the suspicious noisy shots. Specifically, we build a fully connected graph G on the K support shots for each way. We Figure 4: t-SNE [12] visualization of the CCNS on a 5-shot support set with 2 noisy shots. Each dot represents a point in the feature space and each triangle represents a feature component. Different colors represent different classes with blue indicating the target class. The arrow shows the direction to pull the feature components. average the foreground feature \(x_{i}\in\mathbb{R}^{d}\) of the \(i\)-th shot as the feature of node i. The weight \(W_{ij}\) of the edge encodes the affinity between the two end nodes \(i\) and \(j\) as follow: \[W_{ij}:=\begin{cases}\left[x_{i}^{\top}x_{j}\right]_{+}^{\gamma},&\text{ if }i\neq j\\ 0,&\text{otherwise}\end{cases}. \tag{4}\] We then compute the degree \(d_{i}=\sum_{j}W_{ij}\) for each node i. Essentially, the degree reflects the nodes connection in the graph. The noisy shots tend to have lower degree since the clean shots usually form a cluster with the largest size and the noisy shots are scattered in the feature space. Consequently, we identify them based on the clean indicator: \[I_{i}:=\begin{cases}1&\text{ if }d_{i}>thr\\ 0,&\text{ otherwise}\end{cases}, \tag{5}\] where we set the \(thr\) as the mean of the \(\left\{d_{i}\right\}_{i=1}^{K}\). The shots with \(I=0\) are treated as noise and removed. Some point clouds may have complex data distribution that cannot be sufficiently represented by a global representation. To mitigate this problem, we extend the single-level degree-based noise suppression scheme to multi-level, thus yielding the Multi-scale Degree-based Noise Suppression (MDNS). 
Our MDNS can be more robust on complex samples and consequently improves the accuracy of clean-sample identification. Specifically, we add additional levels at which to perform noise suppression. We evenly split the foreground object along the x/y/z coordinates, and denote the number of cuts along the x/y/z coordinates as \(n_{x}\)/\(n_{y}\)/\(n_{z}\). The foreground feature in each sub-shot is locally aggregated, and the feature set for each shot is enlarged to \(\left\{x_{i,s}^{1},\cdots,x_{i,s}^{e}\right\}\), where \(e=n_{x}\times n_{y}\times n_{z}\). The single representation \(x_{i}\) is the case of \(\left\{n_{x}=1,n_{y}=1,n_{z}=1\right\}\) and is considered the coarsest scale, with \(s=1\). We then send them into the noise suppression module to get the clean indicators \(\left\{I_{i,s}^{1},\cdots,I_{i,s}^{e}\right\}\), on which majority voting is performed to get the shot-level indicator \(I_{i,s}\). Lastly, we assemble the final prediction \(I_{i}\) by majority voting over the predictions at each scale \(\left\{I_{i,1},\ldots,I_{i,s}\right\}\). ## 4 Experiments ### Datasets and Noise Settings Datasets. We conduct experiments on **S3DIS** [] and **ScanNet** []. S3DIS contains point clouds of 272 rooms collected from six indoor areas with annotations of 12 semantic classes. ScanNet contains point clouds of 1,513 scans from 707 unique indoor scenes with annotations of 20 semantic classes. Following [], we split each room into non-overlapping blocks of size \(1\text{m}\times 1\text{m}\) on the xy plane. Consequently, S3DIS and ScanNet contain 7,547 and 36,350 blocks, respectively. We sample \(m=2{,}048\) points as the input point cloud from a block. The input feature \(f_{0}\) consists of the XYZ, RGB and normalized XYZ values. During training, we randomly sample one episode by first sampling \(N\) classes from \(\mathcal{C}_{base}\) and then sampling \(NK\) point clouds as the support set and \(T\) point clouds as the query set. The support mask \(M\) and the query label \(L\) are modified from the original annotation to only indicate the presence of the target classes, with irrelevant classes treated as background. The testing episodes are formed in a similar way, except that we exhaustively sample 100 episodes for each combination of \(N\) classes from \(\mathcal{C}_{novel}\). We use data split 0 of [] as the test classes on both datasets. We adopt the mean Intersection over Union (mIoU) as the evaluation metric. Noise Settings. We explore two types of label noise: 1) **In-episode noise** samples noisy shots from the other \(N-1\) classes of the current episode. It studies how the mix of the \(N\) foreground classes affects the prediction of query points. We test the models on in-episode noise ratios of 20% and 40%. 2) **Out-episode noise** samples noisy shots from outside the \(N\) classes in \(\mathcal{C}_{novel}\). It studies how outliers affect the prediction of query points. We test the models on out-episode noise ratios of 40% and 60%. The noise ratio is defined as the percentage of noisy shots among the \(K\) support shots. Following existing literature on learning with noisy labels [], we ensure that the clean shots remain the majority of each support set. In the qualitative comparisons, our method correctly segments the target points while AttMPTI fails. We notice that our model is slightly worse than AttMPTI in the 0% setting in Tab. 2. We postulate that our method can predict correct labels, but the noisy ground truths of ScanNet [] cannot reflect the true performance of our method.
This postulation is evidenced by the clear superiority of our method over the baseline methods on S3DIS, which is a dataset with clean ground truths. It suggests that our method can adapt to an unknown test environment (both clean and noisy tests), which is important for model deployment in the real world. The 2D robust few-shot learner Tra-NFS [] performs poorly on R3DFSSeg due to the severe modality gap, _i.e._ point clouds have larger intra-class variance than 2D images, making it hard for Tra-NFS to detect clean shots. The 3D robust point cloud segmentor PNAL also fails in the few-shot setting due to the small support set in each episode. We further notice that in-episode noise has a larger negative influence than out-episode noise, _e.g._ 40% in-episode noise vs 40% out-episode noise. We believe the reason is that the features in each foreground class usually form a compact cluster. In-episode noise causes the labels within this compact cluster to differ, which severely confuses the model about which class the cluster belongs to. In contrast, out-episode noise is usually separated from the foreground classes in the feature space, and is less likely to influence them. High-way setting. Tab. 4 shows the results of the 5-way 5-shot setting on ScanNet. Our model again significantly outperforms AttMPTI in all noise settings. ## 5 Conclusion In this paper, we address the new task of robust few-shot point cloud segmentation, which is a more general setting that considers label noise in the support set. We design the Component-level Clean Noise Separation (CCNS) representation learning to learn a discriminative feature embedding. Our CCNS encourages the features from different classes to stay away from each other, and concurrently induces the clean shots to form the largest cluster in the feature space. Leveraging the clean samples identified by our CCNS, we further propose the Multi-scale Degree-based Noise Suppression (MDNS) to remove the noisy shots before prototype generation, based on their affinity with the other samples in the support set. Experimental results that outperform the baselines demonstrate the feasibility of our proposed method. Acknowledgement. This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-RP-2021-024), and the Tier 2 grant MOE-T2EP20120-0011 from the Singapore Ministry of Education. This research is also supported by the SUTD-ZJU Thematic Research Grant RS-MEZJU-00031. The work was fully done at the National University of Singapore. ## Appendix A Ablation Study Analysis of different R values. Tab. A1 shows the ablation study of different numbers of components per shot in the component-level clean noise separation. ‘R=1’ is the shot-level representation. The performance of ‘R=1’ is generally worse than that of the component-level contrastive learning, which verifies that the features are sub-optimal under a single holistic aggregation. By dividing into local components, we obtain more fine-grained and diverse positive and negative samples, with ‘R=4’ giving the best performance. Analysis of different noise ratios in CCNS. We analyze different combinations of noise ratios in the episodic training, since our component-level clean noise separation is conducted among the clean and noisy shots.
‘{0.2,0.4}’ has a large performance drop compared with ‘{0,0.2,0.4}’, which suggests that it is necessary to include noise-free episodes during training. \begin{table} \begin{tabular}{c|c|c c|c c} \hline \multirow{2}{*}{model} & \multirow{2}{*}{0\%} & \multicolumn{2}{c|}{In-episode Noise} & \multicolumn{2}{c}{Out-episode Noise} \\ \cline{3-6} & & 20\% & 40\% & 40\% & 60\% \\ \hline AttMPTI & 32.75 & 27.96 & 20.72 & 23.89 & 17.54 \\ **Ours** & 32.74 & **30.79** & **26.73** & **28.13** & **21.22** \\ \hline \end{tabular} \end{table} Table 4: 5-way 5-shot setting on ScanNet. By further adding a noise ratio of 0.6 (with the restriction that the noisy shots of any class should not outnumber the clean shots), there is again a significant drop in performance. We conclude that only a mix of a proper portion of noisy and clean episodes during training brings a decent improvement in the noisy test. Analysis of different scales in MDNS. Tab. A3 presents the analysis of different scales in the multi-scale degree-based noise suppression. Due to space limitations, we only provide a comparison of selected scales from the many possible combinations. We first analyze what constitutes a good scale. The holistic scale \(\{1/1/1\}\) almost always gives decent performance, since the mean representation covers the general information. The performance varies a lot when the foreground objects are divided into fine-grained scales. By comparing \(\{2/2/1\}\), \(\{1/2/2\}\) and \(\{2/1/2\}\), we can see that a cut on the z-axis causes a significant drop in performance in the heavy-noise setting. By comparing \(\{3/3/1\}\) with \(\{2/2/1\}\), we can see that cuts that are too fine-grained cause a performance drop due to the severe lack of global information in the sub-shots. Overall, \(\{1/1/1\}\) and \(\{2/2/1\}\) are good scales, and their combination achieves the best performance. ## Appendix C Experiment Results on ScanNet Effectiveness of CCNS and MDNS. We analyze the effectiveness of our proposed component-level clean noise separation (CCNS) and multi-scale degree-based noise suppression (MDNS) on ScanNet in Tab. C4. Both CCNS and MDNS are effective, and their combination achieves the best overall performance. It is worth highlighting that the robustness of AttMPTI is improved by simply adding our feature representation learning, _i.e._ CCNS. This verifies our claim that AttMPTI has the potential to be noise robust (through FPS-based multi-prototype generation and label propagation), yet is subject to how discriminative the feature embedding is. Qualitative Results. Fig. C3 presents the qualitative comparison between our method and AttMPTI under 2-way 5-shot point cloud segmentation with 40% out-episode noise on ScanNet []. With the interference of the noisy shots, AttMPTI [] either fails to segment the target semantic object (see the result in the first row) or wrongly segments some background points as the target class (see the result in the second row). In contrast, our method gives reliable segmentation results with respect to the target classes. ## Appendix D Data split We follow the data split of [5], and adopt split 0 as the testing classes, as shown in Tab. D5. ## Appendix E Clean Ratio Comparison Tab. E6 lists the clean ratios of the original support set (‘Original’) and the filtered support set produced by the MDNS (‘Ours’) during meta-testing.
The clean ratio in each noise setting is obtained by first computing the percentage of clean shots in the corresponding support set of one episode and then averaging these percentages over all episodes. As can be clearly seen from Tab. E6, our method significantly improves the clean ratio in all the noise settings. ## Appendix F Baseline Setups We compare our method with the few-shot point cloud semantic segmentation (3DFSSeg) methods AttMPTI [5] and ProtoNet [5], the robust few-shot learning (R2DFSL) method Tra-NFS, and the robust point cloud semantic segmentation (R3DSeg) method PNAL [20]. All methods use the same feature extractor as AttMPTI for a fair comparison. We follow the official code of AttMPTI to train ProtoNet and AttMPTI. For Tra-NFS, we adopt a three-layer transformer encoder to generate robust prototypes. We also randomly inject noise into the support set by sampling point clouds containing foreground objects from other classes during meta-training. For PNAL, we apply its robust training algorithm to each noisy support set and then test the performance on the corresponding query point cloud in each episode during meta-testing. We do not carry forward the knowledge from one episode to the next, as suggested in [20].
Few-shot point cloud semantic segmentation aims to train a model that can quickly adapt to new, unseen classes using only a few support samples. However, the noise-free assumption on the support set can easily be violated in many real-world settings. In this paper, we aim to improve the robustness of few-shot point cloud segmentation by mitigating the influence of noisy support sets at test time. To this end, we first propose Component-level Clean Noise Separation (CCNS) representation learning to learn a discriminative feature representation that separates the clean samples of the target classes from the noisy samples. Leveraging the clean samples separated by CCNS, we further propose Multi-scale Degree-based Noise Suppression (MDNS) to remove the noisy samples from the support set. Various noise settings
2309.09467
A model of stochastic memoization and name generation in probabilistic programming: categorical semantics via monads on presheaf categories
Stochastic memoization is a higher-order construct of probabilistic programming languages that is key in Bayesian nonparametrics, a modular approach that allows us to extend models beyond their parametric limitations and compose them in an elegant and principled manner. Stochastic memoization is simple and useful in practice, but semantically elusive, particularly regarding dataflow transformations. As the naive implementation resorts to the state monad, which is not commutative, it is not clear if stochastic memoization preserves the dataflow property -- i.e., whether we can reorder the lines of a program without changing its semantics, provided the dataflow graph is preserved. In this paper, we give an operational and categorical semantics to stochastic memoization and name generation in the context of a minimal probabilistic programming language, for a restricted class of functions. Our contribution is a first model of stochastic memoization of constant Bernoulli functions with a non-enumerable type, which validates data flow transformations, bridging the gap between traditional probability theory and higher-order probability models. Our model uses a presheaf category and a novel probability monad on it.
Younesse Kaddar, Sam Staton
2023-09-18T04:02:03
http://arxiv.org/abs/2309.09467v2
A Model of Stochastic Memoization and Name Generation in Probabilistic Programming: Categorical Semantics via Monads on Presheaf Categories ###### Abstract Stochastic memoization is a higher-order construct of probabilistic programming languages that is key in Bayesian nonparametrics, a modular approach that allows us to extend models beyond their parametric limitations and compose them in an elegant and principled manner. Stochastic memoization is simple and useful in practice, but semantically elusive, particularly regarding dataflow transformations. As the naive implementation resorts to the state monad, which is not commutative, it is not clear if stochastic memoization preserves the dataflow property - _i.e._ whether we can reorder the lines of a program without changing its semantics, provided the dataflow graph is preserved. In this paper, we give an operational and categorical semantics to stochastic memoization and name generation in the context of a minimal probabilistic programming language, for a restricted class of functions. Our contribution is a first model of stochastic memoization of constant Bernoulli functions with a non-enumerable type, which validates data flow transformations, bridging the gap between traditional probability theory and higher-order probability models. Our model uses a presheaf category and a novel probability monad on it. probabilistic programming, quasi-Borel spaces, synthetic measure theory, stochastic memoization, name generation, categorical semantics, commutative monads, nominal sets. ## 1 Introduction Bayesian nonparametric models are a powerful approach to statistical learning. Unlike parametric models, which have a fixed number of parameters, nonparametric models can have an unbounded number of parameters that grows as needed to fit complex data. This flexibility allows them to capture subtle patterns in data that parametric models may miss, and it makes them more composable, because they are not arbitrarily truncated. Prominent examples of nonparametric models include Dirichlet process models for clustering similar data points, and the Infinite Relational Model for automatically discovering latent groups and features, amongst others. These infinite-dimensional models can accommodate an unbounded number of components, clusters, or other features in order to fit observed data as accurately as possible. Probabilistic programming is a powerful method for programming nonparametric models. _Stochastic memoization_ [47, 57] has been identified as a particularly useful technique in this setting. This paper is about semantic foundations for stochastic memoization. In deterministic memoization [38], the idea is to compute a function the first time it is called with a particular argument, and store the result in a memo-table. When the function is called again with the same argument, the memo-table is used, resulting in performance improvement but no semantic difference. Stochastic memoization is this memoization applied to functions that involve random choices, and so a memoized function is semantically different from a non-memoized one, because the random choices will only be made once for each argument. We illustrate this with a simple example; this is informal and we consider a precise language and semantics in Section 3. Consider a function \(f\) that returns a random number in \([0,1]\) for each argument. It might be written \(f(x)=\texttt{uniform}\).
One run of the program might call \(f\) with various arguments, and example runs are as follows: \[\begin{array}{l|cccccc}\text{\it Calls to $f$ in a particular run of a program}:&f(0)&f(1)&f(0)&f(2)&f(1)&f(3)&\dots\\ \hline\text{\it Results of calls in a run without memoization:}&0.43&0.01&0.72&0.26&0.48&0.16&\dots\\ \text{\it Results of calls in a run with memoization:}&0.43&0.01&\textbf{0.43}&0.26&\textbf{0.01}&0.16&\dots \end{array}\] Thus in the memoized version, when the function is called again with the same value, the previous result is recalled, and the random choices are not made again. (Note that although this is called'stochastic memoization', the terminology is perhaps confusing: the memoization always happens, and it is not 'randomly deciding whether or not to memoize'.) From a semantic perspective, the role of stochastic memoization is clear when we use a monad-based interpretation with a probability monad \(\mathtt{Prob}\). This might be thought of as the Giry monad [15] or a probabilistic powerdomain [20, 25], or a Haskell monad (e.g. [10]). A distribution on a type \(\mathtt{b}\) with parameters from \(\mathtt{a}\) has type \(\mathtt{a}\rightarrow\mathtt{Prob}(\mathtt{b})\). On the other hand, a random function is a probability distribution on the type of deterministic functions, having type \(\mathtt{Prob}(\mathtt{a}\rightarrow\mathtt{b})\). Whereas parameterized distributions are a key idea in parametric statistics, random functions are a key idea in nonparametric statistics. And stochastic memoization is a higher-order function with probabilistic effects, of type \[\mathtt{mem}::(\mathtt{a}\rightarrow\mathtt{Prob}\mathtt{b})\rightarrow\mathtt{ Prob}(\mathtt{a}\rightarrow\mathtt{b})\] that converts parameterized distributions into random functions, by making the random choice once for each argument. This mem combinator plays a crucial role in Church [17] and WebPPL [19], and appears with this type in our Haskell library LazyPPL [52]. Stochastic memoization also plays a role in Blog [39], Hansei [29], and many other languages (e.g. [5, 11]). It is not difficult to implement stochastic memoization, by using a memo-table. Nonetheless, its semantic properties remain elusive and developers have noted bugs and complications (e.g. [16, 30]). Moreover, the existing semantic models of probability (such as [20, 21, 25]) only support mem for very restricted domain types \(\mathtt{a}\) (see SS1). In particular our own Haskell library [52] supports stochastic memoization but the recent semantic analysis [10] only explains it at certain domain types. The point of this paper is to extend this semantic analysis of stochastic memoization to a broader class of domains. **First example: White noise in a non-parametric clustering model.** One common first example of stochastic memoization is as follows. Suppose we have a finite set of individuals, and we want to group them into an unknown number of clusters, and then assign attributes to the clusters. For example, we may want to form clusters and consider attributes on the clusters such as 'Brexit-supporters','mean geographic latitude/longitude', 'geographic variance','mean salary', and so on. A popular route is the 'Dirichlet process with memoization', as follows, for which a generative model has the following pseudocode (see e.g. [18, 19, 47][14]): 1. We randomly decide which proportion of individuals are in each cluster. We assign a unique identifier to each cluster, from some space \(\mathbb{A}\) of identifiers. 
One might use the Dirichlet process with a diffuse base measure on \(\mathbb{A}\), for example the normal distribution on the real numbers. 2. Assign attributes to the cluster identifiers. For example, depending on whether that cluster supports Brexit, assign either true or false to the identifier. This particular assignment is a sample from a random function in \((\mathbb{A}\to 2)\). This distribution might come from memoizing a constant Bernoulli distribution, assigning 'true' to any cluster identifier with probability \(0.5\). 3. Steps (i)-(ii) are generative, and we could run them to get some synthetic data. The idea of Bayesian clustering is to start with steps (i)-(ii) as a reasonable _prior_ distribution, in generative form, and to combine this with actual data to arrive at a _posterior_ distribution. In this example the actual data might come from a telephone survey, and we use conditional probability (aka Bayesian inversion) to arrive at a posterior distribution on the cluster proportions and their attributes. We can then use this to make predictions. The constant Bernoulli memoization is a reasonable prior for Brexit support, but the posterior will typically be much more complicated, with various correlations, etc. In this paper, we focus on step (ii), stochastic memoization: steps (i) and (iii) are studied extensively elsewhere (e.g. see [15] in the statistics literature, or [2, 6, 7] in the semantics literature, and references therein). This simple example of a memoized constant Bernoulli function is easy to implement using a memo-table, but is already semantically complicated. If we put \(\mathbb{A}=\mathbb{R}\), the real numbers, for the base measure, as is common in statistical modelling, then the memoized constant Bernoulli distribution on \((\mathbb{A}\to 2)\) is \(1\)-dimensional white noise: intuitively, for every \(x\in\mathbb{R}\) we toss a coin to pick true or false, making an uncountable number of independent random choices. (As an aside, we note that we could combine steps (i) and (ii), using a complicated base measure for the Dirichlet process that includes all the attributes. This model would not be compositional, and in any case, some kind of memoization would still be needed to implement the Dirichlet process.) #### Challenge. In this paper, we address the challenge of showing that the following items are consistent: 1. a type \(\mathbb{A}\) with a diffuse probability distribution (Def 2.2); 2. a type bool of Booleans with Bernoulli distributions (i.e. tossing coins, including biased coins); 3. a type of functions \([\mathbb{A}\to\mathsf{bool}]\), with function application (4); 4. stochastic memoization of the constant Bernoulli functions (3); 5. the language supports the dataflow property (Def. 2.3). These items are together inconsistent with traditional measure theory, as we discuss in Section 2.3, where we also make the criteria precise. Nonetheless (1)-(4) are together easy to implement in a probabilistic programming language, and useful for Bayesian modelling. Item (5) is a very useful property for program reasoning and program optimization. Item (5) is also a fundamental conceptual aspect of axiomatic probability theory, since in the measure-theoretic setting it amounts to Fubini's theorem [33] and the fact that probability measures have mass \(1\), and in the categorical abstraction of Markov categories [14] it amounts to the interchange law of affine monoidal categories. There _are_ measure-theoretic models where some of these items are relaxed (§2.1-2.3).
For example, if we drop the requirement of a diffuse distribution, then there are models using Kolmogorov extension (§2.2). A grand challenge is to further generalize these items, for example to allow memoization of functions \(A\to B\) for yet more general \(A\) and \(B\), and to allow memoization of all definable expressions. Since the above five items already represent a significant challenge, and our semantic model is already quite complicated, we chose to focus on a 'minimal working example' for this paper. To keep things simple and minimal, in this paper we side-step measure-theoretic issues by noticing that the equations satisfied by a diffuse probability distribution are exactly the equations satisfied by name generation (e.g. [51, §V.B]). Because of this, we can use categorical models for name generation (following e.g. [42, §4.1.4], [50, §3.5]) instead of traditional measure theory. Name generation can certainly be implemented using randomness, and there are no clashes of fresh names if and only if the names come from a diffuse distribution (see also e.g. [49]). On the other hand, if we keep things simple by regarding the generated names as _pure names_ [41], we avoid any other aspects of measure theory, such as complicated manipulations of the real numbers. **Contributions.** To address the challenge of the consistency of items (1)-(5) above, our main contributions are then as follows. 1. We first provide an operational semantics for a minimal toy probabilistic programming language that supports stochastic memoization and name generation (§4). 2. We then (§5) construct a cartesian closed (for function spaces) categorical model of this language endowed with an affine commutative monad (Theorem 5.5). In common with other work on local state (e.g. [29, 45]), we use a functor category semantics, indexing sets by possible worlds. In this paper, those worlds are finite fragments of a memo-table. 3. We prove that our denotational semantics is sound with respect to the operational semantics, ensuring the correctness of our approach and validating that lines can be reordered in the operational semantics (Theorem 5.10). The class of functions that can be memoized includes constant Bernoulli functions. We call these functions _freshness-invariant_ (Definition 5.7). The soundness theorem (5.10) is not trivial because the timing of the random choices differs between the operational and denotational semantics. In the operational semantics, the memo-table is partial, and populated lazily as needed, when functions are called with arguments. This is what happens in all implementations. However, this timing is intensional, and so by contrast, in the denotational semantics, the memo-table is always totally populated as soon as the current world is extended with any functions or arguments. 4. Finally, we present a practical Haskell implementation [27] which compares the small-step and big-step operational semantics and the denotational semantics, demonstrating the applicability of our results (§6). ## 2 Stochastic memoization by example This section discusses the law of stochastic memoization and provides examples in finite, countable, and non-enumerable domain settings. We then address the challenges posed by the naive use of the state monad, and we clarify our objective: finding a model of probability that supports stochastic memoization over non-enumerable domains, satisfies the dataflow property, and has function spaces.
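Throughout this section it may help to keep in mind the straightforward memo-table implementation of mem alluded to above. The following is a minimal Python sketch, not the paper's Haskell metalanguage, and all names are illustrative:

```python
import random

def mem(f):
    """Stochastic memoization: the random choices of f are made once
    per argument, stored in a memo-table, and then reused."""
    table = {}
    def memoized(x):
        if x not in table:
            table[x] = f(x)  # the random choice happens here, once per x
        return table[x]
    return memoized

flip = lambda theta: random.random() < theta  # Bernoulli(theta)
g = mem(lambda x: flip(0.5))  # a memoized constant Bernoulli function

assert g(3.14) == g(3.14)  # same argument, same result, every time
```

Note that the memo-table is hidden state; the semantic question of this paper is whether this particular use of hidden state is compatible with the dataflow property discussed below.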
In what follows, we use two calculi: (a) The internal metalanguage of a cartesian closed category with a strong monad Prob, for which we use Haskell notation, but which is roughly Moggi's monadic metalanguage [43, §2.2]. (b) An ML-like programming language which is more useful for practical programming, but which would translate into language (a); this is roughly Moggi's 'simple programming language' [43, §2.3]. We assume passing familiarity with probability and monadic programming in this section, but the informal discussion here sets the context, and we move to more formal arguments in Section 3. (Recall some Haskell notation: we write \(\backslash\mathtt{x}\to\mathtt{t}\) for lambda abstraction; \(\mathbin{>\!\!>\!\!=}\) for monadic bind, i.e. Kleisli composition; **return** for the unit; a **do** block allows a sequence of monadically bound instructions. We write \(\mathbf{const}\ \mathtt{x}\) for the constant \(\mathtt{x}\) function, \(\mathbf{const}\ \mathtt{x}=\backslash\mathtt{y}\to\mathtt{x}\).) **Memoization law.** **Definition 2.1**: A strong monad _supports stochastic memoization of type_ \(\mathtt{a}\to\mathtt{b}\) if it is equipped with a morphism \(\mathbf{mem}::(\mathtt{a}\to\mathtt{Prob}\ \mathtt{b})\to\mathtt{Prob}(\mathtt{a}\to\mathtt{b})\) that satisfies the following equation in the metalanguage, for every \(\mathtt{x_{0}}::\mathtt{a}\) and \(\mathtt{f}::\mathtt{a}\to\mathtt{Prob}\ \mathtt{b}\): \[\mathbf{mem}\ \mathtt{f}=\mathtt{f}\ \mathtt{x_{0}}\mathbin{>\!\!>\!\!=}\big(\backslash\mathtt{y_{0}}\to\mathbf{mem}\ \mathtt{f}\mathbin{>\!\!>\!\!=}\big(\backslash\mathtt{fMem}\to\mathbf{return}\ (\backslash\mathtt{x}\to\mathbf{if}\ \mathtt{x}=\mathtt{x_{0}}\ \mathbf{then}\ \mathtt{y_{0}}\ \mathbf{else}\ \mathtt{fMem}\ \mathtt{x})\big)\big) \tag{1}\] As noted at the beginning of this section, we will pass between an internal metalanguage for strong monads, and an ML-like programming language that would be interpreted using strong monads. In Section 3 we introduce this programming language precisely, but for now we note that it has a special syntax \(\lambda_{\mathfrak{a}}\,x.\,u\), meaning \(\mathbf{mem}\,(\backslash x\to u)\), since this is a common idiom1.
The law of Definition 2.1 requires equations such as: \[\begin{array}{l}\text{let val }f\leftarrow\lambda_{\mathfrak{a}}\,x.\,u\ \text{in}\\ f@n\end{array}\ \cong\ u[n/x]\qquad\qquad\begin{array}{l}\text{let val }f\leftarrow\lambda_{\mathfrak{a}}\,x.\,u\ \text{in}\\ \text{let val }v_{1}\leftarrow f@n\ \text{in}\\ \text{let val }v_{2}\leftarrow f@n\ \text{in}\\ \text{return}(v_{1},v_{2})\end{array}\ \cong\ \begin{array}{l}\text{let val }v\leftarrow u[n/x]\ \text{in}\\ \text{return}(v,v)\end{array} \tag{2}\]
### Memoization with enumerable domain A classic example of memoization with an enumerable domain is the Poisson point process, which can be written by memoizing a constant exponential distribution for the gaps between consecutive points: \[\begin{array}{l}\text{poissonPP}\ ::\ \text{\bf Double}\to\text{\bf Double}\to\text{\bf Prob}\ \text{\bf[Double]}\\ \text{poissonPP lower rate}\ =\ \text{\bf do}\ \{\ \text{gaps}\leftarrow\text{\bf mem}\ (\text{const}\ (\text{exponential rate}))\ ;\ \text{\bf return}\ (\text{scanl}\ (+)\ \text{lower}\ (\text{map gaps}\ [1\,..\,]))\ \}\end{array}\] We implement memoization with enumerable \(\mathtt{a}\) in the Haskell LazyPPL library [11] without using state, instead using Haskell's laziness and tries, following [23] (see [11]). We use the Poisson process extensively in the demonstrations for LazyPPL [53]. Semantic interpretation with enumerable domains. Memoization with enumerable domains is supported by a denotational semantics using the category of measurable spaces and the Giry monad [16]. Although the category is not Cartesian closed, the function space \(B^{\mathbb{N}}\) _does_ exist for all standard Borel \(B\), and is given by the countable product of \(B\) with itself, \(\prod_{\mathbb{N}}B\). Memoization amounts to using Kolmogorov's extension theorem to define a map \((G\,B)^{\mathbb{N}}\to G(B^{\mathbb{N}})\) (see [46, §4.8] and [10, Thm. 2.5]). ### Memoization with non-enumerable/diffuse domain We now move beyond enumerable domains, to formalize the challenge from Section 1. In Section 1 we illustrated this with a clustering model. See [53] for the full implementation in our Haskell library, LazyPPL, along with other models that also use memoization, including a feature extraction model that uses the Indian Buffet Process, and relational inference with the infinite relational model (following [19]). Rather than axiomatizing uncountability, we consider diffuse distributions. **Definition 2.2**: [Diffuse distribution] Let \(\mathtt{a}\) be an object with an equality predicate \(((\mathtt{a},\mathtt{a})\to\mathsf{bool})\). A _diffuse distribution_2 is a term \(\mathfrak{p}\) such that Footnote 2: Diffuse measures are often called 'atomless' in probability theory. We will also want to regard names in name generation as atomic, so we avoid this clash of terminology.
\[\text{\bf do}\ \{\,\mathtt{x}\leftarrow\mathfrak{p}\ ;\ \mathtt{y}\leftarrow\mathfrak{p}\ ;\ \text{\bf return}\ (\mathtt{x}=\mathtt{y})\,\}\qquad\text{is semantically equal to}\qquad\text{\bf return}\ \mathsf{false}.\] For example, in a probabilistic programming language over the real numbers, we can let \(\mathtt{a}\) be the type of real numbers and let \(\mathfrak{p}\) be a uniform distribution on \([0,1]\), or a normal distribution, or an exponential distribution. These are all diffuse in the above sense. The Bernoulli distribution on the booleans is not diffuse, because there is always a chance that we may get the same result twice in succession. For the reader familiar with traditional measure theory, we recall that if \(\mathfrak{p}\) is diffuse then \(\mathtt{a}\) is necessarily an uncountable space, for any probability distribution on a countable discrete space must give non-zero measure to at least one singleton set. The implementation trick using tries from Section 2.2 will not work for diffuse measures, because we cannot enumerate the domain of a diffuse distribution. It is still possible to implement memoization using state and a memo-table (e.g. [53]). Unlike a fully stateful effect, however, in this paper we argue that stochastic memoization is still compatible with commutativity/dataflow program transformations: **Definition 2.3**: [Dataflow property] A programming language is said to have the _dataflow property_ if program lines can be reordered (commutativity) and discarded (discardability, or affineness) provided that the dataflow is preserved. In other words, the language satisfies the following commutativity and discardability equations: \[\text{\bf do}\ \{\mathtt{x1}\leftarrow\mathtt{t1}\ ;\ \mathtt{x2}\leftarrow\mathtt{t2}\ ;\ \mathtt{u}\}\ =\ \text{\bf do}\ \{\mathtt{x2}\leftarrow\mathtt{t2}\ ;\ \mathtt{x1}\leftarrow\mathtt{t1}\ ;\ \mathtt{u}\}\qquad\text{where }\mathtt{x1}\notin\mathrm{fv}(\mathtt{t2})\text{ and }\mathtt{x2}\notin\mathrm{fv}(\mathtt{t1}) \tag{5}\] \[\text{\bf do}\ \{\mathtt{x1}\leftarrow\mathtt{t1}\ ;\ \mathtt{t2}\}\ =\ \mathtt{t2}\qquad\text{where }\mathtt{x1}\notin\mathrm{fv}(\mathtt{t2}) \tag{6}\] The dataflow property expresses the fact that, to give a meaning to programs, the only thing that matters is the topology of dataflow diagrams. These transformations are very useful for inference algorithms and program optimization. But above all, on the foundational side, dataflow is a fundamental concept that corresponds to monoidal categories and is crucial for any model of probability. As for monoidal categories, a strong monad is commutative (5) if and only if its Kleisli category is monoidal (commutativity is the monoidal interchange law), and affine (6) if the monoidal unit is terminal. In synthetic probability theory, dataflow is regarded by various authors as a fundamental aspect of the abstract axiomatization of probability: Kock [31] argues that any monad that is strong commutative and affine can be abstractly viewed as a probability monad, and affine monoidal categories are used as a basic setting for synthetic probability by several authors [7, 13, 55, 56]. The reader familiar with measure-theoretic probability will recall that the proof that the Giry monad satisfies (5) amounts to Fubini's theorem for reordering integrals (e.g. [51]).
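To illustrate the interaction of Definitions 2.1 and 2.3, the following Python sketch (illustrative names, using the dict-based mem from the sketch in Section 1) empirically checks that reordering two memoized calls with disjoint dataflow does not change the joint distribution:

```python
import random
from collections import Counter

def mem(f):
    table = {}
    def g(x):
        if x not in table:
            table[x] = f(x)
        return table[x]
    return g

def prog(order):
    f = mem(lambda x: random.random() < 0.5)  # memoized constant Bernoulli
    if order == 'ab':
        b1 = f(0); b2 = f(1)   # original line order
    else:
        b2 = f(1); b1 = f(0)   # lines swapped; dataflow preserved
    return (b1, b2)

n = 50_000
print(Counter(prog('ab') for _ in range(n)))  # roughly uniform on 4 outcomes
print(Counter(prog('ba') for _ in range(n)))  # same distribution
```

Of course such a Monte Carlo check is no substitute for the semantic argument; it only illustrates the property that the model of Section 5 establishes.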
Semantic interpretations for diffuse domains. The point of this paper is to provide the first semantic interpretation for memoization of the constant Bernoulli functions (3) with diffuse domain (Def. 2.2). We emphasize that although other models can support some aspects of this, there is no prior work that supports everything. * With countable domain, there is a model in measurable spaces, as discussed in Section 2.2. But there can be no diffuse distribution on a countable space. * In measurable spaces, we can form the uncountable product space \(\prod_{\mathbb{R}}2\) of \(\mathbb{R}\)-many copies of \(2\). We can then define a white noise probability measure on \(\prod_{\mathbb{R}}2\) via Kolmogorov extension (e.g. [45, 4.9(31)]). Moreover, there are diffuse distributions on \(\mathbb{R}\), such as the uniform distribution on \([0,1]\). However, it is known that there is no measurable evaluation map \(\mathbb{R}\times(\prod_{\mathbb{R}}2)\to 2\) (see [1]), and so we cannot interpret function application (4). * In quasi-Borel spaces [21], there is a quasi-Borel space \([\mathbb{R}\to 2]\) of measurable functions, and a measurable evaluation map \(\mathbb{R}\times[\mathbb{R}\to 2]\to 2\), but there is no white noise probability measure on \([\mathbb{R}\to 2]\). The intuitive reason is that, in quasi-Borel spaces, a probability measure on \([\mathbb{R}\to 2]\) is given by a random element, i.e. a morphism \(\Omega\to[\mathbb{R}\to 2]\), which curries to a measurable function \(\Omega\times\mathbb{R}\to 2\). But there is no such measurable function representing white noise (e.g. [27, Ex 1.2.5]). * There are domain-theoretic treatments of probability theory that support Kolmogorov extension, uniform distributions on \(\mathbb{R}\), and function spaces [20, 25]. However, these treatments regard the real numbers \(\mathbb{R}\) as constructive, and hence there are no non-trivial continuous morphisms \(\mathbb{R}\to 2\), and there is no equality test on \(\mathbb{R}\), so that we cannot regard \(\mathbb{R}\) with a diffuse distribution as formalized equationally in Definition 2.2. The same concern seems to apply to recent approaches using metric monads [36]. * The semantic model of beta-bernoulli in [53] is a combinatorial model that includes aspects of the beta distribution, which is diffuse in measure theory. That model does not support stochastic memoization, but as a presheaf-based model it is a starting point for the model in this paper. * There is a straightforward implementation of stochastic memoization that uses local state, as long as the domain supports equality testing [52]. The informal idea is to make the random choices as they are needed, remember them in a memo-table, and keep this memo-table in a local state associated with the function. Therefore one could use a semantic treatment of local state to analyze memoization. For example, one could build a state monad in quasi-Borel spaces. However, state effects in general do not support the dataflow property (Def. 2.3), since we cannot reorder memory assignments in general. Ideally, one could use a program logic to prove that this particular use of state does support the dataflow property. Although there are powerful program logics for local state and probability (e.g. [3]), we have not been able to use them to prove this. There are other models of higher-order probability (e.g. [6, 8, 12]).
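As a bridge to the language of the next section, here is a small Python sketch (illustrative, not part of the paper) of the observation from Section 1 that name generation behaves like a diffuse distribution in the sense of Definition 2.2:

```python
import itertools
import random

_counter = itertools.count()

def fresh():
    """Name generation: a genuinely new atom on each call."""
    return next(_counter)

def p():
    """A diffuse distribution: two independent samples are
    almost surely distinct (Definition 2.2)."""
    return random.random()

# Both validate the equation of Definition 2.2: sampling twice and
# testing for equality (almost surely) yields False.
assert fresh() != fresh()
assert p() != p()   # holds with probability 1
```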
These do not necessarily fit into the monad-based paradigm, but there may be other ways to use them to address the core challenge in Section 1. ## 3 A language for stochastic memoization and name generation Our probabilistic programming language has a minimal syntax, emphasizing the following key features: * **name generation**: we can generate fresh names (referred to as _atomic_ names or _atoms_, in the sense of Pitts' nominal set theory [44]) with constructs such as let \(x\ =\ \mathsf{fresh}()\,\mathsf{in}\,\cdots\). In the terminology of Def. 2.2, this is like a generic diffuse probability measure, since fresh names are distinct. * basic **probabilistic effects**: for illustrative purposes, the only distribution we consider, as a first step, is the Bernoulli distribution (but it can easily be extended to other discrete distributions). Constructs like let \(b\ =\ \mathsf{flip}(\theta)\,\mathsf{in}\,\cdots\) amount to flipping a coin with bias \(\theta\) and storing its result in a variable \(b\). * **stochastic memoization**: if a memoized function (defined with the new \(\lambda_{\mathfrak{a}}\) operator) is called twice on the same argument, it should return the same result (eq. (2)). We have the following base types: \(\mathsf{bool}\) (booleans), \(\mathbb{A}\) (atomic names), and \(\mathbb{F}\) (which can be thought of as the type of memoized functions \(\mathbb{A}\to\mathsf{bool}\)). For the sake of simplicity, we do not have arbitrary function types. In fine-grained call-by-value fashion [34], there are two kinds of judgments: typed values and typed computations. The grammar and typing rules of our language are given in Figure 1. The typing rules are standard, except for the \(\lambda_{\mathfrak{a}}\) operator, which is the key novelty of our language. The typing rule for \(\lambda_{\mathfrak{a}}\) is given in Figure 1 and is explained in the next section. (Also, equality \(v=w\) and memoized function application \(v@w\) are pure computations, _i.e._ in the categorical semantics (section 5.3), they will be composed with the unit of the monad.) ## 4 Operational Semantics We now present a small-step operational semantics for our language. The operational semantics defines the rules for reducing program expressions, which form the basis for understanding the behavior of programs written in the language. Henceforth, we fix a countable set of variables \(x,y,z,\ldots\in\mathsf{Var}\), and consider terms up to \(\alpha\)-equivalence for the \(\lambda_{\mathfrak{a}}\) operator. Since we focus on functions with boolean codomain, our partial memo-tables are represented as partial bigraphs (bipartite graphs). **Definition 4.1**: [Partial bigraph] A partial bigraph \(\mathfrak{g}\stackrel{{\mathrm{def}}}{{=}}(\mathfrak{g}_{L},\mathfrak{g}_{R},E)\) is a finite bipartite graph where the edge relation \(E\colon\mathfrak{g}_{L}\times\mathfrak{g}_{R}\to\{\mathsf{true},\mathsf{false},\bot\}\) is either true, false or undefined (\(\bot\)) on each pair of a left and a right node \((f,a)\in\mathfrak{g}_{L}\times\mathfrak{g}_{R}\). In the following, left nodes will be thought of as function labels and right nodes as atom labels. By abuse of notation, syntactic truth values will be conflated with semantic ones. For a partial graph \(\mathfrak{g}\), \(E(f,a)=\beta\in\{\mathsf{true},\mathsf{false},\bot\}\) will be written \(f\stackrel{{\beta}}{{\to}}a\) when \(\mathfrak{g}\) is clear from the context.
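For concreteness, a partial bigraph as in Definition 4.1 can be sketched as a small Python structure (illustrative only, not the paper's Haskell implementation):

```python
from dataclasses import dataclass, field

@dataclass
class PartialBigraph:
    """Sketch of Definition 4.1: a partial memo-table between
    function labels (left nodes) and atom labels (right nodes).
    Missing entries encode the undefined value (bottom)."""
    left: set = field(default_factory=set)     # function labels f
    right: set = field(default_factory=set)    # atom labels a
    edges: dict = field(default_factory=dict)  # (f, a) -> bool, partial

    def lookup(self, f, a):
        return self.edges.get((f, a))          # None plays the role of bottom

    def record(self, f, a, beta: bool):
        """Fill the memo-table entry once the body has been evaluated."""
        self.left.add(f)
        self.right.add(a)
        self.edges[(f, a)] = beta
```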
### Extended expressions We introduce extended expressions \(e\), by extending the grammar of computations (1) with an extra construct \(\{\!\{u\}\!\}_{\gamma}^{(f,a)}\), recording that the body \(u\) is being evaluated, in the context value \(\gamma\), in order to fill the memo-table entry for the function-atom label pair \((f,a)\). Context values assign to each variable a value built from booleans and node labels, structured as trees with the cartesian product as pairing. **Definition 4.2**: If \(S\) is a finite set, \(\mathsf{Tree}(S)\cong\biguplus_{n\geq 0}C_{n}\,S^{n+1}\) (where \(C_{n}\) is the \(n\)-th Catalan number, and \(C_{n}\,S^{n+1}\) is a coproduct of \(C_{n}\) copies of \(S^{n+1}\), one for each possible bracketing) denotes the set of all possible non-empty trees with internal nodes the cartesian product and leaf nodes taken in \(S\). **Example 4.3**: If \(S\stackrel{{\mathrm{def}}}{{=}}\{s_{1},s_{2}\}\), then \(s_{1}\in\mathsf{Tree}(S),(s_{2},s_{1})\in\mathsf{Tree}(S),(s_{1},(s_{1},s_{2}))\in\mathsf{Tree}(S),\ldots\) **Definition 4.4**: [Set-theoretic denotation of contexts.] Let \(\mathfrak{g}\) be a partial bigraph. The set-theoretic denotation \((\!(-)\!)\) of a context \(\Gamma\) is defined as \((\!(\mathsf{bool})\!)\stackrel{{\mathrm{def}}}{{=}}2\cong\{\mathsf{true},\,\mathsf{false}\}\), \((\!(\mathbb{F})\!)\stackrel{{\mathrm{def}}}{{=}}\mathfrak{g}_{L}\), \((\!(\mathbb{A})\!)\stackrel{{\mathrm{def}}}{{=}}\mathfrak{g}_{R}\), and \((\!(-)\!)\) is readily extended to every context \(\Gamma\). Moreover, in the following, \(\gamma\in(\!(\Gamma)\!)\subseteq\mathsf{Tree}(2+\mathfrak{g}_{L}+\mathfrak{g}_{R})^{\mathsf{Var}}\) denotes a context value. **Example 4.5**: If \(\Gamma\stackrel{{\mathrm{def}}}{{=}}(x:\mathsf{bool},y:\mathbb{F},z:((\mathbb{F}\times 2)\times\mathbb{A}))\), then \((\!(\Gamma)\!)\stackrel{{\mathrm{def}}}{{=}}\{x\mapsto 2,\,y\mapsto\mathfrak{g}_{L},\,z\mapsto((\mathfrak{g}_{L}\times 2)\times\mathfrak{g}_{R})\}\), and an example of a context value is \(\gamma\stackrel{{\mathrm{def}}}{{=}}\{x\mapsto\mathsf{true},y\mapsto f_{0},z\mapsto((f_{1},\mathsf{true}),a_{0})\}\). We now present terminal computations, redexes, reduction contexts, and configurations (Table 3). Configurations encapsulate the computation state (a context value, an extended expression, a partial graph, and a map from the partial graph to closures), which helps keep track of different parts of the program as the computation proceeds. ### Reduction rules Let \((\!(-)\!)_{\gamma}\) be the function evaluating an expression value in a context value \(\gamma\) (_e.g._ \((\!(x)\!)_{\gamma}=\gamma(x)\), \((\!(\mathsf{true})\!)_{\gamma}=\mathsf{true}\)). We can define the operational semantics of the language using reduction rules. They provide a step-by-step description of how expressions are evaluated and transformed during execution, following a left-most outer-most strategy, with lexical binding.
Given a configuration \((\gamma,u,\mathfrak{g},\lambda)\) (note that if \(u\) is of the form \(\{\!\{u^{\prime}\}\!\}_{\gamma^{\prime}}^{(f,a)}\), then it is assumed that the function-atom label pair \((f,a)\in\mathfrak{g}_{L}\times\mathfrak{g}_{R}\)), we apply the corresponding reduction rules. **Example 4.6**: We now give an example showcasing how these reduction rules apply to a program combining name generation, a coin flip, function abstraction, and stochastic memoization. An atom \(x_{0}\) is generated and used as an argument for a function \(f_{1}\), which performs a coin flip if the argument matches \(x_{0}\). The outcome is then memoized and the result is returned by the second application. There are two execution traces, depending on the outcome of the coin flip (\(\beta\in\{\mathsf{true},\mathsf{false}\}\)). \[\Big{(}\emptyset,\ \begin{array}[t]{l}\mathsf{let\ val}\ x_{0}\leftarrow\mathsf{fresh}()\ \mathsf{in}\\ \mathsf{let\ val}\ f_{1}\leftarrow\lambda_{\mathfrak{a}}x.\ (\mathsf{let\ val}\ b\leftarrow(x=x_{0})\ \mathsf{in\ if}\ b\ \mathsf{then}\ \mathsf{flip}(\tfrac{1}{2})\ \mathsf{else}\ \mathsf{false})\ \mathsf{in}\\ \mathsf{let\ val}\ f_{2}\leftarrow\lambda_{\mathfrak{a}}y.\ f_{1}@y\ \mathsf{in}\ f_{2}@x_{0},\end{array}\ (\emptyset,\emptyset,\emptyset),\ \emptyset\Big{)}\] \[\to\Big{(}\{x_{0}\mapsto a_{0}\},\ \begin{array}[t]{l}\mathsf{let\ val}\ f_{1}\leftarrow\lambda_{\mathfrak{a}}x.\ (\mathsf{let\ val}\ b\leftarrow(x=x_{0})\ \mathsf{in\ if}\ b\ \mathsf{then}\ \mathsf{flip}(\tfrac{1}{2})\ \mathsf{else}\ \mathsf{false})\ \mathsf{in}\\ \mathsf{let\ val}\ f_{2}\leftarrow\lambda_{\mathfrak{a}}y.\ f_{1}@y\ \mathsf{in}\ f_{2}@x_{0},\end{array}\ (\emptyset,\{a_{0}\},\emptyset),\ \emptyset\Big{)}\]
\[\rightarrow^{2}\Big{(}\overbrace{\{x_{0}\mapsto a_{0},\,f_{1}\mapsto f_{1},\,f_{2}\mapsto f_{2}\}}^{\stackrel{{\mathrm{def}}}{{=}}\gamma_{0}},\quad f_{2}@x_{0},\quad(\{f_{1},f_{2}\},\;\{a_{0}\},\;\{f_{1}\xrightarrow{\bot}a_{0},\,f_{2}\xrightarrow{\bot}a_{0}\}),\] \[\qquad\quad\{f_{1}\mapsto(\lambda_{\mathfrak{a}}\,x.\ \mathsf{let\ val}\ b\leftarrow(x=x_{0})\ \mathsf{in\ if}\ b\ \mathsf{then}\ \mathsf{flip}(\tfrac{1}{2})\ \mathsf{else}\ \mathsf{false},\ \{x_{0}\mapsto a_{0}\}),\ f_{2}\mapsto(\lambda_{\mathfrak{a}}\,y.\ f_{1}@y,\ \{x_{0}\mapsto a_{0},f_{1}\mapsto f_{1}\})\}\Big{)}\] The two memoized abstractions are now stored as closures, and the memo-table entries for the pairs \((f_{1},a_{0})\) and \((f_{2},a_{0})\) are still undefined. The remaining steps evaluate \(f_{2}@x_{0}\) and then \(f_{1}@x_{0}\): the coin flip yields \(\beta\in\{\mathsf{true},\mathsf{false}\}\) with probability \(\tfrac{1}{2}\) each, the memo-table is updated with \(f_{1}\xrightarrow{\beta}a_{0}\) and \(f_{2}\xrightarrow{\beta}a_{0}\), and the final configuration returns \(\beta\). **Lemma 4.8**: _If a configuration of the form \((\gamma,\mathcal{C}[v@w],\mathfrak{g},\lambda)\) is accessible and \(E((\!(v)\!)_{\gamma},(\!(w)\!)_{\gamma})=\bot\), then \(\mathrm{J}(\gamma,\mathcal{C}[v@w],\mathfrak{g},\lambda)\stackrel{{\mathrm{def}}}{{=}}\Gamma\mid\Delta\vdash\mathcal{C}[v@w]:A\) is such that the memoization stack \(\Delta\) does not contain a function-atom label pair with \((\!(v)\!)_{\gamma}\) as first component._ As a corollary, we can then prove that a configuration is accessible only if its memoization stack has no duplicates: **Lemma 4.9**: _If a configuration \((\gamma,e,\mathfrak{g},\lambda)\) is accessible and \(\mathrm{J}(\gamma,e,\mathfrak{g},\lambda)\stackrel{{\mathrm{def}}}{{=}}\Gamma\mid\Delta\vdash e:A\) is its corresponding configuration judgment, there is no duplicate in \(\Delta\)._ This in turn enables us to ensure that the operational semantics satisfies the memoization equations: **Proposition 4.10**: _If \(e_{1}\) and \(e_{2}\) are programs of the form_ \[e_{1}\stackrel{{\mathrm{def}}}{{=}}\mathsf{let\ val}\ x\leftarrow\mathsf{fresh}()\ \mathsf{in\ let\ val}\ f\leftarrow\lambda_{\mathfrak{a}}y.\ e\ \mathsf{in\ let\ val}\ v_{1}\leftarrow f@x\ \mathsf{in\ let\ val}\ v_{2}\leftarrow f@x\ \mathsf{in\ return}(v_{1},v_{2})\] \[e_{2}\stackrel{{\mathrm{def}}}{{=}}\mathsf{let\ val}\ x\leftarrow\mathsf{fresh}()\ \mathsf{in\ let\ val}\ f\leftarrow\lambda_{\mathfrak{a}}y.\ e\ \mathsf{in\ let\ val}\ v_{1}\leftarrow f@x\ \mathsf{in\ return}(v_{1},v_{1})\] _the configurations \((\emptyset,e_{1},\emptyset,\emptyset)\) and \((\emptyset,e_{2},\emptyset,\emptyset)\) have the same big-step operational semantics._ ## 5 Denotational Semantics In this section we propose a denotational model that verifies the dataflow property (Def. 2.3, Theorem 5.5), supports memoization of constant Bernoulli functions (Theorem 5.8), and is sound with respect to the operational semantics of Section 4 (Theorem 5.10). Thus we show that criteria (1)-(5) of Section 1 are consistent. The memo-tables in memoization are a kind of hidden or local state, and our semantic domain is similar to other models of local state [37, 44, 46, 28] in that it uses a possible-worlds semantics in the guise of a functor category. **Definition 5.1**: A _total bigraph_ is a partial bigraph (Def.
4.1) that does not have any undefined (\(\bot\)) elements. This represents a fully populated memo-table. We notate this \(g=(g_{L},g_{R},E^{g})\), omitting the superscript when it is clear. An _embedding_ between total bigraphs \(\iota\colon g\to g^{\prime}\) is a pair of injections \((\iota_{L}:g_{L}\to g^{\prime}_{L},\iota_{R}:g_{R}\to g^{\prime}_{R})\) that do not add or remove edges, _i.e._ such that \(E^{g^{\prime}}(\iota_{L}(f),\iota_{R}(a))=E^{g}(f,a)\) for all \((f,a)\in g_{L}\times g_{R}\). Total bigraphs and embeddings form a category \(\mathbf{BiGrph}_{\mathit{emb}}\), and we interpret the base types as covariant presheaves on it: \([\![\mathbb{A}]\!](g)\stackrel{{\mathrm{def}}}{{=}}g_{R}\) and \([\![\mathbb{F}]\!](g)\stackrel{{\mathrm{def}}}{{=}}g_{L}\).
The monad allows manipulating probability distributions over such extensions, while keeping track of the probability of new nodes. Equivalence classes in \(\int^{g\hookrightarrow h}X(h)\times[0,1]^{(h-g)_{L}}\) are written \([x_{h},\lambda^{h}]_{g}\). In the coend, the quotient can be thought of as taking care of garbage collection: nodes that are not used in the bigraph environment can be discarded. We use Dirac's bra-ket notation3 \(\big|[x_{h},\lambda^{h}]_{g}\big\rangle_{h}\) to denote a formal column vector of equivalence classes ranging over a finite set of \(h\)'s. As such, a formal convex sum \(\sum_{i}p_{i}[x_{h_{i}},\lambda^{h_{i}}]_{g}\in P_{\!\mathrm{f}}\int^{g\hookrightarrow h}X(h)\times[0,1]^{(h-g)_{L}}\) will be concisely denoted by \(\big\langle\boldsymbol{p}\,\big|\,[x_{h},\lambda^{h}]_{g}\big\rangle_{h}\).

Footnote 3: popularized by Bart Jacobs for finite probability distributions [24]

**Definition 5.3**: [Action of \(T(X)\) on morphisms]

\[T(X)(g\stackrel{{\iota}}{{\hookrightarrow}}g^{\prime})\colon\left\{\begin{array}{l}\left(P_{\!\mathrm{f}}\int^{g\hookrightarrow h}X(h)\times[0,1]^{(h-g)_{L}}\right)^{[0,1]^{g_{L}}}\to\left(P_{\!\mathrm{f}}\int^{g^{\prime}\hookrightarrow h^{\prime}}X(h^{\prime})\times[0,1]^{(h^{\prime}-g^{\prime})_{L}}\right)^{[0,1]^{g^{\prime}_{L}}}\\ \vartheta\mapsto\lambda^{\prime}\mapsto P_{\!\mathrm{f}}(\psi_{g,g^{\prime}})\big(\vartheta(\lambda^{\prime}\circ\iota_{L})\big)\end{array}\right.\]

where:

* \(\iota_{L}\colon g_{L}\hookrightarrow g^{\prime}_{L}\) is the embedding restricted to left nodes, and the maps

\[\int^{g\hookrightarrow h}X(h)\times[0,1]^{(h-g)_{L}}\ \stackrel{{\psi_{g,g^{\prime}}}}{{\longrightarrow}}\ \int^{g^{\prime}\hookrightarrow h^{\prime}}X(h^{\prime})\times[0,1]^{(h^{\prime}-g^{\prime})_{L}},\]

extranatural in \(h\), are given by:

\[\left\{\begin{array}{l}X(h)\times[0,1]^{(h-g)_{L}}\to X(h\coprod_{g}g^{\prime})\times[0,1]^{(h\coprod_{g}g^{\prime}-g^{\prime})_{L}}\to\int^{g^{\prime}\hookrightarrow h^{\prime}}X(h^{\prime})\times[0,1]^{(h^{\prime}-g^{\prime})_{L}}\\ (x_{h},\,\lambda^{h})\mapsto(X(h\hookrightarrow h\coprod_{g}g^{\prime})(x_{h}),\,\lambda^{h})\mapsto[X(h\hookrightarrow h\coprod_{g}g^{\prime})(x_{h}),\,\lambda^{h}]_{g^{\prime}}\end{array}\right.\]

* and \(h\coprod_{g}g^{\prime}\) is the pushout in the category of graphs, regarded as an object of \(\mathbf{BiGrph}_{\mathit{emb}}\).
More concretely, with Dirac's bra-ket notation, \(T(X)(g\stackrel{{\iota}}{{\hookrightarrow}}g^{\prime})\) can be written as:

\[T(X)(\iota)=\left\{\begin{array}{l}\left(P_{\!\mathrm{f}}\int^{g\hookrightarrow h}X(h)\times[0,1]^{(h-g)_{L}}\right)^{[0,1]^{g_{L}}}\to\left(P_{\!\mathrm{f}}\int^{g^{\prime}\hookrightarrow h^{\prime}}X(h^{\prime})\times[0,1]^{(h^{\prime}-g^{\prime})_{L}}\right)^{[0,1]^{g^{\prime}_{L}}}\\ \vartheta\mapsto\lambda^{\prime}\mapsto\mathrm{let}\ \vartheta(\lambda^{\prime}\circ\iota_{L})\ =\ \left\langle\boldsymbol{p}\,\big|\,[x_{h},\lambda^{h}]_{g}\right\rangle_{h}\text{ in }\left\langle\boldsymbol{p}\,\big|\,[X(h\hookrightarrow h\coprod_{g}g^{\prime})(x_{h}),\lambda^{h}]_{g^{\prime}}\right\rangle_{h}\end{array}\right.\]

\(T\) can be endowed with the structure of a \([\mathbf{BiGrph}_{\mathit{emb}},\mathbf{Set}]\)-enriched monad, that is, since \([\mathbf{BiGrph}_{\mathit{emb}},\mathbf{Set}]\) is a (cartesian) monoidal closed category, a strong monad. Its enriched unit \(\eta_{X}\colon 1\to TX^{X}\) and bind \((-)^{*}\colon TY^{X}\to TY^{TX}\) are built from those of \(P_{\!\mathrm{f}}\), using the universal property of the coend4.

Footnote 4: following Fosco Loregian [34]

We have the desired dataflow property, meaning that \(T\) is an abstract model of probability [33]:

**Theorem 5.5**: _The monad \(T\) satisfies the dataflow property (2.3): it is strong, commutative and affine._

**Proof (Sketch)** In the presheaf category, let \(Z^{Y}\times Y^{X}\stackrel{{\circ}}{{\to}}Z^{X}\) and \(Z^{Y}\times Y\stackrel{{\rm ev}}{{\longrightarrow}}Z\) denote the internal composition and evaluation, and \(f^{*}\stackrel{{\rm def}}{{=}}1\stackrel{{ f}}{{\to}}TY^{X}\stackrel{{(-)^{*}}}{{\longrightarrow}}TY^{TX}\) the internal Kleisli lifting of a global element \(f\). To prove that \(T\) is strong, we show, internally, the associativity (\((\Psi^{*}_{g}\times\Phi^{*}_{g})\;;\circ=((\Psi^{*}\times\Phi)\;;\circ)^{*}\)) of the bind, the left unit law (\(\eta^{*}=\lambda_{TX}.{\rm id}_{TX}\)), and the right unit law (\((\Phi^{*}\times\eta)\;;\circ=\Phi\)), for all \(\Phi\colon 1\to TY^{X},\Psi\colon 1\to TZ^{Y}\). Finally, affineness stems from Lemma 5.4, and commutativity is the equation \(a\,{\gg\!\!-}\,\lambda\,x.\,b\,{\gg\!\!-}\,\lambda\,y.\,\eta(x,y)\;=\;b\,{\gg\!\!-}\,\lambda\,y.\,a\,{\gg\!\!-}\,\lambda\,x.\,\eta(x,y)\) internally, for all \(a\colon 1\to TA,b\colon 1\to TB\), which amounts to showing:

\[\left(\left(\lambda_{A}.\left(\left((\lambda_{B}.\eta)^{*}\times b\right)\;;\,{\rm ev}\right)\right)^{*}\times a\right);{\rm ev}=\left(\left(\lambda_{B}.\left(\left((\lambda_{A}.\eta)^{*}\times a\right)\;;\,{\rm ev}\right)\right)^{*}\times b\right);{\rm ev}\]

\(\Box\)
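The commutativity equation in the proof above can be checked concretely at the level of the distribution layer. The self-contained Python sketch below is only the first-order shadow of the internal statement (the coend and the states of biases play no part here): it verifies that two independent samples can be reordered.

```python
from collections import defaultdict

def unit(x): return {x: 1.0}

def bind(dist, k):
    out = defaultdict(float)
    for x, p in dist.items():
        for y, q in k(x).items():
            out[y] += p * q
    return dict(out)

flip = lambda theta: {True: theta, False: 1.0 - theta}

a, b = flip(0.3), flip(0.8)
lhs = bind(a, lambda x: bind(b, lambda y: unit((x, y))))
rhs = bind(b, lambda y: bind(a, lambda x: unit((x, y))))
assert lhs == rhs  # reordering two independent samples preserves the distribution
```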
### Categorical semantics

In our language, the denotational interpretation of values, computations (return and let binding), and matching (elimination of \(\mathsf{bool}\)'s and product types) is standard. We interpret computation judgements \(\Gamma\vdash t\colon A\) as morphisms \([\![\Gamma]\!]\to T([\![A]\!])\), by induction on the structure of typing derivations. The context \(\Gamma\) is built of \(\mathsf{bool}\)'s, \(\mathbb{F}\), \(\mathbb{A}\) and products. Therefore, \([\![\Gamma]\!]\) is isomorphic to a presheaf of the form \(2^{k}\times\mathbf{BiGrph}_{emb}(\circ,-)^{\ell}\times\mathbf{BiGrph}_{emb}(\bullet,-)^{m}\), where \(k,\ell,m\) are the numbers of booleans, functions and atoms in \(\Gamma\), and \(X^{n}\) is the \(n\)-fold finite product in the category of presheaves. Computations of type \(\mathbb{A}\) and \(\mathbb{F}\) then have an intuitive interpretation:

**Proposition 5.6**: _A computation of type \(\mathbb{A}\) returns the label of an already existing atom, or a fresh one with its connections to the already existing functions: \(T([\![\mathbb{A}]\!])(g)\,\cong\,P_{\!\mathrm{f}}(g_{R}+2^{g_{L}})^{[0,1]^{g_{L}}}\). A computation of type \(\mathbb{F}\) returns the label of an already existing function, or creates a new function with its connections to already existing atoms and a fixed probabilistic bias: \(T([\![\mathbb{F}]\!])(g)\,\cong\,P_{\!\mathrm{f}}(g_{L}+2^{g_{R}}\times[0,1])^{[0,1]^{g_{L}}}\)._

For every bigraph \(g\), we denote by \(R_{g}\) (resp. \(L_{g}\)) the set of bigraphs \(h\in g/\mathbf{BiGrph}_{emb}\) having one more right (resp. left) node than \(g\), and that are the same otherwise. For every \(e\in 2^{g_{L}}\) (resp. \(e\in 2^{g_{R}}\)), we denote by \(g+_{e}\bullet\in R_{g}\) (resp. \(g+_{e}\circ\in L_{g}\)) the bigraph obtained by adding a new right (resp. left) node to \(g\) with connectivity \(e\) to the left (resp. right) nodes in \(g\).

We now give the denotational semantics of various constructs in our language. Henceforth, we will denote normalization constants (that can easily be inferred from the context) by \(Z\).

**Denotation of \(\Gamma\vdash\mathsf{flip}(\theta):\mathsf{bool}\)** First, by Lemma 5.4, we note that \(T([\![\mathsf{bool}]\!])(g)\,\cong\,P_{\!\mathrm{f}}(2)^{[0,1]^{g_{L}}}\,\cong\,[0,1]^{[0,1]^{g_{L}}}\). So naturally, the map \([\![\mathsf{flip}(\theta)]\!]_{g}\) is the constant function returning the bias \(\theta\).

**Denotations of \(\Gamma,v:\mathbb{F},w:\mathbb{A}\vdash v\oplus w:\mathsf{bool}\) and \(\Gamma,v:\mathbb{A},w:\mathbb{A}\vdash v=w:\mathsf{bool}\)** The map \([\![v\oplus w]\!]_{g}:[\![\Gamma,v:\mathbb{F},w:\mathbb{A}]\!](g)\to[0,1]^{[0,1]^{g_{L}}}\) returns \(1\) if the left node corresponding to \(v\) is connected to the one of \(w\) in \(g\), and \(0\) otherwise. Using the internal edge relation \(\mathcal{E}\), it is the internal composition:

\[[\![v\oplus w]\!]\stackrel{{\rm def}}{{=}}1\times([\![\Gamma]\!]\times[\![\mathbb{F}]\!]\times[\![\mathbb{A}]\!])\xrightarrow{\eta\times(\pi\,;\,\mathcal{E})}T([\![\mathsf{bool}]\!])^{[\![\mathsf{bool}]\!]}\times[\![\mathsf{bool}]\!]\xrightarrow{\rm ev}T([\![\mathsf{bool}]\!])\]
where \(\eta\) is the enriched unit, \(\pi\) projects out \([\![\mathbb{F}]\!]\times[\![\mathbb{A}]\!]\), the map \([\![v=w]\!]\) is defined analogously, replacing \(\mathcal{E}\) by the equality test on atom labels, \([-,\,-]\) is the copairing, and \(\iota_{\mathsf{true}},\iota_{\mathsf{false}}\colon 1\to[\![\mathsf{bool}]\!]\cong 2\) are the coprojections. The remaining constructs (\(\mathsf{fresh}()\), \(\lambda_{\mathfrak{D}}\)-abstraction, application, and \(\mathsf{mem}\)) are interpreted using the description of \(T([\![\mathbb{A}]\!])\) and \(T([\![\mathbb{F}]\!])\) in Proposition 5.6.

**Theorem 5.8**: _The model supports memoization of constant Bernoulli functions._

**Proof (Sketch)** We use the map

\[\phi_{g}\colon\left\{\begin{array}{l}T([\![\mathbb{A}]\!])(g)\cong P_{\!\mathrm{f}}(g_{R}+2^{g_{L}})^{[0,1]^{g_{L}}}\to[0,1]^{[0,1]^{g_{L}}\times(g_{R}+2^{g_{L}})}\cong T\big(\llbracket\mathsf{bool}\rrbracket^{\llbracket\mathbb{A}\rrbracket}\big)(g)\\ \vartheta\mapsto(\lambda,a)\in[0,1]^{g_{L}}\times(g_{R}+2^{g_{L}})\mapsto\text{let }\vartheta(\lambda)\ =\ \sum_{a^{\prime}\in g_{R}+2^{g_{L}}}p_{a^{\prime}}\,|a^{\prime}\rangle\text{ in }p_{a}\end{array}\right.\]

to obtain \(\mathsf{mem}\colon T(\llbracket\mathsf{bool}\rrbracket)^{\llbracket\mathbb{A}\rrbracket}\to T(\llbracket\mathsf{bool}\rrbracket^{\llbracket\mathbb{A}\rrbracket})\), and then we show eq. (1) in the presheaf topos. \(\Box\)

**Example 5.9**: The denotation of \(\mathsf{let}\ \mathsf{val}\ x\ \leftarrow\ \mathsf{fresh}()\ \mathsf{in}\ \mathsf{let}\ \mathsf{val}\ f\ \leftarrow\ \lambda_{\mathfrak{D}}y.\ \mathsf{flip}(\theta)\ \mathsf{in}\ f\oplus x\) is the map

\[1\times 1\xrightarrow{\big(\lambda_{\llbracket\mathbb{A}\rrbracket}.\,\big(\big((\lambda_{\llbracket\mathbb{F}\rrbracket}.\,f\oplus x)^{*}\times(\lambda_{\mathfrak{D}}y.\ \mathsf{flip}(\theta))\big)\,;\,\mathrm{ev}\big)\big)^{*}\times\mathsf{fresh}()}T(\llbracket\mathsf{bool}\rrbracket)^{T\llbracket\mathbb{A}\rrbracket}\times T(\llbracket\mathbb{A}\rrbracket)\xrightarrow{\mathrm{ev}}T(\llbracket\mathsf{bool}\rrbracket)\]

given by \(*,*\mapsto\lambda\mapsto\theta\,|\mathsf{true}\rangle+(1-\theta)\,|\mathsf{false}\rangle\), as desired.
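Example 5.9 can also be read combinatorially: the coend ranges over the two total extensions of the memo-table with one function node of bias \(\theta\), one atom \(a_{0}\), and a single undefined edge. Here is a small enumeration sketch of ours (Python), mirroring the weighting of configurations used in the next subsection:

```python
theta = 0.3

# One function node f with bias theta, one atom a0, and the edge f -> a0
# undefined: the two total extensions set the edge to True or to False.
extensions = [(True, theta), (False, 1.0 - theta)]

# f (+) a0 just reads the edge in each extension, so the denotation is
# theta|true> + (1 - theta)|false>.
denotation = {}
for edge, weight in extensions:
    denotation[edge] = denotation.get(edge, 0.0) + weight

assert denotation == {True: theta, False: 1.0 - theta}
```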
### Soundness

Configurations are of the form \((\gamma,e,\mathfrak{g},\lambda)\), where \(e\) is of type \(A\), and can be denotationally interpreted as

\[\llbracket(\gamma,e,\mathfrak{g},\lambda)\rrbracket\stackrel{{\mathrm{def}}}{{=}}\sum_{\tilde{e}\in 2^{U_{\mathfrak{g}}}}\ \prod_{(\mathfrak{f},a)\in U_{\mathfrak{g}}}\lambda(\mathfrak{f})^{\tilde{e}(\mathfrak{f},a)}\big(1-\lambda(\mathfrak{f})\big)^{1-\tilde{e}(\mathfrak{f},a)}\ \llbracket e\rrbracket_{\mathfrak{g}_{\tilde{e}}}(\gamma,\lambda)\]

where \(U_{\mathfrak{g}}\stackrel{{\mathrm{def}}}{{=}}\big\{(\mathfrak{f},a)\ |\ E(\mathfrak{f},a)=\bot\big\}\subseteq\mathfrak{g}_{L}\times\mathfrak{g}_{R}\) and \(\mathfrak{g}_{\tilde{e}}\) extends \(\mathfrak{g}\) according to \(\tilde{e}\): \(E(\mathfrak{f},a)=\tilde{e}(\mathfrak{f},a)\) for all \((\mathfrak{f},a)\in U_{\mathfrak{g}}\). We can then prove that the denotational semantics is sound with respect to the operational semantics:

**Theorem 5.10** (Soundness):

\[\llbracket(\gamma,e,\mathfrak{g},\lambda)\rrbracket\cong\sum_{\begin{subarray}{c}(\gamma,e,\mathfrak{g},\lambda)\rightarrow(\gamma^{\prime},e^{\prime},\mathfrak{g}^{\prime},\lambda^{\prime})\\ \text{with probability }p\end{subarray}}p\cdot\llbracket(\gamma^{\prime},e^{\prime},\mathfrak{g}^{\prime},\lambda^{\prime})\rrbracket\]

**Proof (Sketch)** As an intermediate step, we build a big-step semantics and show that it is sound, _i.e._ making a small step of the operational semantics (§4) does not change the distributions in the final big-step semantics. Next, we show that the big-step semantics of a configuration corresponds to the denotational semantics, for which the main thing to check is that the equivalence classes of the coend are respected. \(\Box\)

## 6 Haskell Implementation

We have a practical Haskell implementation comparing the small-step operational, big-step operational, and denotational semantics, to showcase the soundness theorem with QuickCheck, in a setting analogous (albeit slightly different5, to better suit the specificities of Haskell) to the theoretical one we presented. The artefact is openly available [26].

Footnote 5: Unlike our mathematical framework, where we can memoize all freshness-invariant functions (5.7), our implementation only memoizes constant Bernoulli functions. Another key difference is that we could not implement coends in Haskell, so we used a global state monad transformer to manage the memoization bigraph, keeping track of edges between left nodes (function labels) and right nodes (atom labels) that have been sampled.

## 7 Summary

In conclusion, we have successfully tackled the open problem of finding a semantic interpretation of stochastic memoization for a class of functions with diffuse domain that includes the constant Bernoulli functions. Our contributions pave the way for further exploration and development of probabilistic programming and the sound application of stochastic memoization in Bayesian nonparametrics.

## 8 Acknowledgements

We are grateful to Nate Ackerman, Cameron Freer, Dan Roy and Hongseok Yang for various conversations over many years, relating to [54], name generation, stochastic memoization and subsequent developments. The presheaf category here is related to the Rado topos [4] that we have been exploring in ongoing work, with Jacek Karwowski and Sean Moss and the above four coauthors. Thanks to Dario Stein for discussions about name generation and for pointing out [27].
Thanks too to Swaraj Dash, Mathieu Huot, Ohad Kammar, Oleg Kiselyov, Alex Lew, and all in the Oxford group for many discussions about this topic. Finally, thank you to our reviewers for detailed feedback.
Stochastic memoization is a higher-order construct of probabilistic programming languages, and it plays a key role in Bayesian nonparametrics, a modular approach that allows one to extend models beyond their parametric limitations and to compose them in an elegant and principled way. Although stochastic memoization is simple and useful in practice, its semantics is subtle, in particular with regard to dataflow transformations. A naive implementation relies on a statistics monad that is not commutative, and it is unclear whether it preserves the dataflow property: that is, whether the lines of a program may be rearranged without changing its meaning, as long as the dataflow graph is preserved. In this paper we give an operational and categorical semantics to stochastic memoization and name generation in the context of a minimal probabilistic programming language. Our contribution is that countable probability
2309.06754
Explicit Riemann-Roch spaces in the Hilbert class field
Let $\mathbf K$ be a finite field, $X$ and $Y$ two curves over $\mathbf K$, and $Y\rightarrow X$ an unramified abelian cover with Galois group $G$. Let $D$ be a divisor on $X$ and $E$ its pullback on $Y$. Under mild conditions the linear space associated with $E$ is a free ${\mathbf K}[G]$-module. We study the algorithmic aspects and applications of these modules.
Jean-Marc Couveignes, Jean Gasnier
2023-09-13T07:04:08
http://arxiv.org/abs/2309.06754v3
# Explicit Riemann-Roch spaces in the Hilbert class field ###### Abstract. Let \(\mathbf{K}\) be a finite field, \(X\) and \(Y\) two curves over \(\mathbf{K}\), and \(Y\to X\) an unramified abelian cover with Galois group \(G\). Let \(D\) be a divisor on \(X\) and \(E\) its pullback on \(Y\). Under mild conditions the linear space associated with \(E\) is a free \(\mathbf{K}[G]\)-module. We study the algorithmic aspects and applications of these modules. ## 1. Introduction Given a curve \(Y\) over a field \(\mathbf{K}\), and two divisors \(E\) and \(Q\) on \(Y\), with \(Q\) effective and disjoint from \(E\), the evaluation map \(e:H^{0}(\mathcal{O}_{Y}(E),Y)\to H^{0}(\mathcal{O}_{Q},Q)\) is a natural \(\mathbf{K}\)-linear datum of some importance for various algorithmic problems such as efficient computing in the Picard group of \(Y\) (see [22, 23]), constructing good error correcting codes [12, 14, 40], or bounding the bilinear complexity of multiplication in finite fields [38, 37, 2, 3, 7, 30]. Assume \(G\) is a finite group of automorphisms of \(Y/\mathbf{K}\), and the divisors \(E\) and \(Q\) are \(G\)-equivariant (they are equal to their pullback by any element of \(G\)). The evaluation map \(e\) is then a \(\mathbf{K}[G]\)-linear map between two \(\mathbf{K}[G]\)-modules. In some cases these modules can be shown to be both free, and their rank as \(\mathbf{K}[G]\)-modules is then smaller than their dimension as \(\mathbf{K}\)-vector spaces, by a factor \(\mathfrak{o}\), the order of \(G\). This is of quite some help when \(G\) is abelian, because multiplication in \(\mathbf{K}[G]\) is achieved in quasi-linear time using discrete Fourier transform, and the advantage of lowering dimension is much stronger than the disadvantage of dealing with a larger ring of scalars. In this work we review basic algebraic and algorithmic properties of \(\mathbf{K}[G]\)-modules when \(G\) is a finite group. We then focus on free \(\mathbf{K}[G]\)-modules arising from abelian groups acting freely on a curve. We will see that this special case has a rich mathematical background and produces interesting constructions. In Section 2 we review elementary properties of \(\mathbf{K}[G]\)-modules when \(\mathbf{K}\) is a commutative field and \(G\) a finite group. We recall in Section 3 how unramified fibers of Galois covers of curves produce free \(\mathbf{K}[G]\)-modules and we introduce natural bases for these modules and their duals. We study the abelian unramified case in Section 4 and see that Riemann-Roch spaces associated to \(G\)-equivariant divisors tend to be free \(\mathbf{K}[G]\)-modules then. Evaluating at another \(G\)-equivariant divisor then produces a \(\mathbf{K}[G]\)-linear map between two free \(\mathbf{K}[G]\)-modules. This makes it possible to treat evaluation and interpolation as \(\mathbf{K}[G]\)-linear problems. We introduce the matrices associated to these problems. Section 5 is devoted to the definition and computation of Pade approximants in this context. The complexity of arithmetic operations in \(\mathbf{K}[G]\) is bounded in Section 6 using various classical discrete Fourier transforms. In Section 7 we use effective class field theory and the algorithmics of curves and jacobian varieties to compute the evaluation and interpolation matrices introduced in Section 4. Section 8 provides two applications of interpolation with \(\mathbf{K}[G]\)-modules: multiplication in finite fields and geometric codes. 
The asymptotic properties of the codes constructed this way are studied in Section 9.

###### Contents

* 1 Introduction
* 2 Duality for \(\mathbf{K}[G]\)-modules
  * 2.1 Invariant bilinear forms
  * 2.2 Orthogonality
  * 2.3 The dual of a \(\mathbf{K}[G]\)-module
  * 2.4 Free submodules of a \(\mathbf{K}[G]\)-module
* 3 Curves with a group action
  * 3.1 The residue ring of a non-ramified fiber
  * 3.2 The residue ring of a non-ramified \(G\)-equivariant divisor
  * 3.3 Duality
* 4 Free commutative actions
  * 4.1 Special invariant divisors
  * 4.2 Riemann-Roch spaces
  * 4.3 The orthogonal submodule
* 5 Pade approximants
  * 5.1 The split case
  * 5.2 Computing Pade approximants
* 6 Computing in the group algebra
  * 6.1 Fourier transform
  * 6.2 Univariate Fourier transform
  * 6.3 Multivariate Fourier transform
  * 6.4 Fast multiplication in \(\mathbf{K}[G]\)
* 7 Constructing functions in the Hilbert class field
  * 7.1 Class field theory and the jacobian variety
  * 7.2 An example
* 8 Interpolation on algebraic curves
  * 8.1 The complexity of multiplication in finite fields
  * 8.2 Geometric codes
  * 8.3 Basic decoding
* 9 Good geometric codes with quasi-linear encoding
  * 9.1 Controlling the class group and the Artin map
  * 9.2 A construction

## 2. Duality for \(\mathbf{K}[G]\)-modules

In this section \(\mathbf{K}\) is a commutative field and \(G\) is a finite group. We state elementary properties of \(\mathbf{K}[G]\)-modules and their duals. In Section 2.1 we describe the natural correspondence between \(G\)-invariant \(\mathbf{K}\)-bilinear forms and \(\mathbf{K}[G]\)-bilinear forms. We see in Section 2.2 that the orthogonal of a \(\mathbf{K}[G]\)-submodule for either form is the same. Sections 2.3 and 2.4 concern the canonical bilinear form relating a \(\mathbf{K}[G]\)-module and its dual.

### Invariant bilinear forms

Let \(M\) be a right \(\mathbf{K}[G]\)-module. Let \(N\) be a left \(\mathbf{K}[G]\)-module. Let

\[<.,.>:M\times N\rightarrow\mathbf{K}\]

be a \(\mathbf{K}\)-bilinear form. We assume that this form is invariant by the action of \(G\) in the sense that

\[<m.\sigma,n>=<m,\sigma.n>\]

for every \(m\) in \(M\), \(n\) in \(N\), and \(\sigma\) in \(G\). We define a map

\[(.,.)\colon N\times M\to\mathbf{K}[G],\qquad(n,m)=\sum_{\sigma\in G}<m.\sigma^{-1},n>\sigma. \tag{1}\]

**Proposition 1**.: _The map \((.,.)\) in Equation (1) is \(\mathbf{K}[G]\)-bilinear._

**Proof** Indeed for any \(\tau\) in \(G\), \(m\) in \(M\), and \(n\) in \(N\)

\[(\tau.n,m)=\sum_{\sigma\in G}<m.\sigma^{-1},\tau.n>\sigma=\sum_{\sigma\in G}<m.\sigma^{-1}\tau^{-1},\tau.n>\tau\sigma=\sum_{\sigma\in G}<m.\sigma^{-1},n>\tau\sigma=\tau\sum_{\sigma\in G}<m.\sigma^{-1},n>\sigma=\tau(n,m).\]

And

\[(n,m.\tau)=\sum_{\sigma\in G}<m.\tau\sigma^{-1},n>\sigma=\sum_{\sigma\in G}<m.\tau\tau^{-1}\sigma^{-1},n>\sigma\tau=\sum_{\sigma\in G}<m.\sigma^{-1},n>\sigma\tau=(n,m)\tau.\]
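For a small cyclic group, the pairing of Equation (1) and the \(\mathbf{K}[G]\)-bilinearity of Proposition 1 can be checked numerically. In the Python sketch below (our illustration only) we take \(\mathbf{K}=\mathbf{Z}/7\mathbf{Z}\), \(G=\mathbf{Z}/3\mathbf{Z}\), \(M=N=\mathbf{K}[G]\) with the regular right and left actions, and the invariant form \(<m,n>=\varepsilon(mn)\), where \(\varepsilon\) extracts the coefficient of the identity element.

```python
p, o = 7, 3                          # K = Z/7Z, G = Z/3Z (cyclic)

def mul(a, b):                       # product in K[G]: cyclic convolution
    return [sum(a[i] * b[(k - i) % o] for i in range(o)) % p
            for k in range(o)]

def basis(s):                        # the basis element sigma^s of K[G]
    e = [0] * o
    e[s % o] = 1
    return e

def eps(a):                          # coefficient of the identity element
    return a[0]

def pairing(n, m):                   # (n, m) = sum_s <m.sigma^{-s}, n> sigma^s
    return [eps(mul(mul(m, basis(-s)), n)) for s in range(o)]

tau = basis(1)                       # a generator of G inside K[G]
n, m = [1, 2, 3], [4, 0, 5]
# K[G]-bilinearity from Proposition 1 (note that G is abelian here):
assert pairing(mul(tau, n), m) == mul(tau, pairing(n, m))
assert pairing(n, mul(m, tau)) == mul(pairing(n, m), tau)
```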
### Orthogonality

In the situation of Section 2.1 we consider a right \(\mathbf{K}[G]\)-submodule \(U\) of \(M\). Call

\[U^{\perp}=\{n\in N\mid<U,n>=0\}\]

the orthogonal to \(U\) in \(N\) for the \(<.,.>\) form. This is a \(\mathbf{K}\)-vector space. Since \(U\) is stable by the action of \(G\), its orthogonal \(U^{\perp}\) is a left \(\mathbf{K}[G]\)-module. And \(U^{\perp}\) is the orthogonal to \(U\) for the \((.,.)\) form:

\[U^{\perp}=\{n\in N\mid(n,U)=0\}.\]

We consider similarly a left \(\mathbf{K}[G]\)-submodule \(V\) of \(N\) and call

\[V^{\circ}=\{m\in M\mid<m,V>=0\}\]

the orthogonal to \(V\) in \(M\) for the \(<.,.>\) form. This is a right \(\mathbf{K}[G]\)-module. And \(V^{\circ}\) is the orthogonal to \(V\) for the \((.,.)\) form:

\[V^{\circ}=\{m\in M\mid(V,m)=0\}.\]

We have \(U\subset(U^{\perp})^{\circ}\) and \(V\subset(V^{\circ})^{\perp}\). These inclusions are equalities when \(M\) and \(N\) are finite dimensional and \(<.,.>\) is perfect.

### The dual of a \(\mathbf{K}[G]\)-module

Let \(N\) be a left \(\mathbf{K}[G]\)-module. We can see \(N\) as a \(\mathbf{K}\)-vector space and call \(\hat{N}\) its dual. This is naturally a right \(\mathbf{K}[G]\)-module. For every \(\varphi\) in \(\hat{N}\) and \(\sigma\) in \(G\) we set \(\varphi.\sigma=\varphi\circ\sigma\). We consider the canonical \(\mathbf{K}\)-bilinear form defined by

\[<\varphi,n>=\varphi(n)\]

for every \(n\) in \(N\) and \(\varphi\) in \(\hat{N}\). For every \(\sigma\) in \(G\) we have

\[<\varphi.\sigma,n>\,=\,\varphi(\sigma.n)\,=\,<\varphi,\sigma.n>\]

so \(<.,.>\) is invariant by \(G\). Following Section 2.1 we define a \(\mathbf{K}[G]\)-bilinear form

\[(.,.):N\times\hat{N}\to\mathbf{K}[G]\]

by

\[(n,\varphi)=\sum_{\sigma\in G}\varphi(\sigma^{-1}.n)\sigma. \tag{2}\]

We define a map from \(\hat{N}\) to the dual \(\tilde{N}\) of \(N\) as a \(\mathbf{K}[G]\)-module, by sending \(\varphi\) to the map

\[\varphi^{G}:n\mapsto(n,\varphi). \tag{3}\]

We prove that this map is a bijection. First \(\varphi\mapsto\varphi^{G}\) is trivially seen to be an injection. As for surjectivity we let \(\psi:N\to\mathbf{K}[G]\) be a \(\mathbf{K}[G]\)-linear map. Writing

\[\psi(n)=\sum_{\sigma\in G}\psi_{\sigma}(n)\sigma\]

we define a \(\mathbf{K}\)-linear coordinate form \(\psi_{\sigma}\) on \(N\) for every \(\sigma\) in \(G\). From the \(\mathbf{K}[G]\)-linearity of \(\psi\) we deduce that \(\psi_{\sigma}(n)=\psi_{1}(\sigma^{-1}.n)\) where \(1\) is the identity element in \(G\). So \(\psi(n)=(n,\psi_{1})\) for every \(n\) in \(N\). So \(\psi=(\psi_{1})^{G}\).

### Free submodules of a \(\mathbf{K}[G]\)-module

The ring \(\mathbf{K}[G]\) may not be semisimple. Still, free \(\mathbf{K}[G]\)-submodules of finite rank have a supplementary module.

**Proposition 2**.: _Let \(G\) be a finite group, \(\mathbf{K}\) a commutative field, \(N\) a left \(\mathbf{K}[G]\)-module, \(V\) a submodule of \(N\). If \(V\) is free of finite rank then it is a direct summand._

**Proof** Let \(r\) be the rank of \(V\). Let \(v_{1}\), \(v_{2}\),..., \(v_{r}\) be a basis of \(V\). Let \(\varphi_{1}\), \(\varphi_{2}\),..., \(\varphi_{r}\) be the dual basis. For every \(i\) such that \(1\leqslant i\leqslant r\), the coordinate form \(\varphi_{i,e}\) associated to the identity element in \(G\) belongs to \(\hat{V}\). Let \(\psi_{i}\) be a \(\mathbf{K}\)-linear form on \(N\) whose restriction to \(V\) is \(\varphi_{i,e}\). Let \(\psi_{i}^{G}\in\tilde{N}\) be the associated \(\mathbf{K}[G]\)-linear form according to Equations (3) and (2). The restriction of \(\psi_{i}^{G}\) to \(V\) is \(\varphi_{i}\). The map

\[n\mapsto\sum_{1\leqslant i\leqslant r}\psi_{i}^{G}(n).v_{i}\]

is a \(\mathbf{K}[G]\)-linear projection onto \(V\). Its kernel is a supplementary \(\mathbf{K}[G]\)-submodule to \(V\).

## 3. Curves with a group action

Let \(\mathbf{K}\) be a commutative field. Let \(p\) be the characteristic of \(\mathbf{K}\). Let \(X\) and \(Y\) be two smooth, projective, absolutely integral curves over \(\mathbf{K}\). Let \(g_{X}\) be the genus of \(X\), and \(g_{Y}\) that of \(Y\). Let \(\tau:Y\to X\) be a Galois cover with Galois group \(G\). Let \(\mathfrak{o}\) be the order of \(G\).
There is a natural left action of \(G\) on \(\mathbf{K}(Y)\) defined by \[\sigma.f=f\circ\sigma^{-1}\quad\text{for }\,f\in\mathbf{K}(Y)\,\,\,\,\text{ and }\,\,\sigma\in G.\] There is a natural right action of \(G\) on meromorphic differentials defined by \[\omega.\sigma=\sigma^{*}\omega\quad\text{for }\,\omega\in\Omega^{1}_{\mathbf{K} (Y)/\mathbf{K}}\,\,\,\text{ and }\,\,\sigma\in G.\] These are \(\mathbf{K}(X)\)-linear actions. And the two actions are compatible in the sense that \[\big{(}\omega.\sigma\big{)}\big{(}\sigma^{-1}.f\big{)}=(\omega f).\sigma \tag{4}\] We study some free \(\mathbf{K}[G]\)-modules that arise naturally in this context. ### The residue ring of a non-ramified fiber Let \(P\) be a prime divisor (a place) on \(X\). Let \(t_{P}\) be a uniformizing parameter at \(P\). Let \[a=\deg(P).\] This is the degree over \(\mathbf{K}\) of the residue field \[\mathbf{K}_{P}=H^{0}(\mathcal{O}_{P},P).\] We assume that \(\tau\) is not ramified above \(P\) and let \(Q_{1}\) be a place above \(P\). We call \(G_{1}\) the decomposition group of \(Q_{1}\). This is the stabilizer of \(Q_{1}\) in \(G\). Places above \(P\) are parameterized by left cosets in \(G/G_{1}\). We write the fiber above \(P\) \[Q=\sum_{\sigma\in G/G_{1}}Q_{\sigma}\quad\text{with}\,\,\,\,Q_{\sigma}=\sigma (Q_{1}).\] We call \[b=[G:G_{1}]\] the number of places above \(P\) and let \[c=\mathfrak{o}/b=|G_{1}|\] be the residual degree, that is the degree of \[\mathbf{K}_{\sigma}=H^{0}(\mathcal{O}_{Q_{\sigma}},Q_{\sigma})\] over \(\mathbf{K}_{P}\) for all \(\sigma\in G/G_{1}\). We call \[\mathbf{R}_{Q}=H^{0}(\mathcal{O}_{Q},Q)\] the residue ring at \(Q\). The action of \(G\) on \(\mathbf{R}_{Q}\) makes it a free left \(\mathbf{K}[G]\)-module of rank \(a\). Indeed it is a free \(\mathbf{K}_{P}[G]\)-module of rank \(1\). A basis for it consists of any normal element \(\theta\) in \(\mathbf{K}_{1}/\mathbf{K}_{P}\). If \(m\) is a positive integer, Taylor expansion provides an isomorphism of \(\mathbf{K}_{P}[G]\)-modules \[H^{0}(\mathcal{O}_{Y}/\mathcal{O}_{Y}(-mQ),Y)\simeq\mathbf{R}_{Q}[t_{P}]/t_{P }^{m}\] between the residue ring at \(mQ\) and the ring of truncated series in \(t_{P}\). So the former is a free left \(\mathbf{K}_{P}[G]\)-module of rank \(m\). A basis for it is made of the \(\theta t_{P}^{k}\) for \(0\leqslant k<m\). ### The residue ring of a non-ramified \(G\)-equivariant divisor We take \(P\) an effective divisor on \(X\). We assume that \(\tau\) does not ramify above \(P\) and call \(Q\) the pullback of \(P\) by \(\tau\). We write \[P=\sum_{1\leqslant i\leqslant I}m_{i}P_{i}.\] We let \(t_{i}\) be a uniformizing parameter at \(P_{i}\). We call \(a_{i}\) the degree of the place \(P_{i}\). We call \(b_{i}\) the number of places of \(Y\) above \(P_{i}\). We let \(c_{i}=\mathfrak{o}/b_{i}\). For every \(1\leqslant i\leqslant I\) we choose a place \(Q_{i,1}\) above \(P_{i}\) and call \(G_{i,1}\) the decomposition group at \(Q_{i,1}\). We call \(Q_{i}\) the pullback of \(P_{i}\) by \(\tau\) and write \[Q_{i}=\sum_{\sigma\in G/G_{i,1}}Q_{i,\sigma}\quad\text{with}\ \ \ Q_{i,\sigma}=\sigma(Q_{i,1}).\] its decomposition as a sum of \(b_{i}\) places. We call \(\mathbf{K}_{i,\sigma}\) the residue field at \(Q_{i,\sigma}\). Taylor expansion induces an isomorphism of \(\mathbf{K}\)-algebras \[H^{0}(\mathcal{O}_{Q},Q)\simeq\bigoplus_{i=1}^{I}\ \bigoplus_{\sigma\in G/G_{i,1}} \mathbf{K}_{i,\sigma}[t_{i}]/t_{i}^{m_{i}} \tag{5}\] which is compatible with the action of \(G\). 
In the special case when all the places \(P_{i}\) have degree one, a basis for the \(\mathbf{K}[G]\)-module \(H^{0}(\mathcal{O}_{Q},Q)\) is made of the \(\theta_{i}t_{i}^{k_{i}}\) for \(1\leqslant i\leqslant I\) and \(0\leqslant k_{i}<m_{i}\), where \(\theta_{i}\) is a normal element in the extension \(\mathbf{K}_{i,1}/\mathbf{K}\). The proposition below follows from the discussion in this section and the previous one.

**Proposition 3**.: _Assume the hypotheses at the beginning of Section 3. Let \(P\) be an effective divisor on \(X\). Assume that \(\tau\) is not ramified above \(P\) and let \(Q\) be the pullback of \(P\) by \(\tau\). The residue ring \(H^{0}(\mathcal{O}_{Q},Q)\) is a free \(\mathbf{K}[G]\)-module of rank the degree of \(P\)._

### Duality

We denote by \(\mathbf{A}\) the right hand side of Equation (5). We need a dual of \(\mathbf{A}\) as a \(\mathbf{K}\)-vector space. We set

\[\hat{\mathbf{A}}=\bigoplus_{i=1}^{I}\bigoplus_{\sigma\in G/G_{i,1}}\big(\mathbf{K}_{i,\sigma}[t_{i}]/t_{i}^{m_{i}}\big)\,\frac{dt_{i}}{t_{i}^{m_{i}}}\simeq H^{0}(\Omega^{1}_{Y/\mathbf{K}}(-Q)/\Omega^{1}_{Y/\mathbf{K}},Y).\]

For \(f\in\mathbf{A}\) and \(\omega\in\hat{\mathbf{A}}\) we write \(<\omega,f>\) for the sum of the residues of \(\omega f\) at all the geometric points of \(Q\). This is a \(\mathbf{K}\)-bilinear form. We deduce from Equation (4) that this form is invariant by the action of \(G\):

\[<\omega.\sigma,f>=<\omega,\sigma.f>\]

We define a \(\mathbf{K}[G]\)-bilinear form using the construction in Section 2.1:

\[(f,\omega)=\sum_{\sigma\in G}<\omega.\sigma^{-1},f>\sigma\in\mathbf{K}[G]. \tag{6}\]

These two bilinear forms turn \(\hat{\mathbf{A}}\) into the dual of \(\mathbf{A}\) as a \(\mathbf{K}\)-vector space (resp. as a \(\mathbf{K}[G]\)-module). In the special case when all the places \(P_{i}\) have degree one, the dual basis to the basis introduced before Proposition 3 is made of the \(\mu_{i}t_{i}^{m_{i}-k_{i}}\,dt_{i}/t_{i}\) for \(1\leqslant i\leqslant I\) and \(0\leqslant k_{i}<m_{i}\), where \(\mu_{i}\) is the dual to the normal element \(\theta_{i}\) in the extension \(\mathbf{K}_{i,1}/\mathbf{K}\).

## 4. Free commutative actions

We study the situation at the beginning of Section 3 in the special case when the Galois cover \(\tau:Y\to X\) is abelian and unramified. We prove that large enough equivariant Riemann-Roch spaces are free \(\mathbf{K}[G]\)-modules. To this end we prove in Section 4.2 that evaluation at some fibers induces an isomorphism with some of the \(\mathbf{K}[G]\)-modules studied in Section 3.2. We will need non-special equivariant divisors on \(Y\). We first prove in Section 4.1 that such divisors exist. We introduce in Section 4.3 the evaluation, interpolation and checking matrices whose existence results from the freeness of the considered modules.

### Special invariant divisors

The pullback by \(\tau\) of a degree \(g_{X}-1\) divisor on \(X\) is a degree \(g_{Y}-1\) divisor on \(Y\). We need the following criterion from [9, §14] for the latter divisor to be special.

**Proposition 4**.: _Assume the hypotheses at the beginning of Section 3 with \(\tau\) abelian and unramified and \(\mathbf{K}\) algebraically closed. Let \(c\) be a divisor class of degree \(g_{X}-1\) on \(X\) and let \(\tau^{\star}(c)\) be its pullback on \(Y\).
If the class \(\tau^{\star}(c)\) is effective then \(c\) is the sum of an effective class of degree \(g_{X}-1\) and a class of degree \(0\) annihilated by \(\tau^{\star}\) and by \(\mathfrak{o}\)._

**Proof** Let \(D\) be a divisor in \(c\) and let \(E\) be the pullback of \(D\) by \(\tau\). We assume that \(\tau^{\star}(c)\) is effective. The space \(H^{0}(\mathcal{O}_{Y}(E),Y)\) is non-zero and is acted on by \(G\). Let \(f\) be an eigenvector for this action. The divisor of \(f\) is \(J-E\) where \(J\) is effective and stable under the action of \(G\). So there exists an effective divisor \(I\) on \(X\) such that \(J\) is the pullback of \(I\) by \(\tau\). And the class of \(I-D\) is annihilated by \(\tau^{\star}\). It is also annihilated by \(\mathfrak{o}\) because \(f^{\mathfrak{o}}\) is invariant by \(G\).

### Riemann-Roch spaces

Let \(E\) be a divisor on \(Y\) defined over \(\mathbf{K}\) and invariant by \(G\). The Riemann-Roch space \(H^{0}(\mathcal{O}_{Y}(E),Y)\) is a \(\mathbf{K}[G]\)-module. This module is free provided the degree of \(E\) is large enough.

**Proposition 5**.: _Assume the hypotheses at the beginning of Section 3 with \(\tau\) abelian and unramified. Let \(D\) be a divisor on \(X\) with degree \(\geqslant 2g_{X}-1\). Let \(E\) be the pullback of \(D\) by \(\tau\). The \(\mathbf{K}\)-vector space \(H^{0}(\mathcal{O}_{Y}(E),Y)\) is a free \(\mathbf{K}[G]\)-module of rank \(\deg(D)-g_{X}+1\)._

**Proof** We may assume that \(\mathbf{K}\) is algebraically closed because of the Noether-Deuring theorem [6, §2, Section 5]. Let \(k=\deg(D)-g_{X}+1\). We note that \(k\geqslant g_{X}\). So there exist \(k\) points

\[P_{1},P_{2},\ldots,P_{k}\quad\text{on}\quad X\]

such that the class of \(D-P_{1}-P_{2}-\cdots-P_{k}\) is not the sum of an effective class of degree \(g_{X}-1\) and a class annihilated by

\[\tau^{\star}:\operatorname{Pic}(X)\to\operatorname{Pic}(Y).\]

Let \(P\) be the divisor sum of all \(P_{i}\) and let \(Q\) be its pullback by \(\tau\). According to Proposition 4 the class of \(E-Q\) is not effective. The evaluation map

\[H^{0}(\mathcal{O}_{Y}(E),Y)\to\mathbf{K}[Q]\]

is therefore injective; since both sides have dimension \(\mathfrak{o}.k\) over \(\mathbf{K}\), it is an isomorphism. And \(\mathbf{K}[Q]\) is a free \(\mathbf{K}[G]\)-module of rank \(k\) according to Proposition 3.
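As a consistency check on the rank, note that since \(\tau\) is unramified of degree \(\mathfrak{o}\), the Riemann-Hurwitz formula gives \(g_{Y}-1=\mathfrak{o}(g_{X}-1)\), while \(\deg(E)=\mathfrak{o}\deg(D)\geqslant\mathfrak{o}(2g_{X}-1)>2g_{Y}-2\), so that \(E\) is non-special and Riemann-Roch yields

\[\dim_{\mathbf{K}}H^{0}(\mathcal{O}_{Y}(E),Y)=\deg(E)-g_{Y}+1=\mathfrak{o}\deg(D)-\mathfrak{o}(g_{X}-1)=\mathfrak{o}\big(\deg(D)-g_{X}+1\big),\]

which is \(\mathfrak{o}\) times the announced \(\mathbf{K}[G]\)-rank, as it must be for a free module of rank \(\deg(D)-g_{X}+1\).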
Similarly \(\Omega(-Q+E)\) has a free supplementary submodule in \(\hat{\mathbf{A}}\) that is isomorphic to \(\mathcal{L}(E)\). In the special case when all the places \(P_{i}\) have degree one, we have introduced a natural basis for \(\mathbf{A}\) before Proposition 3 and its dual basis \(\hat{\mathbf{A}}\) in Section 3.3, using Taylor expansions at the places above the \(P_{i}\). We choose \(\mathbf{K}[G]\)-bases for \(\mathcal{L}(E)\) and \(\Omega(-Q+E)\). We denote \(\mathcal{E}_{E}\) the \(\deg(P)\times(\deg(D)-g_{X}+1)\) matrix with coefficients in \(\mathbf{K}[G]\) of the evaluation map \(\mathcal{L}(E)\to\mathbf{A}\) in the chosen bases. We denote \(\mathcal{C}_{E}\) the \(\deg(P)\times(\deg(P)-\deg(D)+g_{X}-1)\) matrix of the map \(\varOmega(-Q+E)\to\hat{\mathbf{A}}\) in the chosen bases. The matrix \(\mathcal{C}_{E}\) checks that a vector in \(\mathbf{A}\) belongs to \(\mathcal{L}(E)\). Its left kernel is the image of \(\mathcal{E}_{E}\). So \[\mathcal{C}_{E}^{t}\times\mathcal{E}_{E}=0\] the zero \((\deg(P)-\deg(D)+g_{X}-1)\times(\deg(D)-g_{X}+1)\) matrix with coefficients in \(\mathbf{K}[G]\). We choose a \(\mathbf{K}[G]\)-linear projection \(\mathbf{A}\to\mathcal{L}(E)\) and denote \(\mathcal{I}_{E}\) the \((\deg(D)-g_{X}+1)\times\deg(P)\) matrix of this projection. This is an interpolation matrix since it recovers a function in \(\mathcal{L}(E)\) from its evaluation at \(Q\). Equivalently \[\mathcal{I}_{E}\times\mathcal{E}_{E}=1\] the \((\deg(D)-g_{X}+1)\times(\deg(D)-g_{X}+1)\) identity matrix with coefficients in \(\mathbf{K}[G]\). We note that applying either of the matrices \(\mathcal{E}_{E}\), \(\mathcal{C}_{E}\), \(\mathcal{I}_{E}\) requires at most a constant times \(\deg(P)^{2}\) operations in \(\mathbf{K}[G]\). ## 5. Pade approximants In the situation of the beginning of Section 3 and assuming that \(\tau\) is abelian and unramified we let \(D_{0}\), \(D_{1}\) and \(P\) be divisors on \(X\) with \(P\) effective. We assume that \(D_{0}\) and \(D_{1}\) are disjoint from \(P\). We call \(E_{0}\), \(E_{1}\), and \(Q\) the pullbacks of \(D_{0}\), \(D_{1}\), and \(P\) by \(\tau\). We assume that \[2g_{X}-1\leqslant\deg(D_{0})\leqslant\deg(D_{1})\leqslant\deg(P)-1. \tag{8}\] As a consequence, the \(\mathbf{K}[G]\)-modules \(\mathcal{L}(E_{0})\), \(\mathcal{L}(E_{1})\), \(\varOmega(-Q+E_{0})\), and \(\varOmega(-Q+E_{1})\) are free and the evaluation maps into \(\mathbf{A}\) and \(\hat{\mathbf{A}}\) are injective. Given \(r\) in \(\mathbf{A}\), \(a_{0}\neq 0\) in \(\mathcal{L}(E_{0})\) and \(a_{1}\) in \(\mathcal{L}(E_{1})\) such that \[a_{0}r-a_{1}=0\in\mathbf{A},\] we say that \((a_{0},a_{1})\) is a Pade approximant of \(r\) and call \(a_{0}\) a **denominator** for \(r\). Denominators for \(r\) are non-zero \(a_{0}\) in \(\mathcal{L}(E_{0})\subset\mathbf{A}\) such that \[a_{0}r\in\mathcal{L}(E_{1}).\] Equivalently \[(a_{0}r,\omega)=0\ \ \ \text{for every}\ \ \ \omega\in\varOmega(-Q+E_{1}). \tag{9}\] Denominators are thus non-zero solutions of a \(\mathbf{K}\)-linear system of equations. We note that this is not a \(\mathbf{K}[G]\)-linear system in general. In Section 5.1 we show that one can be a bit more explicit in some cases. We consider the problem of computing Pade approximants in Section 5.2. ### The split case Assume that \(P=P_{1}+\cdots+P_{n}\) is a sum of \(n\) pairwise distinct rational points over \(\mathbf{K}\). Assume that the fiber of \(\tau\) above each \(P_{i}\) decomposes as a sum of \(\mathfrak{o}\) rational points over \(\mathbf{K}\). 
We choose a point \(Q_{i,1}\) above each \(P_{i}\) and set \[Q_{i,\sigma}=\sigma(Q_{i,1})\ \ \ \text{for every}\ \ \ \sigma\in G.\] For every \(1\leqslant i\leqslant n\) we call \(\alpha_{i}\) the function in \(\mathbf{A}\) that takes value \(1\) at \(Q_{i,1}\) and zero everywhere else. We thus form a basis \[\mathcal{A}_{G}=(\alpha_{i})_{1\leqslant i\leqslant n}\] of \(\mathbf{A}\) over \(\mathbf{K}[G]\). We note \(\hat{\mathcal{A}}_{G}\) its dual basis. For every \(1\leqslant i\leqslant n\) and \(\sigma\in G\) we call \[\alpha_{i,\sigma}=\sigma.\alpha_{i}=\alpha_{i}\circ\sigma^{-1}\] the function in \(\mathbf{A}\) that takes value \(1\) at \(Q_{i,\sigma}\) and zero everywhere else. We thus form a basis \[\mathcal{A}_{\mathbf{K}}=(\alpha_{i,\sigma})_{1\leqslant i\leqslant n,\, \sigma\in G}\] of \(\mathbf{A}\) over \(\mathbf{K}\). The coordinates of \(r\) in the \(\mathbf{K}[G]\)-basis \(\mathcal{A}_{G}\) are \[r_{G}=(\sum_{\sigma\in G}r(Q_{i,\sigma})\sigma)_{1\leqslant i\leqslant n}\] and the coordinates of \(r\in\mathbf{A}\) in the \(\mathbf{K}\)-basis \(\mathcal{A}_{\mathbf{K}}\) are \[r_{\mathbf{K}}=(r(Q_{i,\sigma}))_{1\leqslant i\leqslant n,\,\sigma\in G}.\] Multiplication by \(r\) is a \(\mathbf{K}\)-linear map from \(\mathbf{A}\) to \(\mathbf{A}\). We call \[\mathcal{R}_{\mathbf{K}}\in\mathcal{M}_{\mathfrak{o}.n,\mathfrak{o}.n}( \mathbf{K})\] the \(\mathfrak{o}.n\times\mathfrak{o}.n\) diagonal matrix of this map in the basis \(\mathcal{A}_{\mathbf{K}}\). We choose a \(\mathbf{K}[G]\)-basis \(\mathcal{Z}_{G}\) for \(\mathcal{L}(E_{0})\) and denote \(\mathcal{E}_{G}^{0}\) the \(\deg(P)\times(\deg(D_{0})-g_{X}+1)\) matrix of the \(\mathbf{K}[G]\)-linear map \[\mathcal{L}(E_{0})\to\mathbf{A} \tag{10}\] in the bases \(\mathcal{Z}_{G}\) and \(\mathcal{A}_{G}\). We denote \(\mathcal{Z}_{\mathbf{K}}\) the \(\mathbf{K}\)-basis of \(\mathcal{L}(E_{0})\) obtained by letting \(G\) act on \(\mathcal{Z}_{G}\). Call \(\mathcal{E}_{\mathbf{K}}^{0}\) the matrix of the map (10) in the bases \(\mathcal{Z}_{\mathbf{K}}\) and \(\mathcal{A}_{\mathbf{K}}\). The matrix \(\mathcal{E}_{\mathbf{K}}^{0}\) is obtained from \(\mathcal{E}_{G}^{0}\) by replacing each \(\mathbf{K}[G]\) entry by the corresponding \(\mathfrak{o}\times\mathfrak{o}\) circulant-like matrix with entries in \(\mathbf{K}\). We choose a \(\mathbf{K}[G]\)-basis \(\mathcal{U}_{G}\) for \(\Omega(-Q+E_{1})\) and denote \(\mathcal{C}_{G}^{1}\) the matrix of the injective map \[\Omega(-Q+E_{1})\to\hat{\mathbf{A}} \tag{11}\] in the bases \(\mathcal{U}_{G}\) and \(\hat{\mathcal{A}}_{G}\). This is a \(\deg(P)\times(\deg(P)-\deg(D_{1})+g_{X}-1)\) matrix with entries in \(\mathbf{K}[G]\). We denote \(\mathcal{U}_{\mathbf{K}}\) the \(\mathbf{K}\)-basis of \(\Omega(-Q+E_{1})\) obtained by letting \(G\) act on \(\mathcal{U}_{G}\). The matrix of the map (11) in the bases \(\mathcal{U}_{\mathbf{K}}\) and \(\hat{\mathcal{A}}_{\mathbf{K}}\) is called \(\mathcal{C}_{\mathbf{K}}^{1}\). Let \(a_{0}\) in \(\mathcal{L}(E_{0})\) and let \(x_{G}\) be the coordinates of \(a_{0}\) in the \(\mathbf{K}[G]\)-basis \(\mathcal{Z}_{G}\). This is a column of height \(\deg(D_{0})-g_{X}+1\). We call \(x_{\mathbf{K}}\) the coordinates of \(a_{0}\) in the \(\mathbf{K}\)-basis \(\mathcal{Z}_{\mathbf{K}}\). This is a column of height \(\mathfrak{o}.(\deg(D_{0})-g_{X}+1)\) obtained from \(x_{G}\) by replacing each entry by its \(\mathfrak{o}\) coefficients in the canonical basis of \(\mathbf{K}[G]\). 
We deduce from Equation (9) that \(a_{0}\) is a denominator for \(r\) if and only if \(x_{\mathbf{K}}\) is in the kernel of the matrix

\[\mathcal{D}_{r}=(\mathcal{C}_{\mathbf{K}}^{1})^{t}\times\mathcal{R}_{\mathbf{K}}\times\mathcal{E}_{\mathbf{K}}^{0}\in\mathcal{M}_{\mathfrak{o}.(\deg P-\deg D_{1}+g_{X}-1)\times\mathfrak{o}.(\deg D_{0}-g_{X}+1)}(\mathbf{K}).\]

**Proposition 6**.: _Assume we are in the context of the beginning of Section 5. In particular assume Equation (8), assume that \(P\) is a sum of \(n\) pairwise distinct \(\mathbf{K}\)-rational points, and that the \(n\) corresponding fibers of \(\tau\) split over \(\mathbf{K}\). Assume we are given the matrices \(\mathcal{E}_{\mathbf{K}}^{0}\) and \(\mathcal{C}_{\mathbf{K}}^{1}\). On input an \(r=(r(Q_{i,\sigma}))_{1\leqslant i\leqslant n,\,\sigma\in G}\) in \(\mathbf{A}\) and some \(a_{0}\) in \(\mathcal{L}(E_{0})\), given by its coordinates \(x_{\mathbf{K}}\) in the basis \(\mathcal{Z}_{\mathbf{K}}\), one can check if \(a_{0}r\in\mathcal{L}(E_{1})\) at the expense of \(\mathcal{Q}.n^{2}\) operations in \(\mathbf{K}[G]\) (addition, multiplication) and \(\mathcal{Q}.\mathfrak{o}.n\) operations in \(\mathbf{K}\) (addition, multiplication), where \(\mathcal{Q}\) is some absolute constant._

**Proof** We first multiply \(x_{\mathbf{K}}\) by \(\mathcal{E}_{\mathbf{K}}^{0}\). This requires less than \(2\deg(P)\times(\deg(D_{0})-g_{X}+1)\) operations in \(\mathbf{K}[G]\). We then multiply the result by \(\mathcal{R}_{\mathbf{K}}\). This requires less than \(\mathfrak{o}.\deg(P)\) operations in \(\mathbf{K}\). We finally multiply the result by \((\mathcal{C}_{\mathbf{K}}^{1})^{t}\). This requires less than \(2\deg(P)\times(\deg(P)-\deg(D_{1})+g_{X}-1)\) operations in \(\mathbf{K}[G]\). \(\Box\)

### Computing Pade approximants

Being able to check a denominator, we can find a random one (if there is one) using an iterative method as in [43, 19]. Recall that an \(\ell\times n\) **black box** matrix \(A\) with coefficients in a field \(\mathbf{K}\) is an oracle that on input an \(n\times 1\) vector \(x\) returns \(Ax\).
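The check of Proposition 6 is naturally phrased with black boxes: one never forms the matrix \(\mathcal{D}_{r}\), but applies its three factors in turn. Below is a schematic Python sketch over \(\mathbf{K}=\mathbf{Z}/p\mathbf{Z}\); the matrices are random stand-ins for \(\mathcal{E}_{\mathbf{K}}^{0}\), \(\mathcal{R}_{\mathbf{K}}\) and \(\mathcal{C}_{\mathbf{K}}^{1}\), whose actual computation is the subject of Section 7.

```python
import numpy as np

p = 101
rng = np.random.default_rng(0)

# stand-ins for the evaluation matrix E0, the diagonal matrix of r, and C1
n_eval, n_den, n_chk = 12, 5, 4           # heights/widths chosen arbitrarily
E0 = rng.integers(0, p, (n_eval, n_den))
R  = np.diag(rng.integers(0, p, n_eval))  # multiplication by r is diagonal
C1 = rng.integers(0, p, (n_eval, n_chk))

def black_box(x):
    """Apply D_r = C1^t . R . E0 without ever forming the product."""
    return (C1.T @ (R @ (E0 @ x % p) % p)) % p

x = rng.integers(0, p, n_den)
assert np.array_equal(black_box(x), (C1.T @ R @ E0 % p) @ x % p)
```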
**Proposition 7** (Wiedemann, Kaltofen, Saunders).: _There exists a probabilistic (Las Vegas) algorithm that takes as input an \(\ell\times n\) black box matrix \(A\) and an \(\ell\times 1\) vector \(b\) with entries in a finite field \(\mathbf{K}\) and returns a uniformly distributed random solution \(x\) to the system \(Ax=b\) with probability of success \(\geqslant 1/2\) at the expense of \(\mathcal{Q}.m.\log m\) calls to the black box for \(A\) and \(\mathcal{Q}.m^{2}.(\log(m))^{2}\) operations in \(\mathbf{K}\) (addition, multiplication, inversion, picking a random element), where \(m=\max(\ell,n)\) and \(\mathcal{Q}\) is some absolute constant._

From Propositions 6 and 7 we deduce

**Proposition 8**.: _Under the hypotheses of Proposition 6 and on input a vector \(r=(r(Q_{i,\sigma}))_{i,\sigma}\) in \(\mathbf{A}\) one can find a uniformly distributed random denominator (if there is one) for \(r\) with probability of success \(\geqslant 1/2\) at the expense of \(\mathcal{Q}.\mathfrak{o}.n^{3}.\log(\mathfrak{o}.n)\) operations in \(\mathbf{K}[G]\) (addition, multiplication) and \(\mathcal{Q}.(\mathfrak{o}.n.\log(\mathfrak{o}.n))^{2}\) operations in \(\mathbf{K}\) (addition, multiplication, inversion, picking a random element), where \(\mathcal{Q}\) is some absolute constant._

Once we have found a denominator \(a_{0}\) for \(r\) we set \(a_{1}=ra_{0}\) and recover the coordinates of \(a_{1}\) by applying the interpolation matrix associated to \(E_{1}\).

## 6. Computing in the group algebra

Given a finite commutative group \(G\) and a finite field \(\mathbf{K}\) we will need efficient algorithms to multiply in \(\mathbf{K}[G]\). This is classically achieved using discrete Fourier transform when \(G\) is cyclic and \(\mathbf{K}\) contains enough roots of unity. The complexity analysis requires some care in general. This is the purpose of this section. We recall in Section 6.1 the definition of Fourier transform in the setting of commutative finite groups. The most classical case of cyclic groups is studied in Section 6.2 from an algorithmic point of view. The general case follows by induction as explained in Section 6.3. The complexity of the resulting multiplication algorithm in \(\mathbf{K}[G]\) is bounded in Section 6.4.

### Fourier transform

Let \(G\) be a finite commutative group. Let \(\mathfrak{o}\) be the order of \(G\). Let \(e\) be its exponent. Let \(\mathbf{K}\) be a commutative field containing a primitive \(e\)-th root of unity. In particular \(e\) and \(\mathfrak{o}\) are non-zero in \(\mathbf{K}\). Let \(\hat{G}\) be the dual of \(G\) defined as the group of characters \(\chi:G\to\mathbf{K}^{*}\). We define a map from the group algebra of \(G\) to the algebra of functions on \(G\):

\[\top\colon\mathbf{K}[G]\to\operatorname{Hom}(G,\mathbf{K}),\qquad\sum_{\sigma\in G}a_{\sigma}\sigma\ \mapsto\ \big(\sigma\mapsto a_{\sigma}\big).\]

This is an isomorphism of \(\mathbf{K}\)-vector spaces. We call \(\perp:\operatorname{Hom}(G,\mathbf{K})\to\mathbf{K}[G]\) the reciprocal map. We dually define

\[\hat{\top}\colon\mathbf{K}[\hat{G}]\to\operatorname{Hom}(\hat{G},\mathbf{K})\]

and its reciprocal map \(\hat{\perp}\). We call

\[\iota_{G}:\mathbf{K}[G]\to\mathbf{K}[G]\]

the \(\mathbf{K}\)-linear involution that maps \(\sigma\) onto \(\sigma^{-1}\). We define the Fourier transform

\[\operatorname{FT}_{G}\colon\mathbf{K}[G]\to\operatorname{Hom}(\hat{G},\mathbf{K}),\qquad\sum_{\sigma\in G}a_{\sigma}\sigma\ \mapsto\ \Big(\chi\mapsto\sum_{\sigma\in G}a_{\sigma}\chi(\sigma)\Big).\]

The Fourier transform evaluates an element in the group algebra at every character.
The Fourier transform of the dual group

\[\operatorname{FT}_{\hat{G}}\colon\mathbf{K}[\hat{G}]\to\operatorname{Hom}(G,\mathbf{K}),\qquad\sum_{\chi\in\hat{G}}a_{\chi}\chi\ \mapsto\ \Big(\sigma\mapsto\sum_{\chi}a_{\chi}\chi(\sigma)\Big)\]

provides an inverse for \(\operatorname{FT}_{G}\) in the sense that

\[\perp\circ\operatorname{FT}_{\hat{G}}\circ\hat{\perp}\circ\operatorname{FT}_{G}=\mathfrak{o}.\iota_{G}\]

where \(\mathfrak{o}.\iota_{G}\) is the \(\mathbf{K}\)-linear invertible map that sends \(\sigma\) to \(\mathfrak{o}.\sigma^{-1}\). Let \(M\) be a finite dimensional \(\mathbf{K}\)-vector space. We set

\[M[G]=M\otimes_{\mathbf{K}}\mathbf{K}[G]\]

and note that

\[\operatorname{Hom}(\hat{G},M)=M\otimes_{\mathbf{K}}\operatorname{Hom}(\hat{G},\mathbf{K}).\]

We define a Fourier transform on \(M\):

\[\operatorname{FT}_{M}=\operatorname{id}_{M}\otimes\operatorname{FT}_{G}\colon M[G]\to\operatorname{Hom}(\hat{G},M).\]

### Univariate Fourier transform

We assume in this section that the group \(G\) is cyclic of order \(\mathfrak{o}\). We choose a primitive \(\mathfrak{o}\)-th root of unity \(\omega\) in \(\mathbf{K}\). We choose a generator in \(G\) and deduce the following identifications

\[\operatorname{Hom}(G,\mathbf{K})=\mathbf{K}^{\mathfrak{o}}\quad\text{and}\quad\mathbf{K}[G]=\mathbf{K}[x]/(x^{\mathfrak{o}}-1).\]

Let \(M\) be a finite dimensional \(\mathbf{K}\)-vector space. Setting

\[M[x]=M\otimes_{\mathbf{K}}\mathbf{K}[x]\quad\text{and}\quad M[G]=M\otimes_{\mathbf{K}}\mathbf{K}[x]/(x^{\mathfrak{o}}-1),\]

the Fourier transform is

\[\operatorname{FT}_{M}\colon M[G]\to M^{\mathfrak{o}},\qquad\sum_{0\leqslant k<\mathfrak{o}}m_{k}\otimes x^{k}\ \mapsto\ \Big(\sum_{0\leqslant k<\mathfrak{o}}\omega^{jk}\,m_{k}\Big)_{0\leqslant j<\mathfrak{o}}.\]

Given \(m\) in \(M[G]=M\otimes_{\mathbf{K}}\mathbf{K}[x]/(x^{\mathfrak{o}}-1)\) the computation of \(\operatorname{FT}_{M}(m)\) reduces to the multiplication of a polynomial of degree \(2\mathfrak{o}-2\) in \(\mathbf{K}[x]\) and a vector of degree \(\mathfrak{o}-1\) in \(M[x]\). This is the key to the proof of the proposition below.

**Proposition 9**.: _Let \(\mathbf{K}\) be a commutative field. Let \(M\) be a finite dimensional \(\mathbf{K}\)-vector space. Let \(\mathfrak{o}\geqslant 2\) be an integer. Assume that \(\mathbf{K}\) contains a primitive \(\mathfrak{o}\)-th root of unity \(\omega\) and a primitive root of unity of order a power of two that is bigger than \(3\mathfrak{o}-3\). Let_

\[m=m_{0}\otimes 1+m_{1}\otimes x+\cdots+m_{\mathfrak{o}-1}\otimes x^{\mathfrak{o}-1}\bmod x^{\mathfrak{o}}-1\in M\otimes_{\mathbf{K}}\mathbf{K}[x]/(x^{\mathfrak{o}}-1).\]

_One can compute \(\operatorname{FT}_{M}(m)\) at the expense of \(\mathcal{Q}.\mathfrak{o}.\log\mathfrak{o}\) additions, multiplications and inversions in \(\mathbf{K}\), additions and scalar multiplications in \(M\), where \(\mathcal{Q}\) is an absolute constant._

**Proof** We adapt the proof from [5, I.5.4, Proposition 5.10]. For every \(0\leqslant i\leqslant 2\mathfrak{o}-2\) let

\[t_{i}=i(i-1)/2\quad\text{and}\quad\beta_{i}=\omega^{t_{i}}.\]

We note that

\[t_{i+1}=t_{i}+i\quad\text{and}\quad\beta_{i+1}=\beta_{i}\omega^{i}.\]

So one can compute the \(\beta_{i}\) for \(0\leqslant i\leqslant 2\mathfrak{o}-2\) at the expense of \(4\mathfrak{o}\) operations in \(\mathbf{K}\). We then compute the inverse of every \(\beta_{i}\). For every \(0\leqslant i\leqslant\mathfrak{o}-1\) let

\[n_{i}=\beta_{i}^{-1}m_{i}.\]

These can be computed at the expense of \(\mathfrak{o}\) scalar multiplications in \(M\). Let

\[n(x)=n_{\mathfrak{o}-1}+n_{\mathfrak{o}-2}\otimes x+\cdots+n_{0}\otimes x^{\mathfrak{o}-1}\in M[x]\]

and let

\[b(x)=\beta_{0}+\beta_{1}x+\cdots+\beta_{2\mathfrak{o}-2}x^{2\mathfrak{o}-2}\in\mathbf{K}[x].\]

Let

\[r(x)=b(x).n(x)=\sum_{0\leqslant i\leqslant 3\mathfrak{o}-3}r_{i}\otimes x^{i}\in M[x].\]

From the identity

\[t_{i+j}=t_{i}+t_{j}+ij\]

we deduce

\[\omega^{ij}\beta_{i}\beta_{j}=\beta_{i+j}\quad\text{for}\quad 0\leqslant i,j\leqslant\mathfrak{o}-1\]

and

\[\sum_{j=0}^{\mathfrak{o}-1}\omega^{ij}m_{j}=\beta_{i}^{-1}\sum_{j=0}^{\mathfrak{o}-1}\beta_{i+j}n_{j}.\]

We deduce that \(\operatorname{FT}_{M}^{\omega}(m)=(\beta_{0}^{-1}r_{\mathfrak{o}-1},\beta_{1}^{-1}r_{\mathfrak{o}},\beta_{2}^{-1}r_{\mathfrak{o}+1},\ldots,\beta_{\mathfrak{o}-1}^{-1}r_{2\mathfrak{o}-2})\). Since \(\mathbf{K}\) contains a primitive root of unity of order a power of two that is bigger than \(3\mathfrak{o}-3\), the coefficients in the product \(r(x)=b(x).n(x)\) can be computed at the expense of \(\mathcal{Q}.\mathfrak{o}.\log\mathfrak{o}\) operations in \(\mathbf{K}\), additions in \(M\) and products of a vector in \(M\) by a scalar in \(\mathbf{K}\). See [5, I.2.4, Algorithme 2.3].
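The chirp identity at the heart of this proof is easy to test numerically. Here is a sketch over the complex numbers (so \(M=\mathbf{C}\); over a finite field one would use modular roots of unity instead):

```python
import numpy as np

o = 5
omega = np.exp(2j * np.pi / o)            # primitive o-th root of unity
m = np.arange(1, o + 1, dtype=complex)    # some vector of length o

t = np.array([i * (i - 1) // 2 for i in range(2 * o - 1)])
beta = omega ** t                         # beta_i = omega^{i(i-1)/2}

n = m / beta[:o]                          # n_j = beta_j^{-1} m_j
# direct DFT: sum_j omega^{ij} m_j
direct = np.array([sum(omega ** (i * j) * m[j] for j in range(o))
                   for i in range(o)])
# chirp form: beta_i^{-1} sum_j beta_{i+j} n_j, a plain convolution
chirp = np.array([sum(beta[i + j] * n[j] for j in range(o)) / beta[i]
                  for i in range(o)])
assert np.allclose(direct, chirp)
```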
### Multivariate Fourier transform

Let \((\mathfrak{o}_{i})_{1\leqslant i\leqslant I}\) be integers such that \(2\leqslant\mathfrak{o}_{1}|\mathfrak{o}_{2}|\ldots|\mathfrak{o}_{I}\). Let \(G=\prod_{1\leqslant i\leqslant I}(\mathbf{Z}/\mathfrak{o}_{i}\mathbf{Z})\). Let \(\mathfrak{o}=\prod_{i}\mathfrak{o}_{i}\) be the order of \(G\). Let \(\mathbf{K}\) be a commutative field containing a primitive root of unity of order \(e=\mathfrak{o}_{I}\), the exponent of \(G\). The group algebra \(\mathbf{K}[G]\) is isomorphic to \(\mathbf{K}[x_{1},\ldots,x_{I}]/(x_{1}^{\mathfrak{o}_{1}}-1,\ldots,x_{I}^{\mathfrak{o}_{I}}-1)\). We set \(A_{i}=\mathbf{K}[x_{i}]/(x_{i}^{\mathfrak{o}_{i}}-1)\) and write \(\mathbf{K}[G]\) as a tensor product of \(\mathbf{K}\)-algebras

\[\mathbf{K}[G]=A_{1}\otimes_{\mathbf{K}}A_{2}\otimes_{\mathbf{K}}\cdots\otimes_{\mathbf{K}}A_{I}.\]

For every \(1\leqslant i\leqslant I\) we call \(M_{i}\) the tensor product of all \(A_{j}\) but \(A_{i}\). This is a \(\mathbf{K}\)-vector space of dimension \(\mathfrak{o}/\mathfrak{o}_{i}\). We can see \(\mathbf{K}[G]\) as \(M_{i}\otimes\mathbf{K}[x]/(x^{\mathfrak{o}_{i}}-1)\). We denote \(\mathrm{FT}_{i}\) the corresponding univariate Fourier transform as defined in Section 6.2. We have

\[\mathrm{FT}_{G}=\mathrm{FT}_{1}\circ\mathrm{FT}_{2}\circ\cdots\circ\mathrm{FT}_{I}\,.\]

Using Proposition 9 we deduce

**Proposition 10**.: _Let \((\mathfrak{o}_{i})_{1\leqslant i\leqslant I}\) be integers such that \(2\leqslant\mathfrak{o}_{1}|\mathfrak{o}_{2}|\ldots|\mathfrak{o}_{I}\). Let \(G=\prod_{1\leqslant i\leqslant I}(\mathbf{Z}/\mathfrak{o}_{i}\mathbf{Z})\). Let \(\mathfrak{o}\) be the order of \(G\). Let \(e=\mathfrak{o}_{I}\) be the exponent of \(G\). Let \(\mathbf{K}\) be a commutative field containing a primitive root of unity of order \(e\) and a primitive root of unity of order a power of two that is bigger than \(3e-3\). Given an element \(a=\sum_{\sigma\in G}a_{\sigma}\sigma\) in \(\mathbf{K}[G]\) one can compute \(\mathrm{FT}_{G}(a)\) in \(\mathrm{Hom}(\hat{G},\mathbf{K})\) at the expense of \(\mathcal{Q}.\mathfrak{o}.\log\mathfrak{o}\) additions, multiplications and inversions in \(\mathbf{K}\), where \(\mathcal{Q}\) is some absolute constant._
Here \(\mathcal{Q}\) is some absolute constant._ ### 6.4. Fast multiplication in \(\mathbf{K}[G]\) Let \(G\), \(\mathfrak{o}\), \(e\) be as in Section 6.3. Let \(\mathbf{K}\) be a commutative field. In this section we study the algorithmic complexity of computing the product of two given elements \[a=\sum_{\sigma\in G}a_{\sigma}\sigma\quad\text{and}\quad b=\sum_{\sigma\in G}b_{\sigma}\sigma\ \ \text{in}\ \ \ \mathbf{K}[G]. \tag{12}\] It will depend on the field \(\mathbf{K}\). We first treat the case when \(\mathbf{K}\) has enough roots of unity. **Proposition 11**.: _In the context of the beginning of Section 6.4 assume that \(\mathbf{K}\) contains a primitive root of unity of order \(e\) and a primitive root of unity of order a power of two that is bigger than \(3e-3\). One can compute the product \(ab\in\mathbf{K}[G]\) at the expense of \(\mathcal{Q}.\mathfrak{o}.\log\mathfrak{o}\) operations in \(\mathbf{K}\) where \(\mathcal{Q}\) is some absolute constant._ **Proof** We compute \(A=\mathrm{FT}_{G}(a)\) and \(B=\mathrm{FT}_{G}(b)\) as in Section 6.3. We then compute the pointwise product \(C=AB\) in \(\mathrm{Hom}(\hat{G},\mathbf{K})\) at the expense of \(\mathfrak{o}\) multiplications in \(\mathbf{K}\). We then deduce \(c=ab\) by applying \(\mathrm{FT}_{G}^{-1}\) to \(C\). The cost of this computation is bounded using Proposition 10. \(\Box\) We now consider the case when \(\mathbf{K}\) is \(\mathbf{Z}/p\mathbf{Z}\) where \(p\) is a prime integer. In general \(\mathbf{K}\) lacks the required roots of unity. So we transport the problem into another ring using non-algebraic maps. We let \(t\) be the smallest power of \(2\) that is bigger than \(3e-3\). Let \(p^{\prime}\) be the smallest prime integer congruent to \(1\) modulo \(\mathfrak{o}.(p-1)^{2}.t\). We set \(\mathbf{K}^{\prime}=\mathbf{Z}/p^{\prime}\mathbf{Z}\) and note that \(\mathbf{K}^{\prime}\) contains a primitive root of unity of order \(e\) and a primitive root of unity of order a power of two bigger than \(3e-3\). Also \[p^{\prime}>\mathfrak{o}.(p-1)^{2}.\] By a result of Heath-Brown, the exponent in Linnik's theorem for primes in arithmetic progressions can be taken to be \(11/2\). See [16] and the recent improvement [44]. We deduce that there exists an absolute constant \(\mathcal{Q}\) such that \[p^{\prime}\leqslant\mathcal{Q}(\mathfrak{o}.p)^{11}.\] For \(c\) a congruence class in \(\mathbf{K}=\mathbf{Z}/p\mathbf{Z}\) we denote \(\ell(c)\) the lift of \(c\), that is the unique integer in the intersection of \(c\) with the interval \([0,p[\). We write \[\uparrow(c)=\ell(c)\bmod p^{\prime}. \tag{13}\] We thus define maps \(\ell:\mathbf{K}\to\mathbf{Z}\) and \(\uparrow:\mathbf{K}\to\mathbf{K}^{\prime}\). We similarly define the lifting map \(\ell^{\prime}:\mathbf{K}^{\prime}\to\mathbf{Z}\) and \(\downarrow:\mathbf{K}^{\prime}\to\mathbf{K}\) by \[\downarrow(c)=\ell^{\prime}(c)\bmod p\quad\text{for}\ \ c\in\mathbf{K}^{\prime}. \tag{14}\] These four maps can be extended to the corresponding group algebras by coefficientwise application. Given \(a\) and \(b\) as in Equation (12) we define \[A=\ell(a)=\sum_{\sigma\in G}\ell(a_{\sigma})\sigma\quad\text{and}\quad B=\ell(b)=\sum_{\sigma\in G}\ell(b_{\sigma})\sigma\ \ \text{in}\ \ \ \mathbf{Z}[G]\quad\text{and}\quad C=AB.\] The coefficients in \(C\) belong to the interval \([0,\mathfrak{o}.(p-1)^{2}[\). 
So \[C=\ell^{\prime}\big{(}\big{(}A\bmod p^{\prime}\big{)}\times\big{(}B\bmod p^{\prime}\big{)}\big{)}\quad\text{and}\quad ab=\downarrow\big{(}\uparrow(a)\uparrow(b)\big{)}.\] Using Proposition 11 we deduce **Proposition 12**.: _There exists an absolute constant \(\mathcal{Q}\) such that the following is true. Let \(G\), \(\mathfrak{o}\), \(e\) be as in Section 6.3. Let \(\mathbf{K}=\mathbf{Z}/p\mathbf{Z}\). There exists a prime integer \(p^{\prime}\leqslant\mathcal{Q}(\mathfrak{o}.p)^{11}\) and a straight-line program of length smaller than \(\mathcal{Q}.\mathfrak{o}.\log\mathfrak{o}\) that computes the product \(c=\sum_{g}c_{g}[g]\) of two elements \(a=\sum_{g}a_{g}[g]\) and \(b=\sum_{g}b_{g}[g]\) in \(\mathbf{K}[G]\) given by their coefficients \((a_{g})_{g}\) and \((b_{g})_{g}\). The operations in this straight-line program are additions and multiplications in \((\mathbf{Z}/p^{\prime}\mathbf{Z})\) and evaluations of the maps \(\uparrow\) and \(\downarrow\) defined in Equations (13) and (14)._ Now let \(\mathbf{L}\) be a field extension of degree \(d\) of \(\mathbf{K}=\mathbf{Z}/p\mathbf{Z}\). We assume that elements in \(\mathbf{L}\) are represented by their coordinates in some \(\mathbf{K}\)-basis of \(\mathbf{L}\). Work by Shparlinski, Tsfasman, Vladut [38], Shokrollahi [37], Ballet and Rolland [2, 3], Chaumine [7], Randriambololona [30] and others implies that the \(\mathbf{K}\)-bilinear complexity of \(\mathbf{L}\) is bounded by an absolute constant times \(d\). We deduce the following proposition. **Proposition 13**.: _There exists an absolute constant \(\mathcal{Q}\) such that the following is true. Let \(G\), \(\mathfrak{o}\), \(e\) be as in Section 6.3. Let \(\mathbf{K}=\mathbf{Z}/p\mathbf{Z}\) and \(\mathbf{L}\) a field extension of degree \(d\) of \(\mathbf{K}\). There exists a prime integer \(p^{\prime}\leqslant\mathcal{Q}(\mathfrak{o}.p)^{11}\) and a straight-line program of length \(\leqslant\mathcal{Q}(d.\mathfrak{o}.\log\mathfrak{o}+d^{2}.\mathfrak{o})\) that computes the product \(c=\sum_{g}c_{g}[g]\) of two elements \(a=\sum_{g}a_{g}[g]\) and \(b=\sum_{g}b_{g}[g]\) in \(\mathbf{L}[G]\) given by their coefficients \((a_{g})_{g}\) and \((b_{g})_{g}\). The operations in this straight-line program are additions and multiplications in \((\mathbf{Z}/p\mathbf{Z})\) and in \((\mathbf{Z}/p^{\prime}\mathbf{Z})\) and evaluations of the maps \(\uparrow\) and \(\downarrow\) defined in Equations (13) and (14)._ ## 7. Constructing functions in the Hilbert class field We have defined in Section 4 matrices \(\mathcal{E}\), \(\mathcal{C}\) and \(\mathcal{I}\) for the evaluation and interpolation of functions in the linear space of global sections of a \(G\)-equivariant invertible sheaf on a curve \(Y\). We have seen in Sections 4, 5, and 6 how to efficiently compute with these matrices. In this section we address the problem of computing these matrices. We recall in Section 7.1 the necessary background from class field theory of function fields over a finite field. We illustrate the constructive aspects of class fields on a small example in Section 7.2. An important feature of this method is that we only work with divisors and functions on \(X\). This is of some importance since in the applications we have in mind the genus of \(Y\) is much larger (e.g. exponentially) than the genus of \(X\). ### 7.1. Class field theory and the jacobian variety We start from a projective curve \(X\) over a finite field \(\mathbf{K}\) of characteristic \(p\). We assume that \(X\) is smooth and absolutely integral. 
We let \(\bar{\mathbf{K}}\) be an algebraic closure of \(\mathbf{K}\). We need an abelian cover \(\tau:Y\to X\), with \(Y\) absolutely integral. We will require that \(Y\) have a \(\mathbf{K}\)-rational point \(Q_{1}\). This implies that \(\tau\) is completely split above \(P_{1}=\tau(Q_{1})\). According to class field theory [35, 31] there is a maximal abelian unramified cover of \(X\) over \(\mathbf{K}\) that splits totally above \(P_{1}\). We briefly recall its geometric construction. Let \(J_{X}\) be the jacobian variety of \(X\) and let \[j_{X}:X\to J_{X}\] be the Jacobi map with origin \(P_{1}\). Let \[F_{\mathbf{K}}:J_{X}\to J_{X}\] be the Frobenius endomorphism of degree \(|\mathbf{K}|\), the cardinality of \(\mathbf{K}\). The endomorphism \[\wp=F_{\mathbf{K}}-1:J_{X}\to J_{X}\] is an unramified Galois cover between \(\mathbf{K}\)-varieties with Galois group \(J_{X}(\mathbf{K})\). We denote \[\tau_{\max}:Y_{\max}\to X\] the pullback of \(\wp\) along \(j_{X}\). This is the maximal abelian unramified cover of \(X\) that splits totally above \(P_{1}\). Any such cover \(\tau:Y\to X\) is thus a quotient of \(\tau_{\max}\) by some subgroup \(H\) of \(J_{X}(\mathbf{K})\). We set \(G=J_{X}(\mathbf{K})/H\) and notice that \(G\) is at the same time the fiber of \(\tau\) above \(P_{1}\) and its Galois group, acting by translations in \(J_{X}/H\). Let \(P\) be a \(\mathbf{K}\)-rational point on \(X\) and let \(Q_{\max}\) be any point on \(Y_{\max}(\bar{\mathbf{K}})\) such that \[\tau_{\max}(Q_{\max})=\wp(Q_{\max})=P.\] We have \(F_{\mathbf{K}}(Q_{\max})=Q_{\max}+P\). So the decomposition group of any place on \(Y\) above \(P\) is the subgroup of \(G\) generated by the image of the class of \(P-P_{1}\). In particular the fiber of \(\tau\) above \(P\) splits over \(\mathbf{K}\) if and only if \(P\) is sent into \(H\) by the Jacobi map. Equivalently the class of \(P-P_{1}\) belongs to \(H\). ### 7.2. An example In this section \(\mathbf{K}\) is the field with three elements and \(X\) is the plane projective curve with equation \[Y^{2}Z^{3}=X(X-Z)(X^{3}+X^{2}Z+2Z^{3}).\] This is a smooth absolutely integral curve of genus \(2\). The characteristic polynomial of the Frobenius of \(X/\mathbf{K}\) is \[\chi_{\mathbf{K}}(t)=t^{4}+t^{3}+2t^{2}+3t+9. \tag{15}\] The characteristic polynomial of the Frobenius of a curve over a finite field (given by a reasonable model) can be computed in time polynomial in \(p.g.n\) where \(p\) is the characteristic of the field, \(n\) its degree over the prime field, and \(g\) the genus of the curve, using the so-called \(p\)-adic methods introduced by Kato and Lubkin [20], Satoh [32], Kedlaya [21], Lauder and Wan [24] and widely extended since then. When the genus of the curve is fixed, the characteristic polynomial of the Frobenius can be computed in time polynomial in the logarithm of the cardinality of \(\mathbf{K}\), using the \(\ell\)-adic method introduced by Schoof [34] and generalized by Pila [27]. We deduce from Equation (15) that the jacobian variety \(J_{X}\) of \(X\) has \[\chi_{\mathbf{K}}(1)=16\] rational points. There are \(5\) places of degree \(1\) on \(X\). We call \(P_{1}\) the unique place at \((0,1,0)\) and let \[P_{2}=(0,0,1),\ P_{3}=(1,0,1),\ P_{4}=(2,2,1),\ P_{5}=(2,1,1).\] The Picard group \(J_{X}(\mathbf{K})\) is the direct sum of a subgroup of order \(8\) generated by the class of \(P_{4}-P_{1}\) and a subgroup of order \(2\) generated by \(P_{2}-P_{1}\). The class of \(4(P_{4}-P_{1})\) is the class of \(P_{3}-P_{1}\). 
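These counts are easy to cross-check by brute force. The sketch below (plain Python written for this exposition; it is not part of the paper's computations) counts the degree-one places of \(X\) over \(\mathbf{F}_{3}\) and \(\mathbf{F}_{9}\) using the affine model \(y^{2}=x(x-1)(x^{3}+x^{2}+2)\), and compares them with the values \(5\) and \(13\) predicted by \(\chi_{\mathbf{K}}\) (the count over \(\mathbf{F}_{9}\) is derived here from the power sums of the roots of \(\chi_{\mathbf{K}}\)), together with \(\chi_{\mathbf{K}}(1)=16\).

```python
# Brute-force sanity check of the point counts used above (a sketch written
# for this exposition; not part of the paper's computations).
#
# Affine model of X over GF(3): y^2 = f(x), f(x) = x(x-1)(x^3+x^2+2),
# with a single place at infinity.  From chi_K(t) = t^4 + t^3 + 2t^2 + 3t + 9:
#   sum(alpha_i)   = -1  =>  #X(F_3) = 3 + 1 + 1 = 5
#   sum(alpha_i^2) = -3  =>  #X(F_9) = 9 + 1 + 3 = 13   (derived here)
#   #J_X(F_3) = chi_K(1) = 16

def count_gf3():
    f = lambda x: (x * (x - 1) * (x**3 + x**2 + 2)) % 3
    return 1 + sum(1 for x in range(3) for y in range(3) if (y * y) % 3 == f(x))

# GF(9) = GF(3)[i] with i^2 = 2  (t^2 + 1 is irreducible over GF(3)).
def mul9(a, b):
    return ((a[0] * b[0] + 2 * a[1] * b[1]) % 3, (a[0] * b[1] + a[1] * b[0]) % 3)

def add9(a, b):
    return ((a[0] + b[0]) % 3, (a[1] + b[1]) % 3)

def count_gf9():
    elems = [(a, b) for a in range(3) for b in range(3)]
    def f(x):
        x2, x3 = mul9(x, x), mul9(mul9(x, x), x)
        return mul9(mul9(x, add9(x, (2, 0))), add9(add9(x3, x2), (2, 0)))
    return 1 + sum(1 for x in elems for y in elems if mul9(y, y) == f(x))

assert count_gf3() == 5             # the five places of degree 1
assert sum([1, 1, 2, 3, 9]) == 16   # chi_K(1) = |J_X(K)|
assert count_gf9() == 13            # consistent with chi_K
print("point counts agree with chi_K")
```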
The classes of \(P_{2}-P_{1}\) and \(P_{3}-P_{1}\) generate a subgroup \(H\) of \(\operatorname{Pic}^{0}(X)\) isomorphic to \((\mathbf{Z}/2\mathbf{Z})^{2}\). The quotient group \[G=J_{X}(\mathbf{K})/H=\operatorname{Pic}^{0}(X)/H\] is cyclic of order \(4\) generated by \(P_{4}-P_{1}\). So the subcover \(\tau:Y\to X\) of \(Y_{\max}\) associated with \(H\) is cyclic of order \(4\). And the fibers above \(P_{1}\), \(P_{2}\), and \(P_{3}\) in this cover are split over \(\mathbf{K}\). We will work with this cover. According to Kummer theory, there is a duality between the prime to \(p\) part of \(\operatorname{Pic}^{0}(X)\) and the etale part of the kernel of \(F_{\mathbf{K}}-p\). Associated to the quotient \(G=\operatorname{Pic}^{0}(X)/H\) there must be a cyclic subgroup \(C_{4}\) of order \(4\) inside the latter kernel. This cyclic subgroup is isomorphic to \(\mu_{4}\). We let \(\zeta\) be a primitive fourth root of unity in \(\mathbf{\bar{K}}\) and denote \(\mathbf{L}\) the degree two extension of \(\mathbf{K}\) generated by \(\zeta\). In order to find the group \(C_{4}\) we are interested in, we use algorithms to compute the kernels of \(F_{\mathbf{K}}-1\) and \(F_{\mathbf{K}}-p\) described in [10, Chapter 13]. The basic idea is to pick random elements in \(J_{X}(\mathbf{L})\) and project them onto the relevant characteristic subspaces for the action of \(F_{\mathbf{K}}\), using our knowledge of the characteristic polynomial \(\chi_{\mathbf{K}}\). We set \[P_{6}=(2\zeta,2)\ \ \text{and}\ \ \Gamma=2(P_{6}-P_{4})\] and find that the class \(\gamma\) of \(\Gamma\) is of order \(4\) and satisfies \[F_{\mathbf{K}}(\gamma)=3\gamma.\] Thus \(\gamma\) generates the group \(C_{4}\) we were looking for. There is a unique function \(R\) in \(\mathbf{L}(X)\) with divisor \(4\Gamma\) and taking value \(1\) at \(P_{1}\). The cover \(\tau:Y\to X\) we are interested in is obtained by adding a \(4\)-th root \(r\) of \(R\) to \(\mathbf{L}(X)\). To be quite precise this construction produces the base change to \(\mathbf{L}\) of the cover we are interested in. This will be fine for our purpose. So we let \[r=R^{1/4}\] be the \(4\)-th root of \(R\) taking value \(1\) at \(Q_{1}\). Equivalently we define \(Q_{1}\) to be the point over \(P_{1}\) where \(r\) takes the value \(1\). With the notation of Section 4.3 we take \[D=2P_{5}\quad\text{and}\quad P=P_{1}+P_{2}+P_{3}.\] We call \(E\) the pullback of \(D\) by \(\tau\) and \(Q\) the pullback of \(P\). We expect \[\mathcal{L}(E)=H^{0}(\mathcal{O}_{Y}(E),Y)\] to be a free \(\mathbf{K}[G]\)-module of rank \[\deg(D)-g_{X}+1=1.\] This will be confirmed by our computations. Because the fibers above \(P_{1}\), \(P_{2}\) and \(P_{3}\) all split over \(\mathbf{K}\), the evaluation map \(\mathcal{L}(E)\to\mathbf{A}\) is described by a \(3\times 1\) matrix with coefficients in \(\mathbf{K}[G]\). For every \(2\leqslant i\leqslant 3\) we choose a \(4\)-th root of \(R(P_{i})\) in \(\mathbf{L}\). This amounts to choosing a point \(Q_{i}\) in the fiber of \(\tau\) above \(P_{i}\). We call \(\sigma\) the unique element in \(G\) that sends \(r\) to \(\zeta.r\) so \[G\ni\sigma:r\mapsto\zeta.r.\] The \(\mathbf{K}\)-vector space \(\mathcal{L}(E)\) decomposes over \(\mathbf{L}\) as a sum of four eigenspaces associated to the four eigenvalues \(1\), \(\zeta\), \(\zeta^{2}=-1\), \(\zeta^{3}=-\zeta\) of \(\sigma\). Let \(0\leqslant j\leqslant 3\) and let \(f\) be an eigenfunction in \(\mathcal{L}(E)\) associated with the eigenvalue \(\zeta^{j}\). 
Then the quotient \(f/r^{j}\) is invariant by \(G\) and its divisor satisfies \[(f/r^{j})\geqslant-E-j.(r)=-E-j.\tau^{*}(\Gamma).\] So \(f/r^{j}\) can be seen as a function on \(X\) with divisor bigger than or equal to \(-D-j\Gamma\). The eigenspace \(\mathcal{L}(E)_{j}\) associated to \(\zeta^{j}\) is thus obtained as the image of the map \[H^{0}(X,\mathcal{O}_{X}(D+j\Gamma))\to\mathcal{L}(E),\qquad F\mapsto F.r^{j}.\] Evaluating \(f\) at \(Q_{i}\) for \(1\leqslant i\leqslant 3\) then reduces to evaluating \(F=f/r^{j}\) at \(P_{i}\) and multiplying the result by the chosen \(4\)-th root of \(R(P_{i})\), raised to the power \(j\). This remark enables us to compute a \(\mathbf{K}\)-basis of \(\mathcal{L}(E)\) consisting of eigenfunctions of \(\sigma\) and to evaluate the functions in this basis at the \((Q_{i})_{1\leqslant i\leqslant 3}\) without ever writing equations for \(Y\). We only need to compute the Riemann-Roch spaces associated to the divisors \(D+j\Gamma\) on \(X\) for \(0\leqslant j\leqslant 3\). The Riemann-Roch space of a divisor \(D=D_{+}-D_{-}\) on a curve \(X\) is computed in time polynomial in the genus of \(X\) and the degrees of the positive and negative parts \(D_{+}\) and \(D_{-}\) of \(D\), using the Brill-Noether algorithm and its many variants. The most efficient general algorithm is due to Makdisi [22, 23]. In case the exponent of \(G\) is large, we may have to compute linear spaces like \(H^{0}(X,\mathcal{O}_{X}(D+j\Gamma))\) for large \(j\). In that case, one should use the method introduced by Menezes, Okamoto, and Vanstone [25] in the context of pairing computation, in order to replace \(j\) by its logarithm in the complexity. Passing from the values of the eigenfunctions to the evaluation matrix \(\mathcal{E}\) reduces to applying the Fourier transform. We find \[\mathcal{E}=\begin{pmatrix}1\\ e_{1,2}\\ e_{1,3}\end{pmatrix}\ \ \text{with}\ \ e_{1,1}=1,\ e_{1,2}=1+2\sigma+2\sigma^{2}+2\sigma^{3},\ e_{1,3}=2+2\sigma+2\sigma^{2}+\sigma^{3}.\] Having a unit for \(e_{1,1}\) is quite convenient. In general one says that \(\mathcal{E}\) is systematic when the top square submatrix is the identity. This is possible when the first points \(Q_{i,1}\) form a basis for the dual of \(\mathcal{L}(E)\). This situation is generic in some sense but not guaranteed. Having a systematic matrix \(\mathcal{E}\) makes it trivial to deduce the checking and interpolation matrices \[\mathcal{C}=\begin{pmatrix}e_{1,2}&e_{1,3}\\ -1&0\\ 0&-1\end{pmatrix}\ \ \ \text{and}\ \ \mathcal{I}=\begin{pmatrix}1&0&0\end{pmatrix}.\] ## 8. Interpolation on algebraic curves In this section we recall two classical applications of interpolation on algebraic curves over finite fields and detail the benefit of \(\mathbf{K}[G]\)-module structures in this context. Section 8.1 is concerned with the multiplication tensor in finite fields. In Sections 8.2 and 8.3 we see that geometric codes associated to \(G\)-equivariant divisors can be encoded in quasi-linear time and decoded in quasi-quadratic time if \(G\) is abelian, acts freely, and is big enough. ### 8.1. The complexity of multiplication in finite fields The idea of using Lagrange interpolation over an algebraic curve to multiply two elements in a finite field is due to Chudnovsky [8] and has been developed by Shparlinski, Tsfasman and Vladut [38], Ballet and Rolland [2], Chaumine [7], Randriambololona [30] and others. Let \(\mathbf{K}\) be a finite field and let \(\mathfrak{o}\geqslant 2\) be an integer. 
Let \(Y\) be a smooth, projective, absolutely integral curve over \(\mathbf{K}\) and \(B\) an irreducible divisor of degree \(\mathfrak{o}\) on \(Y\). We call \(\mathbf{L}=H^{0}(\mathcal{O}_{B},B)\) the residue field at \(B\). We choose a divisor \(E\) disjoint from \(B\) and assume that the evaluation map \[e_{B}:H^{0}(\mathcal{O}_{Y}(E),Y)\to\mathbf{L}\] is surjective so that elements in \(\mathbf{L}\) can be represented by functions in \(H^{0}(\mathcal{O}_{Y}(E),Y)\). The latter functions will be characterized by their values at a collection \((Q_{i})_{1\leqslant i\leqslant N}\) of \(\mathbf{K}\)-rational points on \(Y\). We denote \[e_{Q}:H^{0}(\mathcal{O}_{Y}(2E),Y)\to\mathbf{K}^{N}\] the evaluation map at these points which we assume to be injective. The multiplication of two elements \(e_{B}(f_{1})\) and \(e_{B}(f_{2})\) in \(\mathbf{L}\) can be achieved by evaluating \(f_{1}\) and \(f_{2}\) at the \(Q_{i}\), then multiplying each \(f_{1}(Q_{i})\) by the corresponding \(f_{2}(Q_{i})\), then finding the unique function \(f_{3}\) in \(H^{0}(\mathcal{O}_{Y}(2E),Y)\) taking value \(f_{1}(Q_{i})f_{2}(Q_{i})\) at \(Q_{i}\), then computing \(e_{B}(f_{3})\). The number of bilinear multiplications in \(\mathbf{K}\) in the whole process is equal to \(N\). This method uses curves over \(\mathbf{K}\) with arbitrarily large genus having a number of \(\mathbf{K}\)-points bigger than some positive constant times their genus. It bounds the \(\mathbf{K}\)-bilinear complexity of multiplication in \(\mathbf{L}\) by an absolute constant times the degree \(\mathfrak{o}\) of \(\mathbf{L}\) over \(\mathbf{K}\), but it says little about the linear part of the algorithm: evaluation of the maps \(e_{B}\) and \(e_{Q}\) and their right (resp. left) inverses. Now assume that the group of \(\mathbf{K}\)-automorphisms of \(Y\) contains a cyclic subgroup \(G\) of order \(\mathfrak{o}\) acting freely on \(Y\). We call \(\tau:Y\to X\) the quotient by \(G\) map. Assume that \(B\) is the fiber of \(\tau\) above some rational point \(a\) on \(X\). Assume that \(E\) (resp. \(Q\)) is the pullback by \(\tau\) of a divisor \(D\) (resp. \(P\)) on \(X\). Under mild conditions, all the linear spaces above become free \(\mathbf{K}[G]\)-modules and the evaluation maps are \(G\)-equivariant. A computational consequence is that the linear part in the Chudnovsky algorithm becomes quasi-linear in the degree \(\mathfrak{o}\) of the extension \(\mathbf{L}/\mathbf{K}\). This remark has been exploited in [9] to bound the complexity of multiplication of two elements in a finite field given by their coordinates in a normal basis. The decompositions of the multiplication tensor that are proven to exist in [9] can actually be computed using the techniques presented in Section 7. ### 8.2. Geometric codes The construction of error correcting codes by evaluating functions on algebraic curves of higher genus is due to Goppa [12, 13]. Let \(Y\) be a smooth, projective, absolutely integral curve over a finite field \(\mathbf{K}\) of characteristic \(p\). Let \(d\) be the degree of \(\mathbf{K}\) over the prime field \(\mathbf{Z}/p\mathbf{Z}\). Let \(g_{Y}\) be the genus of \(Y\). Let \(Q_{1}\),..., \(Q_{N}\) be pairwise distinct \(\mathbf{K}\)-rational points on \(Y\). Let \(t_{i}\) be a uniformizing parameter at \(Q_{i}\). Let \(E\) be a divisor that is disjoint from \(Q=Q_{1}+\cdots+Q_{N}\). Assume that \[2g_{Y}-1\leqslant\deg(E)\leqslant\deg(Q)-1. \tag{16}\]
Let \[\mathbf{A}=H^{0}(\mathcal{O}_{Q},Q)=\mathbf{K}^{N}\] be the residue algebra at \(Q\). Let \[\hat{\mathbf{A}}=H^{0}(\Omega^{1}_{Y/\mathbf{K}}(-Q)/\Omega^{1}_{Y/\mathbf{K}},Y)=\bigoplus_{i=1}^{N}\mathbf{K}\frac{dt_{i}}{t_{i}}=\mathbf{K}^{N}\] be the dual of \(\mathbf{A}\). Evaluation at the \(Q_{i}\) defines an injective linear map \[\mathcal{L}(E)=H^{0}(\mathcal{O}_{Y}(E),Y)\to\mathbf{A}.\] We similarly define an injective linear map \[\Omega(-Q+E)=H^{0}(\Omega_{Y/\mathbf{K}}(-Q+E),Y)\to\hat{\mathbf{A}}.\] The two vector subspaces \(\mathcal{L}(E)\) and \(\Omega(-Q+E)\) are orthogonal to each other. They can be considered as linear codes over \(\mathbf{K}\) and denoted \(C_{\mathcal{L}}\) and \(C_{\Omega}\) respectively. The code \(C_{\mathcal{L}}\) has length \(N\), dimension \[K=\deg(E)-g_{Y}+1\] and minimum distance \(\geqslant N-\deg(E)\). Given a basis of \(\mathcal{L}(E)\) one defines the generating matrix \(\mathcal{E}_{E}\) of the code \(C_{\mathcal{L}}\) to be the \(N\times K\)-matrix of the injection \(\mathcal{L}(E)\to\mathbf{A}=\mathbf{K}^{N}\). One similarly defines the parity-check matrix \(\mathcal{C}_{E}\) to be the \(N\times(N-K)\)-matrix of \(\Omega(-Q+E)\to\hat{\mathbf{A}}\). We finally call \(\mathcal{I}_{E}\) the \(K\times N\)-matrix of some projection of \(\mathbf{A}\) onto \(C_{\mathcal{L}}\). A message of length \(K\) is encoded by multiplying the corresponding column on the left by \(\mathcal{E}_{E}\). The received word is checked by multiplying it on the left by the transpose of \(\mathcal{C}_{E}\). And the initial message is recovered from a correct codeword applying the interpolation matrix \(\mathcal{I}_{E}\). In full generality, coding, testing and interpolating respectively require \(2NK\), \(2N(N-K)\) and \(2KN\) operations in \(\mathbf{K}\). Assume now that the group of \(\mathbf{K}\)-automorphisms of \(Y\) contains a finite commutative subgroup \(G\) of order \(\mathfrak{o}\) acting freely on \(Y\). Let \(\tau:Y\to X\) be the quotient by \(G\) map. Assume that \(\mathfrak{o}\) divides \(N\) and let \[n=N/\mathfrak{o}.\] Assume that \(Q\) is the pullback by \(\tau\) of a divisor \[P=P_{1}+\cdots+P_{n}\] on \(X\). Assume that \(E\) is the pullback of some divisor \(D\) on \(X\). We are thus in the situation of Section 4. The code \(C_{\mathcal{L}}\) is a free \(\mathbf{K}[G]\)-submodule of \(\mathbf{A}\) of rank \[k=K/\mathfrak{o}\] and \(C_{\Omega}\) is its orthogonal module for the \(\mathbf{K}[G]\)-bilinear form defined in Section 3.3. The matrices \(\mathcal{E}_{E}\), \(\mathcal{C}_{E}\), and \(\mathcal{I}_{E}\) can be seen as matrices with coefficients in \(\mathbf{K}[G]\) of respective sizes \(n\times k\), \(n\times(n-k)\), and \(k\times n\). Coding now requires \(2nk\) operations in \(\mathbf{K}[G]\) rather than \(2NK\) operations in \(\mathbf{K}\). According to Proposition 13, each such operation requires less than \(\mathcal{Q}.d^{2}.\mathfrak{o}.\log\mathfrak{o}\) operations in \(\mathbf{Z}/p\mathbf{Z}\) and \(\mathbf{Z}/p^{\prime}\mathbf{Z}\) where \(p^{\prime}\leqslant\mathcal{Q}.(\mathfrak{o}.p)^{11}\) for some absolute constant \(\mathcal{Q}\). The total cost of coding is thus bounded by a constant times \[\frac{NK}{\mathfrak{o}^{2}}.d^{2}.\mathfrak{o}.\log\mathfrak{o}(\log p+\log\mathfrak{o})\] elementary operations. 
Assuming that \[\log\mathfrak{o}\ \ \text{is bigger than a positive constant times}\ \ k\log p \tag{17}\] we bound the encoding complexity by a constant times \[N(\log N)^{3}d^{2}\] elementary operations, where \(d\) is the degree of \(\mathbf{K}\) over the prime field \(\mathbf{Z}/p\mathbf{Z}\) and \(N\) is the length of the code. We obtain the same complexity estimate for parity-checking and interpolating. ### 8.3. Basic decoding In the situation of the beginning of Section 8.2 we assume that we have received a message \(r\) in \(\mathbf{A}=\mathbf{K}^{N}\). Let \(c\) be the closest codeword to \(r\) in \(C_{\mathcal{L}}\) for the Hamming distance in \(\mathbf{K}^{N}\). Write \[r=c+\epsilon\] and call \(\epsilon\) the error vector. Let \(f\) be the unique function in \(\mathcal{L}(E)\) such that \(f=c\bmod Q\). The support of the error vector \(\epsilon\) is the effective divisor \(\operatorname{Supp}(\epsilon)\) consisting of all points \(Q_{i}\) where \(\epsilon\) is nonzero. The degree of \(\operatorname{Supp}(\epsilon)\) is the number of errors in \(r\). The principle of the basic decoding algorithm [18, 39] is: if \(a_{0}\) is a small-degree function vanishing at every point in the support \(\operatorname{Supp}(\epsilon)\) then \(a_{0}r=a_{0}c\bmod Q\) is the residue modulo \(Q\) of an algebraic function \(a_{0}f\) of not too large degree. This function can be recovered from its values at \(Q\) if \(N\) is large enough. More concretely we let \(E_{0}\) be some auxiliary divisor on \(Y\) and set \[E_{1}=E+E_{0}.\] We call \(\mathcal{P}\) the subspace of \(\mathcal{L}(E_{0})\) consisting of all \(a_{0}\) such that there exists \(a_{1}\) in \(\mathcal{L}(E_{1})\) with \(a_{0}r=a_{1}\bmod Q\). Non-zero elements in \(\mathcal{P}\) are denominators for \(r\) in the sense of Section 5. We just saw that every function in \(\mathcal{L}(E_{0})\) vanishing at every point in the support of \(\epsilon\) belongs to \(\mathcal{P}\). Conversely if \(a_{0}\) is in \(\mathcal{P}\) then \(a_{0}r\) belongs to \(\mathcal{L}(E_{1})\) modulo \(Q\). But \(a_{0}c\) belongs to \(\mathcal{L}(E_{1})\) modulo \(Q\) also because \(a_{0}\) is in \(\mathcal{L}(E_{0})\) modulo \(Q\) and \(c\) is in \(\mathcal{L}(E)\) modulo \(Q\). So \(a_{0}(r-c)=a_{0}\epsilon\) belongs to \(\mathcal{L}(E_{1})\) modulo \(Q\). There is a function in \(\mathcal{L}(E_{1})\) that is \(a_{0}\epsilon\) modulo \(Q\). This function has \(N-\deg(\operatorname{Supp}(\epsilon))\) zeros and degree \(\leqslant\deg(E_{1})=\deg(E)+\deg(E_{0})\). If we assume that \[\deg(\operatorname{Supp}(\epsilon))\leqslant N-1-\deg(E)-\deg(E_{0}) \tag{18}\] then the latter function must be zero. So \(a_{0}\) vanishes at \(\operatorname{Supp}(\epsilon)\). Assuming Equation (18) we thus have \(\mathcal{P}=\mathcal{L}(E_{0}-\operatorname{Supp}(\epsilon))\). Assuming further that \[\deg(\operatorname{Supp}(\epsilon))\leqslant\deg(E_{0})-g_{Y} \tag{19}\] this space is non-zero. Computing it is a matter of linear algebra and requires a constant times \(N^{3}\) operations in \(\mathbf{K}\). Given any non-zero element \(a_{0}\) in \(\mathcal{P}\) we denote \(A_{0}\) the divisor consisting of all \(Q_{i}\) where \(a_{0}\) vanishes. The degree of \(A_{0}\) is bounded by \(\deg E_{0}\). The error \(\epsilon\) is an element in \(\mathbf{A}\) with support contained in \(A_{0}\) and such that \(r-\epsilon\) belongs to \(C_{\mathcal{L}}\). Finding \(\epsilon\) is a linear problem in \(\leqslant\deg E_{0}\) unknowns and \(N-\deg(E)+g_{Y}-1\) equations. 
The solution is unique because the difference of two solutions is in \(C_{\mathcal{L}}\) and has at least \(N-\deg(E_{0})\) zeros. And this is strictly greater than \(\deg(E)\) by Equation (18). Combining Equations (18) and (19) we see that the basic decoding algorithm corrects up to \(d_{\mathrm{basic}}\) errors where \[d_{\mathrm{basic}}=\frac{N-\deg(E)-1-g_{Y}}{2}. \tag{20}\] Assume now that the group of \(\mathbf{K}\)-automorphisms of \(Y\) contains a finite commutative subgroup \(G\) of order \(\mathfrak{o}\) acting freely on \(Y\). Let \(\tau:Y\to X\) be the quotient by \(G\) map. Assume that \(\mathfrak{o}\) divides \(N\) and let \(n=N/\mathfrak{o}\). Assume that \(Q\) is the pullback by \(\tau\) of a divisor \[P=P_{1}+\cdots+P_{n}\] on \(X\). Assume that \(E\) is the pullback of some divisor \(D\) on \(X\). According to Proposition 8 we can find a denominator \(a_{0}\) at the expense of \(\mathcal{Q}.(\mathfrak{o}.n.\log(\mathfrak{o}.n))^{2}\) operations in \(\mathbf{K}\) and \(\mathcal{Q}.\mathfrak{o}.n^{3}\log(\mathfrak{o}.n)\) operations in \(\mathbf{K}[G]\). According to Proposition 13, each operation in \(\mathbf{K}[G]\) requires less than \[\mathcal{Q}.d^{2}.\mathfrak{o}.\log\mathfrak{o}(\log p+\log\mathfrak{o})\] elementary operations. The total cost of finding a denominator is thus bounded by a constant times \[N^{2}.n.d^{2}.\log^{3}(\mathfrak{o}.n.p)\] elementary operations. Assuming Condition (17) and that \(\log\mathfrak{o}\) is bigger than a positive constant times \(n-\log n\), we obtain a complexity of a constant times \[N^{2}(\log N)^{4}d^{2}\] elementary operations where \(d\) is the degree of \(\mathbf{K}\) over the prime field \(\mathbf{Z}/p\mathbf{Z}\) and \(N\) is the length of the code. Once a denominator is obtained, the error can be found at the same cost. ## 9. Good geometric codes with quasi-linear encoding In this section we specialize the constructions presented in Sections 8.2 and 8.3 using curves with many points and their class fields. We quickly review in Section 9.1 some standard useful results and observations which we apply in Section 9.2 to the construction of families of good geometric codes having quasi-linear encoding and a quasi-quadratic decoder. Recall that a family of codes over a fixed alphabet is said to be good when the length tends to infinity while both the rate and the relative minimum distance have a strictly positive liminf. ### 9.1. Controlling the class group and the Artin map We keep the notation in Section 7.1. In particular \(P_{1}\) is a \(\mathbf{K}\)-rational point on \(X\) and \[j_{X}:X\to J_{X}\] is the Jacobi map with origin \(P_{1}\). For the applications we have in mind we need some control on the \(\mathbf{K}\)-rational points on \(X\), on the group \(\operatorname{Pic}^{0}(X)\) and most importantly on the image of \(X(\mathbf{K})\) in \(\operatorname{Pic}^{0}(X)\) by the Jacobi map. A typical advantageous situation would be: 1. \(X\) has enough \(\mathbf{K}\)-rational points, that is a fixed positive constant times its genus \(g_{X}\), 2. a fixed positive proportion of these points are mapped by \(j_{X}\) into a subgroup \(H\), 3. \(H\) is not too large, i.e. the quotient \(\log|H|/\log|\operatorname{Pic}^{0}(X)|\) is smaller than a fixed constant smaller than \(1\). A range of geometric techniques relevant to that problem is presented in Serre's course [36] with the related motivation of constructing maximal curves. 
One says that (a family of) curves over a fixed finite field of cardinality \(q\) have many points when the ratio of the number of rational points by the genus tends to \(\sqrt{q}-1\). Modular curves \(X_{0}(N)\) have many points over finite fields with \(p^{2}\) elements, corresponding to supersingular moduli, as was noticed by Ihara [17] and by Tsfasman, Vladut, and Zink [41]. These authors also find families of Shimura curves having many points over fields with cardinality a square. Garcia and Stichtenoth [11] construct for every square \(q\) an infinite tower of algebraic curves over \(\mathbf{F}_{q}\) such that the quotient of the number of \(\mathbf{F}_{q}\)-points by the genus converges to \(\sqrt{q}-1\), and the quotient of the genera of two consecutive curves converges to \(q\). As for conditions (2) and (3) above, it is noted in [36, 5.12.4] that the images by \(j_{X}\) of \(P_{2}\),..., \(P_{n}\) generate a subgroup \(H\) with at most \(n-1\) invariant factors. If the class group \(J_{X}(\mathbf{K})\) has \(I\geqslant n-1\) invariant factors then the size of the quotient \(G\) is bigger than or equal to the product of the \(I-(n-1)\) smallest invariant factors of \(J_{X}(\mathbf{K})\). Another favourable situation exploited in [29, 26, 42, 15] is when \(\mathbf{K}\) has a strict subfield \(\mathbf{k}\) and \(X\) is defined over \(\mathbf{k}\) and \(P_{1}\) is \(\mathbf{k}\)-rational. Then the Jacobi map sends the points in \(X(\mathbf{k})\) into the subgroup \(H=J_{X}(\mathbf{k})\) of \(J_{X}(\mathbf{K})\). We will use this remark in the next section. ### 9.2. A construction Let \(\mathbf{k}\) be a finite field with characteristic \(p\). Let \(q\) be the cardinality of \(\mathbf{k}\). We assume that \(q\) is a square. We consider a family of curves \((X_{k})_{k\geqslant 1}\) over \(\mathbf{k}\) having many points over \(\mathbf{k}\). For example we may take \(X_{k}\) to be the \(k\)-th curve in the Garcia-Stichtenoth tower associated with \(q\). We denote \(g_{X}\) the genus of \(X_{k}\). We omit the index \(k\) in the sequel because there is no risk of confusion. We denote \(n\) the number of \(\mathbf{k}\)-rational points on \(X\). We denote these points \(P_{1}\),..., \(P_{n}\) and let \(P\) be the effective divisor sum of all these points. We let \(\mathbf{K}\) be a non-trivial extension of \(\mathbf{k}\). We will assume that the degree of \(\mathbf{K}\) over \(\mathbf{k}\) is \(2\) because higher values seem to bring nothing but disadvantages. We set \[H=J_{X}(\mathbf{k})\ \ \ \text{and}\ \ \ \ G=J_{X}(\mathbf{K})/H.\] We let \(\mathfrak{o}\) be the order of \(G\). We note that \[\mathfrak{o}\geqslant(\sqrt{q}-1)^{2g_{X}}\] grows exponentially in \(g_{X}\) provided \(q\geqslant 9\). We find ourselves in the situation of Section 7.1. We call \(Y_{\max}\) the maximal abelian unramified cover of \(X\) over \(\mathbf{K}\) which is totally decomposed over \(\mathbf{K}\) above \(P_{1}\). We call \(Y\) the quotient of \(Y_{\max}\) by \(H\). The fibers of \[\tau:Y\to X\] above the points \(P_{1}\),..., \(P_{n}\) all split over \(\mathbf{K}\). We call \(Q\) the pullback of \(P\) by \(\tau\). This is a divisor on \(Y\) of degree \[N=\mathfrak{o}.n.\] We choose a real number \(\varrho\) such that \[0<\varrho<\frac{\sqrt{q}}{2}-2. \tag{21}\] Our goal is to correct up to \(\varrho.\mathfrak{o}.g_{X}\) errors. Let \(D\) be a divisor on \(X\) that is disjoint from \(P\) and such that \[\deg(D)=\lceil(\sqrt{q}-2-2\varrho)g_{X}\rfloor,\] the closest integer to \((\sqrt{q}-2-2\varrho)g_{X}\). 
Let \(E\) be the pullback of \(D\) by \(\tau\). We deduce from Equation (21) that condition (16) is met at least asymptotically. From \(X\), \(Y\), \(E\), and \(Q\) the construction in Section 8.2 produces a code \(C_{\mathcal{L}}\) over the field \(\mathbf{K}\) with \(q^{2}\) elements, having length \[N=\mathfrak{o}.n\simeq(\sqrt{q}-1).\mathfrak{o}.g_{X}\] and dimension \[K=\mathfrak{o}.(\deg(D)-g_{X}+1)\simeq(\sqrt{q}-3-2\varrho).\mathfrak{o}.g_{X}.\] The code \(C_{\mathcal{L}}\) can be encoded and parity-checked in quasi-linear time in its length \(N\). One can decode with the same complexity when there are no errors. Using the basic decoding algorithm as in Section 8.3 one can decode in the presence of errors in quasi-quadratic time up to the distance \[d_{\mathrm{basic}}=\frac{N-\deg(E)-1-g_{Y}}{2}\simeq\varrho.\mathfrak{o}.g_{X}\] defined by Equation (20). We denote \(\delta_{\mathrm{basic}}\) the relative distance \(d_{\mathrm{basic}}/N\). **Proposition 14**.: _Let \(p\) be a prime integer and let \(q\) be a power of \(p\). Assume that \(q\) is a square and_ \[q\geqslant 25. \tag{22}\] _Let \(\varrho\) be a real such that_ \[0<\varrho<\frac{\sqrt{q}}{2}-2. \tag{23}\] _The construction above produces a family of error correcting codes over the field with \(q^{2}\) elements having length \(N\) tending to infinity and such that_ 1. _the codes can be encoded in quasi-linear time in their length,_ 2. _the rate_ \(R\) _satisfies_ \[\lim R=\frac{\sqrt{q}-3-2\varrho}{\sqrt{q}-1}\] 3. _the codes can be decoded in quasi-quadratic time in_ \(N\) _up to the relative distance_ \(\delta_{\mathrm{basic}}\) _and_ \[\lim\delta_{\mathrm{basic}}=\frac{\varrho}{\sqrt{q}-1}.\] We may want to use the general purpose algorithm of Beelen, Rosenkilde, Solomatov [4] to decode up to half the Goppa designed minimum distance. Inequalities (22) and (23) are then replaced by \[q\geqslant 16\quad\text{ and }\quad 0<\varrho<\frac{\sqrt{q}-3}{2},\] and the limit of the rate is now \[\lim R=\frac{\sqrt{q}-2-2\varrho}{\sqrt{q}-1}.\] However the complexity of decoding is then of order \(\mu^{\omega-1}(N+g_{Y})\) where \(N\) is the length of the code, \(\mu\) is the gonality of \(Y\), and \(\omega\) is the exponent in the complexity of matrix multiplication. Curves with many points have large gonality. In particular \(\mu\geqslant N/(q^{2}+1)\) in our situation, so that for fixed \(q\), the complexity of this decoder is of order greater than \(N^{\omega}\). It is known [1] that \(2\leqslant\omega<2.37286\) but it is not guaranteed that \(\omega=2\). Power decoding [33] seems attractive in our situation because of its purely linear nature. However the rigorous analysis of its performance is delicate in general [28] and particularly in our situation because we fix the base field, let the genus tend to infinity and use a rather rigid construction.
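To make the trade-off of Proposition 14 concrete, the following snippet (an illustrative sketch written for this exposition, not taken from the paper) tabulates the limiting rate and the limiting relative decoding radius for a few admissible pairs \((q,\varrho)\).

```python
# Asymptotic parameters of the codes of Proposition 14 (illustrative sketch).
# For a square prime power q >= 25 and 0 < rho < sqrt(q)/2 - 2:
#   lim R           = (sqrt(q) - 3 - 2*rho) / (sqrt(q) - 1)
#   lim delta_basic =  rho / (sqrt(q) - 1)
from math import sqrt

def limits(q, rho):
    s = sqrt(q)
    assert q >= 25 and 0 < rho < s / 2 - 2, "outside the range of Proposition 14"
    return (s - 3 - 2 * rho) / (s - 1), rho / (s - 1)

for q, rho in [(25, 0.25), (49, 0.5), (49, 1.0), (121, 2.0)]:
    R, delta = limits(q, rho)
    print(f"q={q:4d}  rho={rho:4.2f}  lim R={R:.3f}  lim delta_basic={delta:.3f}")
```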
Let \(\mathbf{K}\) be a finite field, and let \(X\) and \(Y\) be curves over \(\mathbf{K}\) such that \(Y\to X\) is an unramified abelian cover with Galois group \(G\). Let \(D\) be a divisor on \(X\) and let \(E\) be its pullback to \(Y\). Under these conditions, the corresponding linear space of sections of \(E\) is a free \({\mathbf K}[G]\)-module. We study the algorithmic aspects of these modules and their applications.
2309.10122
Graph Threading
Inspired by artistic practices such as beadwork and himmeli, we study the problem of threading a single string through a set of tubes, so that pulling the string forms a desired graph. More precisely, given a connected graph (where edges represent tubes and vertices represent junctions where they meet), we give a polynomial-time algorithm to find a minimum-length closed walk (representing a threading of string) that induces a connected graph of string at every junction. The algorithm is based on a surprising reduction to minimum-weight perfect matching. Along the way, we give tight worst-case bounds on the length of the optimal threading and on the maximum number of times this threading can visit a single edge. We also give more efficient solutions to two special cases: cubic graphs and the case when each edge can be visited at most twice.
Erik D. Demaine, Yael Kirkpatrick, Rebecca Lin
2023-09-18T19:51:58
http://arxiv.org/abs/2309.10122v2
# Graph Threading ###### Abstract Inspired by artistic practices such as beadwork and himmeli, we study the problem of _threading_ a single string through a set of tubes, so that pulling the string forms a desired graph. More precisely, given a connected graph (where edges represent tubes and vertices represent junctions where they meet), we give a polynomial-time algorithm to find a minimum-length closed walk (representing a threading of string) that induces a connected graph of string at every junction. The algorithm is based on a surprising reduction to minimum-weight perfect matching. Along the way, we give tight worst-case bounds on the length of the optimal threading and on the maximum number of times this threading can visit a single edge. We also give more efficient solutions to two special cases: cubic graphs and when each edge can be visited at most twice. ## 1 Introduction Various forms of art and craft combine tubes together by threading cord through them to create a myriad of shapes, patterns, and intricate geometric structures. In beadwork [1], artists string together beads with thread or wire. In traditional 'straw mobile' crafts [14] -- from the Finnish and Swedish holiday traditions of himmeli [13, 1] to the Polish folk art of pająki [15] -- mobile decorations are made by binding straws together with string. Artist Alison Martin has shown experiments where bamboo connected by strings automatically forms polyhedral structures by pulling the strings with a weight [11]. For engineering structures, these techniques offer a promising mechanism for constructing reconfigurable or deployable structures, capable of transforming between distinct geometric configurations: a collection of tubes, loosely woven, can be stored in compact configurations, and then swiftly deployed into desired target geometric forms, such as polyhedra, by merely pulling a string taut. Figure 1 shows a prototype of such a structure, illustrating the potential of this approach. The popular 'push puppet' toy, originally invented by Walther Kourt Wals in Switzerland in 1926 [Rod], also embodies this mechanism. In contrast to related work [10, 11], we study a _theoretical_ formulation of these ideas: threading a single string through a collection of tubes to mimic the connectivity of a given graph; refer to Figure 2. Consider a connected graph \(G=(V,E)\) with minimum vertex degree \(2\), where each edge \(e\in E\) represents a tube and each vertex \(v\in V\) represents the junction of tubes incident to \(v\). A _graph threading_\(T\) of \(G\) is a closed walk through \(G\) that visits every edge at least once, induces connected "junction graphs", and has no 'U-turns'. The _junction graph_\(J(v)\) of a vertex \(v\) induced by a closed walk has a vertex for each tube incident to \(v\), and has an edge between two vertices/tubes every time the walk visits \(v\) immediately in between traversing those tubes. A threading \(T\) of \(G\) must have a connected junction graph \(J(v)\) for every vertex \(v\in V\), and must have no _U-turns_: when exiting one tube, the walk must next enter a different tube. Define the _length_\(|T|\) of \(T\) to be the total length of edges visited by \(T\). For simplicity, we assume for much of our study that edges (tubes) have unit length -- in which case \(|T|\) is the number of edge visits made by \(T\) -- and then generalize to the weighted case with arbitrary edge lengths. 
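To make these definitions concrete, the sketch below (plain Python written for this exposition; it is not part of the paper) builds the junction graph at every vertex of a given closed walk and checks the requirements above -- every tube visited, no U-turns, and connected junction graphs -- on a triangle, for which a single pass around the cycle is already a threading.

```python
# Checking whether a closed walk is a threading (a sketch for this exposition;
# simple graphs only).  The walk is given as the cyclic sequence of vertices.
from collections import defaultdict

def is_threading(edges, walk):
    tubes = {frozenset(e) for e in edges}
    k = len(walk)
    visited = set()
    junction = defaultdict(list)  # vertex -> list of (tube in, tube out) pairs
    for i in range(k):
        t_in = frozenset((walk[i - 1], walk[i]))
        t_out = frozenset((walk[i], walk[(i + 1) % k]))
        assert t_in in tubes and t_out in tubes, "walk leaves the graph"
        if t_in == t_out:
            return False  # U-turn: the walk exits and re-enters the same tube
        visited.add(t_in)
        junction[walk[i]].append((t_in, t_out))
    if visited != tubes:
        return False  # some tube is never threaded
    for v in junction:  # each junction graph must be connected (union-find)
        incident = {t for t in tubes if v in t}
        parent = {t: t for t in incident}
        def find(t):
            while parent[t] != t:
                parent[t] = parent[parent[t]]
                t = parent[t]
            return t
        for a, b in junction[v]:
            parent[find(a)] = find(b)
        if len({find(t) for t in incident}) != 1:
            return False
    return True

triangle = [(0, 1), (1, 2), (2, 0)]
print(is_threading(triangle, [0, 1, 2]))        # True: one pass suffices
print(is_threading(triangle, [0, 1, 2, 1, 2]))  # False: U-turn at vertex 2
```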
Our Results. In this paper, we analyze and ultimately solve the Optimal Threading problem, where the goal is to find a minimum-length threading \(T\) of a given graph \(G\). Our results are as follows. * In Section 2, we give a local characterization of threading, in terms of local (per-vertex and per-edge) constraints, that help us structure our later algorithms and analysis. * In Section 3, we prove tight worst-case bounds on two measures of an optimal threading \(T\). First, we analyze the minimum length \(|T|\) in a graph with unit edge lengths, proving that \(2m-n\leq|T|<2m\) where \(m\) and \(n\) are the number of edges and vertices, respectively, and that both of these extremes can be realized asymptotically. Second, we prove that \(T\) traverses any one edge at most \(\Delta-1\) times, where \(\Delta\) denotes the maximum vertex degree in \(G\), and that this upper bound can be realized. The second bound is crucial for developing subsequent algorithms. * In Section 4, we develop a polynomial-time algorithm for Optimal Threading, even with arbitrary edge lengths, by a reduction to minimum-weight perfect matching. * In Section 5, we develop more efficient algorithms for two scenarios: Optimal Threading on cubic graphs, and Double Threading, a constrained version of Optimal Threading where the threading \(T\) is allowed to visit each edge at most twice. Figure 1: A deployable structure made from disconnected 3D-printed elements (white) connected by string, which automatically shifts between soft (left) and rigid (right) states by pulling on the endpoints of the string beneath the platform (black). This design was developed by the third author in collaboration with Tomohiro Tachi. ## 2 Problem Formulation Let \(G=(V,E)\) be a graph with \(n=|V|\) vertices and \(m=|E|\) edges. Assume until Section 4.2.2 that \(G\)'s edges have unit length. Recall that a _threading_ of \(G\) is a closed walk through \(G\) that has no U-turns and induces a connected junction graph at each vertex. As an alternative to this 'global' definition (a closed walk), we introduce a more 'local' notion of threading consisting of constraints at each edge and vertex of the graph, and prove its equivalence to threading. Before giving the formal definition of 'local threading', we give the intuition. A local threading assigns a nonnegative integer \(x_{uv}\in\mathbb{N}\) for each edge \(uv\in E\), which counts the number of times the threading visits or _threads_ edge \(uv\); we refer to \(x_{uv}\) as the _count_ of \(uv\). These integers are subject to four constraints, which we give an intuition for by arguing that they are necessary conditions for a threading. First, each \(uv\) must be threaded at least once, so \(x_{uv}\geq 1\) for all \(uv\in E\). Second, a threading increments the count of _two_ edges at junction \(v\) every time it traverses \(v\), so the sum of counts for all edges incident to \(v\) must be even. Third, forbidding U-turns implies that, if \(uv\) is threaded \(k\) times, then the sum of counts for the remaining edges incident to \(v\) must be at least \(k\) to supply these visits. Fourth, because the junction graph \(J(v)\) of \(v\) is connected, it has at least enough edges for a spanning tree -- \(d(v)-1\) where \(d(v)\) denotes the degree of \(v\) -- so the sum of counts of edges incident to \(v\) must be at least \(2(d(v)-1)\). 
More formally: **Definition 2.1** (Local Threading).: _Given a graph \(G=(V,E)\), a **local threading** of \(G\) consists of integers \(\left\{x_{uv}\right\}_{uv\in E}\) satisfying the following constraints:_ **(C1)**: \(x_{uv}\geq 1\) _for all_ \(uv\in E\)_;_ **(C2)**: \(\sum_{u\in N(v)}x_{uv}\equiv 0\pmod{2}\) _for all_ \(v\in V\)_;_ **(C3)**: \(\sum_{w\in N(v)\setminus\{u\}}x_{wv}\geq x_{uv}\) _for all_ \(uv\in E\)_; and_ **(C4)**: \(\sum_{u\in N(v)}x_{uv}\geq 2(d(v)-1)\) _for all_ \(v\in V\)_._ _The **length** of \(\{x_{uv}\}\) is \(\sum_{uv\in E}x_{uv}\), and Optimal Local Threading is the problem of finding the minimum-length local threading._ Figure 2: (a) The closed walk (red) on the graph (black) of a tetrahedron induces junction graphs (circled on the right) that are connected, and so it is a threading. (b) The union of junction graphs is called the _threading graph_ (Section 2.2). Optimal Local Threading is in fact an integer linear program, though this is not helpful algorithmically because integer programming is NP-complete. Nonetheless, local threading will be a useful perspective for our later algorithms. The observations above show that any threading \(T\) induces a local threading by setting each count \(x_{uv}\) to the number of times \(T\) visits edge \(uv\), with the same length: \(|T|=\sum_{uv\in E}x_{uv}\). In the following theorem, we show the converse, and thus the equivalence of threadings with local threadings: **Theorem 2.2**.: _We can construct a threading \(T\) of \(G\) from a local threading \(\{x_{uv}\}\) of \(G\) such that \(T\) visits edge \(uv\) exactly \(x_{uv}\) times. Hence \(|T|=\sum_{uv\in E}x_{uv}\)._ We shall prove this theorem in two parts. First, we show that it is always possible to form a junction graph at every vertex given a local threading (Section 2.1). Then we show that a closed walk can be obtained from the resulting collection of junction graphs (Section 2.2). ### Constructing a Connected Junction Graph Forming a junction graph \(J(v)\) at vertex \(v\) reduces to constructing a connected graph on vertices \(t_{1},\ldots,t_{d(v)}\), where each vertex represents a tube incident with \(v\), with degrees \(x_{1},\ldots,x_{d(v)}\), respectively. We shall construct \(J(v)\) in two steps, first in the case where (C4) holds with equality (Lemma 2.3) and then in the general case (Lemma 2.4). **Lemma 2.3**.: _We can construct a tree \(S\) consisting of \(d\) vertices with respective degrees \(x_{1},\ldots,x_{d}\geq 1\) satisfying \(\sum_{i=1}^{d}x_{i}=2(d-1)\) in \(O(d)\) time._ Proof.: We provide an inductive argument and a recursive algorithm. In the base case, when \(d=2\), \(x_{1}=x_{2}=1\) and the solution is a one-edge path. For \(d>2\), the average \(x_{i}\) value is \(\frac{2(d-1)}{d}\) which is strictly between \(1\) and \(2\). Hence there must be one vertex \(i\) satisfying \(x_{i}>1\) and another vertex \(j\) satisfying \(x_{j}=1\). Now apply induction/recursion to \(x^{\prime}\) where \(x^{\prime}_{k}=x_{k}\) for all \(k\notin\{i,j\}\), \(x^{\prime}_{i}=x_{i}-1\), and \(x_{j}\) does not exist (so there are \(d-1<d\) values), to obtain a tree \(S^{\prime}\). We can construct the desired tree \(S\) from \(S^{\prime}\) by adding the vertex \(j\) and edge \((i,j)\). The recursive algorithm can be implemented in \(O(d)\) time as follows. We maintain two stacks: the first for vertices of degree \(>1\) and the second for vertices of degree \(1\). 
In each step, we pop vertex \(i\) from the first stack, pop vertex \(j\) from the second stack, and connect vertices \(i\) and \(j\). We then decrease \(x_{i}\) by \(1\) and push it back onto one of the stacks depending on its new value. This process continues until the stacks are empty. Each step requires constant time and we perform at most \(\sum_{i=1}^{d}x_{i}=O(d)\) steps, so the total running time is \(O(d)\). **Lemma 2.4**.: _Given a local threading \(\{x_{e}\}\) and a vertex \(v\in V\), we can construct a connected junction graph \(J(v)\) with no self-loops in \(O\left(d(v)\log d(v)+\sum_{u\in N(v)}x_{uv}\right)\) time._ Proof.: Algorithm 1 describes how to construct a connected junction graph \(J(v)\), assuming the notation introduced at the start of this section. This graph is characterized by its connectivity and the absence of self-loops, with the latter being ensured in Step 3b with \(\alpha\neq\beta\). To prove its connectivity, we demonstrate the proper application of the inductive procedure outlined in the proof of Lemma 2.3 in forming a tree (Step 4). We only need to validate that \(x^{\prime}_{1},\ldots,x^{\prime}_{d(v)}\geq 1\), as \(\sum_{i=1}^{d(v)}x^{\prime}_{i}=2(d(v)-1)\) is guaranteed upon the termination of the loop (Step 3). Suppose for contradiction that \(x^{\prime}_{k}<1\). It follows that \(x^{\prime}_{k}=1\) at the start of some iteration and was subsequently decremented, either via Step 3a or 3b. We consider these two cases: * **Case 1** (Step 3a, \(k=\alpha\)): \(x^{\prime}_{k}\geq x^{\prime}_{i}\) for all \(i\in\{1,\ldots,d(v)\}\), so \[\sum_{i=1}^{d(v)}x^{\prime}_{i}\leq d(v)\times x^{\prime}_{k}=d(v)<2d(v),\] contradicting the requirement \(\sum_{i=1}^{d(v)}x^{\prime}_{i}\geq 2d(v)\) for entering the loop. * **Case 2** (Step 3b, \(k=\beta\)): Since \(x^{\prime}_{k}\geq x^{\prime}_{i}\) for all \(i\in\{1,\ldots,d(v)\}\setminus\{\alpha\}\), we have \[\sum_{i\in\{1,\ldots,d(v)\}\setminus\{\alpha\}}x^{\prime}_{i}\leq(d(v)-1)\times x^{\prime}_{k}=d(v)-1.\] Recall that \(\sum_{i=1}^{d(v)}x^{\prime}_{i}=x^{\prime}_{\alpha}+\sum_{i\in\{1,\ldots,d(v)\}\setminus\{\alpha\}}x^{\prime}_{i}\geq 2d(v)\) is required to enter the loop. Hence, applying the above deduction, \(x^{\prime}_{\alpha}>\sum_{i\in\{1,\ldots,d(v)\}\setminus\{\alpha\}}x^{\prime}_{i}\), contradicting the invariant below (Equation 1) of the loop in Step 3. Loop Invariant: The following invariant is maintained by the algorithm's loop (Step 3), established on initialization via (C3): \[x^{\prime}_{i}\leq\sum_{j\in\{1,\ldots,d(v)\}\setminus\{i\}}x^{\prime}_{j}\text{ for all }i\in\{1,\ldots,d(v)\} \tag{1}\] We observe that \(\sum_{i=1}^{d(v)}x_{i}\) decreases by 2 with every iteration: either both sides of Equation 1 are reduced by 1, thereby maintaining the inequality, or the LHS remains unchanged while the RHS is reduced by 2. In the latter scenario, counts \(x^{\prime}_{\alpha},x^{\prime}_{\beta}\geq x^{\prime}_{i}\) are updated in Steps 3ab. Observe that \(x^{\prime}_{\alpha}\geq 2\) because \(\sum_{i=1}^{d(v)}x^{\prime}_{i}\geq 2d(v)\) is a prerequisite for loop entry. 
Letting \(x^{\prime\prime}_{i}\) denote the value of \(x^{\prime}_{i}\) at the beginning of the next iteration, we arrive at the desired conclusion: \[x^{\prime\prime}_{i}=x^{\prime}_{i}\leq(x^{\prime}_{\alpha}-2)+x^{\prime}_{\beta}\leq\sum_{j\in\{1,\ldots,d(v)\}\setminus\{i\}}x^{\prime}_{j}-2=\sum_{j\in\{1,\ldots,d(v)\}\setminus\{i\}}x^{\prime\prime}_{j}.\] Running time: We sort the vertex degrees in \(O(d(v)\log d(v))\) time prior to Step 3 and preserve this ordering throughout the loop (e.g., by employing a binary search tree) for constant-time execution of Steps 3ab. Thus, Steps 3 and 4 together require \(O(\sum_{i=1}^{d(v)}x_{i})\) time (Lemma 2.3), and so the total algorithm running time is \(O(d(v)\log d(v)+\sum_{u\in N(v)}x_{uv})\). ### Obtaining a Closed Walk Now suppose we have a junction graph \(J(v)\) for every vertex \(v\), obtained by repeatedly applying Lemma 2.4 to a given local threading. Our goal is to find a closed walk in \(G\) that has no U-turns and corresponds to these junction graphs. Define the _threading graph_ to be the graph whose vertices correspond to tubes and whose edges are given by the union of all junction graphs (joining at vertices corresponding to the same tube). See Figures 2 and 3 for examples. In this threading graph, we find an _Euler cycle_: a closed walk that visits each edge of the graph exactly once. The presence of an Euler tour through a threading graph is guaranteed because each vertex has even degree [1], specifically twice the count \(x_{e}\) for vertex \(t_{e}\). The tour can be computed in time linear in the number of edges of the input graph [10], which is \(O(\sum_{e\in E}x_{e})\). To ensure that U-turns are avoided in the threading, we enforce that the Euler tour does not consecutively traverse two edges of the same junction graph, which can be done in linear time by a reduction to forbidden-pattern Euler tours [1]. Combining our results, we can convert a local threading \(\{x_{e}\}\) of \(G\) to a corresponding threading of \(G\) in time \(O(\sum_{v\in V}d(v)\log d(v)+\sum_{e\in E}x_{e})=O(n\log\Delta+\sum_{e\in E}x_{e})\), where \(\Delta\) is the maximum vertex degree in the graph. Later (in Section 3.1) we will show that the optimal threading satisfies \(\sum_{e\in E}x_{e}=O(m)\), in which case our running time simplifies to \(O(n\log\Delta+m)\). Figure 3: The target model, a threading graph featuring junction graphs as cycles, and a threading of the input model following an Eulerian cycle of the threading graph. **Theorem 2.5**.: _We can convert a local threading solution of \(G\) into a threading of \(G\) in \(O(n\log\Delta+\sum_{e\in E}x_{e})\) time, which for an optimal threading is \(O(n\log\Delta+m)\)._ ## 3 Worst-Case Bounds In this section, we prove tight worst-case upper and lower bounds on the total length of an optimal threading (Section 3.1) and on the maximum number of times one edge may be visited by an optimal threading (Section 3.2). ### Total Length Every graph \(G\) has a _double threading_ defined by assigning each junction graph \(J(v)\) to be a cycle of length \(d(v)\), as depicted in Figure 2(b). This threading results in each tube being traversed exactly twice, which totals a length of \(2m\). Thus an optimal threading has length at most \(2m\). We can approach this upper bound up to an additive constant by considering graphs with long sequences of bridges, such as the graph illustrated in Figure 4(a). We shall later tighten this upper bound by considering graph properties (Lemma 3.4). 
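As a quick illustration of these bounds, the sketch below (plain Python for this exposition; the \(K_{4}\) instance is our own test case) checks constraints (C1)-(C4) of Definition 2.1 for the double threading \(x_{e}=2\) and for a length-\(8=2m-n\) count assignment on the tetrahedron, which meets the lower bound proved next (and, by Theorem 2.2, yields a threading of that length, cf. Figure 2(a)).

```python
# Constraints (C1)-(C4) of Definition 2.1, checked on two count assignments
# for the tetrahedron K_4 (a sketch for this exposition; simple graphs only).
from collections import defaultdict

def is_local_threading(edges, x):
    inc = defaultdict(list)  # vertex -> incident edges
    for e in edges:
        inc[e[0]].append(e)
        inc[e[1]].append(e)
    if any(x[e] < 1 for e in edges):                 # (C1)
        return False
    for v, es in inc.items():
        total = sum(x[e] for e in es)
        if total % 2 != 0:                           # (C2)
            return False
        if any(total - x[e] < x[e] for e in es):     # (C3)
            return False
        if total < 2 * (len(es) - 1):                # (C4)
            return False
    return True

K4 = [(a, b) for a in range(4) for b in range(a + 1, 4)]
double = {e: 2 for e in K4}             # the double threading, length 2m = 12
perfect = {e: 1 for e in K4}
perfect[(0, 1)] = perfect[(2, 3)] = 2   # total length 2m - n = 8
print(is_local_threading(K4, double), sum(double.values()))    # True 12
print(is_local_threading(K4, perfect), sum(perfect.values()))  # True 8
```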
Now we establish a lower bound on the total length of any threading: **Lemma 3.1**.: _Any threading must have length at least \(2m-n\)._ Proof.: Each junction graph \(J(v)\) is connected, so contains at least \(d(v)-1\) edges, and every edge \(t_{i}t_{j}\) in \(J(v)\) necessitates visits to two tubes, \(t_{i}\) and \(t_{j}\). By summing these visits across all junctions, we double-count visits to tubes. Thus, any threading \(\{x_{uv}\}\) has length \[\sum_{uv\in E}x_{uv}=\frac{1}{2}\sum_{v\in V}\sum_{u\in N(v)}x_{uv}\geq\frac{1}{2}\sum_{v\in V}2(d(v)-1)=2m-n.\] In the ILP view, the inequality step follows from constraint (C4). This lower bound is sometimes tight, such as in Figure 2(a), which we give a special name: **Definition 3.2**.: _A **perfect threading** is a graph threading of length \(2m-n\)._ By the analysis in the proof of Lemma 3.1, we obtain equivalent definitions: **Lemma 3.3**.: _The following are equivalent for a graph threading \(\{x_{uv}\}\):_ 1. \(\{x_{uv}\}\) _is a perfect threading._ 2. _Every junction graph_ \(J(v)\) _is a tree, i.e., has exactly_ \(d(v)-1\) _edges._ 3. _Inequality_ (C4) _holds with equality._ Not every graph has a perfect threading (Figure 4(b)). A key observation is that bridges must be threaded at least twice. If we were to remove a bridge, the graph would have two connected components and any closed walk on the entire graph would have to enter and exit each component at least once. Because the only way to pass between the two connected components is through the bridge, the walk would have to traverse the bridge at least twice. Hence, vertices whose incident edges are all bridges must have junction graphs containing at least \(d(v)\) edges. We call these vertices _London_ vertices. A tighter lower bound is \(2m-n+|L|\) where \(L\) is the set of London vertices in \(G\). Next, we consider an improved upper bound on the length of an optimal threading. While \(2m\) edge visits always suffice to thread a graph, the following lemma demonstrates that this number is never necessary, as any graph without vertices of degree \(1\) contains a cycle. **Lemma 3.4**.: _Let \(C\) be a set of vertex-disjoint simple cycles in \(G\) and let \(|C|\) denote the total number of edges contained in its cycles. In an optimal threading of \(G\), at most \(2m-|C|\) edge visits are needed._ Proof.: We use \(e\in C\) to denote edge \(e\) participating in some cycle in \(C\). Define the set of integers \(\{x_{e}\}\) where \(x_{e}=1\) if \(e\in C\) and \(x_{e}=2\) otherwise. By design, \(\sum_{e\in E}x_{e}=2m-|C|\), and so it suffices to show that \(\{x_{e}\}\) is a valid threading of \(G\), i.e., \(\{x_{e}\}\) satisfies constraints (C1)-(C4). Observe that each vertex \(v\) is either (1) covered once by a single cycle in \(C\), meaning that two of its incident edges are single-threaded while the others are threaded twice, or (2) left uncovered, in which case all of its incident edges are double-threaded. In both scenarios, all constraints are clearly met. Note that (C4) holds as an equality at a vertex covered once by a cycle in \(C\). In Section 5.2, we provide an efficient algorithm for computing a threading that achieves the above bound by reduction to finding the largest set of vertex-disjoint cycles. ### Maximum Visits to One Edge Each edge is threaded at least once in a graph threading, but what is the maximum number of times an edge can be threaded by an optimal solution? 
In this section, we establish that no optimal threading exceeds \(\Delta-1\) visits to a single edge. This upper bound is tight, as demonstrated by edge \(uv\) in Figure 4(c): Constraint (C4) requires multiple visits to at least one edge connected to \(v\), and revisiting \(uv\) is the most economical when the loops incident to \(v\) are long. It is worth noting that bounding the visits to an edge by the maximum degree of its endpoints may not suffice for an optimal solution, as in the case of the left-most edge in Figure 4(c), which is traversed \(\frac{\Delta-1}{2}>2\) times even though both of its endpoints have degree \(2\).

Figure 4: (a) A graph with a minimum threading length of \(2m-6\). (b) Each bridge incident to vertex \(v\) is at least double-threaded, and hence (C4) holds at \(v\) as strict inequality, so the graph has no perfect threading. (c) Edge \(uv\) is threaded \(\Delta-1\) times and the loops (dotted) incident to vertex \(v\) are of length \(\Delta\).

**Lemma 3.5**.: _An optimal threading visits a single edge at most \(\Delta-1\) times._

Proof.: If \(\Delta=2\), then \(G\) is a cycle, in which case the optimal threading traverses every edge once. Hence, for the remainder of this proof we may assume \(\Delta\geq 3\). Suppose \(\{x_{e}\}\) is an optimal threading of a graph \(G\). Let \(uv=\arg\max_{e\in E}x_{e}\) denote the edge with the highest count and assume for a contradiction that \(x_{uv}\geq\Delta\). For simplicity, we first assume that \(d(u),d(v)\geq 3\) and handle the case where \(d(u)=2\) or \(d(v)=2\) at the end. We shall show that we can remove two threads from \(uv\) without violating the problem constraints. That is, the set \(\{\hat{x}_{e}\}\) is a valid threading when defined as \(\hat{x}_{e}=x_{uv}-2\) if \(e=uv\) and \(\hat{x}_{e}=x_{e}\) otherwise. This conclusion contradicts our assumption that \(\{x_{e}\}\) is optimal. Let us denote the neighbors of \(v\) by \(u,u_{1},\ldots,u_{d(v)-1}\). The key to this proof is the following:

**(C4):** Because \(\{x_{e}\}\) satisfies (C3), \(\sum_{i=1}^{d(v)-1}x_{u_{i}v}\geq x_{uv}\geq\Delta\), and so \[\sum_{w\in N(v)}\hat{x}_{wv}=\hat{x}_{uv}+\sum_{i=1}^{d(v)-1}x_{u_{i}v}\geq(\Delta-2)+\Delta\geq 2(d(v)-1).\] By symmetry, \(u\) also satisfies (C4), and therefore (C4) is met by all vertices of \(G\). We are left to show that \(\{\hat{x}_{e}\}\) satisfies (C1)-(C3).

**(C1):** \(\hat{x}_{uv}=x_{uv}-2\geq\Delta-2\geq 1\). For any other edge, \(\hat{x}_{e}=x_{e}\geq 1\).

**(C2):** Constraint (C2) is met as we do not modify the parity of any count.

**(C3):** We now show (C3) is satisfied for \(v\) and, by symmetry, \(u\), and therefore met by all vertices of \(G\). We have \[\sum_{w\in N(v)\setminus\{u\}}\hat{x}_{wv}=\sum_{w\in N(v)\setminus\{u\}}x_{wv}\geq x_{uv}>\hat{x}_{uv},\] so (C3) is satisfied for \(uv\). We now demonstrate (C3) also holds for the remaining \(u_{i}v\)'s. If \(d(v)\geq 4\), because \(x_{uv}\geq x_{u_{i}v}=\hat{x}_{u_{i}v}\) by our choice of \(uv\), we have \[\sum_{w\in N(v)\setminus\{u_{i}\}}\hat{x}_{wv}\underset{(C1)}{\geq}\hat{x}_{uv}+\underbrace{d(v)-2}_{\geq 2}\geq(x_{uv}-2)+2=x_{uv}\geq\hat{x}_{u_{i}v},\] as desired. Otherwise, \(d(v)=3\). Without loss of generality, we want to show that \[x_{u_{1}v}\leq\hat{x}_{uv}+\hat{x}_{u_{2}v}=x_{uv}+x_{u_{2}v}-2.\] Because \(x_{uv}\geq x_{u_{1}v}\) (by choice of \(uv\)) and \(x_{u_{2}v}\geq 1\) (from (C1)), this inequality holds in all cases except when \(x_{u_{1}v}=x_{uv}\) and \(x_{u_{2}v}=1\).
However, in this particular scenario, the sum of counts surrounding \(v\) amounts to \(2x_{uv}+1\), which is odd and contradicts (C2).

If either endpoint of \(uv\) has degree 2, then we instead consider the maximal path \(w_{1},\ldots,w_{\ell}\) including \(uv\) such that all intermediate vertices have degree 2: \(d(w_{2})=\ldots=d(w_{\ell-1})=2\). Thus \(d(w_{1}),d(w_{\ell})\geq 3\) (as we are in the case \(\Delta\geq 3\)) and \(uv=w_{i}w_{i+1}\) for some \(i\). Because \(\{x_{e}\}\) is a valid threading, we must have \(x_{w_{1}w_{2}}=\cdots=x_{w_{\ell-1}w_{\ell}}=x_{uv}\geq\Delta\). Now we modify the threading \(\{x_{e}\}\) by removing two threads from each \(x_{w_{i}w_{i+1}}\) to obtain \(\{\hat{x}_{e}\}\). Constraints (C1)-(C4) remain satisfied at the degree-2 vertices \(w_{2},\ldots,w_{\ell-1}\). Finally, we can apply the proof above to show that the constraints remain satisfied at the end vertices \(w_{1}\) and \(w_{\ell}\) of degree at least 3.

## 4 Polynomial-Time Algorithm via Perfect Matching

In this section, we present our main result: a polynomial-time algorithm for computing an optimal threading of an input graph \(G\). Our approach involves reducing Optimal Threading to the problem of min-weight perfect matching, defined as follows. A _matching_ in a graph is a set of edges without common vertices. A _perfect matching_ is a matching that covers all vertices of the graph, i.e., a matching of cardinality \(\frac{n}{2}\). If the graph has edge weights, the _weight_ of a matching is the sum of the weights of its edges, and a _min-weight perfect matching_ is a perfect matching of minimum possible weight.

We begin by constructing a graph that possesses a perfect matching if and only if \(G\) has a _perfect_ threading (Definition 3.2). This construction gives a reduction from determining the existence of a perfect threading to the perfect matching problem. Next, we extend this construction to ensure that a perfect matching always exists. In this extended construction, a perfect matching of weight \(W\) corresponds to a threading of length \(W+m\), giving a reduction from Optimal Threading to finding a min-weight perfect matching.

### 4.1 Determining Existence of a Perfect Threading

By Lemma 3.3, a threading \(\{x_{uv}\}\) of a graph \(G\) is a perfect threading if and only if it satisfies inequality (C4) with equality:

* **(C*4)** \(\sum_{u\in N(v)}x_{uv}=2(d(v)-1)\) for all \(v\in V\).

In fact, most of the other constraints become redundant in this case:

**Lemma 4.1**.: \(\{x_{uv}\}\) _is a perfect threading if and only if it satisfies_ (C1) _and_ (C*4)_._

Proof.: If \(\{x_{uv}\}\) satisfies (C*4), then it satisfies constraint (C2), because \(2(d(v)-1)\equiv 0\pmod{2}\). (C*4) can be rewritten as \(x_{uv}+\sum_{w\in N(v)\setminus\{u\}}x_{wv}=2(d(v)-1)\), and by (C1), \(\sum_{w\in N(v)\setminus\{u\}}x_{wv}\geq d(v)-1\), so (C3) also holds.

Consider a vertex \(v\) and its neighbors \(u_{1},\ldots,u_{d(v)}\). We can think of constraint (C*4) as allocating \(2(d(v)-1)\) units among \(x_{u_{1}v},\ldots,x_{u_{d(v)}v}\). First, we must allocate one unit to each \(x_{u_{i}v}\) in order to satisfy (C1). This leaves \(d(v)-2\) units to distribute among the edges. We show how to simulate this distribution problem by constructing a graph \(H\) that has a perfect matching if and only if, for every vertex \(v\), we are able to distribute \(d(v)-2\) units among its neighboring \(x_{u_{i}v}\). Thus \(H\) has a perfect matching if and only if \(G\) has a perfect threading.
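Both reductions in this section bottom out in a perfect-matching query, so we illustrate the primitive itself with networkx (a sketch; the helper name is ours). Negating the weights turns a min-weight perfect matching into a max-weight maximum-cardinality matching, which networkx computes directly.

```python
# Min-weight perfect matching via networkx's max_weight_matching, using the
# standard weight-negation trick (helper name is illustrative).
import networkx as nx

def min_weight_perfect_matching(G):
    """Return a min-weight perfect matching of G as a set of edges,
    or None if G has no perfect matching."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    for u, v, w in G.edges(data="weight", default=1):
        H.add_edge(u, v, weight=-w)            # negate: min <-> max weight
    M = nx.max_weight_matching(H, maxcardinality=True)
    if 2 * len(M) != G.number_of_nodes():      # matching is not perfect
        return None
    return M

G = nx.cycle_graph(4)                          # two perfect matchings to pick from
nx.set_edge_attributes(G, {(0, 1): 1, (1, 2): 1, (2, 3): 5, (0, 3): 2}, "weight")
M = min_weight_perfect_matching(G)
print(M, sum(G[u][v]["weight"] for u, v in M))  # picks {(1, 2), (0, 3)}, weight 3
```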
Given a graph \(G\), define the graph \(H\) as follows; refer to Figure 5. For each edge \(uv\in E(G)\), create a perfect matching of \(d_{uv}:=\min\{d(u),d(v)\}-2\) disjoint edges \((\overline{u}v_{i},u\overline{v}_{i})\), among \(2\,d_{uv}\) created vertices \(\overline{u}v_{1},u\overline{v}_{1},\ldots,\overline{u}v_{d_{uv}},u\overline{v}_{d_{uv}}\).1 For each vertex \(v\), create \(d(v)-2\) vertices labeled \(v_{1},\ldots,v_{d(v)-2}\). For every edge \(uv\) incident to \(v\), add an edge between vertices \(v_{i}\) and \(u\overline{v}_{j}\) for all \(1\leq i\leq d(v)-2\) and \(1\leq j\leq d_{uv}\) (forming a biclique). Note that any vertex of degree \(2\) disappears in this construction, because of the \(-2\) in each creation count.

Footnote 1: In the same way that \(uv\) and \(vu\) denote the same edge, we treat labels \(u\overline{v}\) and \(\overline{v}u\) as the same.

Figure 5: Construction of \(H\) and \(\hat{H}\) from \(G\), each with a matching shown in bold and the corresponding threading labeled with counts.

**Theorem 4.2**.: \(G\) _has a perfect threading if and only if \(H\) has a perfect matching._

To prove Theorem 4.2, we will show how to translate between a perfect threading of \(G\) and a perfect matching of \(H\). Given a matching \(M\subseteq E(H)\) of \(H\), define a possible threading solution \(\varphi(M)=\{x_{uv}\}\) by taking \(x_{uv}\) to be \(1\) plus the number of edges \((\overline{u}v_{i},u\overline{v}_{i})\) that are _not_ included in \(M\): \(x_{uv}:=1+\big{|}\{(\overline{u}v_{i},u\overline{v}_{i}):1\leq i\leq d_{uv}\}\setminus M\big{|}\).

**Claim 4.3**.: _If \(M\) is a perfect matching in \(H\), then \(\varphi(M)\) is a perfect threading of \(G\)._

Proof.: By Lemma 4.1, it suffices to prove that \(\varphi(M)\) satisfies (C1) and (C*4). The \(1+\) in the definition of \(\varphi(M)\) ensures (C1). For every vertex \(v\in V\), the vertices \(v_{1},\ldots,v_{d(v)-2}\) are all matched to vertices of the form \(u\overline{v}_{i}\); for each such matched pair, the edge \((u\overline{v}_{i},\overline{u}v_{i})\notin M\). Conversely, for any vertex \(u\overline{v}_{i}\) that is not matched to any \(v_{j}\), the edge \((u\overline{v}_{i},\overline{u}v_{i})\) must be part of the matching. Hence, for each vertex \(v\), the number of edges of the form \((u\overline{v}_{i},\overline{u}v_{i})\) that are not included in \(M\) is exactly \(d(v)-2\). The sum \(\sum_{u\in N(v)}x_{uv}\) includes this count and \(d(v)\) additional \(1\)s, so equals \((d(v)-2)+d(v)=2(d(v)-1)\), satisfying (C*4).

**Claim 4.4**.: _For any perfect threading \(\{x_{uv}\}\) of \(G\), there exists a perfect matching \(M\) of \(H\) such that \(\varphi(M)=\{x_{uv}\}\)._

Proof.: Given a perfect threading \(\{x_{uv}\}\) of \(G\), we construct a perfect matching of \(H\) as follows. First, for every \(uv\in E(G)\), we match the edges \((\overline{u}v_{1},u\overline{v}_{1}),\ldots,(\overline{u}v_{d_{uv}-x_{uv}+1},u\overline{v}_{d_{uv}-x_{uv}+1})\). We show that index \(d_{uv}-x_{uv}+1\) is always nonnegative; when it is zero, we match no such edges. By constraint (C*4), \(x_{uv}=2(d(v)-1)-\sum_{w\in N(v)\setminus\{u\}}x_{wv}\). By constraint (C1), each term in the sum is at least \(1\), so \(x_{uv}\leq d(v)-1\). Thus \(x_{uv}\leq d_{uv}+1\), i.e., \(d_{uv}-x_{uv}+1\geq 0\). With our matching so far, the number of unmatched vertices of the form \(u\overline{v}_{i}\) at each vertex \(v\) is \(\sum_{u\in N(v)}(x_{uv}-1)\). By (C*4), this count is exactly \(2(d(v)-1)-d(v)=d(v)-2\).
Thus we can match each of these unmatched vertices to a unique vertex \(v_{j}\) to complete our perfect matching.

Claims 4.3 and 4.4 complete the proof of Theorem 4.2.

#### 4.1.1 Running-Time Analysis

First, let us calculate the sizes of \(V(H)\) and \(E(H)\). Recall that \(H\) has \(d(v)-2\) vertices corresponding to every vertex \(v\in V(G)\), and up to \(2(\min\{d(u),d(v)\}-2)\leq 2\Delta\) vertices corresponding to every edge \(uv\in E(G)\). Therefore, the maximum number of vertices in \(H\) is \[\sum_{v\in V}(d(v)-2)+2\sum_{uv\in E}\Delta\leq 2m-2n+2m\Delta=O(m\Delta).\] Now recall that \(H\) has \(\min\{d(u),d(v)\}-2\leq\Delta\) edges for every \(uv\) and at most \(\Delta^{3}\) edges for every \(v\). Thus, the total number of edges in \(H\) is upper-bounded by \[2\sum_{uv\in E}\Delta+\sum_{v\in V}\Delta^{3}\leq 2m\cdot\Delta+n\Delta^{3}=O(n\Delta^{3}).\] We conclude that \(H\) can be constructed in \(O(n\Delta^{3}+m\Delta)\) time.

Micali and Vazirani [14] gave an algorithm that computes a maximum matching of a general graph in \(O(\sqrt{n}m)\) time, thereby enabling us to verify the existence of a perfect matching. It follows that we can determine a perfect matching of \(H\) in time \[O(\sqrt{|V(H)|}\cdot|E(H)|)=O(\sqrt{m\Delta}\cdot n\Delta^{3})=O(n\sqrt{m}\cdot\Delta^{3.5}).\] This running time exceeds the construction time of \(H\), and so it is the final running time of our algorithm.

Note that we can improve the bound on the size of \(H\) by considering the _arboricity_ of \(G\). The arboricity \(\alpha(G)\) of a graph is defined as the minimum number of edge-disjoint spanning forests into which \(G\) can be decomposed [10]. This parameter is closely related to the degeneracy of the graph and is often smaller than \(\Delta\). Chiba and Nishizeki [10] show that \(\sum_{uv\in E}\min\{d(u),d(v)\}\leq 2m\alpha(G)\), which would give us a tighter bound on the size of \(V(H)\).

In summary, we can find a perfect threading of \(G\), if one exists, by determining a perfect matching in \(H\) in \(O(n\sqrt{m}\cdot\Delta^{3.5})\) time.

### 4.2 Finding an Optimal Threading

Now we examine the general scenario where a perfect threading may not exist, i.e., (C4) may hold with a strict inequality for some vertex. The graph \(H\) constructed in Section 4.1 permits exactly \(2(d(v)-1)\) visits to vertex \(v\). Our goal is to allow more visits to \(v\) while satisfying constraints (C2) and (C3). In a general threading, \(x_{uv}\leq\min\{d(u),d(v)\}-1\) (as argued in Claim 4.4) is not necessarily true. However, Lemma 3.5 gives us a weaker upper bound, \(x_{uv}\leq\Delta-1\), for any optimal threading.

We therefore modify the construction from Section 4.1 in two ways. First, we generate \(\Delta-2\) copies of every edge, regardless of the degree of its endpoints. Second, for every pair of edges \(uv\) and \(wv\) meeting at vertex \(v\), we introduce an edge between \(u\overline{v}_{i}\) and \(w\overline{v}_{j}\) for all \(1\leq i,j\leq\Delta-2\). Intuitively, these edges represent threads passing through \(v\), going from \(uv\) to \(wv\), after having met the lower bound of \(2(d(v)-1)\) visits. More formally, we define a weighted graph \(\hat{H}\) from \(G\) as follows; refer to Figure 5.
For each edge \(uv\in E(G)\), create a weight-0 perfect matching of \(\Delta-2\) disjoint weight-0 edges \((\overline{u}v_{i},u\overline{v}_{i})\), among \(2(\Delta-2)\) created vertices \(\overline{u}v_{1},u\overline{v}_{1},\ldots,\overline{u}v_{\Delta-2},u\overline{v}_{\Delta-2}\); these edges are black in Figure 5. For every vertex \(v\), create \(d(v)-2\) vertices \(v_{1},\ldots,v_{d(v)-2}\), and add a weight-\(\frac{1}{2}\) edge \((v_{i},u\overline{v}_{j})\) for every \(u\in N(v)\), \(1\leq i\leq d(v)-2\), and \(1\leq j\leq\Delta-2\); these edges are blue in Figure 5. Finally, for each pair of edges \(uv\) and \(wv\) incident to \(v\), create a weight-1 edge \((u\overline{v}_{i},w\overline{v}_{j})\) for every \(1\leq i,j\leq\Delta-2\); these edges are green in Figure 5.

**Theorem 4.5**.: \(G\) _has a threading of length \(W+m\) with \(\max_{uv\in E(G)}x_{uv}\leq\Delta-1\) if and only if \(\hat{H}\) has a perfect matching of weight \(W\)._

To prove Theorem 4.5, we again show how to translate between a threading of \(G\) and a perfect matching of \(\hat{H}\). Given a matching \(M\subseteq E(\hat{H})\) of \(\hat{H}\), define a possible threading solution \(\psi(M)=\{x_{uv}\}\) by taking \(x_{uv}\) to be 1 plus the number of copies of \(uv\) not matched in \(M\): \(x_{uv}:=1+\big{|}\{(\overline{u}v_{i},u\overline{v}_{i}):1\leq i\leq\Delta-2\}\setminus M\big{|}\).

**Claim 4.6**.: _If \(M\) is a perfect matching in \(\hat{H}\) of weight \(W\), then \(\psi(M)=\{x_{uv}\}\) is a threading of \(G\) of length \(W+m\) with \(\max_{uv\in E(G)}x_{uv}\leq\Delta-1\)._

Proof.: By definition of \(\psi(M)\), every \(x_{uv}\) satisfies \(1\leq x_{uv}\leq\Delta-1\). Thus, \(\{x_{uv}\}\) satisfies (C1) and \(\max_{uv\in E(G)}x_{uv}\leq\Delta-1\).

Let \(a_{v}(uv)\) denote the number of vertices \(u\overline{v}_{i}\) (for \(1\leq i\leq\Delta-2\)) matched with some vertex \(v_{j}\), i.e., the number of blue edges incident to a vertex \(u\overline{v}_{i}\) that appear in \(M\). Let \(b_{v}(uv)\) denote the number of vertices \(u\overline{v}_{i}\) (for \(1\leq i\leq\Delta-2\)) matched with some vertex \(w\overline{v}_{j}\), i.e., the number of green edges incident to a vertex \(u\overline{v}_{i}\) that appear in \(M\). Any other vertex \(u\overline{v}_{i}\) (not incident to either a blue or green edge in \(M\)) must be matched to its corresponding vertex \(\overline{u}v_{i}\), which does not contribute to \(x_{uv}\). Hence, \(x_{uv}=1+a_{v}(uv)+b_{v}(uv)\).

Next we prove that \(\{x_{uv}\}\) satisfies constraint (C4). For every vertex \(v\), we have \(\sum_{u\in N(v)}a_{v}(uv)=d(v)-2\), which implies \(\sum_{u\in N(v)}(x_{uv}-1)\geq d(v)-2\), which is equivalent to (C4).

Next consider (C2). Any edge \((u\overline{v}_{i},w\overline{v}_{j})\) present in \(M\) adds 1 to both \(b_{v}(uv)\) and \(b_{v}(wv)\), thereby ensuring \(\sum_{u\in N(v)}b_{v}(uv)\equiv 0\pmod{2}\). Consequently, \[\sum_{u\in N(v)}x_{uv}\equiv\sum_{u\in N(v)}(a_{v}(uv)+1)=2(d(v)-1)\equiv 0\pmod{2}.\]

Finally, consider (C3). Given that \(a_{v}(uv)\leq d(v)-2\), we infer \(\sum_{w\in N(v)\setminus\{u\}}a_{v}(wv)+d(v)-1\geq a_{v}(uv)+1\). Additionally, for each vertex contributing to \(b_{v}(uv)\), its matched vertex contributes to some \(b_{v}(wv)\), so \(\sum_{w\in N(v)\setminus\{u\}}b_{v}(wv)\geq b_{v}(uv)\). Hence, we have \[\sum_{w\in N(v)\setminus\{u\}}x_{wv}=\sum_{w\in N(v)\setminus\{u\}}(a_{v}(wv)+b_{v}(wv)+1)\geq(a_{v}(uv)+1)+b_{v}(uv)=x_{uv}.\] We conclude that \(\{x_{uv}\}\) is a threading of \(G\). Lastly, we compute its length.
The weight of \(M\) is determined by the number of blue and green edges it contains, because the edges \((\overline{u}v_{i},u\overline{v}_{i})\) have zero weight. Each of its blue edges of the form \((v_{i},u\overline{v}_{j})\) has weight \(\frac{1}{2}\) and is accounted for once in \(a_{v}(uv)\), for a total weight of \(a_{v}(uv)/2\). Each of its green edges of the form \((u\overline{v}_{i},w\overline{v}_{j})\) has weight \(1\) and is counted twice -- once in \(b_{v}(uv)\) and once more in \(b_{v}(wv)\) -- for a total weight of \(b_{v}(uv)/2\). Hence, the weight \(W\) of the matching \(M\) is given by \[W=\sum_{v\in V}\sum_{u\in N(v)}\left(\frac{a_{v}(uv)}{2}+\frac{b_{v}(uv)}{2}\right)=2\cdot\sum_{uv\in E}\frac{x_{uv}-1}{2}=\sum_{uv\in E}x_{uv}-m.\] Therefore \(\{x_{uv}\}\) is a threading of \(G\) of length \(W+m\).

**Claim 4.7**.: _For every threading \(\{x_{uv}\}\) of \(G\) such that \(\max_{uv\in E(G)}x_{uv}\leq\Delta-1\), \(\hat{H}\) has a perfect matching \(M\) such that \(\psi(M)=\{x_{uv}\}\)._

Proof.: Let \(\{x_{uv}\}\) be a threading of \(G\) satisfying \(x_{uv}\leq\Delta-1\) for every edge \(uv\in E\). Recall Lemma 2.4, where we demonstrate the construction of a junction graph \(J(v)\) for vertex \(v\). For every vertex \(v\in V\), we know by (C2) and (C4) that \(\sum_{u\in N(v)}x_{uv}=2(d(v)-1)+2k\) for some integer \(k\). Note that \(J(v)\) has \(d(v)\) vertices and \(d(v)-1+k\) edges. Because \(J(v)\) is connected, we can thus select \(k\) edges from \(J(v)\) such that removing them will leave behind a tree. Denote these edges by \((u^{1},w^{1}),\ldots,(u^{k},w^{k})\) where \(u^{1},\ldots,u^{k},w^{1},\ldots,w^{k}\in N(v)\). For each edge \((u^{\ell},w^{\ell})\), match a green edge of the form \((u^{\ell}\overline{v}_{i},w^{\ell}\overline{v}_{j})\).

For every edge \(uv\) connected to \(v\), denote by \(b_{v}(uv)\) the number of vertices of the form \(u\overline{v}_{i}\) currently matched, i.e., the number of times \(u\) appears as an endpoint among the \(k\) edges selected from \(J(v)\). Because the edges remaining in \(J(v)\) after removing \((u^{1},w^{1}),\ldots,(u^{k},w^{k})\) form a tree, every neighbor of \(v\) must have at least one incident edge in \(J(v)\) that is _not_ selected. Because the degree of \(t_{uv}\) in \(J(v)\) is \(x_{uv}\), the number of matched vertices must satisfy \(b_{v}(uv)\leq x_{uv}-1\).2

Footnote 2: Here \(t_{uv}\) is the vertex representing the tube \(uv\). See the notation in Section 2.1.

For each \(u\in N(v)\), let \(a_{v}(uv)=x_{uv}-b_{v}(uv)-1\). It is clear from our above observation that \(a_{v}(uv)\geq 0\). Given \(\sum_{u\in N(v)}b_{v}(uv)=2k\), we have \(\sum_{u\in N(v)}a_{v}(uv)=d(v)-2\). It follows that we can match \(a_{v}(uv)\) vertices in \(u\overline{v}_{1},\ldots,u\overline{v}_{\Delta-2}\) to an equal number of vertices in \(v_{1},\ldots,v_{d(v)-2}\) using blue edges. After executing this procedure, all vertices of the form \(v_{1},\ldots,v_{d(v)-2}\) will have been matched. Furthermore, the number of matched vertices of the form \(u\overline{v}_{i}\) is exactly \(a_{v}(uv)+b_{v}(uv)=x_{uv}-1\). We repeat this procedure for all vertices. Now, for every edge \(uv\), there are two sets of unmatched vertices, each of size \(\Delta-2-(x_{uv}-1)=\Delta-x_{uv}-1\), of the form \(u\overline{v}_{i}\) and \(\overline{u}v_{j}\), respectively.
By rearranging the existing matches, we can ensure these vertices are exactly \(u\overline{v}_{1},\ldots,u\overline{v}_{\Delta-x_{uv}-1},\overline{u}v_{1},\ldots,\overline{u}v_{\Delta-x_{uv}-1}\). Then we can proceed to match every pair \((u\overline{v}_{i},\overline{u}v_{i})\), for \(i\leq\Delta-x_{uv}-1\), using a black edge.

The above process results in a perfect matching \(M\) from the threading \(\{x_{uv}\}\). The number of edges of the form \((u\overline{v}_{i},\overline{u}v_{i})\) included in the matching is precisely \(\Delta-x_{uv}-1\). Hence, \(\psi(M)=\{x_{uv}\}\).

The above two claims complete the proof of Theorem 4.5. Lemma 3.5 establishes that an optimal threading visits an edge no more than \(\Delta-1\) times, and so \(\hat{H}\) must have a perfect matching. Furthermore, if \(M\) is a min-weight perfect matching of \(\hat{H}\), then \(\psi(M)\) is an optimal threading of \(G\). We can therefore find an optimal threading of \(G\) by finding a min-weight perfect matching of \(\hat{H}\) and applying the reduction of Claim 4.6.

Note that the solution presented in this section can be readily adapted to address a constrained variant of Optimal Threading, where each edge is allowed to be traversed only a limited number of times, by imposing limits on the number of vertex and edge copies created during the construction of \(\hat{H}\). This scenario arises, for example, when dealing with tubes of restricted diameter.

#### 4.2.1 Running-Time Analysis

First, let us analyze the size of \(\hat{H}\): the graph contains \(\Delta-2\) vertices for each vertex \(v\in V(G)\) and \(2(\Delta-2)\) vertices for each edge \(uv\in E(G)\). Hence, the total number of vertices in \(\hat{H}\) is \(O(m\Delta)\). In terms of edges, \(\hat{H}\) includes \(\Delta-2\) edges for each edge \(uv\in E(G)\) and no more than \(\Delta^{4}\) edges for each vertex \(v\in V(G)\). Therefore, the total edge count in \(\hat{H}\) is \(O(n\Delta^{4})\). As a result, the construction of \(\hat{H}\) requires \(O(m\Delta+n\Delta^{4})\) time.

Next, we use the algorithm of Galil, Micali, and Gabow [1] to find a minimum-weight perfect matching of \(\hat{H}\). This algorithm has time complexity \(O(nm\log n)\), and so on \(\hat{H}\) it runs in time \[O(|V(\hat{H})||E(\hat{H})|\log(|V(\hat{H})|))=O(m\Delta\cdot n\Delta^{4}\cdot\log(m\Delta))=O(nm\cdot\Delta^{5}\log n).\] As this term dominates the time for constructing \(\hat{H}\), we conclude that our algorithm for Optimal Threading runs in time \(O(nm\cdot\Delta^{5}\log n)\).

#### 4.2.2 Extension to Weighted Graphs

In this section, we adapt our Optimal Threading algorithm to weighted graphs that represent structures whose edges have varying lengths. Specifically, we introduce a weight function \(\ell:E\to\mathbb{R}^{+}\), where \(\ell(e)\) represents the length of tube \(e\). The goal of Optimal Threading is now to minimize the _total length_ of a threading \(T\), defined as \(\sum_{e\in T}\ell(e)\). This problem is equivalent to the weighted version of Optimal Local Threading where we seek to minimize \(\sum_{e\in E}\ell(e)\,x_{e}\) subject to constraints (C1)-(C4).

Our Optimal Threading algorithm hinges upon Lemma 3.5. Fortunately, this result holds for weighted graphs. We demonstrated that, if any threading \(\{x_{e}\}\) has \(x_{e}\geq\Delta\) for some \(e\in E\), then we can construct a strictly shorter threading \(\{x^{\prime}_{e}\}\) that remains consistent with constraints (C1)-(C4).
Specifically, \(x^{\prime}_{e}\leq x_{e}\) for all \(e\in E\) and \(x^{\prime}_{e}<x_{e}\) for at least one \(e\in E\), and so \(\sum_{e\in E}\ell(e)\,x^{\prime}_{e}<\sum_{e\in E}\ell(e)\,x_{e}\) for any weight function \(\ell:E\to\mathbb{R}^{+}\). Hence, an optimal threading never traverses an edge more than \(\Delta-1\) times, as desired.

To adapt our Optimal Threading algorithm to the weighted scenario, we construct a graph similar to \(\hat{H}\) in Section 4.2, but with modified edge weights: a blue edge \((v_{i},u\bar{v}_{j})\) now has weight \(\frac{1}{2}\ell(uv)\) instead of weight \(\frac{1}{2}\), and a green edge \((u\bar{v}_{i},w\bar{v}_{j})\) has weight \(\frac{1}{2}\big{(}\ell(uv)+\ell(wv)\big{)}\) rather than weight \(1\). The black edges continue to have zero weight. Denote this new graph by \(\tilde{H}\). By a similar proof to that of Theorem 4.5, we obtain a reduction from weighted Optimal Threading to minimum-weight perfect matching:

**Theorem 4.8**.: \(G\) _has a threading of length \(W+\sum_{e\in E(G)}\ell(e)\) with \(\max_{e\in E(G)}x_{e}\leq\Delta-1\) if and only if \(\tilde{H}\) has a perfect matching of weight \(W\)._

As before, an edge \(uv\) traversed by a threading corresponds to an edge \((u\bar{v}_{i},\bar{u}v_{i})\) that is _not_ part of the perfect matching of \(\tilde{H}\). Both endpoints of this edge must be matched with either a green or blue edge. Each such matching contributes \(\frac{\ell(uv)}{2}\) to the matching's total weight. Thus, we can show that a perfect matching in \(\tilde{H}\) with weight \(W\) corresponds to a threading of \(G\) of length \(W+\sum_{e\in E}\ell(e)\).

## 5 Special Cases

Here we focus on two scenarios: Optimal Threading on cubic graphs and Double Threading, where each edge can be traversed at most twice.

### 5.1 Cubic Graphs

If graph \(G\) is cubic, then by Lemma 3.5, an optimal threading of \(G\) visits each edge at most twice. Furthermore, in a perfect threading of \(G\), if it exists, exactly one edge incident to each vertex is double-threaded due to constraint (C*4). Hence, it follows that \(G\) has a perfect threading if and only if \(G\) has a perfect matching. A perfect matching of \(G\) gives the set of edges to be double-threaded in a perfect threading. Every bridgeless cubic graph has a perfect matching [10]--it can be computed in \(O(n\log^{4}n)\) time [1]. In fact, if all bridges of a connected cubic graph \(G\) lie on a single path of \(G\), then \(G\) has a perfect matching [1].

### 5.2 The Double Threading Problem

In Double Threading, the goal is to minimize the number of double-threaded edges or, equivalently, to maximize the number of edges visited only once. A solution to Double Threading on a cubic graph also solves Optimal Threading on the same graph. This is due to the observation that either zero or two single-threaded edges are incident to each vertex in a solution to Double Threading, which is exactly the structure of an optimal threading on a cubic graph. By the same observation, a solution to Double Threading matches the upper bound given in Lemma 3.4 for general graphs. We further note that Double Threading may be reduced to the task of finding vertex-disjoint cycles with maximum collective length, which we solve below in Algorithm 2.

1. Construct a weighted graph \(G^{\prime}\) from \(G\) (Figure 6):
   1. For each vertex \(v\in V\), create a complete bipartite graph \(G_{v}=K_{d(v),d(v)}\) with zero-weight edges. Let \(D_{v}^{-}\) and \(D_{v}^{+}\) denote the two disjoint vertex sets of this graph.
   2. For each edge \(uv\in E\), add an edge of unit weight between a vertex of \(D_{u}^{+}\) and a vertex of \(D_{v}^{+}\) such that each vertex of \(D_{u}^{+}\) and \(D_{v}^{+}\) has exactly one such edge incident to it.
   3. For each subgraph \(G_{v}\), add a zero-weight edge between two vertices of \(D_{v}^{-}\).
2. Compute a maximum-weight perfect matching \(M\) in \(G^{\prime}\).
3. Return the edge set \(S\subseteq E\) of \(G\) corresponding to the weighted edges of \(M\).

**Algorithm 2** Maximum Length Vertex-Disjoint Cycles

We sketch the intuition behind why the matching \(M\) corresponds one-to-one to vertex-disjoint cycles in \(G\). Observe two cases for each \(v\): (i) If \(M\) contains the edge of Step 1c, then \(d(v)-2\) vertices in \(D_{v}^{-}\) match with vertices in \(D_{v}^{+}\), leaving two vertices in \(D_{v}^{+}\) to match with their neighbors in adjacent subgraphs; (ii) otherwise, all vertices in \(D_{v}^{+}\) are saturated via connections to \(D_{v}^{-}\). That is, each vertex \(v\) is in exactly one cycle (case (i)) or in none at all (case (ii)).

Running-Time Analysis: We begin our analysis of the running time of Algorithm 2 by first bounding the size of \(G^{\prime}\). Each subgraph \(G_{v}\) has \(2d(v)\) vertices and \(d(v)^{2}+1\) edges, and these subgraphs are connected via \(m\) edges. Because \(\sum_{v\in V}d(v)=2m\) and \(\sum_{v\in V}d(v)^{2}\leq m(2m/(n-1)+n-2)\) [1], we conclude that \(|V(G^{\prime})|=O(m)\) and \(|E(G^{\prime})|=O(nm)\).

The problems of finding a max-weight perfect matching and a min-weight perfect matching are symmetric: we can multiply edge weights by \(-1\) to switch between the two problems. It follows that we can apply the min-weight perfect matching algorithm proposed by Galil, Micali, and Gabow [1] in Step 2 of our algorithm. This procedure runs in \(O(|V(G^{\prime})||E(G^{\prime})|\log|V(G^{\prime})|)=O(nm^{2}\log m)\) time, which dominates the \(O(nm)\) construction time of \(G^{\prime}\) in the first step. Hence, the overall running time of Algorithm 2 is \(O(nm^{2}\log m)\).

## 6 Future Work

Potential avenues for future work include developing tighter upper and lower bounds based on properties of the input graph and devising a more efficient solution to the general problem. Practical challenges associated with the design of reconfigurable structures (Figure 1) inspire further intriguing problems. For instance, friction plays a central role in the deployability of such structures -- it determines the force required to draw the string through the system. According to the Capstan equation, friction increases exponentially with the sum of the absolute values of turning angles in the threading route. Therefore, a logical next step is to investigate a variant of Optimal Threading where the focus is on minimizing this frictional cost instead of the threading length.

## Acknowledgements

We thank Anders Aamand, Kiril Bangachev, Justin Chen, and Surya Mathialagan for insightful discussions. We also thank anonymous reviewers for their helpful comments. This research was supported in part by the NSF Graduate Research Fellowship and the MIT Stata Family Presidential Fellowship.

Figure 6: Illustration of constructing \(G^{\prime}\) from \(G\).
Inspired by artistic practices such as beadwork and himmeli, we study the problem of threading a single string through a set of tubes so that pulling the string taut forms a desired graph. More specifically, for a connected graph (whose edges represent the tubes and whose vertices represent the junctions where they are connected), we give a polynomial-time algorithm for finding a closed walk (a placement of the string) of minimum length. The algorithm is based on a surprising reduction to minimum-weight perfect matching. Along the way, we describe tight worst-case bounds on the minimum length of an optimal threading and on the maximum number of times this threading can visit a single edge. We also give efficient solutions for two special cases: cubic graphs, and the case in which each edge may be threaded at most twice.
2308.00018
Entanglement and chaos near critical point in strongly coupled gauge theory
We perform a holographic study of the high- and low-temperature behaviours of logarithmic negativity (LN) and entanglement wedge cross section (EWCS) in a large $N$ strongly coupled thermal field theory with a critical point having a well-defined gravity dual known as the 1RC black hole. The critical point is defined via the $\xi \to 2$ limit, where $\xi$ is a dimensionless parameter proportional to the charge of the 1RC black hole. We show that the logarithmic negativity in the low and high thermal limits is enhanced with increasing $\xi$. We analytically compute the EWCS in the low and high thermal limits and find agreement with the previously reported numerical results. We holographically explore the correlation between two identical copies of the thermal field theory with a critical point forming a thermofield double state (TFD) by computing the thermo-mutual information (TMI). TMI shows an increasing behaviour with respect to the width of the boundary region. Further, we analyze the impact of an early perturbation on the field theory by analyzing a shock wave perturbation that grows exponentially in the dual eternal 1RC black hole and then estimate the degradation of TMI. However, the rate of such disruption of TMI slows down as the critical parameter $\xi$ takes higher values.
Sanjay Pant, Debanjan Karan
2023-07-31T17:55:54
http://arxiv.org/abs/2308.00018v3
# More on Entanglement and Chaos near Critical Point in Strongly Coupled Gauge Theory

###### Abstract

We perform a holographic study of the high- and low-temperature behaviours of logarithmic negativity (LN) and entanglement wedge cross section (EWCS) in a large \(N\) strongly coupled thermal field theory with a critical point having a well-defined gravity dual known as the 1RC black hole. The critical point is defined via the \(\xi\to 2\) limit, where \(\xi\) is a dimensionless parameter proportional to the charge of the 1RC black hole. We show that the logarithmic negativity in the low and high thermal limits is enhanced with increasing \(\xi\). We analytically compute the EWCS in the low and high thermal limits and find agreement with the previously reported numerical results. We holographically explore the correlation between two identical copies of the thermal field theory with a critical point forming a thermofield double state (TFD) by computing the thermo-mutual information (TMI). TMI shows an increasing behaviour with respect to the width of the boundary region. Further, we analyze the impact of an early perturbation on the field theory by analyzing a shock wave perturbation that grows exponentially in the dual eternal 1RC black hole and then estimate the degradation of TMI. However, the rate of such disruption of TMI slows down as the critical parameter \(\xi\) takes higher values.

ArXiv ePrint: 2308.00018

###### Contents

* 1 Introduction
* 2 Background
* 3 Holographic Entanglement Entropy (HEE)
* 4 Holographic Logarithmic Negativity for two adjacent subsystems
  * 4.1 Holographic Logarithmic Negativity for two adjacent subsystems at low temperature
  * 4.2 Holographic Logarithmic Negativity for two adjacent subsystems at high temperature
* 5 Holographic Logarithmic Negativity for two disjoint subsystems
  * 5.1 Holographic Logarithmic Negativity for two disjoint subsystems at low temperature
  * 5.2 Holographic Logarithmic Negativity for two disjoint subsystems at high temperature
* 6 Holographic Logarithmic Negativity for bipartite systems
  * 6.1 Holographic Logarithmic Negativity for bipartite systems at low temperature
  * 6.2 Holographic Logarithmic Negativity for bipartite systems at high temperature
* 7 Entanglement Wedge Cross Section (EWCS)
  * 7.1 Entanglement Wedge Cross Section at low temperature
  * 7.2 Entanglement Wedge Cross Section at high temperature
* 8 Holographic Mutual Information
  * 8.1 Holographic Thermo Mutual Information (HTMI)
  * 8.2 Holographic Thermo Mutual Information with shockwave
* 9 Summary and Discussions
* A Area of the Extremal Surface for Bipartite Systems
* B Approximate EWCS at low temperature limit in terms of boundary parameters

## 1 Introduction

In quantum information theory, the entanglement entropy (EE) of a bipartite system is synonymous with the von Neumann entropy constructed from the reduced density matrix of one of the subsystems. The Hilbert space of a bipartite system made out of two subsystems \(A\) and \(B\) is described as \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\). The EE of subsystem \(A\) (or of its complement \(B\)) is defined as \[\mathcal{S}_{A}=-\text{Tr}(\rho_{A}\log\rho_{A}), \tag{1.1}\] where \(\rho_{A}=\text{Tr}_{B}(\rho_{AB})\) is the reduced density matrix of \(A\), obtained by taking the partial trace of the total density matrix \(\rho_{AB}\) over the degrees of freedom of \(B\) [1]. However, EE is not a reliable measure for mixed states as it cannot differentiate between classical and quantum correlations.
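As a concrete, non-holographic illustration of eq. (1.1), the following Python sketch computes the entanglement entropy of one qubit of a Bell pair from its reduced density matrix (the example state is our choice, made for simplicity).

```python
# Entanglement entropy of one qubit of a Bell pair, illustrating eq. (1.1).
import numpy as np

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)     # (|00> + |11>)/sqrt(2), a Bell state
rho_AB = np.outer(psi, psi.conj())            # pure-state density matrix

# Partial trace over B: reshape to indices (a, b, a', b') and contract b = b'.
rho_A = np.trace(rho_AB.reshape(2, 2, 2, 2), axis1=1, axis2=3)

evals = np.linalg.eigvalsh(rho_A)
S_A = -sum(p * np.log(p) for p in evals if p > 1e-12)
print(S_A, np.log(2))                         # maximally mixed: S_A = log 2
```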
One of the well-celebrated measures for mixed-state entanglement is the mutual information (MI), which measures the total correlations and characterizes the amount of entanglement between two subsystems. The MI between \(A\) and \(B\) is defined as [2] \[I(A:B)=\mathcal{S}_{A}+\mathcal{S}_{B}-\mathcal{S}_{A\cup B}, \tag{1.2}\] where \(\mathcal{S}_{A}\), \(\mathcal{S}_{B}\), and \(\mathcal{S}_{A\cup B}\) are the von Neumann entropies of subsystems \(A\), \(B\) and \(A\cup B\). For a pure state \(\mathcal{S}_{A\cup B}=0\), and MI reduces to \(I(A:B)=2\mathcal{S}_{A}=2\mathcal{S}_{B}\). A positive value of MI, \(I(A:B)>0\), indicates the presence of entanglement between \(A\) and \(B\). However, \(I(A:B)=0\) implies that the subsystems may or may not be entangled, and in that case additional entanglement measures or criteria are required. Apart from MI, other measures such as the entanglement of purification (EoP) and the logarithmic negativity (LN) are widely used to diagnose the entanglement of a mixed state [3; 4].

In a general context, we have the capability to transform a mixed state, denoted as \(\rho_{AB}\), residing within the Hilbert space \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\), into a pure state \(\ket{\psi}\) within an expanded Hilbert space represented as \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\otimes\mathcal{H}_{A^{\prime}}\otimes\mathcal{H}_{B^{\prime}}\). It is important to note that there exists an infinite array of purification pathways for \(\ket{\psi}\), all satisfying the condition that \(\rho_{AB}=\text{Tr}_{A^{\prime}B^{\prime}}\ket{\psi}\bra{\psi}\). For a given bipartite mixed state \(\rho_{AB}\), we define the EoP, denoted as \(E_{p}(A:B)\), as the minimum of EE among all feasible purifications \[E_{p}(A:B)=\min_{\rho_{AB}=\text{Tr}_{A^{\prime}B^{\prime}}\ket{\Psi}\bra{\Psi}}\{\mathcal{S}(\rho_{AA^{\prime}})\} \tag{1.3}\]

Further, Vidal and Werner proposed a quantity termed logarithmic negativity (LN) as a measure for the upper bound on the distillable entanglement in a mixed state [5]. Unlike the MI, LN captures only the quantum correlations and is defined as \[\mathcal{E}=\log||\rho_{AB}^{T}||, \tag{1.4}\] where \(||\rho_{AB}^{T}||\) is the trace norm and \(\rho_{AB}^{T}\) is the partial transpose of \(\rho_{AB}\) with respect to \(B\). The trace norm is directly related to the entanglement negativity via \(N=\frac{||\rho_{AB}^{T}||-1}{2}\) [5]. LN has been computed in CFT\({}_{2}\) employing a version of the usual replica technique involving a specific four-point function of the twist fields [6; 7; 8; 9; 10; 11; 12; 13]. An analytic form of MI in \(CFT_{2}\) is achieved by using the operator product expansion of twist fields [14]. However, field-theoretic analysis for EoP hardly exists due to the difficulties in the implementation of the minimization procedure, except for some numerical results obtained in free lattice field theory [15].

The direct study of entanglement measures for strongly coupled field theory is still an open question. Nevertheless, one can study strongly coupled systems by exploiting the strong/weak nature of holographic dualities. A concrete example of such a holographic duality is the AdS/CFT correspondence, which suggests that the information of a conformal field theory (CFT) living on the boundary of Anti-de Sitter (AdS) space is encoded in the bulk gravitational theory of AdS [16; 17]. Although a general proof of the conjecture is yet to be achieved, it passes numerous consistency tests in diverse fields.
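For orientation, definition (1.4) can be evaluated directly for small systems. The sketch below uses a two-qubit Werner-type state (our illustrative choice) and computes the logarithmic negativity from the partial transpose; it vanishes in the separable regime.

```python
# Logarithmic negativity of rho = p |Bell><Bell| + (1 - p) I/4, per eq. (1.4).
import numpy as np

def log_negativity(rho):
    # Partial transpose with respect to B: swap the two B indices (b <-> b').
    rho_TB = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    trace_norm = np.abs(np.linalg.eigvalsh(rho_TB)).sum()   # ||rho^T_B||
    return np.log(trace_norm)

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
bell = np.outer(psi, psi)
for p in (1.0, 0.5, 1 / 3):
    rho = p * bell + (1 - p) * np.eye(4) / 4
    print(p, log_negativity(rho))   # log 2 at p = 1; zero for p <= 1/3
```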
The Ryu-Takayanagi formula is a crucial example in favour of the AdS/CFT correspondence, and it provides a holographic prescription for computing the entanglement entropy in the boundary CFT, known as holographic entanglement entropy (HEE) [18; 19]. It states that the entanglement entropy of a certain region \(A\) in the CFT is given by the area of a minimal surface (called the Ryu-Takayanagi surface) in the bulk AdS spacetime that is homologous to the boundary region, \[\mathcal{S}_{A}=\frac{\mathcal{A}(\gamma_{A})}{4G_{N}^{d+1}}, \tag{1.5}\] where \(\gamma_{A}\) is a co-dimension two surface with the area \(\mathcal{A}(\gamma_{A})\) such that \(\partial\gamma_{A}=\partial A\), and \(G_{N}^{d+1}\) is the \((d+1)\)-dimensional Newton's constant. Later, Hubeny, Rangamani, and Takayanagi (HRT) extended this idea to general states including arbitrary time dependence [20]. The study of entanglement entropy in the context of AdS/CFT has provided valuable insights into quantum phase transitions and critical phenomena in both the boundary CFT and the bulk gravity theory [21; 22; 23]. Quite naturally, constructing a holographic prescription for computing the entanglement structure in a mixed state is crucial. In the context of \(AdS_{3}/CFT_{2}\), the authors of [26] propose a holographic conjecture to compute the LN of such boundary CFTs that exactly reproduces the CFT\({}_{2}\) results of [8] in the large central charge limit. See [27; 28] for further generalizations of this proposal. Viable holographic prescriptions for EoP are presented in [29]. One can use the notion of purification and construct a TFD state, which has as its holographic dual a two-sided eternal black hole [30]. In [31] the author shows that the entanglement in a TFD state can be destroyed via the insertion of an early-time operator. The degradation of entanglement is considered a signature of quantum chaos. Entanglement and quantum chaos are two distinct concepts, but they are interconnected in various ways, especially when considering a system described by a mixed density matrix. In a chaotic system the entanglement between two causally disconnected parts of a TFD state can be disrupted by an early perturbation which grows exponentially in time. For a strongly coupled field theory, shockwave analysis and pole skipping are the most commonly used holographic methods [31; 32; 33; 34; 35; 36; 37].

Four-dimensional, finite-temperature \(\mathcal{N}=4\) super Yang-Mills theory charged under a \(U(1)\) subgroup of its \(SU(4)\) R-symmetry, with chemical potential, holographically corresponds to the five-dimensional 1RC black hole background [38; 39]. The low- and high-temperature limits of HEE and HMI near the critical point are explored in the 1RC black hole background [40], where it is shown that at and near the critical point the leading behavior of the mutual information yields a set of critical exponents. Moreover, in [41], a numerical investigation of the EWCS holographically reveals that the EoP in the dual field theory at finite temperature (\(T\)) and chemical potential (\(\mu\)) behaves as a monotonic function of \(\frac{\mu}{T}\), whereas the EoP behaves drastically differently in the presence of a critical point. The investigation of the holographic butterfly effect is carried out within the background of a 1RC black hole; in this context, the dynamical exponent is determined through an expansion of the butterfly velocity in the vicinity of the critical point, as described in [43]. See [44; 45; 46; 47; 48] for more holographic applications in this background.
This work aims to improve the understanding of classical and quantum correlations near the critical point of the four-dimensional, finite-temperature \(\mathcal{N}=4\) super Yang-Mills theory by performing holographic computations of a few relevant quantities, such as LN, EoP and TMI, in the dual five-dimensional 1RC black hole background. In our analysis we find that, for adjacent configurations, the LN at low temperature decreases as the \(\xi\) parameter increases, whereas at high temperature it increases with the \(\xi\) parameter. For disjoint subsystems, LN increases with \(\xi\) at low temperatures and vanishes at high temperatures. In the bipartite case, LN increases with \(\xi\) at low temperatures and decreases at high temperatures. In all the cases, LN remains finite in the critical limit \(\xi\to 2\). EoP also increases with respect to the parameter \(\xi\) and remains finite in the critical limit. We also show that the TMI between two entangled subsystems forming a TFD state increases with their individual sizes. At a fixed size of the subsystem, TMI rises with increasing \(\xi\). In order to expand our investigation into the chaotic dynamics of strongly coupled field theories featuring a critical point, we introduce an early-time, time-dependent perturbation. This perturbation, when realized within the holographic framework, takes the form of an exponentially growing energy pulse, ultimately manifesting as a shock wave. We explicitly disrupt the holographic TMI with a shockwave, and our results indicate that as the \(\xi\) parameter takes higher values, the chaotic behavior of the system gets reduced.

This paper is organized as follows: in section 2 we discuss the holographic dual of the strongly coupled field theory with a critical point. In section 3 we review the HEE; sections 4, 5 and 6 are devoted to the HLN for two subsystems in different configurations. In section 7 we give the analytic form of the EWCS in the low and high thermal limits, and in section 8 we give the detailed computation of the mutual information between two subsystems in a TFD state, known as TMI. Finally, in section 9 we summarize the results.

## 2 Background

As discussed in the introduction, we proceed with a five-dimensional geometry which is holographically dual to a four-dimensional strongly coupled field theory with a critical point. In the existing literature, this is usually known as the 1RC black hole background [38; 44; 45; 46; 39]. Consider the following five-dimensional Einstein-Maxwell-Dilaton action \[\mathcal{S}_{\rm EMD}=\frac{1}{16\pi G_{N}^{(5)}}\int d^{5}x\sqrt{-g}\left[\mathcal{R}-\frac{f(\phi)}{4}F_{\mu\nu}F^{\mu\nu}-\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi-V(\phi)\right], \tag{2.1}\] where \(F_{\mu\nu}\) is the field strength of the gauge field \(A_{\mu}\) and \(\phi\) is a scalar field. We denote the dilaton potential as \(V(\phi)\), and the coupling between the gauge field and the dilaton is characterized by the coupling function \(f(\phi)\). The functions \(f(\phi)\) and \(V(\phi)\) have the following form \[f(\phi)=e^{-\sqrt{\frac{4}{3}}\phi},\quad V(\phi)=-\frac{1}{R^{2}}\left(8e^{\frac{\phi}{\sqrt{6}}}+4e^{-\sqrt{\frac{2}{3}}\phi}\right), \tag{2.2}\] where \(R\) is the \(AdS\) radius.
The solution to the equations of motion of the EMD action in equation (2.1) corresponds to the 1RCBH background described by \[ds^{2}=e^{2A(z)}\left(-h(z)dt^{2}+d\vec{x}_{(3)}^{2}\right)+\frac{e^{2B(z)}}{h(z)}\frac{R^{4}}{z^{4}}dz^{2}, \tag{2.3}\] where \[\begin{split} A(z)&=\ln\left(\frac{R}{z}\right)+\frac{1}{6}\ln\left(1+\frac{Q^{2}z^{2}}{R^{4}}\right)\\ B(z)&=-\ln\left(\frac{R}{z}\right)-\frac{1}{3}\ln\left(1+\frac{Q^{2}z^{2}}{R^{4}}\right)\\ h(z)&=1-\frac{M^{2}z^{4}}{R^{6}\left(1+\frac{Q^{2}z^{2}}{R^{4}}\right)}\\ \phi(z)&=-\sqrt{\frac{2}{3}}\ln\left(1+\frac{Q^{2}z^{2}}{R^{4}}\right)\\ \Phi(z)&=\frac{MQz_{h}^{2}}{R^{4}\left(1+\frac{Q^{2}z_{h}^{2}}{R^{4}}\right)}-\frac{MQz^{2}}{R^{4}\left(1+\frac{Q^{2}z^{2}}{R^{4}}\right)}\end{split} \tag{2.4}\] Here \(\Phi(z)\) is the electric potential, given by the temporal component of the gauge field. In this coordinate system, the boundary is situated at \(z=0\). Note that the electric potential \(\Phi(z)\) is chosen in such a way that it is regular on the boundary [49, 50] and vanishes on the horizon. The parameters \(M\) and \(Q\) are related to the mass and charge, respectively, of the black hole. One can obtain the following expression for the blackening factor \(h(z)\) using the horizon equation, i.e., \(h(z_{h})=0\): \[h(z)=1-\left(\frac{z}{z_{h}}\right)^{4}\left(\frac{1+\left(\frac{Qz_{h}}{R^{2}}\right)^{2}}{1+\left(\frac{Qz}{R^{2}}\right)^{2}}\right)=1-\left(\frac{z}{z_{h}}\right)^{4}\left(\frac{1+\xi}{1+\xi(\frac{z}{z_{h}})^{2}}\right), \tag{2.5}\] where \(\xi\equiv Q^{2}z_{h}^{2}/R^{4}\). The Hawking temperature is given by \[T=\frac{1}{2\pi z_{h}}\left(\frac{2+\left(\frac{Qz_{h}}{R^{2}}\right)^{2}}{\sqrt{1+\left(\frac{Qz_{h}}{R^{2}}\right)^{2}}}\right), \tag{2.6}\] and the chemical potential is \[\mu=\frac{1}{R}\lim_{z\to 0}\Phi(z)=\frac{Q}{R^{2}\sqrt{1+\left(\frac{Qz_{h}}{R^{2}}\right)^{2}}}. \tag{2.7}\] For convenience, we rewrite the temperature in terms of the dimensionless quantities \(\xi\) and \(\hat{T}\) as \[T=\hat{T}\left(\frac{1+\frac{\xi}{2}}{\sqrt{1+\xi}}\right),\quad\hat{T}\equiv\frac{1}{\pi z_{h}}. \tag{2.8}\] It is shown in [40] that the 1RCBH is thermodynamically stable for \(\xi\in[0,2]\), and \(\xi\to 2\) is the critical point.

## 3 Holographic Entanglement Entropy (HEE)

In this section, we provide a concise overview of the HEE calculation as presented in [40]. We leverage the outcomes of HEE computations conducted under various temperature conditions. Subsequently, we employ the method outlined in [42] to compute the HLN for the 1RCBH background. To elaborate, we focus on a boundary subsystem characterized as a rectangular strip denoted as \(A\), with a width \(l\) along the \(x\) direction and extending to a length \(L\) in all the transverse directions \(x^{j}\). The profile of the strip in the bulk is parametrized by expressing the coordinate \(x\) in terms of the bulk coordinate \(z\). This rectangular strip can be precisely defined as follows: \[x\equiv x^{1}\in\Big{[}-\frac{l}{2},\frac{l}{2}\Big{]},\quad x^{j}\in\Big{[}-\frac{L}{2},\frac{L}{2}\Big{]},\quad j=2,3, \tag{3.1}\] where \(L\) is exceedingly large. Determining the HEE of subsystem \(A\) requires us to calculate the smallest area of the co-dimension two hypersurface denoted as \(\gamma_{A}\).
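The background thermodynamics, eqs. (2.5)-(2.8), is simple enough to tabulate numerically. The Python sketch below (in units \(R=1\), an assumption on our part) evaluates \(h(z)\), \(T\), \(\mu\) and \(T/\hat{T}\) up to the critical value \(\xi=2\).

```python
# Numerical check of eqs. (2.5)-(2.8) for the 1RCBH background, in units R = 1.
import numpy as np

def thermo(Q, z_h):
    xi = (Q * z_h) ** 2                                  # xi = Q^2 z_h^2 / R^4
    T = (2 + xi) / (2 * np.pi * z_h * np.sqrt(1 + xi))   # eq. (2.6)
    mu = Q / np.sqrt(1 + xi)                             # eq. (2.7)
    T_hat = 1 / (np.pi * z_h)                            # eq. (2.8)
    return xi, T, mu, T_hat

def h(z, z_h, xi):                                       # blackening factor, eq. (2.5)
    return 1 - (z / z_h) ** 4 * (1 + xi) / (1 + xi * (z / z_h) ** 2)

for Q in (0.0, 1.0, np.sqrt(2.0)):                       # xi = 0, 1, 2 at z_h = 1
    xi, T, mu, T_hat = thermo(Q, z_h=1.0)
    print(f"xi={xi:.0f}  T={T:.4f}  mu={mu:.4f}  "
          f"T/T_hat={T / T_hat:.4f}  h(z_h)={h(1.0, 1.0, xi):.0e}")
```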
The area functional of \(\gamma_{A}\) is as follows: \[\mathcal{A}(\gamma_{A})=\int d^{3}x\ \sqrt{\det(g_{mn})}, \tag{3.2}\] where \(g_{mn}\) is the induced metric of \(\gamma_{A}\). The area can be written in the following form \[\mathcal{A}=2L^{2}\int dz\ e^{3A(z)}\sqrt{x^{\prime}(z)^{2}+\frac{R^{4}}{z^{4}h(z)}e^{2(B(z)-A(z))}} \tag{3.3}\] One can find the conserved quantity corresponding to \(x\) using the Lagrangian in the area functional and obtain the following equation by imposing \(\frac{1}{x^{\prime}(z_{t})}=0\) for \(z\to z_{t}\): \[x^{\prime}(z)=\frac{R^{2}}{z^{2}}\frac{e^{3A(z_{t})}e^{B(z)-A(z)}}{\sqrt{h(z)}\sqrt{e^{6A(z)}-e^{6A(z_{t})}}}, \tag{3.4}\] where \(z_{t}\) is the turning point of the surface \(\gamma_{A}\). Using \(x^{\prime}(z)\), the area functional (3.3) now becomes \[\mathcal{A}=2L^{2}R^{2}\int_{0}^{z_{t}}dz\ \frac{e^{B(z)+2A(z)}}{z^{2}\sqrt{h(z)}}\sqrt{\frac{e^{6A(z)}}{e^{6A(z)}-e^{6A(z_{t})}}} \tag{3.5}\] Finally, the holographic entanglement entropy (HEE) is \[\mathcal{S}=\frac{L^{2}R^{2}}{2G_{N}^{5}}\int_{0}^{z_{t}}dz\ \frac{e^{B(z)+2A(z)}}{z^{2}\sqrt{h(z)}}\sqrt{\frac{e^{6A(z)}}{e^{6A(z)}-e^{6A(z_{t})}}} \tag{3.6}\] From (3.4), the boundary parameter \(l\) and the bulk parameter \(z_{t}\) are related via \[\frac{l}{2}=\int_{0}^{z_{t}}dz\ \frac{R^{2}}{z^{2}}\frac{e^{3A(z_{t})}e^{B(z)-A(z)}}{\sqrt{h(z)}\sqrt{e^{6A(z)}-e^{6A(z_{t})}}} \tag{3.7}\] To express the HEE in terms of the boundary parameter we have to replace the \(z_{t}\) in (3.6) in terms of \(l\). Finding a solution for the integral (3.7) and expressing \(z_{t}\) in relation to \(l\) poses a significant challenge. Nevertheless, in scenarios where the temperature is either low or high, accomplishing this task becomes feasible. Equations (2.4) and (3.5) give the following expression \[\mathcal{A}=2L^{2}R^{3}\int_{0}^{z_{t}}dz\ \frac{z_{t}^{3}}{z^{6}}\sqrt{\frac{1+\xi\big{(}\frac{z}{z_{h}}\big{)}^{2}}{1+\xi\big{(}\frac{z_{t}}{z_{h}}\big{)}^{2}}}\Bigg{[}1-\left(\frac{z}{z_{h}}\right)^{4}\left(\frac{1+\xi}{1+\xi\big{(}\frac{z}{z_{h}}\big{)}^{2}}\right)\Bigg{]}^{-\frac{1}{2}}\Bigg{[}\Big{(}\frac{z_{t}}{z}\Big{)}^{6}\left(\frac{1+\xi\big{(}\frac{z}{z_{h}}\big{)}^{2}}{1+\xi\big{(}\frac{z_{t}}{z_{h}}\big{)}^{2}}\right)-1\Bigg{]}^{-\frac{1}{2}} \tag{3.8}\] In a similar way, equation (3.7) can be expressed as \[\frac{l}{2}=\int_{0}^{z_{t}}dz\left[1+\xi\bigg{(}\frac{z}{z_{h}}\bigg{)}^{2}\right]^{-\frac{1}{2}}\Bigg{[}1-\left(\frac{z}{z_{h}}\right)^{4}\left(\frac{1+\xi}{1+\xi\big{(}\frac{z}{z_{h}}\big{)}^{2}}\right)\Bigg{]}^{-\frac{1}{2}}\Bigg{[}\Big{(}\frac{z_{t}}{z}\Big{)}^{6}\left(\frac{1+\xi\big{(}\frac{z}{z_{h}}\big{)}^{2}}{1+\xi\big{(}\frac{z_{t}}{z_{h}}\big{)}^{2}}\right)-1\Bigg{]}^{-\frac{1}{2}} \tag{3.9}\] Now it is possible to analytically solve the above two integrals by considering several binomial and trinomial expansions. We are basically going to employ the following series expansion formulae to write the integrands of the above two equations \[\begin{split}(x+y)^{-n}&=\sum_{k=0}^{\infty}(-1)^{k}\frac{\Gamma(n+k)}{\Gamma(k+1)\Gamma(n)}x^{-n-k}y^{k};\ \ \text{given}\ |y|<|x|\\ (x+y+z)^{-n}&=\sum_{k=0}^{\infty}\sum_{j=0}^{k}\frac{\Gamma(n+k)}{\Gamma(k+1)\Gamma(n)}\frac{(-1)^{k}\Gamma(k+1)}{\Gamma(j+1)\Gamma(k-j+1)}x^{-n-k}y^{k-j}z^{j},\ \ \text{given}\ |y+z|<|x|\end{split} \tag{3.10}\]

Figure 1: Turning point \(z_{t}\) of the RT surface with respect to the width \(l\).
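Before turning to the series expansions, the width relation (3.9) can be evaluated by direct quadrature. The sketch below (units \(z_{h}=1\), our choice) reproduces the qualitative behavior shown in Fig. 1: \(l\) grows monotonically with \(z_{t}\), and the \(\xi\)-dependence becomes visible only at larger widths. The integrand has an integrable \((z_{t}-z)^{-1/2}\) endpoint singularity, which scipy's adaptive quadrature handles.

```python
# Direct numerical evaluation of the width integral, eq. (3.9), with z_h = 1.
import numpy as np
from scipy.integrate import quad

def width(z_t, xi):
    a_t = 1 + xi * z_t ** 2
    def integrand(z):
        a = 1 + xi * z ** 2
        hz = 1 - z ** 4 * (1 + xi) / a
        bracket = (z_t / z) ** 6 * (a / a_t) - 1
        return 1 / np.sqrt(a * hz * bracket)
    return 2 * quad(integrand, 0, z_t, limit=200)[0]

for z_t in (0.2, 0.5, 0.9):
    print(z_t, [round(width(z_t, xi), 4) for xi in (0.0, 1.0, 2.0)])
```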
Using the expansion formulae (3.10) we can write the following form of the area integral \[\mathcal{A}=\frac{2L^{2}R^{3}}{\pi}\sum_{k=0}^{\infty}\sum_{n=0}^{k}\sum_{m=0}^{\infty}\sum_{j=0}^{\infty}\frac{(-1)^{k+n}\Gamma(k+\frac{1}{2})\Gamma(j+m+\frac{1}{2})}{\Gamma(n+1)\Gamma(k-n+1)\Gamma(j+1)\Gamma(m+1)}\xi^{k-n+m}(1+\xi)^{n}\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2m}\] \[\times\Bigg{[}1+\xi\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2}\Bigg{]}^{-m-\frac{1}{2}}\int_{0}^{z_{t}}dz\ \left[1+\xi\bigg{(}\frac{z}{z_{h}}\bigg{)}^{2}\right]\Bigg{[}1-\bigg{(}\frac{z}{z_{t}}\bigg{)}^{2}\Bigg{]}\,z^{-3}\bigg{(}\frac{z}{z_{t}}\bigg{)}^{6j}\bigg{(}\frac{z}{z_{h}}\bigg{)}^{2(k+n)} \tag{3.11}\]

The area of the extremal surface diverges, owing to its behavior near the boundary, as is to be expected. Upon closer examination, it becomes evident that when the condition \(k+n+3j>1\) is met, the final integral (and consequently the corresponding contribution to the area) remains finite. Consequently, we must isolate and sum the terms corresponding to (\(k=n=j=0\)) and (\(k=1,n=j=0\)) over the variable \(m\) to determine the part of the area containing the divergent component. By carrying out this procedure, one can derive the subsequent outcome: \[\mathcal{A}_{0}\equiv L^{2}R^{3}\Bigg{\{}\frac{1}{\epsilon^{2}}+\frac{3\xi}{2z_{h}^{2}}-\frac{1}{z_{t}^{2}}\Bigg{[}1+\xi\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2}\Bigg{]}^{\frac{3}{2}}\Bigg{\}} \tag{3.12}\] Here \(z=\epsilon\) defines the cutoff surface within the bulk geometry, which is intricately tied to the ultraviolet (UV) regularization of the field theory. It becomes evident that the divergent term in equation (3.12) shows behavior akin to an area law, a characteristic also shared by the associated holographic entanglement entropy. In the context of a \(d\)-dimensional boundary field theory, when the primary divergence in the UV limit, as \(\epsilon\) tends towards zero, adheres to an area law, this outcome is entirely anticipated. To simplify calculations, we will subsequently focus on the finite component of the area, achieved by subtracting the \(1/\epsilon^{2}\) term.
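The divergence structure of (3.12) can be verified directly: integrating the exact integrand of eq. (3.8) down to a cutoff \(\epsilon\) and subtracting \(L^{2}R^{3}/\epsilon^{2}\) should leave a finite \(\epsilon\to 0\) limit. A minimal sketch (in units \(L=R=z_{h}=1\)):

```python
# The area integral of eq. (3.8), cut off at z = eps, minus the 1/eps^2
# counterterm approaches a finite constant as eps -> 0 (units L = R = z_h = 1).
import numpy as np
from scipy.integrate import quad

def area_integrand(z, z_t, xi):
    a, a_t = 1 + xi * z ** 2, 1 + xi * z_t ** 2
    hz = 1 - z ** 4 * (1 + xi) / a
    bracket = (z_t / z) ** 6 * (a / a_t) - 1
    return 2 * (z_t ** 3 / z ** 6) * np.sqrt(a / a_t) / np.sqrt(hz * bracket)

z_t, xi = 0.5, 1.0
for eps in (1e-1, 1e-2, 1e-3):
    A = quad(area_integrand, eps, z_t, args=(z_t, xi), limit=400)[0]
    print(eps, A - 1 / eps ** 2)   # the subtracted area converges
```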
This can be expressed in the following manner: \[\mathcal{A}_{\text{finite}}=\frac{L^{2}R^{3}}{z_{t}^{2}}\Bigg{\{}\frac{3\xi}{2}\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2}-\Bigg{[}1+\xi\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2}\Bigg{]}^{\frac{3}{2}}+\frac{1+\xi}{3\xi}\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2}\left[\Bigg{(}1+\xi\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2}\Bigg{)}^{\frac{3}{2}}-1\right]\Bigg{\}}\\ +\frac{L^{2}R^{3}}{z_{t}^{2}}\Bigg{\{}\sum_{k=2}^{\infty}\sum_{n=0}^{k}\sum_{m=0}^{\infty}\Lambda_{knm}\frac{\Gamma(m+\frac{1}{2})\Gamma(k+n-1)}{\Gamma(k+n+m+1)}\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2(k+n+m)}\\ \times\Bigg{[}(m+1)+(k+n-1)\left(1+\xi\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2}\right)\Bigg{]}\Bigg{\}}\\ +\frac{L^{2}R^{3}}{z_{t}^{2}}\Bigg{\{}\sum_{k=0}^{\infty}\sum_{n=0}^{k}\sum_{m=0}^{\infty}\sum_{j=1}^{\infty}\Lambda_{knm}\frac{\Gamma(m+j+\frac{1}{2})\Gamma(k+n+3j-1)}{\Gamma(j+1)\Gamma(k+n+m+3j+1)}\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2(k+n+m)}\\ \times\Bigg{[}(m+1)+(k+n+3j-1)\left(1+\xi\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2}\right)\Bigg{]}\Bigg{\}} \tag{3.13}\] where \(\Lambda_{knm}\) is given by the following relation \[\Lambda_{knm}\equiv\frac{(-1)^{k+n}\Gamma(k+\frac{1}{2})}{\pi\Gamma(n+1)\Gamma(k-n+1)}\xi^{k-n+m}(1+\xi)^{n}\Bigg{[}1+\xi\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2}\Bigg{]}^{-m-\frac{1}{2}} \tag{3.14}\] Hence, by incorporating the UV-divergence-dependent term into (3.13), we can derive the overall surface area of the extremal surface associated with a rectangular strip having a width of \(l\) on the boundary. Similarly, by following this procedure, we can determine the subsystem's width as a function of the turning point. Consequently, through the utilization of multinomial expansions and the solution of the integral, as outlined in equation (3.9), we can establish the ensuing relationship \[\frac{l}{2}=z_{t}\sum_{k=0}^{\infty}\sum_{n=0}^{k}\sum_{m=0}^{\infty}\sum_{j=0}^{\infty}G_{knmj}F_{knmj}\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2(k+n+m)} \tag{3.15}\] where the constants \(G_{knmj}\) and \(F_{knmj}\) are defined by the following relations \[\begin{split} G_{knmj}&\equiv\frac{\Gamma(k+\frac{1}{2})\Gamma(j+m+\frac{1}{2})\Gamma(2+3j+k+n)}{2\pi\Gamma(n+1)\Gamma(k-n+1)\Gamma(j+1)\Gamma(3+3j+k+n+m)}\\ F_{knmj}&\equiv(-1)^{k+n}\xi^{k-n+m}(1+\xi)^{n}\Bigg{[}1+\xi\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2}\Bigg{]}^{-m}\end{split} \tag{3.16}\] Note that in order to utilize the multinomial expansions for the negative exponents, it has been verified that the subsequent relationships hold true across the entire interval of \(\xi\in[0,2]\) and for values of \(z_{t}\) spanning from the boundary to the horizon: \[\frac{\xi\Big{(}\frac{z_{t}}{z_{h}}\Big{)}^{2}}{1+\xi\Big{(}\frac{z_{t}}{z_{h}}\Big{)}^{2}}\left(1-\frac{z^{2}}{z_{t}^{2}}\right)<1,\qquad\text{and}\qquad\xi\bigg{(}\frac{z}{z_{h}}\bigg{)}^{2}-(1+\xi)\bigg{(}\frac{z}{z_{h}}\bigg{)}^{4}<1 \tag{3.17}\] We now possess the analytical representation for the area of the extremal surface associated with the subsystem of width \(l\). Furthermore, we have successfully elucidated the connection between this width and the turning point of the RT surface, as described in equations (3.13) and (3.15). In the next section, we are going to do this explicitly by considering low- and high-temperature limits. By examining equation (3.13), we can observe the extremal surface area, which is determined by two dimensionless parameters: \(\xi\) and the ratio \(z_{t}/z_{h}\).
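As a consistency check, the truncated quadruple sum of eqs. (3.15)-(3.16) can be compared against direct quadrature of eq. (3.9). The sketch below does this for a point well inside the low-temperature regime; the truncation orders \(K\) and \(J\) are our choices, and since the \(j\)-sum converges only like \(j^{-3/2}\), agreement at roughly the \(10^{-3}\) level is all one should expect.

```python
# Truncated evaluation of eqs. (3.15)-(3.16) versus quadrature of eq. (3.9),
# in units z_h = 1 (our choice).
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def width_quad(z_t, xi):
    a_t = 1 + xi * z_t ** 2
    def f(z):
        a = 1 + xi * z ** 2
        hz = 1 - z ** 4 * (1 + xi) / a
        return 1 / np.sqrt(a * hz * ((z_t / z) ** 6 * a / a_t - 1))
    return 2 * quad(f, 0, z_t, limit=200)[0]

def width_series(z_t, xi, K=8, J=20000):
    a_t = 1 + xi * z_t ** 2
    j = np.arange(J + 1)
    total = 0.0
    for k in range(K + 1):
        for n in range(k + 1):
            for m in range(K + 1):
                F = (-1) ** (k + n) * xi ** (k - n + m) * (1 + xi) ** n * a_t ** -m
                if F == 0.0:
                    continue
                lgG = (gammaln(k + 0.5) + gammaln(j + m + 0.5)
                       + gammaln(2 + 3 * j + k + n) - np.log(2 * np.pi)
                       - gammaln(n + 1) - gammaln(k - n + 1) - gammaln(j + 1)
                       - gammaln(3 + 3 * j + k + n + m))
                total += F * z_t ** (2 * (k + n + m)) * np.exp(lgG).sum()
    return 2 * z_t * total

z_t, xi = 0.3, 1.0
print(width_quad(z_t, xi), width_series(z_t, xi))
```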
In the subsequent two subsections, our focus will be on exploring the holographic entanglement negativity as a function of the ratio \(z_{t}/z_{h}\), a parameter that gives rise to two distinct thermal limits; it is worth noting that \(z_{h}\) is inversely related to the black hole temperature. The investigation with respect to the parameter \(\xi\), which, as mentioned in section 2, governs the critical behavior, is deferred to the analysis of criticality. Considering the ratio \(z_{t}/z_{h}\) between the location of the turning point and the horizon position, one can anticipate two distinct scenarios for the area calculation: \(z_{t}/z_{h}\ll 1\) and \(z_{t}/z_{h}\sim 1\). The former implies that the extremal surface is situated close to the boundary at \(z=0\), while in the latter the surface approaches, but does not cross, the horizon. From the perspective of field theory, these scenarios translate directly into two thermal limits for the subsystem width \(l\): \(\hat{T}l\ll 1\) and \(\hat{T}l\gg 1\), respectively, where \(\hat{T}\) is defined in equation (8). Consequently, one can associate the case \(z_{t}/z_{h}\ll 1\) with the low-temperature limit, corresponding to the ground state of the CFT, and the case \(z_{t}/z_{h}\sim 1\) with the high-temperature limit, where the entanglement of thermal excitations becomes significant.

Before concluding this section, we explore a few key aspects of the turning point \(z_{t}\) in fig.1, which are closely tied to well-established properties of the RT (Ryu-Takayanagi) surface. When the width \(l\) approaches zero, it is difficult to distinguish between the turning points of the RT surfaces associated with different values of \(\xi\); beyond a certain threshold value of \(l\), however, the turning points corresponding to the various \(\xi\) values become distinguishable. Note that, for fixed \(\xi\), the turning point emerges from the origin and increases gradually with \(l\): as the width of the boundary region grows, the RT surface extends deeper into the bulk. For large \(l\), the value of \(z_{t}\) saturates, signifying that the RT surface associated with a very wide boundary region becomes nearly parallel to the horizon. These features of the plot align with our understanding of the nature of the RT surface.

## 4 Holographic Logarithmic Negativity for two adjacent subsystems

In this section, we utilize the holographic framework outlined in references [24; 25; 26; 27] to analyze the 1RC black hole background and determine the holographic entanglement negativity. This calculation involves the summation of the areas of certain extremal surfaces, located in the bulk and associated with the relevant subsystems. As per the conjecture, the holographic entanglement negativity can be expressed in the following manner. \[\mathcal{E}=\frac{3}{16G_{N}^{5}}\left(\mathcal{A}_{1}+\mathcal{A}_{2}- \mathcal{A}_{12}\right) \tag{4.1}\] In this context, \(\mathcal{A}_{i}\) represents the area of a co-dimension two extremal surface that is anchored to the subsystem \(A_{i}\) (refer to fig.2).
It is worth noting that \(\mathcal{A}_{12}\) denotes the area of the extremal surface anchored to the combined subsystem \(A_{1}\cup A_{2}\). In Section 3, we have already presented the formula for the area of the extremal surface associated with a subsystem of given width. In the following subsections, we apply these formulas to compute the HLN in both the low- and high-temperature regimes.

### Holographic Logarithmic Negativity for two adjacent subsystems at low temperature

In this section, we delve into the low-temperature regime of the area functional, along with the width parameter \(l\), and calculate the holographic logarithmic negativity (HLN) in the low-temperature limit for two neighboring subsystems with widths \(l_{1}\) and \(l_{2}\). To validate our findings, we demonstrate their correspondence with those of the AdS-Schwarzschild black hole in the \(\xi\to 0\) limit, as discussed in [51]. Before proceeding with the low-temperature limit, it is essential to address the convergence of the infinite series involved. In the low-temperature regime, where \(z_{t}/z_{h}\ll 1\), both infinite series in equations (3.14) and (3.16) converge. Consequently, expanding equation (3.16) in powers of \(z_{t}/z_{h}\), we derive the following relationship. \[l=z_{t}\Bigg{\{}a_{1}-\frac{a_{1}\xi}{6}\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2} +\bigg{[}\frac{a_{2}(1+\xi)}{2}+\frac{a_{3}\xi^{2}}{24}\bigg{]}\left(\frac{z_ {t}}{z_{h}}\right)^{4}+\mathcal{O}\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{6} \Bigg{\}} \tag{4.2}\] where the constants \(a_{1}\), \(a_{2}\) and \(a_{3}\) are \[\begin{split} a_{1}&\equiv\sum_{j=0}^{\infty} \frac{\Gamma\left(j+\frac{1}{2}\right)}{\sqrt{\pi}\Gamma(j+1)\Gamma(2+3j)}= \frac{3\sqrt{\pi}\Gamma\left(\frac{5}{3}\right)}{\Gamma\left(\frac{1}{6} \right)}\\ a_{2}&\equiv\sum_{j=0}^{\infty}\frac{\Gamma\left( j+\frac{1}{2}\right)}{\sqrt{\pi}\Gamma(j+1)\Gamma(4+3j)}=\frac{\sqrt{\pi} \Gamma\left(\frac{7}{3}\right)}{4\ \Gamma\left(\frac{11}{6}\right)}\\ a_{3}&\equiv\sum_{j=0}^{\infty}\frac{\Gamma\left( j+\frac{1}{2}\right)(4-j)}{\sqrt{\pi}\Gamma(j+1)\Gamma(2+3j)\Gamma(4+3j)}\\ &=\frac{3}{\sqrt{\pi}}\left[\Gamma\left(\frac{5}{6}\right)\Gamma \left(\frac{5}{3}\right)-\frac{3}{5}\Gamma\left(\frac{7}{6}\right)\Gamma\left( \frac{7}{3}\right)\right]-\frac{1}{70}\ {}_{3}F_{2}\left(\frac{3}{2},\frac{5}{3},\frac{7}{3} ;\frac{8}{3},\frac{10}{3};1\right)\end{split} \tag{4.3}\]

Figure 2: Schematic diagram of the extremal surfaces, involving turning points, corresponding to two adjacent boundary subsystems \(A\) and \(B\) having widths \(l_{1}\) and \(l_{2}\) respectively. Here \(z=0\) denotes the boundary whereas \(z=z_{h}\) denotes the horizon.
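The closed forms quoted in (4.3) can be cross-checked against direct partial sums of the defining series, which converge rapidly owing to the \(\Gamma(2+3j)\)-type factors in the denominators. A minimal mpmath sketch (the truncation order is an arbitrary choice):

```python
# Cross-check of a1, a2, a3 of eq. (4.3): partial sums of the defining
# series versus the quoted closed forms.
import mpmath as mp
mp.mp.dps = 30
half = mp.mpf(1)/2

a1_sum = sum(mp.gamma(j + half)/(mp.sqrt(mp.pi)*mp.gamma(j + 1)*mp.gamma(2 + 3*j))
             for j in range(60))
a2_sum = sum(mp.gamma(j + half)/(mp.sqrt(mp.pi)*mp.gamma(j + 1)*mp.gamma(4 + 3*j))
             for j in range(60))
a3_sum = sum(mp.gamma(j + half)*(4 - j)
             / (mp.sqrt(mp.pi)*mp.gamma(j + 1)*mp.gamma(2 + 3*j)*mp.gamma(4 + 3*j))
             for j in range(60))

a1_closed = 3*mp.sqrt(mp.pi)*mp.gamma(mp.mpf(5)/3)/mp.gamma(mp.mpf(1)/6)
a2_closed = mp.sqrt(mp.pi)*mp.gamma(mp.mpf(7)/3)/(4*mp.gamma(mp.mpf(11)/6))
a3_closed = (3/mp.sqrt(mp.pi)*(mp.gamma(mp.mpf(5)/6)*mp.gamma(mp.mpf(5)/3)
             - mp.mpf(3)/5*mp.gamma(mp.mpf(7)/6)*mp.gamma(mp.mpf(7)/3))
             - mp.mpf(1)/70*mp.hyper([mp.mpf(3)/2, mp.mpf(5)/3, mp.mpf(7)/3],
                                     [mp.mpf(8)/3, mp.mpf(10)/3], 1))

for s, c in ((a1_sum, a1_closed), (a2_sum, a2_closed), (a3_sum, a3_closed)):
    print(s, c)   # each pair should agree if the closed forms are correct
```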
Inverting the relation between \(l\) and \(z_{t}\), we obtain \[z_{t}=\frac{l}{a_{1}}\Bigg{\{}1+\frac{\xi}{6a_{1}^{2}}\bigg{(}\frac{l}{z_{h}} \bigg{)}^{2}+\frac{1}{24a_{1}^{4}}\left[\frac{\xi^{2}}{6}\left(1-\frac{a_{3}}{2a _{2}}\right)-\frac{a_{2}}{a_{1}}(1+\xi)\right]\left(\frac{l}{z_{h}}\right)^{4}+ \mathcal{O}\bigg{(}\frac{l}{z_{h}}\bigg{)}^{6}\Bigg{\}} \tag{4.4}\] Similarly, one can expand the infinite series for the area functional, which gives \[\begin{split}\mathcal{A}_{\text{finite}}^{\text{low}}& =\frac{L^{2}R^{3}}{z_{t}^{2}}\left[\frac{1+\xi}{2}\bigg{(}\frac{z_ {t}}{z_{h}}\bigg{)}^{4}-1\right]+\frac{L^{2}R^{3}}{z_{t}^{2}}\sum_{j=1}^{\infty }\frac{\Gamma\left(j+\frac{1}{2}\right)}{\sqrt{\pi}\Gamma(j+1)\Gamma(3j-1)}\\ &\times\left[1+\frac{\xi}{3}\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^ {2}+\left(\frac{(-4\xi^{2}+9\xi+9)j-3(\xi+1)}{18j+6}\right)\left(\frac{z_{t}}{ z_{h}}\right)^{4}\right]\end{split} \tag{4.5}\] Performing the sum and substituting the expression for the turning point, we get \[\begin{split}\mathcal{A}_{\text{finite}}^{\text{low}}& =\frac{R^{3}L^{2}}{l^{2}}\Bigg{\{}a_{1}^{2}(w_{1}-1)+\frac{\xi}{3} \bigg{(}\frac{l}{z_{h}}\bigg{)}^{2}+\frac{1}{2a_{1}^{2}}\Bigg{[}(1+\xi)\left(1 -w_{3}+3w_{2}+\frac{2(w_{1}-1)a_{2}}{a_{1}}\right)\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\frac{\xi^{2}}{6} \left((w_{1}-1)(\frac{a_{3}}{a_{1}}-1)-8w_{2}\right)\Bigg{]}\bigg{(}\frac{l}{ z_{h}}\bigg{)}^{4}\Bigg{\}}\end{split} \tag{4.6}\] where the numerical constants \(w_{1}\), \(w_{2}\), and \(w_{3}\) are \[\begin{split} w_{1}&\equiv\frac{1}{\sqrt{\pi}}\sum _{j=1}^{\infty}\frac{\Gamma\left(j+\frac{1}{2}\right)}{\Gamma(j+1)(3j-1)}= \frac{1}{2^{2/3}}\ _{2}F_{1}\left(\frac{1}{3},\frac{2}{3};\frac{5}{3};-1\right)\\ w_{2}&\equiv\frac{1}{\sqrt{\pi}}\sum_{j=1}^{\infty }\frac{j\Gamma\left(j+\frac{1}{2}\right)}{\Gamma(j+1)(3j-1)(3j+1)}=\frac{1}{16} \ _{3}F_{2}\left(\frac{2}{3},\frac{4}{3},\frac{3}{2};\frac{5}{3},\frac{7}{3};1 \right)\\ w_{3}&\equiv\frac{1}{\sqrt{\pi}}\sum_{j=1}^{\infty }\frac{\Gamma\left(j+\frac{1}{2}\right)}{\Gamma(j+1)(3j-1)(3j+1)}=\frac{3}{16} \ _{3}F_{2}\left(\frac{2}{3},\frac{4}{3},\frac{3}{2};\frac{5}{3},\frac{7}{3};1 \right)-\frac{1}{2^{1/3}}\ _{2}F_{1}\left(\frac{4}{3},\frac{5}{3};\frac{7}{3};-1\right)\end{split} \tag{4.7}\] Note that in the limit \(\xi\to 0\) we get \(z_{h}=1/\pi T\), and the subleading terms become second and fourth order in \(Tl\). To express this relation more compactly, we define \[\begin{split}& c\equiv a_{1}^{2}(w_{1}-1)\\ & f(\xi)\equiv(1+\xi)\frac{\left(1-w_{3}+3w_{2}+\frac{2(w_{1}-1)a _{2}}{a_{1}^{2}}\right)}{a_{1}^{2}}+\frac{\xi^{2}}{6}\frac{\left((w_{1}-1)( \frac{a_{3}}{a_{1}}-1)-8w_{2}\right)}{a_{1}^{2}}\end{split} \tag{4.8}\] Therefore, using the above definitions, we finally obtain the area functional of a boundary subsystem in the shape of a rectangular strip of width \(l\), \[\mathcal{A}_{\text{finite}}^{\text{low}}=R^{3}\bigg{(}\frac{L}{l}\bigg{)}^{2} \Bigg{\{}c+\frac{\xi}{3}\Big{(}\pi\hat{T}l\Big{)}^{2}+\frac{1}{2}f(\xi)\Big{(} \pi\hat{T}l\Big{)}^{4}\Bigg{\}} \tag{4.9}\] In the case of two adjoining subsystems, we designate the subsystems as \(A_{1}\) and \(A_{2}\), each a rectangular strip of width \(l_{1}\) and \(l_{2}\) respectively.
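Before quoting the closed-form result, note that the negativity can also be assembled numerically straight from the conjecture (4.1) together with the low-temperature area (4.9). A minimal sketch follows; the constants \(c\) and \(f(\xi)\) are assumed precomputed from (4.7)-(4.8), and the overall factors \(R^{3}\) and \(1/G_{N}^{5}\) are set to one for illustration.

```python
# Sketch: adjacent-interval HLN at low temperature from (4.1) and (4.9).
# c and f are assumed precomputed from (4.7)-(4.8); R^3 and G_N^5 set to 1.
import math

def area_low(l, L, That, xi, c, f):
    x = math.pi * That * l            # the combination (pi T-hat l) of (4.9)
    return (L/l)**2 * (c + xi/3*x**2 + 0.5*f*x**4)

def E_low_adjacent(l1, l2, L, That, xi, c, f):
    return (3/16) * (area_low(l1, L, That, xi, c, f)
                     + area_low(l2, L, That, xi, c, f)
                     - area_low(l1 + l2, L, That, xi, c, f))
```

The closed form below is the same combination carried out analytically.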
Utilizing equation (4.1), we derive the following expression for the HLN in the low-temperature regime for two adjacent subsystems. \[\mathcal{E}_{low}=\frac{3R^{3}}{16G_{N}^{5}}\Bigg{[}c\Bigg{\{}\bigg{(} \frac{L}{l_{1}}\bigg{)}^{2}+\bigg{(}\frac{L}{l_{2}}\bigg{)}^{2}-\bigg{(}\frac{L} {l_{1}+l_{2}}\bigg{)}^{2}\Bigg{\}}+\frac{\xi}{3}L^{2}\pi^{2}\hat{T}^{2}-f(\xi) \left(\pi^{4}L^{2}\hat{T}^{4}\right)l_{1}l_{2}\Bigg{]} \tag{4.10}\] We emphasize that the HLN expression above is derived from the finite portion of the extremal areas of the adjacent subsystems; consequently, the UV-divergent component does not appear in it. Had we worked with the complete area expression, the UV-divergent term would also appear in the negativity. One can now compare this HLN expression with the one in [51] for the AdS\({}_{d+1}\) Schwarzschild black hole; note that this comparison is valid in the limit \(Q\to 0\), which can be achieved by setting \(\xi\to 0\). In the expression above, the three terms within the curly braces are inversely proportional to the squares of the lengths of the relevant boundary regions. The term involving \(f(\xi)\) carries the product of the widths of the two subregions together with the \(\hat{T}^{4}\) dependence. The second term vanishes in the limit \(\xi\to 0\), and the HLN remains finite in the critical limit \(\xi\to 2\).

### Holographic Logarithmic Negativity for two adjacent subsystems at high temperature

As mentioned in the previous section, the convergence of the infinite series is always a concern. Fortunately, various summation and regularization methods are available to address this issue. We observe that in the high-temperature limit, where \(z_{t}\) approaches \(z_{h}\), the infinite series in equation (3.13) does not converge. Nevertheless, it is possible to regularize the series by reordering its terms in a manner that allows us to recover the component proportional to \(l\).
In the limit where \(z_{t}\) tends toward \(z_{h}\), the area of the RT surface takes the following form. \[\mathcal{A}_{\text{finite}}^{\text{high}}=R^{3}\bigg{(}\frac{L}{z_{h}} \bigg{)}^{2}\Bigg{\{}\sqrt{1+\xi}\left(\frac{l}{z_{h}}\right)+(S_{1}+S_{2}+S_ {3})\Bigg{\}} \tag{4.11}\] where \(S_{1}\), \(S_{2}\) and \(S_{3}\) depend on \(\xi\) and are given by \[S_{1}\equiv\frac{3\xi}{2}-\frac{1}{3}-\frac{11}{5\xi}-\frac{244 }{105\xi^{2}}-\frac{32}{35\xi^{3}}-\frac{16}{35\xi^{4}}+\sqrt{\xi+1}\left(- \frac{64\xi}{105}-\frac{124}{105}+\frac{26}{21\xi}+\frac{214}{105\xi^{2}}+ \frac{24}{35\xi^{3}}+\frac{16}{35\xi^{4}}\right)\] \[S_{2}\equiv\sum_{k=2}^{\infty}\sum_{n=0}^{k}\sum_{m=0}^{\infty} \frac{\Gamma\left(k+\frac{1}{2}\right)\Gamma\left(m+\frac{1}{2}\right)\Gamma( k+n+2)(-1)^{k+n}\xi^{k-n+m}(1+\xi)^{n-m-\frac{1}{2}}}{\pi\Gamma(n+1)\Gamma(k-n+1) \Gamma(k+n+m+3)}\] \[\times\Bigg{\{}\frac{m+1}{k+n-1}\left[1+\frac{m+1}{k+n}\left(2+ \frac{m}{k+n+1}\right)\right]+\frac{(1+\xi)(m+1)}{k+n}\left(2+\frac{m}{k+n+1} \right)\Bigg{\}} \tag{4.12}\] \[S_{3} \equiv\sum_{k=2}^{\infty}\sum_{n=0}^{k}\sum_{m=0}^{\infty}\sum_{j=1} ^{\infty}\frac{\Gamma\left(k+\frac{1}{2}\right)\Gamma\left(j+m+\frac{1}{2} \right)\Gamma(k+n+3j+2)}{\pi\Gamma(n+1)\Gamma(j+1)\Gamma(k-n+1)\Gamma(k+n+m+3j+3)} \tag{4.13}\] \[\times(-1)^{k+n}\xi^{k-n+m}(1+\xi)^{n-m-\frac{1}{2}}\times\left\{ \frac{m+1}{k+n+3j-1}\left[1+\frac{m+1}{k+n+3j}\left(2+\frac{m}{k+n+3j+1}\right) \right]\right.\] \[+\left.\frac{(1+\xi)(m+1)}{k+n+3j}\left(2+\frac{m}{k+n+3j+1} \right)\frac{}{}\right\}\] Finally, in terms of the temperature \(\hat{T}\), we can write \[\mathcal{A}_{\text{finite}}^{\text{high}}=R^{3}\!\left(\frac{L}{l}\right)^{2 }\!\left\{\sqrt{1+\xi}\!\left(\pi\hat{T}l\right)^{3}+S_{4}\!\left(\pi\hat{T}l \right)^{2}\right\}\ \ \text{where}\ \ S_{4}=S_{1}+S_{2}+S_{3} \tag{4.14}\] Therefore, using the formula (4.1) for the HLN, we find the HLN in the high-temperature regime for two adjacent subsystems \(A_{1}\) and \(A_{2}\), \[\mathcal{E}_{high}=\frac{3R^{3}}{16G_{N}^{5}}\Bigg{\{}S_{4}L^{2}\!\left(\pi \hat{T}\right)^{2}\Bigg{\}} \tag{4.15}\] As previously noted, the negativity expression above is derived from the finite part of the extremal areas of the adjacent subsystems; consequently, the UV-divergent component does not appear in it. To establish a comparison, one can contrast this entanglement negativity with the result obtained in [51] for the AdS\({}_{d+1}\) Schwarzschild black hole in the limit \(\xi\to 0\). To observe an exact match in the high-temperature scenario, it is necessary to expand the exponential terms in [51] to linear order in \(l\), resulting in a dependence solely on \(T^{d-2}\). Before concluding this section, it is worth noting that in equation (4.14) the first temperature-dependent term scales with the volume of the rectangular strip, \(L^{2}l\), while the second term scales with its area. Consequently, the first term characterizes thermal entropy, while the second represents the entanglement entropy between the strip region and its complement. In the negativity, however, the volume-dependent component is absent. This indicates that at high temperatures entanglement entropy and thermal entropy become equivalent and exhibit a temperature dependence of \(\hat{T}^{2}\), as deduced from the area calculation in [40].

## 5 Holographic Logarithmic Negativity for two disjoint subsystems

In this section, we determine the HLN for two disjoint subsystems in the background of a 1RC black hole.
Similar to our analysis of adjacent subsystems, we will establish a connection between our findings and the general results outlined in [42] for a \((d+1)\)-dimensional AdS Schwarzschild black hole. Specifically, we focus on two non-overlapping intervals, denoted as \(A_{1}\) and \(A_{2}\), with widths \(l_{1}\) and \(l_{2}\) respectively, as illustrated in fig.3. These intervals collectively constitute the mixed-state subsystem \(A\), with a gap separating them corresponding to a subsystem \(A_{m}\subset B\) of width \(l_{m}\), where \(B=A^{c}\) represents the remainder of the system. To be precise, we define the three intervals as follows \[A_{1} : x^{1}\equiv x\in\left[-\frac{l_{1}}{2},\frac{l_{1}}{2}\right],\qquad x ^{(j)}\in\left[-\frac{L}{2},\frac{L}{2}\right]\ \ \text{ where }j=2,3 \tag{5.1}\] \[A_{2} : x^{2}\equiv x\in\left[-\frac{l_{2}}{2},\frac{l_{2}}{2}\right], \qquad x^{(j)}\in\left[-\frac{L}{2},\frac{L}{2}\right]\ \ \text{ where }j=2,3\] (5.2) \[A_{m} : x^{m}\equiv x\in\left[-\frac{l_{m}}{2},\frac{l_{m}}{2}\right], \quad x^{(j)}\in\left[-\frac{L}{2},\frac{L}{2}\right]\ \ \text{ where }j=2,3 \tag{5.3}\] where the transverse size \(L\) is taken to be very large, \(L\to\infty\). Following the conjecture of [42; 52], the entanglement negativity corresponding to the disjoint intervals can be written as \[\mathcal{E}=\frac{3}{16G_{N}^{5}}\Big{(}\mathcal{A}_{A_{1}\cup A_{m}}+ \mathcal{A}_{A_{m}\cup A_{2}}-\mathcal{A}_{A_{1}\cup A_{m}\cup A_{2}}- \mathcal{A}_{A_{m}}\Big{)} \tag{5.4}\] where \(\mathcal{A}_{A_{1}\cup A_{m}}\) and \(\mathcal{A}_{A_{m}\cup A_{2}}\) are the areas of the extremal surfaces anchored to the regions \(A_{1}\cup A_{m}\) and \(A_{2}\cup A_{m}\) respectively, \(\mathcal{A}_{A_{1}\cup A_{m}\cup A_{2}}\) is the area of the extremal surface anchored to the region \(A_{1}\cup A_{m}\cup A_{2}\), and \(\mathcal{A}_{A_{m}}\) is defined analogously. Note that the three surfaces corresponding to the intervals \(A_{1}\), \(A_{2}\) and \(A_{m}\) have turning points labeled \(z_{t_{1}}\), \(z_{t_{2}}\) and \(z_{t_{m}}\). Utilizing the areas of the RT surfaces of the respective regions, we can calculate the HLN; we examine the low- and high-temperature limits separately in the following two subsections.

### Holographic Logarithmic Negativity for two disjoint subsystems at low temperature

In our prior analysis, we derived the expression for the extremal surface area corresponding to a region of width \(l\) in the low-temperature limit. We reproduce it here: \[\mathcal{A}_{\text{finite}}^{\text{low}}=R^{3}\bigg{(}\frac{L}{l}\bigg{)}^{2 }\Bigg{\{}c+\frac{\xi}{3}\Big{(}\pi\hat{T}l\Big{)}^{2}+\frac{1}{2}f(\xi)\Big{(} \pi\hat{T}l\Big{)}^{4}\Bigg{\}} \tag{5.5}\] To calculate the HLN, we use this relation to write down the areas of the extremal surfaces associated with all the intervals appearing in equation (5.4).
By performing this procedure, we derive the following relationships \[\mathcal{A}_{A_{1}\cup A_{2}\cup A_{m}}=R^{3}\bigg{(}\frac{L}{l_{ 1}+l_{2}+l_{m}}\bigg{)}^{2}\Bigg{\{}c+\frac{\xi}{3}\Big{(}\pi\hat{T}(l_{1}+l_{2 }+l_{m})\Big{)}^{2}+\frac{1}{2}f(\xi)\Big{(}\pi\hat{T}(l_{1}+l_{2}+l_{m})\Big{)} ^{4}\Bigg{\}}\] \[\mathcal{A}_{A_{1}\cup A_{m}}=R^{3}\bigg{(}\frac{L}{l_{1}+l_{m}} \bigg{)}^{2}\Bigg{\{}c+\frac{\xi}{3}\Big{(}\pi\hat{T}(l_{1}+l_{m})\Big{)}^{2}+ \frac{1}{2}f(\xi)\Big{(}\pi\hat{T}(l_{1}+l_{m})\Big{)}^{4}\Bigg{\}}\] \[\mathcal{A}_{A_{2}\cup A_{m}}=R^{3}\bigg{(}\frac{L}{l_{2}+l_{m}} \bigg{)}^{2}\Bigg{\{}c+\frac{\xi}{3}\Big{(}\pi\hat{T}(l_{2}+l_{m})\Big{)}^{2}+ \frac{1}{2}f(\xi)\Big{(}\pi\hat{T}(l_{2}+l_{m})\Big{)}^{4}\Bigg{\}}\] \[\mathcal{A}_{A_{m}}=R^{3}\bigg{(}\frac{L}{l_{m}}\bigg{)}^{2} \Bigg{\{}c+\frac{\xi}{3}\Big{(}\pi\hat{T}l_{m}\Big{)}^{2}+\frac{1}{2}f(\xi) \Big{(}\pi\hat{T}l_{m}\Big{)}^{4}\Bigg{\}} \tag{5.6}\] Using the above in (5.4), one obtains the HLN at low temperature for two disjoint subsystems \[\begin{split}\mathcal{E}_{low}&=\frac{3R^{3}}{16G_{N}^ {5}}\Bigg{[}c\Bigg{\{}\bigg{(}\frac{L}{l_{1}+l_{m}}\bigg{)}^{2}+\bigg{(}\frac{L }{l_{2}+l_{m}}\bigg{)}^{2}-\bigg{(}\frac{L}{l_{1}+l_{2}+l_{m}}\bigg{)}^{2}- \bigg{(}\frac{L}{l_{m}}\bigg{)}^{2}\Bigg{\}}\\ &\qquad+\frac{1}{2}f(\xi)\left(\pi^{4}L^{2}\hat{T}^{4}\right)\Bigg{\{} (l_{1}+l_{m})^{2}+(l_{2}+l_{m})^{2}-(l_{1}+l_{2}+l_{m})^{2}-l_{m}^{2}\Bigg{\}} \Bigg{]}\end{split} \tag{5.7}\] Note that we are dealing exclusively with the finite portion of the area; this is why, as in the previous cases, the HLN does not contain the UV-divergent term \(L^{2}/\epsilon^{2}\). In the scenario of disjoint intervals, however, a closer examination reveals that even if we consider the entire area expression, including the divergent portion, the HLN remains independent of the cutoff. This stands in contrast to the situation encountered for mixed-state configurations of adjacent intervals. The first term on the right-hand side above originates from the contribution of the AdS\({}_{5}\) vacuum and is unaffected by temperature. The remaining terms represent finite-temperature corrections to the HLN at low temperatures, closely resembling those of the mixed-state scenario of adjacent intervals; a similar outcome has been documented in [52]. One naturally anticipates that in the limit \(l_{m}\to\epsilon\), the entanglement negativity for disjoint subsystems reproduces the adjacent-subsystem result of equation (4.10). Taking \(l_{m}\to\epsilon\) in equation (5.7) indeed recreates both the first (temperature-independent) component and the third component (proportional to \(\hat{T}^{4}l_{1}l_{2}\)). Furthermore, a cutoff-dependent term emerges, of the form \(\frac{2}{d-2}\big{(}\frac{L}{\epsilon}\big{)}^{d-2}\); intriguingly, this term would have been present in the low-temperature HLN had the cutoff-dependent part of the areas of the RT surfaces for the various subregions been retained. Hence, we deduce that as \(l_{m}\) approaches \(\epsilon\), the entanglement negativity for disjoint subsystems converges to that of adjacent subsystems.

Figure 3: Schematic diagram of the extremal surfaces at low effective temperature, involving the turning points, corresponding to the subregions \(A_{1}\) and \(A_{2}\) separated by an interval \(A_{m}\).
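This limiting behavior is easy to probe numerically from the same building blocks. The sketch below assembles (5.4) from the low-temperature area (5.5); the constants \(c\) and \(f(\xi)\) are again assumed precomputed from (4.7)-(4.8), and the \(R^{3}\), \(1/G_{N}^{5}\) prefactors are set to one.

```python
# Sketch: disjoint-interval HLN at low temperature from (5.4) and (5.5).
# c and f are assumed precomputed from (4.7)-(4.8); prefactors set to 1.
import math

def area_low(l, L, That, xi, c, f):
    x = math.pi * That * l
    return (L/l)**2 * (c + xi/3*x**2 + 0.5*f*x**4)

def E_low_disjoint(l1, l2, lm, L, That, xi, c, f):
    return (3/16) * (area_low(l1 + lm, L, That, xi, c, f)
                     + area_low(l2 + lm, L, That, xi, c, f)
                     - area_low(l1 + l2 + lm, L, That, xi, c, f)
                     - area_low(lm, L, That, xi, c, f))

# Shrinking lm towards the cutoff reproduces the adjacent-interval result
# up to the divergent -c (L/lm)^2 piece discussed in the text.
```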
### Holographic Logarithmic Negativity for two disjoint subsystems at high temperature

In our previous analysis, we obtained the expression for the area of the extremal surface corresponding to a region of width \(l\) in the high-temperature limit. We therefore rewrite the area as \[\mathcal{A}_{\text{finite}}^{\text{high}}=R^{3}\bigg{(}\frac{L}{l}\bigg{)}^{2 }\Bigg{\{}\sqrt{1+\xi}\Big{(}\pi\hat{T}l\Big{)}^{3}+S_{4}\Big{(}\pi\hat{T}l \Big{)}^{2}\Bigg{\}} \tag{5.8}\] To compute the HLN, we employ this relation to write down the areas of the extremal surfaces of all the intervals required in equation (5.4). By doing so we obtain the following relations \[\mathcal{A}_{A_{1}\cup A_{2}\cup A_{m}}=R^{3}\bigg{(}\frac{L}{l_{ 1}+l_{2}+l_{m}}\bigg{)}^{2}\Bigg{\{}\sqrt{1+\xi}\Big{(}\pi\hat{T}(l_{1}+l_{2} +l_{m})\Big{)}^{3}+S_{4}\Big{(}\pi\hat{T}(l_{1}+l_{2}+l_{m})\Big{)}^{2}\Bigg{\}}\] \[\mathcal{A}_{A_{1}\cup A_{m}}=R^{3}\bigg{(}\frac{L}{l_{1}+l_{m}} \bigg{)}^{2}\Bigg{\{}\sqrt{1+\xi}\Big{(}\pi\hat{T}(l_{1}+l_{m})\Big{)}^{3}+S_{ 4}\Big{(}\pi\hat{T}(l_{1}+l_{m})\Big{)}^{2}\Bigg{\}}\] \[\mathcal{A}_{A_{2}\cup A_{m}}=R^{3}\bigg{(}\frac{L}{l_{2}+l_{m}} \bigg{)}^{2}\Bigg{\{}\sqrt{1+\xi}\Big{(}\pi\hat{T}(l_{2}+l_{m})\Big{)}^{3}+S_{ 4}\Big{(}\pi\hat{T}(l_{2}+l_{m})\Big{)}^{2}\Bigg{\}}\] \[\mathcal{A}_{A_{m}}=R^{3}\bigg{(}\frac{L}{l_{m}}\bigg{)}^{2} \Bigg{\{}\sqrt{1+\xi}\Big{(}\pi\hat{T}l_{m}\Big{)}^{3}+S_{4}\Big{(}\pi\hat{T}l _{m}\Big{)}^{2}\Bigg{\}}\] Using the above equation in (5.4), we obtain the expression for the HLN at high temperature, \[\mathcal{E}_{high}=\frac{3R^{3}L^{2}}{16G_{N}^{5}}\Bigg{\{}\sqrt{1+ \xi}(\pi\hat{T})^{3}(l_{1}+l_{m})+S_{4}(\pi\hat{T})^{2}+\sqrt{1+\xi}(\pi\hat{T})^{3}(l_{2}+l _{m})+S_{4}(\pi\hat{T})^{2}\] \[-\sqrt{1+\xi}(\pi\hat{T})^{3}(l_{1}+l_{2}+l_{m})-S_{4}(\pi\hat{T})^{2}- \sqrt{1+\xi}(\pi\hat{T})^{3}l_{m}-S_{4}(\pi\hat{T})^{2}\Bigg{\}} \tag{5.10}\] By simplifying the expression above, we readily see that the HLN evaluates to zero. Consequently, for two disjoint subsystems at high temperature the HLN vanishes. This outcome aligns with expectations, since entanglement negativity exclusively quantifies quantum correlations, whereas high temperatures primarily entail thermal entropy.
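The cancellation is purely algebraic and can be confirmed with a one-line symbolic check; the following sketch (with the overall \(R^{3}L^{2}\) factor stripped off) is purely illustrative.

```python
# Sketch: symbolic check that the four high-temperature areas cancel in
# the disjoint-interval combination (5.4).
import sympy as sp

l1, l2, lm, T, xi, S4 = sp.symbols('l1 l2 lm T xi S4', positive=True)

def area_high(l):
    # eq. (5.8) with the prefactor R^3 L^2 stripped off
    return sp.sqrt(1 + xi)*(sp.pi*T)**3*l + S4*(sp.pi*T)**2

E = (area_high(l1 + lm) + area_high(l2 + lm)
     - area_high(l1 + l2 + lm) - area_high(lm))
print(sp.simplify(E))   # -> 0
```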
To validate this outcome, it is instructive to examine the HLN expression for disjoint subsystems at high temperature provided in reference [52], which offers the following expression for a generic \((d+1)\)-dimensional AdS Schwarzschild background. \[\mathcal{E}=\frac{3}{16G_{N}^{5}}\bigg{(}\frac{4\pi}{d}\bigg{)}^{d- 1}\frac{C_{1}}{4\pi}\sqrt{2d(d-1)}L^{d-2}T^{d-2}\Bigg{\{} -e^{-\sqrt{\frac{d-1}{2d}}4\pi T(l_{1}+l_{m})}-e^{-\sqrt{\frac{d-1}{2d}}4\pi T (l_{2}+l_{m})}\] \[+e^{-\sqrt{\frac{d-1}{2d}}4\pi T(l_{1}+l_{2}+l_{m})}+e^{-\sqrt{ \frac{d-1}{2d}}4\pi Tl_{m}}\Bigg{\}} \tag{5.11}\] Expanding the exponential terms on the right-hand side, one obtains \[\mathcal{E}=\frac{3}{16G_{N}^{5}}\bigg{(}\frac{4\pi}{d}\bigg{)}^{ d-1}\frac{C_{1}}{4\pi}\sqrt{2d(d-1)}L^{d-2}T^{d-2}\Bigg{\{}-1+\sqrt{\frac{d-1}{2d }}4\pi T(l_{1}+l_{m})-1 \tag{5.12}\] \[+\sqrt{\frac{d-1}{2d}}4\pi T(l_{2}+l_{m})+1-\sqrt{\frac{d-1}{2d }}4\pi T(l_{1}+l_{2}+l_{m})+1-\sqrt{\frac{d-1}{2d}}4\pi Tl_{m}\Bigg{\}}\] From the equation above it becomes clear that the HLN evaluates to zero, mirroring the result derived in this subsection. The justification for expanding the exponentials to linear order in \(l\) lies in the behavior of the extremal area at high temperatures, which contains only a linear dependence on \(l\). We can therefore confidently affirm that our high-temperature entanglement negativity for disjoint subsystems aligns with the findings of [52].

Figure 4: Schematic diagram of the extremal surfaces at high effective temperature, involving the turning points, corresponding to the subregions \(A_{1}\) and \(A_{2}\) separated by an interval \(A_{m}\).

## 6 Holographic Logarithmic Negativity for bipartite systems

In this section, we calculate the HLN for a bipartite configuration. As in the preceding sections, we determine the entanglement negativity in both the low- and high-temperature regimes. To validate our findings, we establish their consistency with previously obtained results for the general \((d+1)\)-dimensional AdS-Schwarzschild black hole, as documented in [25]. We first provide a brief overview of the bipartite setup. We begin by partitioning the boundary CFT into two subsystems, denoted as \(A\) and its complement \(A^{c}\). Furthermore, we consider two additional subsystems, \(B_{1}\) and \(B_{2}\), situated adjacent to \(A\) and positioned on either side of it, such that \(B=B_{1}\cup B_{2}\). As in the preceding sections, we use \(\mathcal{A}_{\gamma}\) to denote the area of the co-dimension two static minimal surface in the bulk anchored on the subsystem \(\gamma\). In this general context, the holographic logarithmic negativity for the bipartite system formed by the union of \(A\) and \(A^{c}\) is expressed as \[\mathcal{E}=\lim_{B\to A^{c}}\biggl{[}\frac{3}{16G_{N}^{(d+1)}}\Bigl{(}2 \mathcal{A}_{A}+\mathcal{A}_{B_{1}}+\mathcal{A}_{B_{2}}-\mathcal{A}_{A\cup B _{1}}-\mathcal{A}_{A\cup B_{2}}\Bigr{)}\biggr{]} \tag{6.1}\] In the above equation, \(G_{N}^{(d+1)}\) represents the Newton constant in \(d+1\) dimensions. The bipartite limit \(B\to A^{c}\) is to be understood as extending the subsystems \(B_{1}\) and \(B_{2}\) until \(B\) effectively becomes the complement \(A^{c}\) of \(A\).
To provide precise definitions for the subsystems in question, namely \(A\), \(B_{1}\), and \(B_{2}\), we describe them in the context of the 4-dimensional boundary CFT as follows: \[\begin{split} A&:\quad x^{1}\equiv x\in\biggl{[}-\frac{l}{2},\frac{l}{2}\biggr{]}\,,\qquad x^{(j)}\in\biggl{[}-\frac{L_{2}}{2},\frac{L_{2}}{2}\biggr{]}\ \ \text{where}\ j=2,3\\ B_{1}&:\quad x^{1}\equiv x\in\biggl{[}-L,-\frac{l}{2}\biggr{]}\,,\qquad x^{(j)}\in\biggl{[}-\frac{L_{2}}{2},\frac{L_{2}}{2}\biggr{]}\ \ \text{where}\ j=2,3\\ B_{2}&:\quad x^{1}\equiv x\in\biggl{[}\frac{l}{2},L\biggr{]}\,,\qquad\ \ x^{(j)}\in\biggl{[}-\frac{L_{2}}{2},\frac{L_{2}}{2}\biggr{]}\ \ \text{where}\ j=2,3\end{split} \tag{6.2}\]

### Holographic Logarithmic Negativity for bipartite systems at low temperature

In this section, we compute the HLN for the bipartite state in the low-temperature regime. This regime corresponds to the temperature limit \(\hat{T}l\ll 1\), which in the bulk corresponds to the case where the horizon is at a large distance from the turning point \(z_{t_{2}}\) of the extremal surface anchored on the subsystem \(A\). In the low-temperature limit, the perturbative solution of the infinite series for \(l/2\) is already known from section 4; from it, one obtains the relation between the turning point of the RT surface anchored on the subsystem \(A\) and the width of the subsystem as \[z_{t_{2}}=\frac{l}{a_{1}}\Bigg{\{}1+\frac{\xi}{6a_{1}^{2}}\bigg{(}\frac{l}{z_{ h}}\bigg{)}^{2}+\frac{1}{24a_{1}^{4}}\left[\frac{\xi^{2}}{6}\left(1-\frac{a_{3}}{2 a_{2}}\right)-\frac{a_{2}}{a_{1}}(1+\xi)\right]\left(\frac{l}{z_{h}}\right)^{4}+ \mathcal{O}\bigg{(}\frac{l}{z_{h}}\bigg{)}^{6}\Bigg{\}} \tag{6.4}\] Using the above relation, we obtain the area of the extremal surface corresponding to the subsystem \(A\) in the low-temperature regime as follows 1

Footnote 1: As in the previous cases, note that the equation below does not contain the UV-cutoff-dependent part, since we consider only the finite part of the area \(\mathcal{A}\).

\[\mathcal{A}_{A}=R^{3}\bigg{(}\frac{L}{l}\bigg{)}^{2}\Bigg{\{}c+\frac{\xi}{3} \Big{(}\pi\hat{T}l\Big{)}^{2}+\frac{1}{2}f(\xi)\Big{(}\pi\hat{T}l\Big{)}^{4} \Bigg{\}} \tag{6.5}\] The subsystems \(B_{1}\) and \(A\cup B_{1}\) on the boundary, with lengths \((L-l/2)\) and \((L+l/2)\) along the \(x^{1}\) direction, become very large in the limit \(B\to A^{c}\), which corresponds to the limit \(L\to\infty\). Therefore, the extremal surfaces with areas \(\mathcal{A}_{B_{1}}\) and \(\mathcal{A}_{A\cup B_{1}}\) extend deep into the bulk and approach the black hole horizon even at low temperatures, i.e., \(z_{t_{1}}\sim z_{h}\) and \(z_{t_{3}}\sim z_{h}\). Hence, to compute the expressions for the areas \(\mathcal{A}_{B_{1}}\) and \(\mathcal{A}_{A\cup B_{1}}\), we employ the method developed in [53] for the case when the RT surfaces approach the black hole horizon. Following that procedure, we can write the turning points of the extremal surfaces for \(\mathcal{A}_{B_{1}}\) and \(\mathcal{A}_{A\cup B_{1}}\) as follows for a \((d+1)\)-dimensional AdS Schwarzschild black hole 2

Footnote 2: Although we write the result in \(d+1\) dimensions, we will use \(d=4\) in the final expressions.

Figure 5: Schematic diagram of the extremal surfaces corresponding to the bipartite subsystem at low effective temperature.
\[\begin{split}z_{t_{1}}&=z_{h}(1+\epsilon_{1})=z_{h}\Bigg{[}1+k_{2}e^{-\sqrt{\frac{d(d-1)}{2}}\frac{1}{z_{h}}\left(L-\frac{l}{2}\right)}\Bigg{]}\\ z_{t_{3}}&=z_{h}(1+\epsilon_{3})=z_{h}\Bigg{[}1+k_{2}e^{-\sqrt{\frac{d(d-1)}{2}}\frac{1}{z_{h}}\left(L+\frac{l}{2}\right)}\Bigg{]}\end{split} \tag{6.6}\] where \(k_{2}\) has the following form \[k_{2}=\frac{1}{d}e^{\sqrt{\frac{d(d-1)}{2}}c_{1}} \tag{6.7}\] \[c_{1}=\frac{2\sqrt{\pi}\Gamma\left(\frac{d}{2(d-1)}\right)}{ \Gamma\left(\frac{1}{d-1}\right)}+\sum_{n=1}^{\infty}\left\{\frac{2}{(1+nd)} \frac{\Gamma\left(n+\frac{1}{2}\right)}{\Gamma(n+1)}\frac{\Gamma\left(\frac{d (n+1)}{2(d-1)}\right)}{\Gamma\left(\frac{nd+1}{2(d-1)}\right)}-\frac{\sqrt{2}} {\sqrt{d(d-1)}n}\right\} \tag{6.8}\] We can now find the areas of the extremal surfaces \(\mathcal{A}_{B_{1}}\) and \(\mathcal{A}_{A\cup B_{1}}\) by substituting (6.6) in (3.13), expanding in \(\epsilon_{1}\) and \(\epsilon_{3}\) respectively, and keeping terms to linear order. We thereby obtain the following expressions \[\begin{split}\mathcal{A}_{B_{1}} &=\frac{L^{2}R^{3}}{z_{h}^{2}}\Big{\{}\alpha(\xi)+\gamma(\xi)+\mu( \xi)\Big{\}}+\frac{L^{2}R^{3}}{z_{h}}\left(L-\frac{l}{2}\right)\Big{\{}\beta( \xi)+\delta(\xi)+\nu(\xi)\Big{\}}\\ \mathcal{A}_{A\cup B_{1}} &=\frac{L^{2}R^{3}}{z_{h}^{2}}\Big{\{}\alpha(\xi)+\gamma(\xi)+\mu( \xi)\Big{\}}+\frac{L^{2}R^{3}}{z_{h}}\left(L+\frac{l}{2}\right)\Big{\{}\beta( \xi)+\delta(\xi)+\nu(\xi)\Big{\}}\end{split} \tag{6.9}\] where all the \(\xi\)-dependent functions are defined in Appendix A. Finally, using equation (6.9) in (6.1), we get the HLN for the bipartite system in the low-temperature limit \[\mathcal{E}_{low}=\frac{3}{8G_{N}^{5}}\Bigg{[}R^{3}\bigg{(}\frac{L}{l}\bigg{)} ^{2}\Bigg{\{}c+\frac{\xi}{3}\Big{(}\pi\hat{T}l\Big{)}^{2}+\frac{1}{2}f(\xi) \Big{(}\pi\hat{T}l\Big{)}^{4}\Bigg{\}}-R^{3}L^{2}l\hat{T}g(\xi)\Bigg{]} \tag{6.10}\] where the function \(g(\xi)\) can be written as \(g(\xi)=\pi(\beta(\xi)+\delta(\xi)+\nu(\xi))\). Note that in the equation above the last term is directly proportional to \(L^{2}l\), the three-dimensional volume of subsystem \(A\); with this correspondence, one can infer that the final term is proportional to the thermal entropy associated with subsystem \(A\). To further scrutinize our findings, we examine the behavior as \(\xi\) approaches zero. It is expected that in the limit \(Q\to 0\) (which can also be achieved by setting \(\xi\to 0\)), the result above will coincide with the AdS Schwarzschild black hole result of [25]. It is worth emphasizing that the first term enclosed in curly brackets, with the appropriate scaling factor, precisely reproduces the entanglement entropy of subsystem \(A\) at low temperatures. As the limit \(\xi\to 0\) is taken, the function \(g(\xi)\) is found to scale as \(1/\xi\); in terms of temperature, this reads \(g(\xi)\propto T^{2}\). Combining these observations, it becomes apparent that the final term, proportional to the volume \(V=L^{2}l\), carries an explicit temperature dependence of order \(T^{3}\). Hence, one can interpret this last term as the thermal entropy of subsystem \(A\) (in \((d+1)\)-dimensional AdS Schwarzschild geometry, thermal entropy is proportional to \(VT^{d-1}\)).
Consequently, we can now reformulate equation (6.10) in the following manner \[\mathcal{E}_{low}=\frac{3}{2}\Big{\{}S_{A}-\mathcal{C}S_{A}^{\rm Th}\Big{\}}, \ \ \text{where $\mathcal{C}$ is a constant} \tag{6.11}\] Remarkably, the equation above reveals that the HLN effectively quantifies the distillable entanglement by eliminating the thermal contribution at low temperature. This is a universal trait of entanglement negativity in finite-temperature mixed states.

### Holographic Logarithmic Negativity for bipartite systems at high temperature

In the high-temperature regime, the turning point \(z_{t_{2}}\) of the extremal surface corresponding to the area \(\mathcal{A}_{A}\) approaches the black hole horizon, \(z_{t_{2}}\sim z_{h}\), as illustrated in fig. 6. Consequently, we can employ the same methodology to calculate the area of the extremal surface corresponding to subsystem \(A\) as we did for \(\mathcal{A}_{B_{1}}\) and \(\mathcal{A}_{A\cup B_{1}}\) in the preceding section. It is worth noting that, as previously explained, the latter surfaces probe the vicinity of the black hole horizon at both low and high temperatures; this is a consequence of the limit \(B\to A^{c}\), or equivalently \(L\to\infty\). Therefore, we can again utilize equation (6.9) to compute the HLN in the high-temperature regime. The turning point corresponding to the extremal surface of the subsystem \(A\) now reads \[z_{t_{2}}=z_{h}(1+\epsilon_{1})=z_{h}\Bigg{[}1+k_{2}e^{-\sqrt{\frac{d(d-1)}{2}}\frac{l}{z_{h}}}\Bigg{]} \tag{6.12}\] where \(k_{2}\) is \[k_{2}=\frac{1}{d}e^{\sqrt{\frac{d(d-1)}{2}}c_{1}} \tag{6.13}\] \[c_{1}=\frac{2\sqrt{\pi}\Gamma\left(\frac{d}{2(d-1)}\right)}{\Gamma\left(\frac {1}{d-1}\right)}+\sum_{n=1}^{\infty}\Bigg{\{}\frac{2}{(1+nd)}\frac{\Gamma \left(n+\frac{1}{2}\right)}{\Gamma(n+1)}\frac{\Gamma\left(\frac{d(n+1)}{2(d-1) }\right)}{\Gamma\left(\frac{nd+1}{2(d-1)}\right)}-\frac{\sqrt{2}}{\sqrt{d(d-1 )}n}\Bigg{\}} \tag{6.14}\] Using the above equations we can write \[\mathcal{A}_{A}=\frac{L^{2}R^{3}}{z_{h}^{2}}\Big{\{}\alpha(\xi)+\gamma(\xi)+ \mu(\xi)\Big{\}}+\frac{L^{2}R^{3}}{z_{h}}l\Big{\{}\beta(\xi)+\delta(\xi)+\nu (\xi)\Big{\}} \tag{6.15}\] Ultimately, incorporating equations (6.15) and (6.9) into (6.1), we arrive at the following outcome for the HLN in the bipartite scenario in the high-temperature limit \[\mathcal{E}_{high}=\frac{3}{8G_{N}^{5}}\Bigg{[}\frac{L^{2}R^{3}}{z_{h}^{2}} \Big{\{}\alpha(\xi)+\gamma(\xi)+\mu(\xi)\Big{\}}+\frac{L^{2}R^{3}}{z_{h}}l \Big{\{}\beta(\xi)+\delta(\xi)+\nu(\xi)\Big{\}}-L^{2}R^{3}l\hat{T}g(\xi)\Bigg{]} \tag{6.16}\] As demonstrated for the low-temperature scenario, we can similarly reformulate the equation above for the high-temperature regime by applying the same analysis in the limit \(\xi\to 0\), resulting in the more concise expression \[\mathcal{E}_{high}=\frac{3}{2}\Big{\{}S_{A}-\mathcal{C}S_{A}^{\rm Th}\Big{\}},\ \ \text{where $\mathcal{C}$ is a constant} \tag{6.17}\] Much like at low temperatures, in the high-temperature regime the HLN facilitates the extraction of the distillable quantum entanglement. This extraction amounts to eliminating the thermal contribution, a universal characteristic of the entanglement negativity of finite-temperature mixed states of a holographic CFT.

Figure 6: Schematic diagram of the extremal surfaces corresponding to the bipartite subsystem at high effective temperature.
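For reference, the constants \(c_{1}\) and \(k_{2}\) of (6.13)-(6.14) are pure numbers once \(d\) is fixed; the bracketed terms in the sum cancel to \(\mathcal{O}(1/n^{2})\), so a plain partial sum converges. A minimal Python sketch for \(d=4\) (the truncation order is an arbitrary choice):

```python
# Sketch: numerical evaluation of c1 and k2 of eqs. (6.13)-(6.14) for d = 4.
import mpmath as mp
mp.mp.dps = 20
d = 4

lead = 2*mp.sqrt(mp.pi)*mp.gamma(mp.mpf(d)/(2*(d - 1)))/mp.gamma(mp.mpf(1)/(d - 1))
tail = sum(2/mp.mpf(1 + n*d)
           * mp.gamma(n + mp.mpf(1)/2)/mp.gamma(n + 1)
           * mp.gamma(mp.mpf(d*(n + 1))/(2*(d - 1)))/mp.gamma(mp.mpf(n*d + 1)/(2*(d - 1)))
           - mp.sqrt(2)/(mp.sqrt(d*(d - 1))*n)
           for n in range(1, 20000))
c1 = lead + tail
k2 = mp.exp(mp.sqrt(mp.mpf(d*(d - 1))/2)*c1)/d
print(c1, k2)
```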
## 7 Entanglement Wedge Cross Section (EWCS)

In this section, we compute the analytic form of the EWCS and perform a limiting analysis in the low- and high-temperature regimes. To delineate the concept of the entanglement wedge, we consider two subsystems, labeled \(A\) and \(B\), situated on the boundary. The Ryu-Takayanagi (RT) surface, represented as \(\gamma_{AB}\), characterizes the region encompassing \(A\cup B\); the entanglement wedge is then the bulk volume bounded by \(A\cup B\cup\gamma_{AB}\). The entanglement wedge cross section (EWCS) is established through the extremal-area surface \(\Gamma_{W}\), which bifurcates the regions \(A\) and \(B\), as illustrated in fig. 7. The EWCS in this background has been computed numerically in [41]; its analytic expression in the 1RCBH background has, however, not yet been documented.

Figure 7: Schematic diagram of the extremal surfaces corresponding to two disjoint subsystems of equal length \(l\) and separated by a distance \(D\). The surface \(\Gamma_{W}\), marked in red, is the entanglement wedge cross section.

In this context, we establish the boundary subsystems \(A\) and \(B\), each of length \(l\) and separated by a distance \(D\). \[\begin{array}{llll}A:&x^{1}\equiv x\in\left[-l-\frac{D}{2},-\frac{D}{2}\right],&x^{(j)}\in\left[-\frac{L_{2}}{2},\frac{L_{2}}{2}\right]&\text{where $j=2,3$}\\ B:&x^{1}\equiv x\in\left[\frac{D}{2},l+\frac{D}{2}\right],&x^{(j)}\in\left[- \frac{L_{2}}{2},\frac{L_{2}}{2}\right]&\text{where $j=2,3$}\end{array} \tag{7.1}\] In the given setup, the surface with the minimum area, denoted as \(\Sigma_{\text{min}}\), which separates the subsystems \(A\) and \(B\), is precisely the vertical surface positioned at \(x=0\). The metric on this surface is described as follows: \[ds^{2}_{\Sigma_{min}}=e^{2A(z)}d\vec{x}_{2}^{2}+\frac{e^{2B(z)}}{h(z)}\frac{R^ {4}}{z^{4}}dz^{2} \tag{7.2}\] The EWCS is then computed by [59] \[E_{W}=\frac{L^{2}}{4G_{N}^{5}}\int_{z_{t}(D)}^{z_{t}(2l+D)}dz\sqrt{g_{mn}} \tag{7.3}\] where the induced metric \(g_{mn}\) is defined by equation (7.2), and \(z_{t}(2l+D)\) and \(z_{t}(D)\) denote the turning points of the extremal surfaces under consideration. Consequently, combining equations (7.2) and (7.3), we obtain \[E_{W}=\frac{L^{2}R^{2}}{4G_{N}^{5}}\int_{z_{t}(D)}^{z_{t}(2l+D)}dz\frac{e^{2A( z)+B(z)}}{z^{2}\sqrt{h(z)}} \tag{7.4}\] By employing (4) and the definition of the dimensionless parameter \(\xi\), we can express the integral above as follows 3

Footnote 3: Note that in deriving equation (7.5), we have applied the multinomial expansion, similar to our earlier approach in section 4.
\[E_{W}=\frac{L^{2}R^{3}}{4G_{N}^{5}}\int_{z_{t}(D)}^{z_{t}(2l+D)}dz \sum_{k=0}^{\infty}\sum_{j=0}^{k}\sum_{i=0}^{\infty}\frac{(-1)^{k+j}}{2}\frac{ \Gamma(k+\frac{1}{2})\xi^{i+j+k}(1+\xi)^{j}}{\Gamma(i+1)\Gamma(\frac{3}{2}-i) \Gamma(j+1)\Gamma(k-j+1)}\frac{z^{2i+2j+2k-3}}{z_{h}{}^{2i+2j+2k}}\] \[=\frac{L^{2}R^{3}}{4G_{N}^{5}}\sum_{k=0}^{\infty}\sum_{j=0}^{k} \sum_{i=0}^{\infty}\frac{(-1)^{k+j}}{2}\frac{\Gamma(k+\frac{1}{2})\xi^{i+j+k}( 1+\xi)^{j}}{\Gamma(i+1)\Gamma(\frac{3}{2}-i)\Gamma(j+1)\Gamma(k-j+1)}\frac{1}{ (2i+2j+2k-2)}\] \[\times\Bigg{\{}\frac{z_{t}(2l+D)^{2i+2j+2k-2}}{z_{h}^{2i+2j+2k}} -\frac{z_{t}(D)^{2i+2j+2k-2}}{z_{h}^{2i+2j+2k}}\Bigg{\}} \tag{7.5}\] In the following two subsections, we examine the behavior of the EWCS in the two temperature regimes, employing suitable approximations as before. As the expression above indicates, when \(D\) significantly exceeds \(l\), the EWCS vanishes.

### Entanglement Wedge Cross Section at low temperature

Since it is generally difficult to invert the relationship between \(z_{t}\) and the width \(l\) and to formulate a universal expression for the EWCS in terms of boundary parameters, we resort to specific thermal limits. We therefore examine the EWCS in the low- and high-temperature limits. In the low-temperature limit, where \(z_{t}(D)\ll z_{h}\) and \(z_{t}(2l+D)\ll z_{h}\), equation (4.2) yields the following expressions for the turning points. \[z_{t}(D)=\frac{D}{a_{1}}\Bigg{\{}1+\frac{\xi}{6a_{1}^{2}}\bigg{(} \frac{D}{z_{h}}\bigg{)}^{2}+\frac{1}{24a_{1}^{4}}\left[\frac{\xi^{2}}{6}\left( 1-\frac{a_{3}}{2a_{2}}\right)-\frac{a_{2}}{a_{1}}(1+\xi)\right]\left(\frac{D} {z_{h}}\right)^{4}+\mathcal{O}\bigg{(}\frac{D}{z_{h}}\bigg{)}^{6}\Bigg{\}} \tag{7.6}\] \[z_{t}(2l+D)=\frac{2l+D}{a_{1}}\Bigg{\{}1+\frac{\xi}{6a_{1}^{2}} \bigg{(}\frac{2l+D}{z_{h}}\bigg{)}^{2}+\frac{1}{24a_{1}^{4}}\left[\frac{\xi^{ 2}}{6}\left(1-\frac{a_{3}}{2a_{2}}\right)-\frac{a_{2}}{a_{1}}(1+\xi)\right] \left(\frac{2l+D}{z_{h}}\right)^{4}\] (7.7) \[+\mathcal{O}\bigg{(}\frac{2l+D}{z_{h}}\bigg{)}^{6}\Bigg{\}}\] Substituting equations (7.6) and (7.7) into equation (7.5) gives the expression for the EWCS in the low-temperature regime. The EWCS at low temperatures can be simplified by applying a binomial expansion to both turning points, keeping terms up to first order; this expansion is justified because the coefficients of \(\mathcal{O}(1/z_{h}^{2})\), \(\mathcal{O}(1/z_{h}^{4})\), etc., within the parentheses in equations (7.6) and (7.7) are smaller than 1 in the low-temperature limit. Further simplification (see Appendix B) is achieved by truncating the series at the lowest order, corresponding to \(i=j=k=0\). It is also reasonable to assert that at low temperature both \(D\) and \(l\) are small, so higher powers of these length scales can be neglected.
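Alternatively, the truncated series (7.5) can be evaluated numerically without further expansion. The sketch below is illustrative only: the sample values \(\xi=0.5\), \(z_{h}=1\) and the truncation orders are assumptions, terms with \(i+j+k=1\) (which would produce a logarithm rather than a power of the turning points) are skipped, and one should keep \(\xi<1\) for the \(i\)-sum to converge.

```python
# Sketch: direct numerical evaluation of the truncated EWCS series (7.5),
# in units of L^2 R^3 / (4 G_N^5); sample parameters are assumptions.
import mpmath as mp
mp.mp.dps = 15

xi, zh = mp.mpf('0.5'), mp.mpf(1)

def EW_series(ztD, zt2lD, N=12):
    total = mp.mpf(0)
    for k in range(N):
        for j in range(k + 1):
            for i in range(N):
                p = i + j + k
                if p == 1:
                    continue   # logarithmic term, needs separate treatment
                coeff = ((-1)**(k + j)/mp.mpf(2) * mp.gamma(k + mp.mpf(1)/2)
                         * xi**p * (1 + xi)**j
                         / (mp.gamma(i + 1)*mp.gamma(mp.mpf(3)/2 - i)
                            * mp.gamma(j + 1)*mp.gamma(k - j + 1)))
                total += (coeff/(2*p - 2)
                          * (zt2lD**(2*p - 2) - ztD**(2*p - 2))/zh**(2*p))
    return total

print(EW_series(mp.mpf('0.1'), mp.mpf('0.4')))   # turning points as inputs
```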
With these simplifications, the EWCS at low temperature reduces to \[E_{W}^{\rm low}=\frac{L^{2}R^{3}}{4G_{N}^{5}}\Bigg{[}\frac{a_{1}^{2}}{2}\Bigg{\{} \frac{1}{D^{2}}-\frac{1}{(2l+D)^{2}}\Bigg{\}}+\frac{2}{a_{1}^{2}}\Bigg{\{} \frac{\xi^{2}}{6}\left(1-\frac{a_{3}}{a_{2}}\right)-\frac{a_{2}}{a_{1}}(1+\xi )\Bigg{\}}l(l+D)\frac{1}{z_{h}^{4}}+\mathcal{O}\bigg{(}\frac{1}{z_{h}}\bigg{)} ^{6}\Bigg{]} \tag{7.8}\] In terms of temperature, the above expression becomes \[E_{W}^{\rm low}=\frac{L^{2}R^{3}}{4G_{N}^{5}}\Bigg{[}\frac{a_{1}^{2}}{2}\Bigg{\{} \frac{1}{D^{2}}-\frac{1}{(2l+D)^{2}}\Bigg{\}}+\frac{2}{a_{1}^{2}}\Bigg{\{} \frac{\xi^{2}}{6}\left(1-\frac{a_{3}}{a_{2}}\right)-\frac{a_{2}}{a_{1}}(1+ \xi)\Bigg{\}}l(l+D)(\pi\hat{T})^{4}+\mathcal{O}\left(\hat{T}^{6}\right)\Bigg{]} \tag{7.9}\] Let us now examine this result for the EWCS at low temperature in the 1RCBH background. As anticipated, the first term in curly braces, the temperature-independent component, implies that the EWCS grows as the separation between the subsystems diminishes, becoming unbounded in the limit \(D\to 0\). We can further verify the validity of this result by cross-referencing it with the mutual information calculation of [40]: leveraging the connection between the EWCS and mutual information discussed in [60], we observe that at low temperatures the EWCS behaves identically to the mutual information, which provides strong confirmation of our findings. In the critical limit \(\xi\to 2\), the EWCS remains finite, mirroring the behavior reported for the mutual information in [40].

### Entanglement Wedge Cross Section at high temperature

We now examine the EWCS at high temperature. To this end, there are two viable options for the boundary parameters \(l\) and \(D\). If \(D\) is chosen to be very large but finite, both turning points, corresponding to the extremal surfaces \(\gamma_{D}\) and \(\gamma_{2l+D}\), move deeper into the bulk, and one can in principle employ the near-horizon expansion for both \(z_{t}(D)\) and \(z_{t}(2l+D)\). However, this approach yields a trivial outcome: in the limit \(D\to\infty\), the EWCS at high temperature vanishes. Alternatively, one can impose the high-temperature limit by taking \(l\to\infty\) while keeping \(D\) fixed at a small value; in this scenario a non-zero, large value of the EWCS is expected, obtained by applying the near-horizon expansion to the extremal surface \(\gamma_{2l+D}\) only. In the following calculations we work in the former limit. Utilizing the techniques employed in the previous sections, we start from the following expression for the turning point. \[z_{t}(D)=z_{h}(1+\epsilon)=z_{h}\left(1+k_{2}e^{-\sqrt{\frac{d(d-1)}{2}}\frac{D}{z_{h}}}\right) \tag{7.10}\] where \[k_{2}=\frac{1}{d}e^{\sqrt{\frac{d(d-1)}{2}}c_{1}} \tag{7.11}\] \[c_{1}=\frac{2\sqrt{\pi}\Gamma\left(\frac{d}{2(d-1)}\right)}{\Gamma\left(\frac{1}{ d-1}\right)}+\sum_{n=1}^{\infty}\left\{\frac{2}{(1+nd)}\frac{\Gamma\left(n+\frac{1}{2} \right)}{\Gamma(n+1)}\frac{\Gamma\left(\frac{d(n+1)}{2(d-1)}\right)}{\Gamma \left(\frac{nd+1}{2(d-1)}\right)}-\frac{\sqrt{2}}{\sqrt{d(d-1)}n}\right\} \tag{7.12}\] Note that we are working with \(d=4\).
Now, using the near-horizon expansion (7.10) in the last line of (7.5), we obtain \[E_{W}^{\rm high}=\frac{L^{2}R^{3}}{4G_{N}^{5}}\sum_{k=0}^{\infty }\sum_{j=0}^{k}\sum_{i=0}^{\infty}\frac{\left(-1\right)^{k+j}}{2}\frac{\Gamma (k+\frac{1}{2})\xi^{i+j+k}(1+\xi)^{j}}{\Gamma(i+1)\Gamma(\frac{3}{2}-i)\Gamma (j+1)\Gamma(k-j+1)}\frac{1}{(2i+2j+2k-2)}\] \[\times\Bigg{\{}\frac{\left(1+k_{2}e^{-\sqrt{6}\frac{2l+D}{z_{h}}} \right)^{2i+2j+2k-2}}{z_{h}^{2}}-\frac{\left(1+k_{2}e^{-\sqrt{6}\frac{D}{z_{h}}} \right)^{2i+2j+2k-2}}{z_{h}^{2}}\Bigg{\}} \tag{7.13}\] Applying the binomial expansion up to order \(\epsilon\) and suppressing higher-order terms, one obtains the following expression for the EWCS at high temperature \[E_{W}^{\rm high}=\frac{L^{2}R^{3}}{4G_{N}^{5}}\sum_{k=0}^{\infty}\sum_{j=0}^{ k}\sum_{i=0}^{\infty}\frac{\left(-1\right)^{k+j}}{2}\frac{\Gamma(k+\frac{1}{2}) \xi^{i+j+k}(1+\xi)^{j}}{\Gamma(i+1)\Gamma(\frac{3}{2}-i)\Gamma(j+1)\Gamma(k-j +1)}\frac{k_{2}}{z_{h}^{2}}\Bigg{(}e^{-\sqrt{6}\frac{2l+D}{z_{h}}}-e^{-\sqrt{6}\frac{D}{z_{h}}}\Bigg{)} \tag{7.14}\] In terms of temperature, we can rewrite the above expression in the following form \[E_{W}^{\rm high}=\frac{L^{2}R^{3}}{4G_{N}^{5}}\sum_{k=0}^{\infty}\sum_{j=0}^{ k}\sum_{i=0}^{\infty}\frac{\left(-1\right)^{k+j}}{2}\frac{\Gamma(k+\frac{1}{2}) \xi^{i+j+k}(1+\xi)^{j}}{\Gamma(i+1)\Gamma(\frac{3}{2}-i)\Gamma(j+1)\Gamma(k-j +1)}(\pi\hat{T})^{2}\Bigg{(}e^{-\sqrt{6}\pi\hat{T}(2l+D)}-e^{-\sqrt{6}\pi\hat{T}D}\Bigg{)} \tag{7.15}\] As in the previous section, we now refer to the calculation of the mutual information at high temperature presented in [40]. In equation (7.15), the high-temperature limit can be applied by taking \(D\) large but finite: in the limit \(D\to\infty\) the EWCS vanishes, as does the mutual information [40]. This leads us to conclude that the expression we have derived for the EWCS at high temperature is consistent. As mentioned earlier, working with the other, non-trivial limit of the boundary parameters would be an intriguing exercise, demonstrating that this limit corresponds to a substantial EWCS value; we defer this exploration to future research.

## 8 Holographic Mutual Information

The holographic dual of a thermofield double (TFD) state is essential for studying information scrambling in a strongly coupled field theory in the context of the AdS/CFT correspondence. Entangled states, defined in a bipartite Hilbert space composed of the individual Hilbert spaces of two identical and non-interacting copies of a strongly coupled field theory, serve as examples of TFD states. Such a TFD state is holographically dual to two entangled AdS black holes. The outer region on both sides of the two-sided black hole in the Penrose diagram, fig.8, is made up of the right (R) and left (L) wedges, two causally disconnected regions of spacetime. A non-local correlation can exist between two boundary theories residing separately on the asymptotic boundaries of the R and L regions. A practical measure of such correlations is provided by an appropriate generalization of mutual information (MI) known as thermo mutual information (TMI), first introduced in [56], together with the corresponding generalization of holographic mutual information (HMI), known as holographic thermo mutual information (HTMI); these were later studied holographically in [34; 37; 57].
The above definition carries over to the TMI, except that the entangling regions lie on the two causally disconnected boundaries. An early-time perturbation that grows exponentially can destroy these correlations; holographically, this perturbation is known as a shockwave, created by a small amount of energy released in the asymptotic past. In the following sections, we study the HTMI without and with the shockwave.

### Holographic Thermo Mutual Information (HTMI)

To determine the HTMI, we adopt the methodology presented in [34; 37; 57], considering two identical strip-like subsystems, denoted \(A\) and \(B\), of width \(l\). Subsystem \(A\) is positioned on the left (\(L\)) asymptotic boundary, while subsystem \(B\) is situated on the right (\(R\)) asymptotic boundary of the eternal black hole at time \(t=0\). In accordance with the RT proposal, \(\mathcal{S}(A)\) and \(\mathcal{S}(B)\) are directly linked to the minimal-surface areas of \(\gamma_{A}\) and \(\gamma_{B}\) within the bulk, which correspond to the entangling regions \(A\) and \(B\), respectively. We define the embedding for \(\gamma_{i}\) (\(i=A,B\)) as \((t=0,z,-l/2\leq x(z)\leq l/2,-L/2\leq x^{j}\leq L/2,\ j=2\ldots d-1)\). The extremal surface corresponding to \(\gamma_{A\cup B}\) can be either \(\gamma_{A}\cup\gamma_{B}\) or \(\gamma_{1}\cup\gamma_{2}=\gamma_{\rm wormhole}\), where the appropriate embeddings for \(\gamma_{1}\) and \(\gamma_{2}\) are \((t=0,z,x=-l/2,-L/2\leq x^{j}\leq L/2)\) and \((t=0,z,x=l/2,-L/2\leq x^{j}\leq L/2)\) respectively. The surfaces \(\gamma_{1}\) and \(\gamma_{2}\) connect the two asymptotic boundaries through the bifurcation point of the eternal black hole, denoted by the dotted line in fig.8. The TMI becomes zero when \(\mathcal{A}(\gamma_{A}\cup\gamma_{B})\leq\mathcal{A}(\gamma_{\rm wormhole})\), and \(I(A,B)\) is positive in the opposite situation. To find the area of the wormhole surface, we follow the Hubeny-Rangamani-Takayanagi (HRT) prescription [20]. The induced metric components for the RT and HRT surfaces are given by \[G_{\rm in}^{A}=G_{\rm in}^{B}=\left(\frac{R_{\rm AdS}^{2}}{z^{2}}\right)^{d-1 }\left(\frac{1}{f(z)}+x^{\prime 2}\right),\qquad G_{\rm in}^{\rm wormhole}= \left(\frac{R_{\rm AdS}^{2}}{z^{2}}\right)^{d-1}\frac{1}{f(z)}, \tag{8.1}\] where \(x^{\prime}=0\) for \(\gamma_{1}\) and \(\gamma_{2}\). The HTMI is then \[I(A,B) =\frac{1}{4G_{N}^{5}}\left(\int_{-L/2}^{L/2}dx^{i}\right)\left[2\int _{0}^{z_{t}}dz\left(\sqrt{G_{in}^{A}}+\sqrt{G_{in}^{B}}\right)-4\int_{0}^{z_{h} }dz\sqrt{G_{in}^{A\cup B}}\right] \tag{8.2}\] \[=\frac{L^{2}R^{2}}{G_{N}^{5}}\left[\int_{0}^{z_{c}}\frac{e^{B(z)+ 2A(z)}dz}{z^{2}\sqrt{f(z)}}\sqrt{(\frac{e^{6A(z_{c})}}{e^{6A(z)}-e^{6A(z_{c})} }+1)}-\int_{0}^{z_{h}}\frac{e^{3A(z)}dz}{z^{2}\sqrt{f(z)}e^{A(z)-B(z)}}\right]\] Due to the symmetric layout of the extremal surfaces, equation (8.2) incorporates the coefficients 2 and 4. The parameter \(z_{t}\) denotes the turning point of the RT surfaces associated with the regions \(A\) and \(B\). The dependence of the HTMI on the width \(l\) of the entangling region can be determined through the relation between \(l\) and \(z_{t}\).

Figure 8: Penrose diagram of the eternal black hole. At \(t=0\), the spatial extremal surface connecting the two asymptotic boundaries of the eternal black hole is denoted by the dashed line passing through the bifurcation point.
\[\frac{l}{2}=\int_{0}^{z_{t}}\frac{dz}{\sqrt{\left(\frac{Q^{2}z^{2}}{R^{4}}+1 \right)\left(\frac{z_{t}^{6}\left(\frac{Q^{2}z^{2}}{R^{4}}+1\right)}{z^{6} \left(\frac{Q^{2}z_{t}^{2}}{R^{4}}+1\right)}-1\right)\left(1-\frac{z^{4}\left( \frac{Q^{2}z_{h}^{2}}{R^{4}}+1\right)}{z_{h}^{4}\left(\frac{Q^{2}z^{2}}{R^{4}}+1 \right)}\right)}} \tag{8.3}\] Fig.9 illustrates that the HTMI increases with the width \(l\); however, below a critical width \(l_{c}\) the TMI vanishes, i.e., it is zero for any \(l\leq l_{c}\). This critical width decreases as we raise the value of \(\xi\). The TMI exhibits characteristics analogous to those reported in [34; 37]: in [37] it was noted that the critical width decreases as the backreaction parameter \(b\) increases, a trend also observed in [34] with the anisotropic parameter \(a\). This occurs because, for \(l\leq l_{c}\), the HRT surface connecting \(A\) and \(B\) accumulates a greater area than the combined areas of the individual RT surfaces associated with \(A\) and \(B\); consequently, the \(A\cup B\) surface is taken to be the union of the RT surfaces of \(A\) and \(B\). As \(\xi\) approaches the critical point of the theory, \(\xi\to 2\), the critical widths \(l_{c}\) for all non-zero \(\xi\) converge towards the value associated with \(\xi=2\), while the critical width corresponding to \(\xi=0\) remains significantly distant; the closer \(\xi\) gets to the critical value of 2, the smaller the separation between the critical widths becomes.

Figure 9: HTMI as a function of the width \(l\) for \(T=1\), \(R=1\) and different values of \(\xi\).

### Holographic Thermo Mutual Information with shockwave

In this section, we examine the time-dependent behavior of the HTMI following the application of a shockwave with profile \(\alpha\approx e^{\frac{2\pi}{\beta}t}\). The impact of the shockwave on the geometry can be accounted for by shifting the Kruskal coordinate \(V\) to \(\hat{V}=V+\Theta(U)\alpha\), while leaving all other coordinates unchanged and denoting them with a hat, as demonstrated in [34; 37; 57]. The step function \(\Theta(U)\) ensures that the shockwave's influence is confined to the left region of the Penrose diagram of fig. 8, which is modified as depicted in fig.10. Entanglement entropy exhibits UV divergences, while mutual information remains unaffected by them, as discussed in the preceding section; the introduction of a shockwave can, however, introduce new divergences into the system. We investigate an early asymptotic pulse of energy generated at the boundary, which acts as a small inward disturbance entering the left (L) boundary of the eternal black hole. This pulse is blue-shifted as it propagates and evolves into a shockwave, eventually reaching the horizon at late time, corresponding to the boundary time \(t=0\). In light of this, it proves advantageous to define the HTMI in the presence of the shockwave as \[I(A:B;\alpha)=I(A,B;\alpha=0)-\mathcal{S}^{\rm reg}_{A\cup B}(\alpha), \tag{8.4}\] where \(I(A:B;\alpha=0)\) has been previously calculated in equation (8.2).
\(\mathcal{S}^{\rm reg}_{A\cup B}(\alpha)=\mathcal{S}_{A\cup B}(\alpha)-\mathcal{S}_{A\cup B}(\alpha=0)\). In order to compute \(\mathcal{S}^{\rm reg}_{A\cup B}(\alpha)\) we choose a pair of time-dependent embeddings defined as \(\{t,z(t),x=-l/2,-L/2\leq x^{j}\leq L/2\}\) and \(\{t,z(t),x=l/2,-L/2\leq x^{j}\leq L/2\}\). The area functional corresponding to either of these time-dependent embeddings, and the associated Lagrangian density, are given as

\[\mathcal{A}=L^{2}\int dt\biggl[-e^{6A(z)}h(z)+\dot{z}^{2}\frac{R^{4}e^{2B(z)+4A(z)}}{z^{4}h(z)}\biggr]^{\frac{1}{2}},\qquad\mathcal{L}=L^{2}\biggl[-e^{6A(z)}h(z)+\dot{z}^{2}\frac{R^{4}e^{2B(z)+4A(z)}}{z^{4}h(z)}\biggr]^{\frac{1}{2}} \tag{8.5}\]

Note that the Lagrangian density \(\mathcal{L}\) lacks explicit time dependence, which leads to a conserved quantity; imposing the boundary condition \(\dot{z}=0|_{z=z_{0}}\), it reads \(\mathcal{P}=-L^{2}e^{3A(z_{0})}\sqrt{-h(z_{0})}\). Now, we can write

\[\dot{z}^{2}=\frac{z^{4}h(z)}{R^{4}e^{4A(z)+2B(z)}}\biggl[\frac{L^{4}h^{2}(z)e^{12A(z)}}{\mathcal{P}^{2}}+e^{6A(z)}h(z)\biggr] \tag{8.6}\]

Substituting equation (8.6) in (8.5), the area functional becomes

\[\mathcal{A}=L^{2}R^{2}\int_{0}^{z_{0}}dz\ \frac{e^{5A(z)+B(z)}}{z^{2}\sqrt{\mathcal{P}^{2}+L^{4}h(z)e^{6A(z)}}} \tag{8.7}\]

Also, by integrating equation (8.6) we get

\[t(z)=\pm\int dz\ \frac{R^{2}e^{B(z)-A(z)}}{z^{2}h(z)\sqrt{1+\frac{L^{4}h(z)e^{6A(z)}}{\mathcal{P}^{2}}}} \tag{8.8}\]

Using the conserved momentum \(\mathcal{P}\) in equations (8.7) and (8.8) we get

\[{\cal A}=L^{2}R^{2}\int_{0}^{z_{0}}dz\,\frac{e^{5A(z)+B(z)}}{z^{2}\sqrt{h(z)e^{6A(z)}-h(z_{0})e^{6A(z_{0})}}},\qquad t(z)=\pm\int dz\,\frac{R^{2}e^{B(z)-A(z)}}{z^{2}h(z)\sqrt{1-\frac{h(z)}{h(z_{0})}e^{6A(z)-6A(z_{0})}}}. \tag{8.9}\]

To determine the wormhole's area, denoted as \({\cal A}(\gamma_{\rm w})\), we partition the integration variable \(z\) from equation (8.9) into three distinct regions. The initial region (I) starts at the left boundary and spans from there into the bulk, ultimately reaching the horizon. Subsequently, the second region (II) commences at the horizon and concludes at \(z=z_{0}\). Lastly, the third region (III) starts at \(z=z_{0}\), proceeds towards the right (as depicted in Fig. 10), and extends until it reaches the horizon. The direction of \(t\) is contingent on the sign of the rate of change of \(z\) along the extremal surface.

Figure 10: Penrose diagram after the shockwave. The red line shows the wormhole surface with turning point \(z_{0}\) connecting \(A\) and \(B\).

Taking into account the three regions labeled as I, II, and III, the specific expression for \({\cal A}(\gamma_{\rm w})\) takes the following form:

\[\begin{split}{\cal A}(\gamma_{\rm w})=4L^{2}R^{2}&\bigg[\int_{0}^{z_{H}}dz\bigg(\frac{e^{5A(z)+B(z)}}{z^{2}\sqrt{h(z)e^{6A(z)}-h(z_{0})e^{6A(z_{0})}}}-\frac{e^{2A(z)+B(z)}}{z^{2}\sqrt{h(z)}}\bigg)\\ &+2\int_{z_{H}}^{z_{0}}dz\frac{e^{5A(z)+B(z)}}{z^{2}\sqrt{h(z)e^{6A(z)}-h(z_{0})e^{6A(z_{0})}}}\bigg]\end{split} \tag{8.10}\]

The regularized HEE is then \(\mathcal{S}^{\rm reg}_{A\cup B}(z_{0})=\frac{{\cal A}(\gamma_{\rm w})}{4G_{N}^{5}}\). This regularized entropy is depicted in Fig. 11 as a function of the dimensionless parameter \(\frac{z_{0}}{z_{h}}\). It is evident from the plot that as the ratio \(z_{0}/z_{H}\) increases from unity, the value of \(\mathcal{S}^{\rm reg}\) starts from zero at \(z_{0}=z_{H}\) and progressively rises; a numerical sketch of this evaluation is given below.
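As an illustration of how the regularized area (8.10) can be evaluated in practice, the following minimal Python sketch performs the quadratures for user-supplied warp factors. The functions `A`, `B` and `h` below are crude placeholders (an AdS-Schwarzschild-like choice, i.e. the \(\xi\to 0\) limit in these conventions) rather than the actual 1RC background, and the small cutoff `eps` stands in for the boundary regulator; the overall \(L^{2}R^{2}/4G_{N}^{5}\) prefactor is dropped.

```python
# Hedged numerical sketch of eq. (8.10); A, B, h are placeholders, not the
# actual 1RC warp factors.  Replace them to reproduce Fig. 11.
import numpy as np
from scipy.integrate import quad

z_h = 1.0                                  # horizon radius (assumed units)
A = lambda z: 0.0                          # placeholder warp factor
B = lambda z: 0.0                          # placeholder warp factor
h = lambda z: 1.0 - (z / z_h) ** 4         # placeholder blackening function

def s_reg(z0, eps=1e-4):
    """S^reg_{A u B}(z0), overall prefactors dropped."""
    h0e = h(z0) * np.exp(6.0 * A(z0))      # h(z0) e^{6A(z0)} < 0 behind the horizon

    def f_w(z):                            # wormhole integrand of eq. (8.9)
        return np.exp(5.0 * A(z) + B(z)) / (
            z**2 * np.sqrt(h(z) * np.exp(6.0 * A(z)) - h0e))

    def f_0(z):                            # alpha = 0 subtraction term
        return np.exp(2.0 * A(z) + B(z)) / (z**2 * np.sqrt(h(z)))

    I1, _ = quad(lambda z: f_w(z) - f_0(z), eps, z_h)
    I2, _ = quad(f_w, z_h, z0 * (1.0 - 1e-9))   # integrable 1/sqrt endpoint
    return 4.0 * (I1 + 2.0 * I2)

for z0 in (1.02, 1.05, 1.10):
    print(z0, s_reg(z0))
```

With the true 1RC warp factors the residual dependence on `eps` is expected to cancel between the two terms of the first integral, reproducing the behaviour shown in Fig. 11.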
For various nonzero values of \(\xi\), \(\mathcal{S}^{\rm reg}\) grows at different rates; for smaller \(\xi\) the regularized entropy increases more steeply than it does for the maximum permissible value \(\xi=2\). By employing equations (8.4) and (8.10), we can express the HTMI as a function of \(z_{0}\). Nevertheless, to examine how the HTMI changes with the shockwave parameter \(\alpha\), we must establish the connection between \(z_{0}\) and \(\alpha\). In Fig. 10, region (I) is defined as the portion of the surface between the boundary point \((\hat{U},\hat{V})=(1,-1)\) and the point on the horizon \((\hat{U},\hat{V})=(\hat{U}_{1},0)\). Region (II) spans from \((\hat{U},\hat{V})=(\hat{U}_{1},0)\) to the turning point \((\hat{U},\hat{V})=(\hat{U}_{2},\hat{V}_{2})\), while region (III) extends from \((\hat{U},\hat{V})=(\hat{U}_{2},\hat{V}_{2})\) to \((\hat{U},\hat{V})=(0,\alpha/2)\). Using the definition of the Kruskal coordinates, \(\hat{U}=\pm e^{\frac{2\pi}{\beta}(z_{*}-t)}\) and \(\hat{V}=\pm e^{\frac{2\pi}{\beta}(z_{*}+t)}\), their variations can be expressed as

\[\begin{split}\Delta\log\hat{U}^{2}&=\log\hat{U}_{1}^{2}-\log\hat{U}_{0}^{2}=\frac{4\pi}{\beta}(\Delta z_{*}-\Delta t)\\ \Delta\log\hat{V}^{2}&=\log\hat{V}_{2}^{2}-\log\hat{V}_{1}^{2}=\frac{4\pi}{\beta}(\Delta z_{*}+\Delta t)\end{split} \tag{8.11}\]

\[\begin{split}\log\hat{U}&=\frac{2\pi}{\beta}\int dz\ \frac{R^{2}e^{B(z)-A(z)}}{z^{2}h(z)}\Bigg(\frac{1}{\sqrt{1-\frac{h(z)}{h(z_{0})}e^{6A(z)-6A(z_{0})}}}-1\Bigg)\\ \log\hat{V}&=\frac{2\pi}{\beta}\int dz\ \frac{R^{2}e^{B(z)-A(z)}}{z^{2}h(z)}\Bigg(\frac{1}{\sqrt{1-\frac{h(z)}{h(z_{0})}e^{6A(z)-6A(z_{0})}}}+1\Bigg)\end{split} \tag{8.12}\]

where \(z_{*}\) is defined as

\[z_{*}=-\int dz\ \frac{R^{2}e^{B(z)-A(z)}}{z^{2}h(z)} \tag{8.13}\]

Note that in region (I), \(\dot{z}<0\) leads to an overall negative sign in the expression for \(t\). Conversely, in region (II), the negative numerical value of \(h(z)\) corresponds to \(\dot{z}>0\), and hence we introduce a negative sign. Now, let us consider the variation of \(\hat{U}\) from the boundary to the horizon,

\[\hat{U}_{1}^{2}=\exp\Biggl\{\frac{4\pi}{\beta}\int_{0}^{z_{H}}dz\ \frac{R^{2}e^{B(z)-A(z)}}{z^{2}h(z)}\Bigg(\frac{1}{\sqrt{1-\frac{h(z)}{h(z_{0})}e^{6A(z)-6A(z_{0})}}}-1\Bigg)\Biggr\} \tag{8.14}\]

and its variation across region (II),

\[\frac{\hat{U}_{2}^{2}}{\hat{U}_{1}^{2}}=\exp\Biggl\{\frac{4\pi}{\beta}\int_{z_{H}}^{z_{0}}dz\ \frac{R^{2}e^{B(z)-A(z)}}{z^{2}h(z)}\Biggl(\frac{1}{\sqrt{1-\frac{h(z)}{h(z_{0})}e^{6A(z)-6A(z_{0})}}}-1\Biggr)\Biggr\} \tag{8.15}\]

To find \(\hat{U}_{2}\), consider a reference point at \(\bar{z}\) where \(z_{*}\) vanishes,

\[\hat{V}_{2}\hat{U}_{2}=\exp\Biggl\{\frac{4\pi}{\beta}z_{*}\Biggr\}=\exp\Biggl\{-\frac{4\pi}{\beta}\int_{\bar{z}}^{z_{0}}dz\ \frac{R^{2}e^{B(z)-A(z)}}{z^{2}h(z)}\Biggr\} \tag{8.16}\]

In region (III), where \(\dot{z}>0\) while \(h(z)\) remains negative, we introduce an overall negative sign in the expression for \(t\).
Consequently, the expression for the variation of \(\hat{V}\) in region (III) takes the following form:

\[\log\frac{\alpha^{2}}{4\hat{V}_{2}^{2}}=\frac{4\pi}{\beta}\int_{0}^{z_{h}}dz\ \frac{R^{2}e^{B(z)-A(z)}}{z^{2}h(z)}\Biggl(\frac{1}{\sqrt{1-\frac{h(z)}{h(z_{0})}e^{6A(z)-6A(z_{0})}}}-1\Biggr) \tag{8.17}\]

From equations (8.16) and (8.17) we can write the relation between \(\alpha\) and \(z_{0}\) as

\[\alpha(z_{0})=2\exp\{\eta_{\rm I}+\eta_{\rm II}+\eta_{\rm III}\} \tag{8.18}\]

where

\[\eta_{\rm I} =\frac{4\pi}{\beta}\int_{\bar{z}}^{z_{0}}dz\ \frac{R^{2}e^{B(z)-A(z)}}{z^{2}h(z)},\ \ \ \ \eta_{\rm II}=\frac{2\pi}{\beta}\int_{0}^{z_{h}}dz\ \frac{R^{2}e^{B(z)-A(z)}}{z^{2}h(z)}\Biggl(\frac{1}{\sqrt{1-\frac{h(z)}{h(z_{0})}e^{6A(z)-6A(z_{0})}}}-1\Biggr)\]
\[\eta_{\rm III} =\frac{4\pi}{\beta}\int_{z_{h}}^{z_{0}}dz\ \frac{R^{2}e^{B(z)-A(z)}}{z^{2}h(z)}\Biggl(\frac{1}{\sqrt{1-\frac{h(z)}{h(z_{0})}e^{6A(z)-6A(z_{0})}}}-1\Biggr)\]

By utilizing equation (8.18), it becomes possible to plot the shockwave parameter against the dimensionless quantity \(z_{0}/z_{H}\), as depicted in Fig. 12. It is noteworthy that, in accordance with expectations, the shockwave parameter grows as \(z_{0}\) increases, and the pace of this growth depends on the parameter \(\xi\): for larger values of \(\xi\), the rate at which \(\alpha\) increases is slower than for smaller \(\xi\) values. Finally, Fig. 13 illustrates how the HTMI changes with the shockwave parameter \(\alpha\) for various nonzero \(\xi\) values. For distinct \(\xi\) values, the HTMI declines from a specific initial value, each with its own rate, ultimately reaching zero at a critical point \(\alpha=\alpha_{c}\); the HTMI is nonvanishing only for \(\alpha\leq\alpha_{c}\). This critical value of \(\alpha\) increases as the \(\xi\) parameter grows. It is worth noting that, as previously mentioned, \(\xi=2\) is the theory's critical point, and the HTMI remains finite at this critical point, echoing a similar observation made for one-sided MI in [40].

## 9 Summary and Discussions

In this work, we study various measures of the entanglement structure of mixed states, and the properties of chaos, in the four-dimensional \(\mathcal{N}=4\) super Yang-Mills theory at finite temperature \(T\), charged under a \(U(1)\) subgroup of its \(SU(4)\) R-symmetry, which possesses a critical point. We use the HLN, the EWCS, and the HTMI to probe the entanglement structure near the critical point. We also study the disruption of the HTMI due to the shockwave perturbation and finally interpret our results in terms of boundary theory parameters. We study the effect of the parameter \(\xi\) (related to the charge of the black hole) on the HLN in the low- and high-temperature limits. In this analysis, we observe that the RT surface dual to the boundary region \(A\) receives a modification due to the presence of \(\xi\). Moreover, for a fixed width \(l\) of the boundary region \(A\), the RT surface reaches deeper into the bulk for larger values of \(\xi\). For computing the HLN at low and high temperature, we consider adjacent, disjoint, and bipartite configurations of subsystems in the boundary. It is straightforward to see that \(\xi\to 0\) (\(Q\to 0\)) correctly reproduces the results obtained for the AdS\({}_{d+1}\) Schwarzschild black hole background.
The HLN exhibits a decreasing trend with \(\xi\) for adjacent configurations at low temperatures and an increasing trend as the high-temperature limit is approached. For disjoint subsystems the HLN increases with \(\xi\) at low temperature and vanishes at high temperature. In the bipartite case the HLN increases with \(\xi\) at low temperature and decreases at high temperature. In the field theory, the growth of the HLN can be understood as indicative of increasing quantum entanglement between the two subsystems. As the critical limit is approached (\(\xi\to 2\)), the HLN remains finite in all cases. A similar finding was previously documented for the HEE and the HMI in the study by Ebrahim et al. [40].

Figure 13: TMI vs the shock parameter \(\alpha\) for different \(\xi\) and \(T=1,R=1\).

We give analytic expressions for the EWCS of the 1RC black hole in the low- and high-temperature limits that are consistent with the numerical results obtained in [41]. We observe that, at low temperatures, the EWCS receives a correction attributed to the parameter \(\xi\) and consequently exhibits growth with respect to \(\xi\). It is worth noting that mutual information is intricately connected to the EWCS, as described in [60]. Our result for the EWCS also agrees with the numerical analysis of the HMI reported in [40]. For the disjoint case, in the low-temperature regime, both the HLN and the EWCS exhibit a similar dependence on the boundary parameters (the characteristic lengths of the different regions) as well as on the temperature. In the high-temperature limit, these quantities vanish, as stated in [40]. Moreover, we notice that the entanglement between two subsystems of a TFD state, as measured by the TMI, increases with the size of the subsystems. This is expected, since the larger the Hilbert spaces of the individual subsystems, the larger the correlations. If we fix the size of the subsystems, the TMI increases as the parameter \(\xi\) approaches larger values. Based on our analysis, we have noted that two separated subsystems do not manifest correlations until a specific size, the critical width \(l_{c}\), is reached, after which total correlations start to emerge. As already mentioned, the entanglement in the TFD state can be destroyed by the insertion of an operator that evolves in time. We have demonstrated the explicit disruption of the holographic TMI in the presence of a shockwave. Our findings suggest that the parameter \(\xi\) tends to mitigate this disruption, indicating that the presence of the \(\xi\) parameter stabilizes the system and reduces its chaotic behavior. For substantial values of \(\xi\), the holographic TMI exhibits a slower rate of decay; in simpler terms, when \(\xi\) is large, it takes more time for the TMI to completely vanish. This is in contrast to the findings of a recent study [37], where it was noted that the TMI diminishes more rapidly with increased backreaction.

**Acknowledgement**

We express our gratitude to Shankhadeep Chakrabortty for valuable comments on the draft. SP acknowledges the support of a Senior Research Fellowship from the Ministry of Human Resource Development, Government of India. SP expresses gratitude to Dongmin Gang and Seoul National University for their generous support and warm hospitality during a part of this work. DK wishes to extend appreciation to Shankhadeep Chakrabortty and IIT Ropar for their support and warm hospitality during the final stages of this project.
## Appendix A Area of the Extremal Surface for Bipartite Systems

We provide a concise overview of the near-horizon expansion technique used to estimate the extremal surfaces within the bipartite subsystem, aiming to determine the surface areas \(\mathcal{A}_{B_{1}}\) and \(\mathcal{A}_{A\cup B_{1}}\) in the limit \(L\to\infty\). It is convenient to start with equation (3.13) by rewriting it in the following form

\[\mathcal{A}=\mathcal{A}^{(1)}+\mathcal{A}^{(2)}+\mathcal{A}^{(3)}\] (A.1)

where we define the quantities \(\mathcal{A}^{(1)},\mathcal{A}^{(2)},\mathcal{A}^{(3)}\) in the following way

\[\mathcal{A}^{(1)}=\frac{L^{2}R^{3}}{{z_{t}}^{2}}\Bigg\{\frac{3\xi}{2}\bigg(\frac{z_{t}}{z_{h}}\bigg)^{2}-\Bigg[1+\xi\bigg(\frac{z_{t}}{z_{h}}\bigg)^{2}\Bigg]^{\frac{3}{2}}+\frac{1+\xi}{3\xi}\bigg(\frac{z_{t}}{z_{h}}\bigg)^{2}\left[\left(1+\xi\bigg(\frac{z_{t}}{z_{h}}\bigg)^{2}\right)^{\frac{3}{2}}-1\right]\Bigg\}\] (A.2)

\[\mathcal{A}^{(2)}=\frac{L^{2}R^{3}}{{z_{t}}^{2}}\Bigg\{\sum_{n=0}^{2}\Lambda_{2n0}\frac{\sqrt{\pi}\Gamma(n+1)}{\Gamma(n+3)}\bigg(\frac{z_{t}}{z_{h}}\bigg)^{4+2n}\times\left[1+(n+1)\left(1+\xi\bigg(\frac{z_{t}}{z_{h}}\bigg)^{2}\right)\right]\Bigg\}\] (A.3)

\[\mathcal{A}^{(3)}=\frac{L^{2}R^{3}}{{z_{t}}^{2}}\Bigg\{\sum_{j=1}^{\infty}\Lambda_{000}\frac{\Gamma(j+\frac{1}{2})\Gamma(3j-1)}{\Gamma(j+1)\Gamma(3j+1)}\times\left[1+(3j-1)\left(1+\xi\bigg(\frac{z_{t}}{z_{h}}\bigg)^{2}\right)\right]\Bigg\}\] (A.4)

Note that we have truncated the series when writing \(\mathcal{A}^{(2)}\) and \(\mathcal{A}^{(3)}\) in order to retain only the lowest-order contribution, which is our focus for the near-horizon expansion; the higher-order contributions are superfluous for our analysis. We then employ the near-horizon expansion of the turning point within equations (A.2), (A.3), and (A.4) to derive the expressions for the extremal surfaces in the bipartite limit, as presented in equation (A.9). In this context, we introduce the following functions of \(\xi\).
\[\alpha(\xi)=\bigg(\frac{3\xi}{2}-\frac{2}{3}(1+\xi)+\frac{k_{2}}{3\xi}(1-2\xi)(\xi-2)\bigg)\,,\ \ \beta(\xi)=k_{2}\sqrt{6}\frac{(2\xi-1)(\xi-2)}{3\xi}\] (A.5)

\[\gamma(\xi)=\Bigg(u_{1}(\xi)-v_{1}(\xi)+x_{1}(\xi)+k_{2}\bigg(u_{2}(\xi)-v_{2}(\xi)+x_{2}(\xi)\bigg)\Bigg),\ \ \delta(\xi)=\Bigg(-u_{2}(\xi)+v_{2}(\xi)-x_{2}(\xi)\Bigg)\] (A.6)

\[u_{1}(\xi)=\frac{3\xi^{2}}{16}(\xi^{2}+3\xi+2)\qquad\qquad\qquad u_{2}(\xi)=\frac{3\xi^{2}}{16}(3\xi^{2}+6\xi+4)\]
\[v_{1}(\xi)=\frac{3\xi(1+\xi)}{24}(2\xi^{2}+5\xi+3)\qquad v_{2}(\xi)=\frac{3\xi(1+\xi)}{24}(10\xi^{2}+21\xi+12)\]
\[x_{1}(\xi)=\frac{3(1+\xi)^{2}}{96}(\xi^{2}+7\xi+4)\qquad x_{2}(\xi)=\frac{3(1+\xi)^{2}}{96}(21\xi^{2}+44\xi+24)\] (A.7)

\[\mu(\xi)=\sum_{j=1}^{\infty}\frac{3}{\sqrt{\pi}}\frac{\Gamma\left(j+\frac{1}{2}\right)\Gamma(3j-1)}{\Gamma(j+1)\Gamma(3j+1)}(1+\xi-(\xi+2)k_{2}),\qquad\nu(\xi)=\sum_{j=1}^{\infty}\frac{3}{\sqrt{\pi}}\frac{\Gamma\left(j+\frac{1}{2}\right)\Gamma(3j-1)j}{\Gamma(j+1)\Gamma(3j+1)}(\xi+2)k_{2}\sqrt{6}\] (A.8)

Using the above functions we obtain the following extremal areas

\[\mathcal{A}_{B_{1}}=\frac{L^{2}R^{3}}{z_{h}^{2}}\Big\{\alpha(\xi)+\gamma(\xi)+\mu(\xi)\Big\}+\frac{L^{2}R^{3}}{z_{h}}\left(L-\frac{l}{2}\right)\Big\{\beta(\xi)+\delta(\xi)+\nu(\xi)\Big\},\qquad\mathcal{A}_{A\cup B_{1}}=\frac{L^{2}R^{3}}{z_{h}^{2}}\Big\{\alpha(\xi)+\gamma(\xi)+\mu(\xi)\Big\}+\frac{L^{2}R^{3}}{z_{h}}\left(L+\frac{l}{2}\right)\Big\{\beta(\xi)+\delta(\xi)+\nu(\xi)\Big\}\] (A.9)

By applying the equations mentioned above to the entanglement negativity formula associated with the bipartite state in the low-temperature regime, we obtain the corresponding low-temperature bipartite HLN expression of Section 6. It is worth noting that equation (6.10) is obtained through the relation between \(\hat{T}\) and \(z_{h}\). Additionally, we introduce the following function for a more concise expression of the HLN.

\[g(\xi)=\pi\big(\beta(\xi)+\delta(\xi)+\nu(\xi)\big)\] (A.10)

We employ the defined function \(g(\xi)\) to examine the limit where \(\xi\) tends towards zero. In this limit, it becomes evident from the equations above that (A.10) simplifies to the following expression.

\[g(\xi)\Bigg|_{\xi\to 0}=\pi\Bigg(\frac{2}{3}k_{2}\sqrt{6}\frac{1}{\xi}-\frac{3}{4}+2k_{2}\sqrt{6}\sum_{j=1}^{\infty}\frac{3}{\sqrt{\pi}}\frac{\Gamma\left(j+\frac{1}{2}\right)\Gamma(3j-1)j}{\Gamma(j+1)\Gamma(3j+1)}\Bigg)\] (A.11)

Hence, if one's primary concern is the temperature dependence of the function \(g(\xi)\) as \(\xi\) approaches zero, an approximate approach involves focusing on the initial term within the parentheses in the previous equation. Consequently, it becomes evident that the function \(g(\xi)\) is proportional to \(T^{2}\). Before concluding this appendix, it is worth noting that equation (6.11) introduces a constant \(\mathcal{C}\). The precise value of this constant can be determined from the coefficient of the initial term in equation (A.11).

## Appendix B Approximate EWCS at low temperature limit in terms of boundary parameters

In this context, we derive the expression for the EWCS under low-temperature conditions. By inserting the equations for the turning points into the general expression for the EWCS, as provided in equation (7.5), we obtain the following series.
\[E_{W}^{low}=\frac{L^{2}R^{3}}{4G_{N}^{5}}\sum_{k=0}^{\infty}\sum_{j=0}^{k}\sum_{i=0}^{\infty}\frac{(-1)^{k+j}}{2}\frac{\Gamma(k+\frac{1}{2})\xi^{i+j+k}(1+\xi)^{j}}{\Gamma(i+1)\Gamma(\frac{3}{2}-i)\Gamma(j+1)\Gamma(k-j+1)}\frac{z_{h}^{2-2i-2j-2k}}{(2i+2j+2k-2)}\]
\[\times\Bigg[\Bigg\{\bigg(\frac{2l+D}{a_{1}}\bigg)^{2i+2j+2k-2}-\bigg(\frac{D}{a_{1}}\bigg)^{2i+2j+2k-2}\Bigg\}+(2i+2j+2k-2)\frac{\xi}{6z_{h}^{2}}\Bigg\{\bigg(\frac{2l+D}{a_{1}}\bigg)^{2i+2j+2k}-\bigg(\frac{D}{a_{1}}\bigg)^{2i+2j+2k}\Bigg\}\]
\[+\frac{2i+2j+2k-2}{2z_{h}^{4}}\left(\frac{\xi^{2}}{6}\left(1-\frac{a_{3}}{2a_{2}}\right)\right)\Bigg\{\bigg(\frac{2l+D}{a_{1}}\bigg)^{2i+2j+2k+2}-\bigg(\frac{D}{a_{1}}\bigg)^{2i+2j+2k+2}\Bigg\}+\mathcal{O}\bigg(\frac{1}{z_{h}}\bigg)^{6}\Bigg]\]

To simplify further, we may truncate the series by setting \(i=j=k=0\); this procedure then readily yields equation (7.8). A numerical sketch of this truncation is given below.
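To make the truncation explicit, the following Python sketch evaluates the series at low order. The constants \(a_{1},a_{2},a_{3}\) and the separation \(D\) between the strips are defined in Section 7 and are treated here as user-supplied placeholders; the degenerate terms with \(i+j+k=1\), for which the prefactor \(1/(2i+2j+2k-2)\) blows up while the bracket vanishes, are skipped rather than resolved into their logarithms.

```python
# Hedged sketch: truncated evaluation of the low-temperature EWCS series above.
# a1, a2, a3 and D are placeholders standing in for the Section 7 definitions.
import math

def ewcs_low(l, xi, z_h, D=0.5, a1=1.0, a2=1.0, a3=1.0, kmax=4, imax=4):
    r1, r2 = (2.0 * l + D) / a1, D / a1
    total = 0.0
    for k in range(kmax + 1):
        for j in range(k + 1):
            for i in range(imax + 1):
                n = 2 * (i + j + k) - 2
                if n == 0:
                    continue               # degenerate (logarithmic) term, skipped
                coef = ((-1) ** (k + j) / 2.0 * math.gamma(k + 0.5)
                        * xi ** (i + j + k) * (1.0 + xi) ** j
                        / (math.gamma(i + 1) * math.gamma(1.5 - i)
                           * math.gamma(j + 1) * math.gamma(k - j + 1)))
                bracket = ((r1 ** n - r2 ** n)
                           + n * xi / (6.0 * z_h ** 2)
                             * (r1 ** (n + 2) - r2 ** (n + 2))
                           + n * xi ** 2 / (12.0 * z_h ** 4) * (1.0 - a3 / (2.0 * a2))
                             * (r1 ** (n + 4) - r2 ** (n + 4)))
                total += coef * z_h ** (-n) / n * bracket
    return total                           # L^2 R^3 / (4 G_N^5) prefactor dropped

print(ewcs_low(l=1.0, xi=0.5, z_h=5.0))    # low temperature corresponds to large z_h
```

Keeping only the \(i=j=k=0\) term reproduces the leading behaviour quoted in equation (7.8).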
``` We perform a holographic study of the low- and high-temperature behavior of the logarithmic negativity (LN) and the entanglement wedge cross section (EWCS) in a large-N strongly coupled thermal field theory with a critical point, whose gravity dual is known as the 1RC black hole. The critical point is defined in the limit $\xi \to 2$, where $\xi$ is a dimensionless parameter proportional to the charge of the 1RC black hole. The logarithmic negativity at low and high temperature near the critical point increases with increasing $\xi$. We analytically compute the EWCS in the low- and high-temperature limits near the critical point and obtain results consistent with previously reported numerical ones. Holographically, two identical copies of the thermal field theory with a critical point, prepared in a Thermofield Double (TFD) state, are described by an eternal black hole; we study the thermo mutual information between subsystems on its two asymptotic boundaries and its disruption by a shockwave.
2309.06998
Data-Driven Synthesis of Configuration-Constrained Robust Invariant Sets for Linear Parameter-Varying Systems
We present a data-driven method to synthesize robust control invariant (RCI) sets for linear parameter-varying (LPV) systems subject to unknown but bounded disturbances. A finite-length data set consisting of state, input, and scheduling signal measurements is used to compute an RCI set and invariance-inducing controller, without identifying an LPV model of the system. We parameterize the RCI set as a configuration-constrained polytope whose facets have a fixed orientation and variable offset. This allows us to define the vertices of the polytopic set in terms of its offset. By exploiting this property, an RCI set and associated vertex control inputs are computed by solving a single linear programming (LP) problem, formulated based on a data-based invariance condition and system constraints. We illustrate the effectiveness of our approach via two numerical examples. The proposed method can generate RCI sets that are of comparable size to those obtained by a model-based method in which exact knowledge of the system matrices is assumed. We show that RCI sets can be synthesized even with a relatively small number of data samples, if the gathered data satisfy certain excitation conditions.
Manas Mejari, Sampath Kumar Mulagaleti, Alberto Bemporad
2023-09-13T14:47:07
http://arxiv.org/abs/2309.06998v1
Data-Driven Synthesis of Configuration-Constrained Robust Invariant Sets for Linear Parameter-Varying Systems ###### Abstract We present a data-driven method to synthesize _robust control invariant_ (RCI) sets for _linear parameter-varying_ (LPV) systems subject to unknown but bounded disturbances. A finite-length data set consisting of state, input, and scheduling signal measurements is used to compute an RCI set and invariance-inducing controller, without identifying an LPV model of the system. We parameterize the RCI set as a _configuration-constrained_ polytope whose facets have a fixed orientation and variable offset. This allows us to define the vertices of the polytopic set in terms of its offset. By exploiting this property, an RCI set and associated vertex control inputs are computed by solving a single _linear programming_ (LP) problem, formulated based on a data-based invariance condition and system constraints. We illustrate the effectiveness of our approach via two numerical examples. The proposed method can generate RCI sets that are of comparable size to those obtained by a model-based method in which exact knowledge of the system matrices is assumed. We show that RCI sets can be synthesized even with a relatively small number of data samples, if the gathered data satisfy certain excitation conditions. ## I Introduction Safety guarantees for constrained controlled systems can be analysed through set invariance theory [4]. A _robust control invariant_ (RCI) set is a subset of the state-space in which a system affected by bounded but unknown disturbances can be enforced to evolve _ad infinitum_, by an appropriately designed invariance-inducing controller [5]. Many works have proposed algorithms for computing such RCI sets along with their associated controllers for _linear parameter-varying_ (LPV) systems, see, _e.g._, [8, 9, 15, 16]. These approaches are _model-based_, in that an LPV model of the system is assumed to be known. However, identifying an LPV model poses several challenges [17]. Modelling errors can result in the violation of the invariance property and constraints during closed-loop operations. To overcome the drawbacks of model-based methods, data-driven approaches have emerged as favorable alternatives. Data-driven _control-oriented_ identification algorithms were proposed in [7, 14] which simultaneously compute an RCI set and a controller, while selecting an 'optimal' model from the admissible set. The approaches [7, 14] synthesize RCI sets with reduced conservatism compared to the sequential approach which first selects a model to best fit the data and then computes an invariant set for it. Alternatively, _direct_ data-driven approaches were presented in [2, 11, 22, 3], which synthesize RCI sets and controllers directly from open-loop data, without the need for model identification. The algorithm presented in [3] computes a state-feedback controller from open-loop data to induce robust invariance in a _given_ polyhedral set, while methods proposed in [2, 11, 22] simultaneously compute invariance-inducing controllers along with RCI sets having zonotopic [2], polytopic [11] or ellipsoidal [22] representations. These contributions, however, are limited to linear time-invariant (LTI) systems. For LPV systems, direct data-driven algorithms have mainly focused on LPV control design, see, _e.g._, LPV input-output controllers for constrained systems [17], predictive controllers [19], and gain-scheduled controllers [13, 20]. 
To our knowledge, only a recent work [12] has addressed the computation of RCI sets for LPV systems in a data-driven setting. This work differs from [12] in terms of the description of the RCI sets and the computational complexity. We represent the RCI set with a polytope having fixed orientation and varying offset that we optimize in order to maximize the size of the set. As presented in [15, 21], we enforce _configuration constraints_ (CC) on this polytope, which enable us to switch between their vertex and hyperplane representations. We exploit this property to parameterize the controller as a vertex control law, which is inherently less conservative than a linear feedback control law [10]. A single _linear program_ (LP) is formulated and solved to compute the CC-RCI set with the associated vertex control law, while the approach in [12] requires solving a semi-definite programming problem. Our approach does not require an LPV model of the system but only a single state-input-scheduling trajectory consisting of a finite number of data samples. We show via numerical examples that, if the gathered data satisfy certain excitation conditions, then RCI sets and associated control inputs can be synthesized with a relatively small number of data samples. **Paper organization:** The notation and preliminary results used in the paper are given in Section II. The problem of computing the RCI set from data collected from an LPV system is formalized in Section III. The configuration-constrained parameterization of RCI sets is presented in Section IV. The proposed data-based invariance conditions and the maximization of the size of the set are formulated as an LP in Section V. The effectiveness of the proposed algorithm is demonstrated with two numerical examples in Section VI. ## II Notations and Preliminaries The set of natural numbers between two integers \(m\) and \(n\), \(m\leq n\), is denoted by \(\mathbb{I}_{m}^{n}\triangleq\{m,\ldots,n\}\). Let \(A\in\mathbb{R}^{m\times n}\) be a matrix written according to its \(n\) column vectors as \(A=\left[a_{1}\ \cdots\ a_{n}\right]\); we define the vectorization of \(A\) as \(\vec{A}\triangleq\left[a_{1}^{\top}\ \cdots\ a_{n}^{\top}\right]^{\top}\in\mathbb{R}^{mn}\), stacking the columns of \(A\). For a finite set \(\Theta=\{\theta^{1},\theta^{2},\ldots,\theta^{r}\}\) with \(\theta^{j}\in\mathbb{R}^{n}\) for \(j\in\mathbb{I}_{1}^{r}\), the convex hull of \(\Theta\) is given by \(\mathrm{ConvHull}(\Theta)\triangleq\left\{\theta\in\mathbb{R}^{n}:\theta=\sum_{j=1}^{r}\alpha_{j}\theta^{j},\ \mathrm{s.t.}\ \sum_{j=1}^{r}\alpha_{j}=1,\ \alpha_{j}\geq 0\right\}\). The Minkowski sum of the two sets \(\mathcal{X}\) and \(\mathcal{Y}\) is defined as \(\mathcal{X}\oplus\mathcal{Y}:=\{x+y:x\in\mathcal{X},y\in\mathcal{Y}\}\), and set subtraction as \(\mathcal{X}\ominus\mathcal{Y}:=\{x:\{x\}\oplus\mathcal{Y}\subseteq\mathcal{X}\}\). For matrices \(A\) and \(B\), \(A\otimes B\) denotes their Kronecker product. The following results will be used in the paper: **Lemma 1** (Vectorization): _For matrices \(A\in\mathbb{R}^{k\times l}\), \(B\in\mathbb{R}^{l\times m}\), \(C\in\mathbb{R}^{m\times n}\) and \(D\in\mathbb{R}^{k\times n}\), the matrix equation \(ABC=D\) is equivalent to [1, Ex. 
10.18], \[(C^{\top}\otimes A)\vec{B}=\overrightarrow{ABC}=\overrightarrow{D}, \tag{1}\] **Lemma 2** (Strong duality): _Given \(a\in\mathbb{R}^{n}\), \(b\in\mathbb{R}\), \(M\in\mathbb{R}^{m\times n}\) and \(q\in\mathbb{R}^{m}\), the inequality \(a^{\top}x\leq b\) is satisfied by all \(x\) in a nonempty set \(\mathcal{X}:=\{x:Mx\leq q\}\) if and only if there exists some \(\mathbf{\Lambda}\in\mathbb{R}^{1\times m}_{+}\) satisfying \(\mathbf{\Lambda}q\leq b\) and \(\mathbf{\Lambda}M=a^{\top}\)._ ## III Problem Setting ### _Data-generating system and constraints_ We consider the following discrete-time LPV data-generating system \[x_{t+1}=\mathcal{A}(p_{t})x_{t}+\mathcal{B}(p_{t})u_{t}+w_{t}, \tag{2}\] where \(x_{t}\in\mathbb{R}^{n}\), \(u_{t}\in\mathbb{R}^{m}\), \(p_{t}\in\mathbb{R}^{s}\), and \(w_{t}\in\mathbb{R}^{n}\) are the state, control input, scheduling parameter, and (additive) disturbance vectors, at time \(t\), respectively. The matrix functions \(\mathcal{A}(p_{t})\) and \(\mathcal{B}(p_{t})\) depend linearly on the parameter \(p_{t}\) as \[\mathcal{A}(p_{t})=\sum_{j=1}^{s}\!p_{t,j}A_{o}^{j},\quad\mathcal{B}(p_{t})=\sum_{j=1}^{s}\!p_{t,j}B_{o}^{j}, \tag{3}\] where \(p_{t,j}\) denotes the \(j\)-th element of \(p_{t}\in\mathbb{R}^{s}\) and \(A_{o}^{j},B_{o}^{j},\ j\in\mathbb{I}_{1}^{s}\) are _unknown_ system matrices. Using (3), the LPV system (2) can be written as \[x_{t+1}=\underbrace{\left[A_{o}^{1}\ \cdots\ A_{o}^{s}\ \ B_{o}^{1}\ \cdots\ B_{o}^{s}\right]}_{M_{o}}\left[\begin{matrix}p_{t}\otimes x_{t}\\ p_{t}\otimes u_{t}\end{matrix}\right]+w_{t}. \tag{4}\] Assume that a state-input-scheduling trajectory of \(T+1\) samples \(\{x_{t},p_{t},u_{t}\}_{t=1}^{T+1}\) generated from system (2) is available. The generated dataset is represented by the following matrices \[X^{+}\triangleq\left[x_{2}\quad x_{3}\quad\cdots\quad x_{T+1}\right]\in\mathbb{R}^{n\times T}, \tag{5a}\] \[X_{u}^{p}\triangleq\begin{bmatrix}p_{1}\otimes x_{1}&p_{2}\otimes x_{2}&\cdots&p_{T}\otimes x_{T}\\ p_{1}\otimes u_{1}&p_{2}\otimes u_{2}&\cdots&p_{T}\otimes u_{T}\end{bmatrix}\!\in\mathbb{R}^{(n\!+\!m)s\times T}. \tag{5b}\] Note that the state measurements \(x_{t}\) are generated according to (2), which are affected by disturbance samples \(w_{t}\) for \(t\in\mathbb{I}_{1}^{T+1}\) whose values are _not_ known. However, we assume that for all \(t\in\mathbb{I}_{1}^{T}\), \[w_{t}\in\mathcal{W}\triangleq\left\{w:-h_{n_{w}}\leq H_{w}w\leq h_{n_{w}}\right\}, \tag{6}\] _i.e._, the additive disturbance \(w_{t}\) is unknown but bounded a priori in the 0-symmetric polytope \(\mathcal{W}\). Furthermore, we assume that for all \(t\in\mathbb{I}_{1}^{T}\), the system parameter satisfies \[p_{t}\in\mathcal{P}\triangleq\mathrm{ConvHull}(\{p^{j}\},j\in\mathbb{I}_{1}^{v_{p}}),\] where \(\{p^{j}\},j\in\mathbb{I}_{1}^{v_{p}}\) are \(v_{p}\) given vertices defining the parameter set \(\mathcal{P}\). Given the sets \(\mathcal{W}\) and \(\mathcal{P}\), our goal is to synthesize an RCI set for the LPV system (2) that satisfies the state and input constraints \[\mathcal{X}\triangleq\left\{x:H_{x}x\leq h_{n_{x}}\right\},\ \mathcal{U}\triangleq\left\{u:H_{u}u\leq h_{n_{u}}\right\}, \tag{7}\] where \(\mathcal{X}\) and \(\mathcal{U}\) are given polytopic sets. 
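As a concrete illustration of the data matrices in (5), the following Python sketch assembles \(X^{+}\) and \(X_{u}^{p}\) from a recorded trajectory; the random signals below are stand-ins for real measurements.

```python
# Hedged sketch: building X^+ (eq. 5a) and X_u^p (eq. 5b) with numpy.
import numpy as np

rng = np.random.default_rng(0)
n, m, s, T = 2, 1, 2, 100                  # states, inputs, parameters, samples
X = rng.standard_normal((n, T + 1))        # x_1 ... x_{T+1} (stand-in data)
U = rng.standard_normal((m, T))            # u_1 ... u_T
P = rng.random((s, T))                     # p_1 ... p_T

X_plus = X[:, 1:]                          # eq. (5a): [x_2 ... x_{T+1}]
cols = [np.concatenate([np.kron(P[:, t], X[:, t]),
                        np.kron(P[:, t], U[:, t])]) for t in range(T)]
X_u_p = np.stack(cols, axis=1)             # eq. (5b), shape ((n+m)s, T)
assert X_u_p.shape == ((n + m) * s, T)
```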
### _Set of feasible models_ A set of _feasible models_ which are compatible with the measured data \(X^{+},X_{u}^{p}\) and the bound on the disturbance samples captured by the set \(\mathcal{W}\) is given as follows \[\mathcal{M}_{T}\triangleq\left\{M:x_{t+1}-M\begin{bmatrix}p_{t}\otimes x_{t}\\ p_{t}\otimes u_{t}\end{bmatrix}\in\mathcal{W},\ t\in\mathbb{I}_{1}^{T}\right\}, \tag{8}\] where \(M=\left[A^{1},\ldots,A^{s},B^{1},\ldots,B^{s}\right]\in\mathbb{R}^{n\times(n+m)s}\) are feasible model matrices. Our assumption is that the true system matrix \(M_{o}\) in (4) belongs to this set, \(M_{o}\in\mathcal{M}_{T}\). Using the definitions of the data matrices in (5) and the disturbance set \(\mathcal{W}\) in (6), the feasible model set \(\mathcal{M}_{T}\) is represented as \[\mathcal{M}_{T}\triangleq\left\{M:-h_{w}\leq H_{w}X^{+}-H_{w}MX_{u}^{p}\leq h_{w}\right\}, \tag{9}\] with \(h_{w}\triangleq\left[h_{n_{w}}\ \ h_{n_{w}}\ \cdots\ h_{n_{w}}\right]\in\mathbb{R}^{n_{w}\times T}\). We now rewrite the feasible model set \(\mathcal{M}_{T}\) in (9) using the vectorization Lemma 1 for \(\overrightarrow{M}\in\mathbb{R}^{n(n+m)s}\) as \[\mathcal{M}_{T}\triangleq\left\{\overrightarrow{M}:-\overrightarrow{h}_{w}+h_{M}\leq H_{M}\overrightarrow{M}\leq\overrightarrow{h}_{w}+h_{M}\right\}, \tag{10}\] where we define \(H_{M}\in\mathbb{R}^{Tn_{w}\times n(n+m)s}\), \(h_{M}\in\mathbb{R}^{Tn_{w}}\) and \(\overrightarrow{h}_{w}\in\mathbb{R}^{Tn_{w}}\) as \[H_{M}\triangleq\left(X_{u}^{p^{\top}}\otimes H_{w}\right),\ h_{M}\triangleq\begin{bmatrix}H_{w}x_{2}\\ H_{w}x_{3}\\ \vdots\\ H_{w}x_{T+1}\end{bmatrix},\ \overrightarrow{h}_{w}\triangleq\begin{bmatrix}h_{n_{w}}\\ h_{n_{w}}\\ \vdots\\ h_{n_{w}}\end{bmatrix} \tag{11}\] **Proposition 1** (Bounded feasible model set): _The feasible model set \(\mathcal{M}_{T}\) in (10) is a bounded polyhedron if and only if \(\mathrm{rank}\left(X_{u}^{p}\right)=(n+m)s\) and \(H_{w}\) has full column-rank \(n\)[3, Fact 1]._ The full row-rank of \(X_{u}^{p}\) can be checked from the data, and it also relates to the "richness" of the data and the _persistency of excitation_ condition for LPV systems [20, condition 1]. If this condition is not satisfied, the set \(\mathcal{M}_{T}\) may be unbounded; thus, boundedness of \(\mathcal{M}_{T}\) can be verified via a simple rank condition on the data matrix \(X_{u}^{p}\), based on [5, p. 108], [3, Fact 1]. For a general polytope \(\mathcal{W}\), the conditions for \(\mathcal{M}_{T}\) to be a bounded polyhedron are more involved [5, p. 119, ex. 11].
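Continuing the sketch above, the matrices in (11) and the boundedness test of Proposition 1 can be written as follows; here \(\mathcal{W}\) is taken as the elementwise box \(|w_{k}|\leq 0.25\), so that \(H_{w}=I_{n}\) trivially has full column rank.

```python
# Hedged sketch of eq. (11) and Proposition 1 (reuses n, m, s, T, rng,
# X_plus, X_u_p from the previous snippet).
H_w = np.eye(n)                            # n_w = n
h_nw = 0.25 * np.ones(n)

H_M = np.kron(X_u_p.T, H_w)                # eq. (11): H_M = (X_u_p^T kron H_w)
h_M = np.hstack([H_w @ X_plus[:, t] for t in range(T)])
h_w_vec = np.tile(h_nw, T)                 # stacked copies of h_{n_w}

# sanity check of Lemma 1: (X_u_p^T kron H_w) vec(M) == vec(H_w M X_u_p),
# with vec() stacking columns
M = rng.standard_normal((n, (n + m) * s))
assert np.allclose(H_M @ M.T.reshape(-1), (H_w @ M @ X_u_p).T.reshape(-1))

# Proposition 1: M_T is a bounded polyhedron iff X_u_p has full row rank
print("bounded:", np.linalg.matrix_rank(X_u_p) == (n + m) * s)
```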
### _Invariance condition_ A set \(\mathcal{S}\subseteq\mathcal{X}\) is referred to as _robust control invariant_ (RCI) for the LPV system (4) if, for any given \(p\in\mathcal{P}\), there exists a control input \(u\in\mathcal{U}\) such that the following condition is satisfied: \[x\in\mathcal{S}\;\Rightarrow\;x^{+}\in\mathcal{S},\;\forall w\in\mathcal{W},\;\forall M\in\mathcal{M}_{T}, \tag{12}\] where the time-dependence of the signals is omitted for brevity and \(x^{+}\) denotes the successor state. We now state and prove two equivalent conditions for invariance. Let \(\{x^{i},i\in\mathbb{I}_{1}^{v_{s}}\}\) be the \(v_{s}\) vertices of the convex RCI set \(\mathcal{S}\). For each vertex \(x^{i},i\in\mathbb{I}_{1}^{v_{s}}\), we suppose that there exists a vertex control input \(\mathbf{u}^{i}\in\mathcal{U},i\in\mathbb{I}_{1}^{v_{s}}\). **Lemma 3**: _If the set \(\mathcal{S}\) is robustly invariant for system (4), then the following two statements are equivalent:_ 1. _for all_ \(x\in\mathcal{S}\)_, for any given_ \(p\in\mathcal{P}\)_, and_ \(\forall(w,M)\in(\mathcal{W},\mathcal{M}_{T})\)_,_ \[x^{+}\triangleq M\begin{bmatrix}p\otimes x\\ p\otimes u\end{bmatrix}+w\in\mathcal{S};\] (13) 2. _for each vertex_ \(\{x^{i},\mathbf{u}^{i},i\in\mathbb{I}_{1}^{v_{s}}\}\)_, for each vertex_ \(\{p^{j},j\in\mathbb{I}_{1}^{v_{p}}\}\) _of the set_ \(\mathcal{P}\)_, and_ \(\forall(w,M)\in(\mathcal{W},\mathcal{M}_{T})\)_,_ \[x^{i,j}{}^{+}\triangleq M\begin{bmatrix}p^{j}\otimes x^{i}\\ p^{j}\otimes\mathbf{u}^{i}\end{bmatrix}+w\in\mathcal{S}.\] (14) Since for each vertex \(x^{i},i\in\mathbb{I}_{1}^{v_{s}}\) and \(p^{j},j\in\mathbb{I}_{1}^{v_{p}}\) it holds that \(x^{i}\in\mathcal{S}\) and \(p^{j}\in\mathcal{P}\), it can be easily seen that \((i)\Rightarrow(ii)\). Now, we prove the converse, _i.e._, \((ii)\Rightarrow(i)\). Any \(x\in\mathcal{S}\) can be represented as a convex combination of its vertices as follows: \[x=\sum_{i=1}^{v_{s}}\lambda_{i}x^{i},\quad\sum_{i=1}^{v_{s}}\lambda_{i}=1,\quad\lambda_{i}\geq 0,\forall\;i\in\mathbb{I}_{1}^{v_{s}}. \tag{15}\] For this state, we choose the corresponding control input as \[u=\sum_{i=1}^{v_{s}}\lambda_{i}\mathbf{u}^{i}. \tag{16}\] Note that \(u\in\mathcal{U}\), since \(\mathbf{u}^{i}\in\mathcal{U}\) and \(\mathcal{U}\) is convex. Similarly, any given scheduling parameter \(p\in\mathcal{P}\) can be expressed as \[p=\sum_{j=1}^{v_{p}}\alpha_{j}p^{j},\quad\sum_{j=1}^{v_{p}}\alpha_{j}=1,\quad\alpha_{j}\geq 0,\forall j\in\mathbb{I}_{1}^{v_{p}}.\] Applying the control input (16) to system (4), for any \(w\in\mathcal{W}\), we get \[x^{+} =M\begin{bmatrix}\sum_{j=1}^{v_{p}}\alpha_{j}p^{j}\otimes\sum_{i=1}^{v_{s}}\lambda_{i}x^{i}\\ \left(\sum_{j=1}^{v_{p}}\alpha_{j}p^{j}\right)\otimes\sum_{i=1}^{v_{s}}\lambda_{i}\mathbf{u}^{i}\end{bmatrix}+w, \tag{17a}\] \[=\sum_{j=1}^{v_{p}}\alpha_{j}\sum_{i=1}^{v_{s}}\lambda_{i}\underbrace{M\begin{bmatrix}p^{j}\otimes x^{i}\\ p^{j}\otimes\mathbf{u}^{i}\end{bmatrix}+w}_{x^{i,j}{}^{+}\in\mathcal{S}},\] (17b) \[=\sum_{j=1}^{v_{p}}\alpha_{j}\underbrace{\sum_{i=1}^{v_{s}}\lambda_{i}x^{i,j}{}^{+}}_{x^{j}{}^{+}\in\mathcal{S}},\] (17c) \[=\sum_{j=1}^{v_{p}}\alpha_{j}x^{j}{}^{+}\in\mathcal{S}, \tag{17d}\] where (17b) follows from the distributive property of the Kronecker product. As \(\mathcal{S}\) is convex, and from (14) we know that \(x^{i,j}{}^{+}\in\mathcal{S}\), we have \(x^{j}{}^{+}\in\mathcal{S}\) in (17c). Similarly, as \(x^{+}\) in (17d) is obtained as a convex combination of \(x^{j}{}^{+}\in\mathcal{S}\), it follows that \(x^{+}\in\mathcal{S}\), thus proving \((ii)\Rightarrow(i)\). We now formalize the problem addressed in the paper: **Problem 1**: _Given data matrices \((X^{+},X_{u}^{p})\) defined in (5) and the constraint sets (7), compute an invariant set \(\mathcal{S}\) and associated vertex control inputs \(\mathbf{u}^{i}\in\mathcal{U},\;i\in\mathbb{I}_{1}^{v_{s}}\) such that: \((i)\) all elements of the set \(\mathcal{S}\) satisfy the state constraints, \(\mathcal{S}\subseteq\mathcal{X}\); \((ii)\) the invariance condition (14) holds. 
We also aim at maximizing the size of the RCI set \(\mathcal{S}\)._ ## IV RCI Set Parameterization: Configuration-Constrained Polytopes We parameterize the RCI set \(\mathcal{S}\) as the following polytope \[\mathcal{S}\leftarrow\mathcal{S}(\mathbf{q})\triangleq\{x:Cx\leq\mathbf{q}\}\,,\;C\in\mathbb{R}^{n_{c}\times n}, \tag{18}\] whose facets have a fixed orientation determined by the user-defined matrix \(C\) and an offset \(\mathbf{q}\in\mathbb{R}^{n_{c}}\) to be computed. We enforce _configuration constraints_ (CC) [21] over \(\mathcal{S}(\mathbf{q})\), which enable us to switch between the vertex and hyperplane representations of \(\mathcal{S}(\mathbf{q})\) in terms of \(\mathbf{q}\). ### _Configuration constraints_ Given a polytope \(\mathcal{S}(\mathbf{q})\triangleq\{x:Cx\leq\mathbf{q}\}\), \(\mathbf{q}\in\mathbb{R}^{n_{c}}\), having \(v_{s}\) vertices, the configuration constraints over \(\mathbf{q}\) are described by the cone \[\mathbb{S}\triangleq\{\mathbf{q}:\;E\mathbf{q}\leq\mathbf{0}_{n_{c}v_{s}}\} \tag{19}\] with \(E\in\mathbb{R}^{n_{c}v_{s}\times n_{c}}\), whose construction is detailed in Appendix VIII-C. Let \(\{V^{i}\in\mathbb{R}^{n\times n_{c}},i\in\mathbb{I}_{1}^{v_{s}}\}\) be the matrices defining the vertex maps of \(\mathcal{S}(\mathbf{q})\), _i.e._, \(\mathcal{S}(\mathbf{q})=\mathrm{ConvHull}\{V^{i}\mathbf{q},\;i\in\mathbb{I}_{1}^{v_{s}}\}\) for a given \(\mathbf{q}\). Then, for a particular construction of \(\{V^{i},i\in\mathbb{I}_{1}^{v_{s}},E\}\), the configuration constraints (19) dictate that \[\forall\mathbf{q}\in\mathbb{S}\quad\Rightarrow\quad\mathcal{S}(\mathbf{q})=\mathrm{ConvHull}\{V^{i}\mathbf{q},\;i\in\mathbb{I}_{1}^{v_{s}}\}. \tag{20}\] For a user-specified matrix \(C\) parameterizing \(\mathcal{S}(\mathbf{q})\) in (18), we assume we are given matrices \(\{V^{i},i\in\mathbb{I}_{1}^{v_{s}},E\}\) satisfying (20). Such matrices are then used to enforce that the RCI set \(\mathcal{S}(\mathbf{q})\) is a CC-polytope. For further details regarding their construction, we refer the reader to Appendix VIII. **Remark 2**: _The choice of \(C\in\mathbb{R}^{n_{c}\times n}\) acts as a trade-off between the representational complexity of the set \(\mathcal{S}(\mathbf{q})\) and the conservativeness of the proposed approach._ ## V Computation of RCI Set and Invariance-Inducing Controller In this section, we enforce that the set \(\mathcal{S}(\mathbf{q})\) is RCI under vertex control inputs induced by \(\mathbf{u}^{i},i\in\mathbb{I}_{1}^{v_{s}}\). We recall that a particular construction of matrices \(\{V^{i}\in\mathbb{R}^{n\times n_{c}},i\in\mathbb{I}_{1}^{v_{s}},E\in\mathbb{R}^{n_{c}v_{s}\times n_{c}}\}\) satisfying (20) is given. We enforce that \(\mathcal{S}(\mathbf{q})\) is a configuration-constrained polytope through the following constraints \[E\mathbf{q}\leq\mathbf{0}. \tag{21}\] ### _System constraints_ Let us enforce the inclusion \(\mathcal{S}\subseteq\mathcal{X}\) and the input constraints \(\mathbf{u}^{i}\in\mathcal{U}\). Note that from (20), under the constraint (21), we have the following vertex map of \(\mathcal{S}(\mathbf{q})\), \[\mathcal{S}(\mathbf{q})=\mathrm{ConvHull}\{V^{i}\mathbf{q},\,\,i\in\mathbb{I}_{1}^{v_{s}}\} \tag{22}\] We now enforce the state and input constraints in (7) in terms of \(\mathbf{q}\) and \(\mathbf{u}^{i}\) as follows \[H_{x}V^{i}\mathbf{q}\leq h_{n_{x}},\,\,\,\,\,\,\,\,\,H_{u}\mathbf{u}^{i}\leq h_{n_{u}},\,\,\,\,\,\,\,\,\,\,\forall i\in\mathbb{I}_{1}^{v_{s}}. \tag{23}\]
### _Invariance condition_ We now enforce the invariance condition \({x^{i,j}}^{+}\in\mathcal{S}(\mathbf{q})\) in (14) for all \(w\in\mathcal{W}\) and for all feasible models \(M\in\mathcal{M}_{T}\). Recall that condition (14) is enforced at each vertex \(\{x^{i},\mathbf{u}^{i},i\in\mathbb{I}_{1}^{v_{s}}\}\) and at each vertex \(\{p^{j},j\in\mathbb{I}_{1}^{v_{p}}\}\) of the set \(\mathcal{P}\). Note from (22) that, under the constraints in (21), the vertices of \(\mathcal{S}(\mathbf{q})\) are \(\{x^{i}\triangleq V^{i}\mathbf{q},\,\,i\in\mathbb{I}_{1}^{v_{s}}\}\). Then, the successor state \({x^{i,j}}^{+}\) for parameter \(p^{j}\), input \(\mathbf{u}^{i}\), and disturbance \(w\) is given in terms of \(\mathbf{q}\) as follows \[{x^{i,j}}^{+}=M\begin{bmatrix}p^{j}\otimes V^{i}\mathbf{q}\\ p^{j}\otimes\mathbf{u}^{i}\end{bmatrix}+w. \tag{24}\] Thus, the inclusion in (14) is enforced by the inequality \[{C}{x^{i,j}}^{+}\leq\mathbf{q}-d\,\,\,\,\,\,\forall i\in\mathbb{I}_{1}^{v_{s}},\,\,\forall j\in\mathbb{I}_{1}^{v_{p}},\,\,\forall M\in\mathcal{M}_{T}, \tag{25}\] where \(d\triangleq\max\{Cw:w\in\mathcal{W}\}\) tightens the set \(\mathcal{S}(\mathbf{q})\) by the disturbance set \(\mathcal{W}\). Using vectorization in (1), and substituting (24), the inequality (25) can be written as follows \[C\left(\left(\begin{bmatrix}p^{j}\otimes V^{i}\mathbf{q}\\ p^{j}\otimes\mathbf{u}^{i}\end{bmatrix}\right)^{\top}\otimes I_{n}\right)\vec{M}\!\leq\!\mathbf{q}\!-\!d,\] \[\forall\vec{M}\in\mathcal{M}_{T}\triangleq\{\vec{M}:\bar{H}_{M}\vec{M}\leq\bar{h}_{M}\}, \tag{26}\] where we define \(\bar{H}_{M}=\begin{bmatrix}H_{M}\\ -H_{M}\end{bmatrix}\) and \(\bar{h}_{M}=\begin{bmatrix}\overrightarrow{h}_{w}\!+\!h_{M}\\ \overrightarrow{h}_{w}\!-\!h_{M}\end{bmatrix}\) with \(H_{M},h_{M},\overrightarrow{h}_{w}\) defined as in (11). Using strong duality (Lemma 2), the invariance condition (26) holds if and only if there exist multipliers \(\mathbf{\Lambda}^{ij}\in\mathbb{R}_{+}^{n_{c}\times 2Tn_{w}}\) for all \(i\in\mathbb{I}_{1}^{v_{s}}\), \(j\in\mathbb{I}_{1}^{v_{p}}\) satisfying \[\mathbf{\Lambda}^{ij}\bar{h}_{M}\leq\mathbf{q}-d, \tag{27a}\] \[\mathbf{\Lambda}^{ij}\bar{H}_{M}=C\left(\left(\begin{bmatrix}p^{j}\otimes V^{i}\mathbf{q}\\ p^{j}\otimes\mathbf{u}^{i}\end{bmatrix}\right)^{\top}\otimes I_{n}\right). \tag{27b}\] ### _Maximizing the size of the RCI set_ We characterize the size of the RCI set \(\mathcal{S}\subseteq\mathcal{X}\) as \[\mathrm{d}_{\mathcal{X}}(\mathcal{S}):=\min_{\epsilon}\{\|\epsilon\|_{1}\,\,\,\,\,\mathrm{s.t.}\,\,\,\,\mathcal{X}\subseteq\mathcal{S}\oplus\mathcal{D}(\epsilon)\}, \tag{28}\] where \(\mathcal{D}(\epsilon)\triangleq\{x:Dx\leq\epsilon\}\) is a polytope having user-specified normal vectors \(\{D_{i}^{\top},i\in\mathbb{I}_{1}^{m_{d}}\}\). Thus, we want to compute a desirably large RCI set \(\mathcal{S}\) by minimizing the 'distance' \(\mathrm{d}_{\mathcal{X}}(\mathcal{S})\) in (28). The user-specified matrix \(D\) allows us to maximize the size of the set \(\mathcal{S}\) in the directions of interest. Let \(\{y^{l},l\in\mathbb{I}_{1}^{v_{x}}\}\) be the known vertices of the state-constraint set \(\mathcal{X}\), _i.e._, \(\mathcal{X}=\mathrm{ConvHull}\{y^{l},l\in\mathbb{I}_{1}^{v_{x}}\}\). For each vertex \(y^{l}\) of \(\mathcal{X}\), let \(\mathbf{z}^{l}\in\mathcal{D}(\epsilon)\) and \(\mathbf{s}^{l}\in\mathcal{S}\) for \(l\in\mathbb{I}_{1}^{v_{x}}\) be the corresponding points in the sets \(\mathcal{D}\) and \(\mathcal{S}\). 
Then, the inclusion \(\mathcal{X}\subseteq\mathcal{S}\oplus\mathcal{D}(\epsilon)\) in (28) is equivalent to [18],

\[\forall l\in\mathbb{I}_{1}^{v_{x}},\,\,\exists\{\mathbf{z}^{l},\mathbf{s}^{l}\}:y^{l}=\mathbf{z}^{l}+\mathbf{s}^{l},\,\,D\mathbf{z}^{l}\leq\epsilon,\,\,C\mathbf{s}^{l}\leq\mathbf{q} \tag{29}\]

We now consider the following LP problem, which aims at computing the RCI set parameter \(\mathbf{q}\) and invariance-inducing vertex control inputs \(\{\mathbf{u}^{i},i\in\mathbb{I}_{1}^{v_{s}}\}\) for the LPV system (2). Our goal is to maximize the size of the RCI set \(\mathcal{S}(\mathbf{q})\) (or, equivalently, to minimize (28)), while satisfying the configuration constraints, the system constraints, the invariance condition, and the size constraints, for all \(i\in\mathbb{I}_{1}^{v_{s}}\), \(j\in\mathbb{I}_{1}^{v_{p}}\) and \(l\in\mathbb{I}_{1}^{v_{x}}\):

\[\begin{array}{rl}\min\limits_{\{\mathbf{q},\mathbf{u}^{i},\mathbf{\Lambda}^{ij},\mathbf{z}^{l},\mathbf{s}^{l},\epsilon\}}&\|\epsilon\|_{1}\\ \text{subject to:}&E\mathbf{q}\leq\mathbf{0}\quad\text{(configuration constraints (21))},\\ &H_{x}V^{i}\mathbf{q}\leq h_{n_{x}},\ H_{u}\mathbf{u}^{i}\leq h_{n_{u}}\quad\text{(system constraints (23))},\\ &\text{(27a)--(27b)}\quad\text{(invariance conditions)},\\ &y^{l}=\mathbf{z}^{l}+\mathbf{s}^{l},\ D\mathbf{z}^{l}\leq\epsilon,\ C\mathbf{s}^{l}\leq\mathbf{q}\quad\text{(size constraints (29))}.\end{array} \tag{30}\]

Given an optimal solution \(\{\mathbf{q},\mathbf{u}^{i},i\in\mathbb{I}_{1}^{v_{s}}\}\) of (30), with set vertices \(x^{i}=V^{i}\mathbf{q}\), an invariance-inducing control input at the measured state \(x_{t}\) is obtained via the vertex control law

\[u_{t}=\sum_{i=1}^{v_{s}}\lambda_{t}^{i,*}\mathbf{u}^{i}, \tag{31}\]

where \(\{\lambda_{t}^{i,*},\ i\in\mathbb{I}_{1}^{v_{s}}\}\) are computed by solving the following LP:

\[\begin{array}{ll}\{\lambda_{t}^{i,*}\}=&\arg\min\sum_{i=1}^{v_{s}}\lambda_{t}^{i}\\ &\{\lambda_{t}^{i}\}\\ \text{subject to:}&\sum_{i=1}^{v_{s}}\lambda_{t}^{i}x^{i}=x_{t},\ \ \ 0\leq\lambda_{t}^{i}\leq 1.\end{array} \tag{32}\]

The LP problem (32) is solved at each time step \(t\), upon the availability of the new state measurement \(x_{t}\), to evaluate the control law (31).
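A possible implementation of the online step (31)-(32) with `scipy` is sketched below; the vertex matrix and vertex inputs are assumed to come from the offline LP (30).

```python
# Hedged sketch of the vertex control law (31) with multipliers from LP (32).
import numpy as np
from scipy.optimize import linprog

def vertex_control(x_t, X_vert, U_vert):
    """x_t: state (n,); X_vert: vertices (n, v_s); U_vert: vertex inputs (m, v_s)."""
    v_s = X_vert.shape[1]
    res = linprog(c=np.ones(v_s),               # min sum_i lambda_i, eq. (32)
                  A_eq=X_vert, b_eq=x_t,        # sum_i lambda_i x^i = x_t
                  bounds=[(0.0, 1.0)] * v_s,
                  method="highs")
    assert res.success, "x_t is not expressible by the set vertices"
    return U_vert @ res.x                       # u_t = sum_i lambda_i^* u^i, eq. (31)
```

Each evaluation only requires solving a small LP in the \(v_{s}\) multipliers.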
## VI Numerical examples

We demonstrate the effectiveness of the proposed approach via two numerical examples. All computations are carried out on an i7 1.9-GHz Intel core processor with 32 GB of RAM running MATLAB R2022a.

### _Example 1: LPV Double integrator_

We consider the following LPV double integrator data-generating system [9],

\[x_{t+1}=\begin{bmatrix}1+\delta_{t}&1+\delta_{t}\\ 0&1+\delta_{t}\end{bmatrix}x_{t}+\begin{bmatrix}0\\ 1+\delta_{t}\end{bmatrix}u_{t}+w_{t}, \tag{33}\]

where \(|\delta_{t}|\leq 0.25\), with constraints \(\mathcal{X}\triangleq\{x:|x|\leq[5~{}5]^{\top}\}\), \(\mathcal{U}\triangleq\{u:|u|\leq 1\}\), and \(\mathcal{W}\triangleq\{w:|w|\leq[0.25~{}0]^{\top}\}\). This system can be brought to the LPV form (2) with

\[A^{1}=\begin{bmatrix}1.25&1.25\\ 0&1.25\end{bmatrix},A^{2}=\begin{bmatrix}0.75&0.75\\ 0&0.75\end{bmatrix},B^{1}=\begin{bmatrix}0&1.25\end{bmatrix}^{\top},B^{2}=\begin{bmatrix}0&0.75\end{bmatrix}^{\top}, \tag{34}\]

using \(p_{t,1}=2(0.25+\delta_{t})\), \(p_{t,2}=2(0.25-\delta_{t})\). This corresponds to the simplex scheduling-parameter set \(\mathcal{P}=\{p\in\mathbb{R}^{2}:p\in[0,1],p_{1}+p_{2}=1\}=\mathrm{ConvHull}(\begin{bmatrix}1\\ 0\end{bmatrix},\begin{bmatrix}0\\ 1\end{bmatrix})\). The system matrices in (34) are _unknown_ and only used to gather the data. A single state-input-scheduling trajectory of \(T=100\) samples is gathered by exciting system (33) with inputs uniformly distributed in \([-1,1]\); a sketch of this data-collection step is given below. The data satisfy the rank condition given in Proposition 1, _i.e._, \(\mathrm{rank}(X_{u}^{p})=(n+m)s=6\). We choose matrix \(C\) defining an RCI set with representational complexity given by \(n_{c}=50\), _i.e._, \(C\in\mathbb{R}^{50\times 2}\), such that \(\mathcal{S}(\mathbf{1}_{50})\) is an entirely simple polytope. In particular, each row of \(C\) is chosen as follows [21, Remark 3]

\[C^{i}=\left[\cos\left(\frac{2\pi(i-1)}{n_{c}}\right),\ \sin\left(\frac{2\pi(i-1)}{n_{c}}\right)\right],i\in\mathbb{I}_{1}^{n_{c}}. \tag{35}\]

Based on the selected \(C\), we build \(\{V^{i},i\in\mathbb{I}_{1}^{50}\}\) and \(E\) satisfying the configuration constraints in (20). We refer the reader to Appendix VIII for the details of the construction of \(\{V^{i},i\in\mathbb{I}_{1}^{v_{s}}\}\) and \(E\). We set \(D=C\), defining the distance in (28). The RCI set \(\mathcal{S}(\mathbf{q})\) obtained by solving the LP problem (30) is shown in Fig. 1. The total construction and solution time is \(40.6\) s. We compare the proposed approach to a model-based method, where we compute a CC-RCI set \(\mathcal{S}_{\mathrm{model}}\) using the knowledge of the true system matrices. In particular, we fix the model matrix \(M\) in (14) to the true system matrices \(M=\begin{bmatrix}A^{1},A^{2},B^{1},B^{2}\end{bmatrix}\) given in (34), and compute \(\mathcal{S}_{\mathrm{model}}\) by solving an LP minimizing \(\mathrm{d}_{\mathcal{X}}(\mathcal{S}_{\mathrm{model}})\). In the model-based case, the invariance constraints (25) are directly imposed for the given fixed \(M\). The volume of the RCI set \(\mathcal{S}\) obtained with the proposed data-driven algorithm is \(25.43\), while that provided by the model-based method is \(24.56\), which shows that the proposed data-based approach generates RCI sets of comparable size to those of the model-based method. Fig. 2 depicts closed-loop state trajectories starting from some of the vertices of the RCI set (left panel), and the corresponding control input trajectories (right panel). The maximal RCI (MRCI) set \(\Omega_{\infty}\) computed using a model-based geometric approach [6, Algorithm 10.5] is also plotted. The state trajectories are obtained by simulating the true system (33) in closed-loop with the invariance-inducing controller \(u_{t}\) in (31), computed by solving the LP (32) at each time instance. Note that for each closed-loop simulation, a different realization of the scheduling signal \(p\), taking values in the given interval \([0,1]\), is generated. Moreover, during each closed-loop simulation, different realizations of the disturbance signal \(w_{t}\in\mathcal{W}\) act on the system.

Fig. 1: Example 1: State constraint set \(\mathcal{X}\) (yellow), model-based CC-RCI set \(\mathcal{S}_{\mathrm{model}}\) (dashed-red), data-based CC-RCI set \(\mathcal{S}\) (green).
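For reference, a minimal simulation of the data-collection step of Example 1 is sketched below; the disturbance and excitation signals are drawn uniformly, as in the text, and the final line checks the excitation condition of Proposition 1.

```python
# Hedged sketch: gathering T = 100 samples from the double integrator (33).
import numpy as np

rng = np.random.default_rng(1)
T = 100
x = np.zeros(2)
X_traj, U_traj, P_traj = [x.copy()], [], []
for _ in range(T):
    delta = rng.uniform(-0.25, 0.25)
    u = rng.uniform(-1.0, 1.0)                       # exciting input
    w = np.array([rng.uniform(-0.25, 0.25), 0.0])    # disturbance in W
    A_t = np.array([[1 + delta, 1 + delta],
                    [0.0,       1 + delta]])
    B_t = np.array([0.0, 1 + delta])
    x = A_t @ x + B_t * u + w
    U_traj.append(u)
    P_traj.append([2 * (0.25 + delta), 2 * (0.25 - delta)])
    X_traj.append(x.copy())

X = np.array(X_traj).T                               # x_1 ... x_{T+1}
cols = [np.concatenate([np.kron(P_traj[t], X[:, t]),
                        np.kron(P_traj[t], [U_traj[t]])]) for t in range(T)]
X_u_p = np.stack(cols, axis=1)
print("rank:", np.linalg.matrix_rank(X_u_p), "(required: (n+m)s = 6)")
```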
Fig. 2: Example 1: Left panel: CC-RCI set \(\mathcal{S}\) (green) with closed-loop state trajectories and MRCI \(\Omega_{\infty}\) (dashed-red); Right panel: corresponding control input trajectories and input constraints (dashed red).

The results show that the approach guarantees robust invariance w.r.t. all possible scheduling signals taking values in the given set, as well as in the presence of a bounded but unknown disturbance, while respecting the state constraints. The corresponding input trajectories shown in Fig. 2 (right panel) show that the input constraints are also satisfied. Lastly, we analyse the effect of the number \(T\) of data samples on the size of the RCI set. The volume of the RCI set and the LP objective \(\mathrm{d}_{\mathcal{X}}(\mathcal{S}(\mathbf{q}))\) for varying \(T=30,50,100\) are reported in Table I. As \(T\) increases, the feasible model set \(\mathcal{M}_{T}\) shrinks progressively, \(\mathcal{M}_{T+1}\subseteq\mathcal{M}_{T}\); thus the constraint \(\forall M\in\mathcal{M}_{T}\) is less restrictive, resulting in an increased size of the RCI set.

### _Example 2: Van der Pol oscillator embedded as LPV_

We consider the Euler-discretized LPV representation of the Van der Pol oscillator system [15] as the data-generating system in the form (2) with

\[\left[\begin{array}{c|c}A^{1}&A^{2}\end{array}\right]\!=\!\left[\begin{array}{cc|cc}1&T_{s}&1&T_{s}\\ -T_{s}&1&-T_{s}&2\end{array}\right],\quad B^{1,2}=\left[\begin{array}{c}0\\ T_{s}\end{array}\right], \tag{36}\]

where \(T_{s}=0.1\) is the sampling time. The scheduling parameters are chosen as \(p_{t,1}=1-\mu T_{s}(1-x_{t,1}^{2})\) with \(\mu=2\) and \(p_{t,2}=1-p_{t,1}\). The system constraints are \(\mathcal{X}\triangleq\{x:\left\|x\right\|_{\infty}\leq 1\}\), \(\mathcal{U}\triangleq\{u:\left|u\right|\leq 1\}\) and \(\mathcal{W}\triangleq\{w:\left|w\right|\leq[10^{-3}\ 10^{-3}]^{\top}\}\). The scheduling parameter set is \(\mathcal{P}\triangleq\{p:p_{1}\in[1-\mu T_{s},1],p_{2}\in[0,\mu T_{s}],p_{1}+p_{2}=1\}=\mathrm{ConvHull}\left(\begin{bmatrix}1&1-\mu T_{s}\\ 0&\mu T_{s}\end{bmatrix}\right)\). The system matrices \(\{A^{1},A^{2},B\}\) are _unknown_ and only used to gather the data. A single state-input-scheduling trajectory of \(T=100\) samples is gathered by exciting system (36) with inputs uniformly distributed in \([-1,1]\). The data satisfy the rank condition given in Proposition 1, _i.e._, \(\mathrm{rank}(X_{u}^{p})=(n+m)s=6\). The matrix \(C\) parameterizing the RCI set is selected with \(n_{c}=30\). Each row of \(C\in\mathbb{R}^{30\times 2}\) is set according to (35), such that \(\mathcal{S}(\mathbf{1}_{30})\) is an entirely simple polytope. Based on the chosen \(C\), we build the matrices \(\{V^{i},i\in\mathbb{I}_{1}^{30}\}\) and \(E\) satisfying the configuration constraints in (20). We set \(D=C\), defining the distance in (28). The RCI set \(\mathcal{S}(\mathbf{q})\) obtained by solving the LP problem (30) is shown in Fig. 3. The total construction and solution time is \(21.5\) s. For comparison, as in Example \(1\), we also compute the CC-RCI set \(\mathcal{S}_{\mathrm{model}}\) with the model-based approach, using the knowledge of the true system matrices given in (36). The volume of the RCI set \(\mathcal{S}\) with the proposed data-driven algorithm is \(1.59\), while that of \(\mathcal{S}_{\mathrm{model}}\) is \(1.62\), which shows that the proposed method is able to generate RCI sets of similar size to those of the model-based method.
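Both examples instantiate the template matrix \(C\) through (35); for completeness, a short sketch of this construction:

```python
# Hedged sketch of eq. (35): n_c unit normals uniformly spread on the circle,
# so that S(1) is a regular (entirely simple) polygon.
import numpy as np

def template_C(n_c):
    ang = 2.0 * np.pi * np.arange(n_c) / n_c
    return np.column_stack([np.cos(ang), np.sin(ang)])

C = template_C(30)     # n_c = 30 in Example 2; Example 1 uses n_c = 50
```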
Fig. 4 shows closed-loop state trajectories starting from the vertices of the RCI set (left panel) for different realizations of the scheduling and disturbance signals during the closed-loop simulation. The corresponding invariance-inducing control inputs (31), obtained by solving (32), are depicted in Fig. 4 (right panel); they satisfy the input constraints. Finally, the volume of the RCI set \(\mathrm{vol}(\mathcal{S}(\mathbf{q}))\) and the LP objective \(\mathrm{d}_{\mathcal{X}}(\mathcal{S}(\mathbf{q}))\) for varying \(T=20,50,100\) are reported in Table II. As \(T\) increases, the feasible model set \(\mathcal{M}_{T}\) becomes smaller, resulting in an increased size of the RCI set.

\begin{table} \begin{tabular}{|c|c|c|c||c|} \hline \(T\) & \(20\) & \(50\) & \(100\) & \(\mathcal{S}_{\mathrm{model}}\) \\ \hline volume & 1.50 & 1.56 & 1.59 & 1.62 \\ \hline \(\mathrm{d}_{\mathcal{X}}(\mathcal{S}(\mathbf{q}))\) & 19.04 & 18.81 & 18.67 & 18.54 \\ \hline \end{tabular} \end{table} TABLE II: Example \(2\): Size of the RCI set _vs_ number of data samples \(T\).

Fig. 4: Example 2: Left panel: CC-RCI set \(\mathcal{S}\) with closed-loop state trajectories; Right panel: corresponding control trajectories and input constraints (dashed red).

Fig. 3: Example 2: State constraint set \(\mathcal{X}\) (yellow), model-based CC-RCI \(\mathcal{S}_{\mathrm{model}}\) (dashed-red), proposed data-driven CC-RCI set \(\mathcal{S}\) (green). Note that \(\mathcal{S}_{\mathrm{model}}\) and \(\mathcal{S}\) are nearly overlapping.

## VII Conclusions

The paper proposed a data-driven approach to compute a polytopic CC-RCI set and an associated vertex control law for LPV systems. The RCI set is parameterized as a configuration-constrained polytope, which enables switching between its vertex and hyperplane representations. A data-based invariance condition was proposed which utilizes a single state-input-scheduling trajectory, without requiring the identification of an LPV model of the system. The RCI sets are computed by solving a single LP problem. The effectiveness of the proposed algorithm in generating RCI sets from a 'small' number of collected data samples was shown via two numerical examples. As future work, we will consider synthesizing parameter-dependent RCI sets for LPV systems in a data-driven setting.
We present a data-driven method to synthesize robust control invariant (RCI) sets for linear parameter-varying (LPV) systems subject to unknown but bounded disturbances. A finite-length data set consisting of state, input, and scheduling signal measurements is used to compute an RCI set and invariance-inducing controller, without identifying an LPV model of the system. We parameterize the RCI set as a configuration-constrained polytope whose facets have a fixed orientation and variable offset. This allows us to define the vertices of the polytopic set in terms of its offset. By exploiting this property, an RCI set and associated vertex control inputs are computed by solving a single linear programming (LP) problem, formulated based on a data-based invariance condition and system constraints. We illustrate the effectiveness of our approach via two numerical examples. The proposed method can generate RCI sets that are of comparable size to those obtained by a model-based method in which exact knowledge of the system matrices is assumed.
2309.16553
MatrixCity: A Large-scale City Dataset for City-scale Neural Rendering and Beyond
Neural radiance fields (NeRF) and its subsequent variants have led to remarkable progress in neural rendering. While most recent neural rendering works focus on objects and small-scale scenes, developing neural rendering methods for city-scale scenes is of great potential in many real-world applications. However, this line of research is impeded by the absence of a comprehensive and high-quality dataset, yet collecting such a dataset over real city-scale scenes is costly, sensitive, and technically difficult. To this end, we build a large-scale, comprehensive, and high-quality synthetic dataset for city-scale neural rendering research. Leveraging the Unreal Engine 5 City Sample project, we develop a pipeline to easily collect aerial and street city views, accompanied by ground-truth camera poses and a range of additional data modalities. Flexible controls over environmental factors like light, weather, and human and car crowds are also available in our pipeline, supporting the needs of various tasks covering city-scale neural rendering and beyond. The resulting pilot dataset, MatrixCity, contains 67k aerial images and 452k street images from two city maps of total size $28km^2$. On top of MatrixCity, a thorough benchmark is also conducted, which not only reveals unique challenges of the task of city-scale neural rendering, but also highlights potential improvements for future works. The dataset and code will be publicly available at our project page: https://city-super.github.io/matrixcity/.
Yixuan Li, Lihan Jiang, Linning Xu, Yuanbo Xiangli, Zhenzhi Wang, Dahua Lin, Bo Dai
2023-09-28T16:06:02
http://arxiv.org/abs/2309.16553v1
# MatrixCity: A Large-scale City Dataset for City-scale Neural Rendering and Beyond

###### Abstract

Neural radiance fields (NeRF) and its subsequent variants have led to remarkable progress in neural rendering. While most recent neural rendering works focus on objects and small-scale scenes, developing neural rendering methods for city-scale scenes is of great potential in many real-world applications. However, this line of research is impeded by the absence of a comprehensive and high-quality dataset, yet collecting such a dataset over real city-scale scenes is costly, sensitive, and technically difficult. To this end, we build a large-scale, comprehensive, and high-quality synthetic dataset for city-scale neural rendering research. Leveraging the Unreal Engine 5 City Sample project, we develop a pipeline to easily collect aerial and street city views, accompanied by ground-truth camera poses and a range of additional data modalities. Flexible controls over environmental factors like light, weather, and human and car crowds are also available in our pipeline, supporting the needs of various tasks covering city-scale neural rendering and beyond. The resulting pilot dataset, MatrixCity, contains 67k aerial images and 452k street images from two city maps of total size \(28km^{2}\). On top of MatrixCity, a thorough benchmark is also conducted, which not only reveals unique challenges of the task of city-scale neural rendering, but also highlights potential improvements for future works. The dataset and code will be publicly available at our project page: [https://city-super.github.io/matrixcity/](https://city-super.github.io/matrixcity/).

## 1 Introduction

Realistic rendering of city-scale scenes is a crucial component of many real-world applications, including aerial surveying, virtual reality, film production, and gaming. While NeRF [22] has made notable advancements in rendering objects and small-scale scenes, only a few early attempts [30, 33, 37] have sought to extend NeRF and its variants to larger city-scale scenes. Due to the paucity of benchmark datasets, the complexity and challenges of city-scale neural rendering have not been thoroughly investigated. Collecting a comprehensive and high-quality city-scale dataset in the real world is time-consuming and resource-intensive, and can be technically difficult. Moreover, it is also infeasible to control environmental factors, such as lighting conditions, weather patterns, and the presence of transient objects like pedestrians and vehicles. Thus, existing urban datasets [17, 33, 8] are limited to a few independent scenes rather than comprehensive city maps, failing to capture the diversity of urban environments. Furthermore, existing datasets often feature monotonous viewing angles, such as street-level [30] or aerial imagery [17, 33, 8], leading to partial city modeling with incomplete building geometries and ground-level details. Even if sufficient real-world city data is collected, legal or commercial issues can limit its accessibility, e.g., the Block-NeRF dataset [30] only provides access to 1km of street data, and the UrbanScene3D dataset [17] offers only two real-world scenarios. Such restrictions significantly hinder the ability of researchers to advance the field of city-scale neural rendering. This paper presents _MatrixCity_, a comprehensive and high-quality synthetic dataset to support the research of city-scale neural rendering as well as other extended tasks.
Specifically, _MatrixCity_ has several distinguished features: 1) _High Quality._ It is built in the City Sample project1 of Unreal Engine 52 with advanced graphics technologies, which allows for the public release of rendered images3. As shown in Figure 1, this engine offers rich city details of fine-grained textures and geometries, with photo-realistic rendering quality, realistic lighting and shadow effects, and accurate ground-truth camera poses. 2) _Scale and Diversity._ To create the MatrixCity dataset, we develop a plugin that can automatically capture data from the maps of the two cities provided by Unreal Engine 5, resulting in 172k and 347k images, respectively. These images cover areas equivalent to \(2.7km^{2}\) and \(25.3km^{2}\) in the real world. The captured regions showcase a broad spectrum of urban landscapes, mirroring the complexity and heterogeneity of genuine cities. 3) _Controllable Environments._ Our developed plugin provides flexible control over a range of environmental factors that are uncontrollable in the real world, including lighting, weather, and human and car crowds. By decoupling these various factors, we are able to provide aligned data that can support in-depth research on city-scale neural rendering. 4) _Multiple Properties._ The plugin can also customize data collection trajectories and extract multiple ground-truth components, including depth, normal, and decomposed components of reflectance (e.g., diffuse, specular, metallic, etc.). Such advanced features enable researchers not only to perform a range of city-scale neural rendering tasks under varying conditions but also to tackle other extended tasks, for example depth estimation and inverse rendering.

Footnote 1: [https://www.unrealengine.com/marketplace/product/city-sample](https://www.unrealengine.com/marketplace/product/city-sample)

Footnote 2: [https://www.unrealengine.com/](https://www.unrealengine.com/)

Footnote 3: [https://www.unrealengine.com/eula/unreal](https://www.unrealengine.com/eula/unreal)

Our benchmark study demonstrates the value of MatrixCity in advancing city-scale neural rendering research. We experiment with several state-of-the-art neural rendering methods to conduct empirical analyses, first on aerial and street data respectively, and then on the fused data from both modes. Preliminary results indicate that even with these advanced methods, city-scale neural rendering remains a distant goal. Specifically, we identify several challenges: 1) In aerial data, learning high-rise city regions poses a greater challenge than low-rise/ground areas due to complex building structures and occlusion; 2) Street data contains significantly more details than aerial data, which raises challenges for model capacity. Although block-size aerial data modeling is feasible, modeling street data of the same size may be more difficult; 3) The view direction and level of detail vary significantly between the two modes of data, making it difficult to train on them together; 4) Current models generally perform poorly on smaller objects with more details and on reflective buildings in urban scenes. These findings present significant opportunities to advance research in city-scale neural rendering. In summary, our contributions are as follows: * We construct a large-scale, high-quality dataset for city-scale neural rendering, named _MatrixCity_. This dataset emphasizes attributes pivotal to city-scale scenes, encompassing elements like dynamic interactions and lighting conditions.
MatrixCity contains both aerial and street-level images of complete city maps, with extra depth, normal, and decomposed BRDF materials capable of supporting multiple tasks. * We develop a plugin that leverages Unreal Engine 5 for automatic high-quality city data collection, allowing researchers to flexibly control lighting, weather, and transient objects. The plugin simplifies data collection for different task settings, making it a valuable tool for the community, where users can build up advanced datasets as demanded. * We conduct extensive studies on the MatrixCity dataset, which reveal some key challenges of city-scale neural rendering and hopefully facilitate future research in this area.

## 2 Related work

### 3D Neural Representation at City Scale

City-scale reconstruction has been studied for decades. Previous methods for representing the geometry of a city mainly relied on raw point clouds acquired through either structure-from-motion [1] or Lidar sensors [12]. Recently, with the emergence of Neural Radiance Fields (NeRF) [22], novel view synthesis has become more efficient and effective. Numerous methods in this direction have further improved the speed [18, 28, 10, 23, 7] and accuracy [40, 3, 4] of reconstruction. NeRF is also used in a wide range of applications beyond novel view synthesis, such as inverse rendering [5, 27, 42, 26], surface reconstruction [34, 36, 2, 38], and HDR synthesis [11, 14, 21]. Although these methods demonstrate acceptable performance on small objects, grappling with urban scenes remains a significant challenge due to the limited representation capability of NeRF. Based on these observations, recent methods have been proposed for reconstructing radiance fields in urban-scale scenes. NeRF-W [20] captured per-image appearance variations and separated the entire scene into static and transient components, enabling the modeling of unstructured collections of in-the-wild photographs. Block-NeRF [30] extends NeRF-W [20] to model a neighborhood of San Francisco by dividing up urban environments into individual small Block-NeRFs. Mega-NeRF [33] also adopts the advantages of NeRF-W [20] and Block-NeRF [30] by first decomposing large-scale fly-view scenes into small spatial cells and then training these cells in parallel. Urban Radiance Fields [25] synthesizes novel RGB images and extracts 3D surfaces from a combination of panoramas and Lidar inputs in urban environments. BungeeNeRF [37] introduces progressive modeling with multi-level supervision to handle city-scale data with varying levels of detail. Despite the progress made by the aforementioned methods [20, 30, 33, 25, 37], there is no unified dataset for evaluating these methods due to their varying settings. Significant challenges still remain in the city reconstruction problem, particularly in integrating aerial data and street-level data with varying levels of detail.

### NeRF-based Datasets and Benchmarks

Several benchmarks based on NeRF have been proposed in the past two years, which focus on the effective and better reconstruction of single objects [22, 18, 13], indoor scenes [9], or outdoor unbounded scenes [4, 15]. While there have been some good attempts to collect high-quality large-scale datasets using high-precision acquisition equipment [8, 17, 30, 19], as shown in Table 1, the high acquisition costs limit their size and scale. Some datasets are limited to only a few independent scenes that are far from urban-scale, or are not fully open-source due to privacy and commercial reasons.
For instance, the Mill 19 dataset [33] only includes two suburban-like scenes, and the Quad 6K [8] and OMMO [19] datasets focus on a limited number of independent scenes that are not city-scale. The Waymo Block-NeRF [30] dataset only grants access to 100 seconds of driving data, and the UrbanScene3D dataset [17] only releases two real-world scenarios. Additionally, existing real-world datasets commonly provide only one type of image data, such as street-level or aerial imagery, which makes the modeling of buildings incomplete [17, 19]. Collecting real data in outdoor scenes poses significant challenges due to difficulties in controlling environmental factors such as pedestrian movement, weather, and lighting. As a result, a standard and comprehensive benchmark for city-scale neural rendering has not yet been established. Existing outdoor NeRF-based benchmarks like OMMO [19] are too limited to explore and analyze urban implicit scene representations. To address these issues, we develop a plugin in Unreal Engine 5 to easily collect aerial and street city data with ground-truth camera poses. We build a city-scale, multi-task dataset that includes both fly-view and street-view images and propose a new city-scale benchmark for neural rendering. We also provide a detailed analysis of the challenges and opportunities of NeRF in urban environments.

## 3 MatrixCity Dataset

The MatrixCity dataset aims to introduce a new challenging benchmark to the field of city-scale neural rendering by providing comprehensive city maps consisting of both aerial and street-level data. In addition to RGB images, we also offer normal, depth, and decomposed reflectance properties to support other tasks. Moreover, we can flexibly control environmental factors, including light direction and intensity, fog density, and human or vehicle crowding, to enable simulating real-world dynamic situations. Sec 3.1 describes our data construction procedure. Sec 3.2 and 3.3 provide detailed statistics and characteristics of this dataset.

### Dataset Construction

**City Data Collection.** Densely captured 2D images with sufficient multi-view supervision are required to learn a faithful scene geometry, especially for large city scenes. Collecting a sufficient amount of data in Unreal Engine 5 for city scenes is a complex process that requires adjusting camera trajectories to capture specific viewpoints. Although Unreal Engine 5 offers a movie render queue plugin for high-quality image rendering, it can be time-consuming and inflexible to manually set up the position, rotation, and frame number of key points. For urban settings, it is not practical to manually set camera trajectories in a city-scale environment. To address this, we develop a plugin that automatically generates camera trajectories, reducing the need for manual annotation and increasing the efficiency of data collection. The developed plugins can also be used in other Unreal Engine projects. For **aerial-view** collection, we divide the city map into 10 blocks based on building heights (Figure 2 (a)) to better capture the building details. We provide the height of every collected block in Table 6. Note that current neural scene representations are generally suitable for bounded scenes, and scenes with large variation in height may pose great difficulty for accurate ray sampling.
We then generate trajectories using the camera height and the coordinates of the four vertices of the corresponding block as inputs (Figure 2 (b)). Our plugin places four cameras at each capturing location, spaced \(90^{\circ}\) apart in the yaw direction and sharing identical pitch values. The pitch value for the floor area is \(-45^{\circ}\), while it is \(-60^{\circ}\) for the high-rise area, as there are more occlusions at higher levels. For **street-view** data collection, we manually annotate the start and end points of each road and use them as inputs to generate straight-line trajectories with our plugin. We position six perspective cameras at each capturing location to render a cube map, providing a comprehensive view of the surroundings. Note that the cube map can be naturally transformed into panorama images, which are suitable for capturing as much of the street view as possible with a limited number of camera positions. Figure 2 (c) shows the resulting street-level trajectories for a specific block. Our plugin saves the generated camera trajectories as sequence assets of Unreal Engine, which can easily be reused to render images with different environmental settings. We will enhance our plugin to support more complex camera trajectories in the future, enabling us to generate even higher-quality city-scale data. Note that we adopt auto-exposure to collect data. If we used the same fixed exposure for the two types of data, the street views would be under-exposed while the aerial views would be over-exposed. HDR images will be included in the future. **Quality Control.** To build a high-quality dataset for city-scale neural rendering, we utilize several mechanisms to ensure that the rendered images are of high quality and that the camera poses are accurate. Rather than using the more efficient real-time rendering pipeline, which often produces flickering images, we use the movie render queue plugin to render images with movie-level standards. Additionally, we set the Engine Scalability Settings to the highest level, turn off motion blur, and use anti-aliasing during the rendering process to achieve the highest possible image quality. We inspect the images thoroughly after rendering to remove any aerial views that look outside the map boundaries and to ensure that there is no object clipping. Unreal Engine 5 provides ground-truth camera poses, which we have further verified through additional experiments to ensure their accuracy. Even with a small set of street data, training the MipNeRF-360 [4] model yields almost perfect novel view synthesis results, as demonstrated in Figure 6. This confirms the accurate annotation of our camera poses. Overall, by adopting these mechanisms, we ensure that the MatrixCity dataset provides high-quality images with precise camera poses, which is crucial for city-scale neural rendering research. By excluding noise sources like inaccurate poses and motion blur, we intend to gain more insight into the intrinsic challenges of city scenes, since isolating such noise from real data is generally infeasible. **Dynamic Environments.** The City Sample project of Unreal Engine 5 provides a plethora of powerful functions that allow for the creation of dynamic city scenes. As shown in Figure 1, we have the ability to control the presence of moving people and cars in the scene, adding to the realism of the environment.
Additionally, we can quantitatively adjust the angle and intensity of the lighting to emulate the natural changes in light throughout a day, as demonstrated in Figure 3(a). We can also control the amount of fog in a scene, as shown in Figure 3(b), providing another quantitative tool for enhancing realism. Taken together, these functions allow for the simulation of almost all basic dynamic situations found in the real world. In addition, general camera noise like motion blur and defocus blur, shown in Figure 4, can be simulated in Unreal Engine. Such varying lighting, weather conditions, moving objects, and camera noise will lead to more realistic and accurate city-scale neural rendering.

Figure 2: Illustration of data collection in the _small city_ in Unreal Engine 5. (a) Aerial block split for the entire _small city_; (b&c) Camera aerial and street trajectory of block 4 (visualized in bird-eye views) used in our plugin for data collection.

Figure 4: Examples of camera motion blur and defocus blur.

**Multiple Properties.** Figure 1 (c) and Figure 3 (c) illustrate the various intermediate products generated by Unreal Engine during the rendering process, including depth, normal, and decomposed components (diffuse, specular, metallic, and roughness). These attributes are especially important for studies on inverse rendering and semantic analysis, which are popular for city scene analysis. Our plugin offers the ability to extract these properties without incurring any additional costs, properties which can be prohibitively expensive to obtain in real-world scenarios.

### Dataset Statistics

The MatrixCity dataset comprises two scenes from the City Sample project: Small City, covering an area of \(2.7km^{2}\), and Big City, spanning \(25.3km^{2}\). In total, we collect 67k aerial images and 452k street-level images to ensure comprehensive coverage. As shown in Table 1, many current datasets [17, 8, 19] do not offer dense image captures of a whole city but only small independent scenes. Although the Waymo Block-NeRF [30] dataset densely covers an area of approximately \(960m\times 570m\), it only contains street data, resulting in incomplete reconstructed buildings. None of the existing datasets have quantitatively controllable environments, including light, weather, and human and car crowds, nor multiple properties like normal, depth, the decomposed reflectance components, etc., which restricts the in-depth study of city-scale neural rendering in dynamic scenes and other extension tasks. KITTI-360 [16], NuScenes [6], and Waymo Open [29] are not designed for neural rendering purposes and only provide limited camera viewpoints.

### Dataset Characteristics

**High Quality.** For constructing the MatrixCity dataset, we use the City Sample project and the movie-level plugin named movie render queue of Unreal Engine 5, which has been demonstrated to reproduce _The Matrix Awakens_. Unlike games, the rendering process is not real-time and costs heavy computation with pre-defined camera poses. Such movie-level rendering quality enables the collection of realistic city-scale data similar to the real world with fully dynamic environment factors. Rigorous quality control is performed during the collection phase. **Large-scale and Diversity.** The City Sample project of Unreal Engine 5 includes two cities with large-scale coverage, which captures varying buildings, pedestrians, signs, vehicles, and lighting conditions, resulting in more diverse
and realistic outdoor scenes that are representative of real-world cities. This ensures that researchers have access to a broad range of data to train their models on, leading to more accurate and effective city-scale neural rendering.

Figure 3: Illustration of controlling dynamic environment factors in Unreal Engine 5 such as illumination (a), fog density (b) and decomposed reflectance (c).

**Controllable Environments.** Unlike real-world data, we can control the lighting angle and intensity, the density and height of fog, and the density of the flow of pedestrians and vehicles in a fine-grained manner. This flexibility enables us to generate dynamic scenarios of city scenes that would be difficult to capture in real-world data. This level of control over the environment allows for more detailed exploration of how different factors influence the training process of city-scale neural rendering. **Multiple Properties.** Our developed plugin is able to extract additional information such as depth, normal, and the decomposed reflectance components with minimal extra cost in Unreal Engine 5. This information supports additional tasks such as depth estimation and inverse rendering, which cannot be supported by real-world data without excessive labor. **Applications.** By exploring neural rendering models on MatrixCity, we can transfer the algorithms to real-world urban scenes. This may facilitate the creation of scenes for applications ranging from video games and virtual reality to autonomous driving and virtual studios. Additionally, these rendered environments can enable seamless interactions with digital humans within the metaverse.

## 4 Experiments

In this section, we mainly investigate the quality of novel view rendering and reveal the challenges of adapting existing SOTA methods to this task. Additional studies (e.g., dynamic scenes, lighting control) are provided in Appendix D.

### Datasets and Metrics

**MatrixCity benchmark.** The MatrixCity dataset contains two city maps: Small City and Big City. Following the common practice in surveying and mapping that adjacent images should have an overlap of 70%-80%, we set a camera capture location every 40 m for aerial data collection and every 5 m for street data collection. Small City includes 6k aerial images and 30k street-level images, while Big City has 60k aerial images and 286k street-level images. Note that we manually remove the aerial images that look outside the map boundary. Also, we remove the street images that look straight down, following nerfstudio [31], which crops the bottom 20% of the 360° images to reduce useless information. The ratio of the training set to the testing set is 8:1. To ensure both completeness of training perspectives and generalization ability in testing, the test set is collected separately with no location overlap with the training set. For aerial data, the yaw direction is randomized from \(0^{\circ}\) to \(360^{\circ}\) and the pitch direction from \(-60^{\circ}\) to \(-45^{\circ}\), and every camera location captures 1 image. For street data, the yaw direction is randomized from \(0^{\circ}\) to \(90^{\circ}\) and every camera location captures 5 images, whose pitch and roll directions are kept the same as in the training set.
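For concreteness, below is a minimal sketch of the pose conventions just described; the function names, the degree-based angle convention, and the random seed are illustrative assumptions, not the actual plugin API:

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed

def aerial_training_rig(high_rise=False):
    """Training rig: four cameras per location, 90 deg apart in yaw;
    pitch is -45 deg for the floor area and -60 deg for high-rise areas."""
    pitch = -60.0 if high_rise else -45.0
    return [(yaw, pitch) for yaw in (0.0, 90.0, 180.0, 270.0)]

def aerial_test_pose():
    """Aerial test set: one image per location,
    yaw ~ U[0, 360), pitch ~ U[-60, -45] (degrees)."""
    return (rng.uniform(0.0, 360.0), rng.uniform(-60.0, -45.0))

def street_test_poses(train_pitch=0.0, train_roll=0.0):
    """Street test set: five images per location, yaw ~ U[0, 90);
    pitch and roll stay the same as in the training set."""
    return [(rng.uniform(0.0, 90.0), train_pitch, train_roll) for _ in range(5)]
```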
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline Dataset & \#Images & Level & Types & Source & Lighting & Human/Car & Weather & D-Reflectance \\ \hline UrbanScene3D [17] & 128K & Scene & Aerial & Synthetic \& Real & ✗ & ✗ & ✗ & ✗ \\ Quad 6K [8] & 5.1K & Scene & Aerial & Real & ✗ & ✗ & ✗ & ✗ \\ Mill 19 [33] & 3.6K & Scene & Aerial & Real & ✗ & ✗ & ✗ & ✗ \\ Waymo Block-NeRF [30] & 12K & City & Street & Real & ✓ & ✗ & ✗ & ✗ \\ OMMO [19] & 14.7K & Scene & Aerial & Real & ✓ & ✗ & ✗ & ✗ \\ KITTI-360 [16] & 300K & City & Street & Real & ✗ & ✓ & ✗ & ✗ \\ NuScenes [6] & 1.4M & City & Street & Real & ✓ & ✓ & ✓ & ✗ \\ Waymo Open [29] & 1M & City & Street & Real & ✓ & ✓ & ✓ & ✗ \\ \hline \hline **Ours** & 519k & City & Aerial+Street & Synthetic & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of statistics and properties between our _MatrixCity_ dataset and previous datasets.

\begin{table} \begin{tabular}{c|c c c|c c c|c c c|c c c|c c c|c} \hline \hline \multirow{2}{*}{Block} & \multicolumn{3}{c|}{**NeRF**[22]} & \multicolumn{3}{c|}{**DVGO**[28]} & \multicolumn{3}{c|}{**TensoRF**[7]} & \multicolumn{3}{c|}{**Instant-NGP**[23]} & \multicolumn{3}{c|}{**MipNeRF-360**[4]} & \multirow{2}{*}{Height} \\ & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & \\ \hline Block\_A & 23.15 & 0.561 & 0.649 & 25.04 & 0.677 & 0.520 & 25.96 & 0.720 & 0.462 & **27.21** & **0.793** & **0.376** & 26.64 & 0.772 & 0.406 & 150 \\ Block\_B & 22.94 & 0.613 & 0.485 & 22.72 & 0.649 & 0.463 & 24.95 & 0.776 & 0.326 & **25.45** & **0.826** & **0.271** & 24.80 & 0.765 & 0.352 & 432 \\ Block\_C & 22.15 & 0.590 & 0.527 & 21.39 & 0.649 & 0.475 & 24.11 & 0.754 & 0.370 & 23.31 & **0.788** & **0.311** & **24.20** & 0.759 & 0.365 & 419 \\ Block\_D & 23.09 & 0.570 & 0.548 & 24.14 & 0.656 & 0.486 & 24.99 & 0.712 & 0.416 & 26.24 & 0.785 & 0.338 & **26.45** & **0.790** & **0.338** & 250 \\ Block\_E & 23.53 & 0.612 & 0.534 & 24.74 & 0.704 & 0.467 & 25.66 & 0.749 & 0.408 & 26.36 & 0.807 & **0.335** & **26.54** & 0.811 & 0.338 & 200 \\ Overall & 22.97 & 0.589 & 0.548 & 23.61 & 0.667 & 0.482 & 25.13 & 0.762 & 0.396 & 25.69 & **0.800** & **0.326** & **25.73** & 0.772 & 0.360 & 279 \\ \hline \hline \end{tabular} \end{table} Table 2: Performance comparison of representative neural rendering methods on the aerial data of our _MatrixCity_ benchmark.

Since the street data contains more details, we also ablate the street data collection density in Table 4, which demonstrates that grid-based methods are more sensitive to data density than MLP-based NeRF methods. Additionally, we provide a super-dense version of the street data with 135k images for Small City at a 1 m interval. For demonstration purposes, we conduct experiments on the Small City at this stage, where the interval between adjacent frames is 5 m for street data. We will release the data splits of the following sections. **Evaluation metric.** We evaluate the rendering performance of each baseline method based on PSNR (Peak Signal-to-Noise Ratio), SSIM (Structural Similarity) [35], and the VGG implementation of LPIPS [41]. We also use the mean angular error (MAE) and the mean squared error (MSE) to evaluate the estimated normal vectors and depth maps, respectively.
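The metrics above are standard; as a minimal NumPy sketch, PSNR and the normal/depth errors could be computed as follows (SSIM and LPIPS come from external packages in practice, so they are omitted here):

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak Signal-to-Noise Ratio between rendered and ground-truth images."""
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def normal_mae_deg(pred_n, gt_n):
    """Mean angular error (degrees) between unit normal maps of shape (H, W, 3)."""
    cos = np.clip(np.sum(pred_n * gt_n, axis=-1), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)).mean())

def depth_mse(pred_d, gt_d):
    """Mean squared error between predicted and ground-truth depth maps."""
    return float(np.mean((pred_d - gt_d) ** 2))
```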
### Baselines

We aim to test the performance of current neural rendering methods on the MatrixCity dataset to explore the challenges of city-scale neural rendering. To achieve this, we choose five widely recognized methods: NeRF [22], DVGO [28], Instant-NGP [23], TensoRF [7], and MipNeRF-360 [4]. Note that we use the official implementations of these baselines except for NeRF and Instant-NGP. For NeRF we use the widely recognized PyTorch version [39], and for Instant-NGP we use the open-source version [24]; we find that ngp-pl [24] generally performs better than torch-ngp [32]. To address the challenge of increasingly intricate urban content, we recognize the limited capacity of the original baseline models and therefore increase the number of parameters to handle more complex urban environments. Specific details regarding these parameter increases can be found in Appendix A.

### Neural Rendering on Aerial Data

Due to the limitations of current methods and models, it is impractical to use a single model to represent an entire map. Therefore, we divide the map into five blocks based on building height and coverage area. Each block covers a roughly homogeneous area, where buildings within each block have similar heights. Our results, shown in Table 2, indicate that MipNeRF-360 [4] and Instant-NGP [23] perform better, while NeRF [22] performs the worst. This indicates that grid-based methods and MLP-based NeRF methods can both model block-size aerial data well. Despite scaling up the NeRF model significantly, its ability to model large-scale scenes remains limited, as illustrated in Figure 5.

Figure 5: Visualization of novel view synthesis results of previous representative large-scale neural rendering methods on the aerial data of our _MatrixCity_ dataset.

Additionally, we find that the high-rise area is more challenging to model than the floor area. In the high-rise area, there are numerous occlusions between the buildings, which is a significant challenge for aerial data modeling. From Figure 5, we can observe that current methods still struggle to accurately model small objects and reflective buildings.

### Neural Rendering on Street Data

We first run all these baselines on the street data of Block_A and find that all the methods perform much worse than when training with aerial data, especially the grid-based methods, as shown in Table 3. Street data contains much more detail than aerial data, and it is harder to achieve high-quality results on street data, which is also demonstrated in Figure 7. We thus conclude that modeling the street data of a block-size area in a single model is not reasonable, and we filter out a crossroad area, called Block_Small, to test current methods.
Analyzing the results on Block_Small, we find that the MLP-based NeRF methods perform better than the grid-based methods, which is also demonstrated in Figure 6. Block_Small can also be seen as a 360° unbounded scene with a distant background. Figure 6 shows that MipNeRF-360 can alleviate this problem to some extent. However, the reflective parts and fine-grained architectures are still not well reconstructed.

\begin{table} \begin{tabular}{c|c c c|c c c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{Block} & \multicolumn{3}{c|}{**NeRF**[22]} & \multicolumn{3}{c|}{**DVGO**[28]} & \multicolumn{3}{c|}{**TensoRF**[7]} & \multicolumn{3}{c|}{**Instant-NGP**[23]} & \multicolumn{3}{c}{**MipNeRF-360**[4]} \\ & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) \\ \hline Block\_A & 20.12 & 0.601 & 0.626 & 20.47 & 0.617 & 0.604 & 20.93 & 0.643 & 0.577 & 21.96 & 0.712 & 0.493 & **22.00** & **0.717** & **0.488** \\ Block\_Small & 22.15 & 0.678 & 0.511 & 22.10 & 0.711 & 0.454 & 22.95 & 0.741 & 0.445 & 22.84 & 0.745 & 0.408 & **24.47** & **0.827** & **0.297** \\ Overall & 21.14 & 0.640 & 0.569 & 21.29 & 0.664 & 0.529 & 21.94 & 0.692 & 0.511 & 22.40 & 0.729 & 0.451 & **23.24** & **0.772** & **0.393** \\ \hline \hline \end{tabular} \end{table} Table 3: Performance comparison of representative neural rendering methods on the street data of our _MatrixCity_ benchmark.

\begin{table} \begin{tabular}{c|c c c|c c c} \hline \hline \multirow{2}{*}{Density} & \multicolumn{3}{c|}{**Instant-NGP**[23]} & \multicolumn{3}{c}{**MipNeRF-360**[4]} \\ & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) \\ \hline 5.0 m & 21.436 & 0.733 & 0.402 & 27.75 & 0.866 & 0.2956 \\ 3.6 m & 24.978 & 0.803 & 0.350 & 29.366 & 0.884 & 0.280 \\ 2.0 m & 30.025 & 0.885 & 0.235 & 31.420 & 0.901 & 0.265 \\ 1.0 m & 32.444 & 0.912 & 0.211 & 31.858 & 0.905 & 0.263 \\ 0.5 m & 32.999 & 0.921 & 0.202 & 32.210 & 0.907 & 0.261 \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation on the density of street data collection on our _MatrixCity_ benchmark.

Figure 6: Visualization of novel view synthesis of city-scale neural rendering methods on (a) Block_Small and (b) Block_A of street-view data. MLP-based NeRF methods suffer from capacity issues while grid-based baselines show severe artifacts.

### Neural Rendering on Joint Types of Data

The major motivation to fuse data from both aerial and street views is to provide content information at different granularities. While aerial views are generally easier to train with fewer geometric ambiguities, they lack many near-ground details, which are critical to deliver an immersive experience when exploring a city. On the other hand, street-view images often only offer partial information about the scene, revealing local contents, and are prone to overfitting to the training views. We therefore explore training the aerial and street data together, covering the same area, aiming to leverage the advantages of the two sources of data to ensure wide coverage as well as fine details. However, according to Table 5, we find that the performance of both TensoRF [7] and MipNeRF-360 [4] degrades after naively fusing the aerial and street data to train together.
As shown in Figure 8, the ground part of the aerial view becomes dirty after training with the street data for both methods. For the street view, the foreground of MipNeRF-360 becomes worse. We attribute this to the significant difference in the level of detail between street-level and aerial imagery, as well as the large disparity in foreground distance, which makes it challenging to train models by simply utilizing both types of data. We need to further investigate how algorithms can effectively utilize both the geometric information from aerial imagery and the detailed information from street-level imagery, e.g., through fine-tuning, progressive training, separate groups of hyperparameters, _etc_.

## 5 Conclusion

In this paper, we propose _MatrixCity_, a high-quality and city-scale benchmark with diverse, controllable, and realistic data collected from the powerful Unreal Engine 5. Additional information like depth and normal is also collected with minimal extra cost in our _MatrixCity_ dataset, enabling other potential tasks and applications like depth estimation and inverse rendering. On top of _MatrixCity_, we have empirically investigated representative methods on the two types of data independently and on the fusion of both aerial-view and street-view data. We hope these efforts can facilitate new advances in the field of city-scale neural rendering.

**Acknowledgment** This project is funded in part by Shanghai AI Laboratory (P23KS00020, 2022ZD0160201), CUHK Interdisciplinary AI Research Institute, and the Centre for Perceptual and Interactive Intelligence (CPII) Ltd under the Innovation and Technology Commission (ITC)'s InnoHK. We would like to thank Haiyi Mei and Lei Yang for their invaluable help and discussions on the plug-in development, and Jiamian Yan, Bin Wang and Conghui He for their contributions to the street data annotations of Big City.

\begin{table} \begin{tabular}{c|c c c|c c c} \hline \multirow{2}{*}{Data Type} & \multicolumn{3}{c|}{**TensoRF**[7]} & \multicolumn{3}{c}{**MipNeRF-360**[4]} \\ & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) \\ \hline Aerial & 27.26 & 0.829 & 0.231 & 28.37 & 0.855 & 0.197 \\ Street & 22.10 & 0.727 & 0.449 & 23.05 & 0.805 & 0.312 \\ Fusion & 21.44 & 0.656 & 0.504 & 17.07 & 0.470 & 0.600 \\ \hline \end{tabular} \end{table} Table 5: Ablation on the fusion of aerial and street data on our _MatrixCity_ benchmark.

Figure 7: Visualization of the depth and normal results of MipNeRF-360 on aerial and street views.

Figure 8: Visualization of neural rendering results on aerial and street views before and after the fusion of the two types of views. Street views are generally harder than aerial views for delivering high-quality rendering results, with notable floating artifacts, where the model easily overfits to the training views with incorrect geometry. Naive joint training on the fused data degrades the quality.
Neural radiance fields (NeRF) and its subsequent variants have brought remarkable progress to neural rendering. While most recent neural rendering work focuses on objects and small-scale scenes, developing neural rendering methods for city-scale scenes holds great potential for many real-world applications. However, this line of research is impeded by the absence of a comprehensive, high-quality dataset, and collecting such a dataset over real city-scale scenes is costly, sensitive, and technically difficult. To this end, we have built a comprehensive, large-scale, high-quality synthetic dataset for city-scale neural rendering research. Leveraging the Unreal Engine 5 City Sample project, we have developed a pipeline to easily collect aerial and street city views, accompanied by ground-truth camera poses and a range of additional data modalities.
2309.09473
Self-supervised Multi-view Clustering in Computer Vision: A Survey
Multi-view clustering (MVC) has had significant implications in cross-modal representation learning and data-driven decision-making in recent years. It accomplishes this by leveraging the consistency and complementary information among multiple views to cluster samples into distinct groups. However, as contrastive learning continues to evolve within the field of computer vision, self-supervised learning has also made substantial research progress and is progressively becoming dominant in MVC methods. It guides the clustering process by designing proxy tasks to mine the representation of image and video data itself as supervisory information. Despite the rapid development of self-supervised MVC, there has yet to be a comprehensive survey to analyze and summarize the current state of research progress. Therefore, this paper explores the reasons and advantages of the emergence of self-supervised MVC and discusses the internal connections and classifications of common datasets, data issues, representation learning methods, and self-supervised learning methods. This paper not only introduces the mechanisms of each category of methods but also gives a few examples of how these techniques are used. In the end, some open problems are pointed out for further investigation and development.
Jiatai Wang, Zhiwei Xu, Xuewen Yang, Hailong Li, Bo Li, Xuying Meng
2023-09-18T04:11:18
http://arxiv.org/abs/2309.09473v1
# Self-supervised Multi-view Clustering in Computer Vision: A Survey

###### Abstract

Multi-view clustering (MVC) has had significant implications in cross-modal representation learning and data-driven decision-making in recent years. It accomplishes this by leveraging the consistency and complementary information among multiple views to cluster samples into distinct groups. However, as contrastive learning continues to evolve within the field of computer vision, self-supervised learning has also made substantial research progress and is progressively becoming dominant in MVC methods. It guides the clustering process by designing proxy tasks to mine the representation of image and video data itself as supervisory information. Despite the rapid development of self-supervised MVC, there has yet to be a comprehensive survey to analyze and summarize the current state of research progress. Therefore, this paper explores the reasons and advantages of the emergence of self-supervised MVC and discusses the internal connections and classifications of common datasets, data issues, representation learning methods, and self-supervised learning methods. This paper not only introduces the mechanisms of each category of methods but also gives a few examples of how these techniques are used. In the end, some open problems are pointed out for further investigation and development.

## 1 Introduction

Data often presents multiple views, collected from diverse sensors or obtained through various feature extractors. For example, specific news events are reported by multiple news organizations, RGB images or depth maps are captured by different types of cameras or from varying angles by the same camera, and videos can take on multiple forms, including images, audio, and text. Consequently, single-view methods struggle to effectively utilize the information contained within multi-view data. To better construct a comprehensive vision model of an object, it is essential to comprehensively observe its various views or utilize multiple modalities within images and videos. Hence, there is a strong demand for effective multi-view learning methods, especially those that operate in an unsupervised manner, in real-world vision applications. As one of the most important unsupervised multi-view methods, multi-view clustering (MVC) aims to separate data points into different clusters in an unsupervised fashion [1; 2; 3; 4; 5]. To achieve this end, existing methods [7; 8; 9; 10; 11; 12] use deep neural networks to explore consistency and complementarity across different views so that a common/shared representation is learned. However, some deep MVC methods [13; 14; 15; 16; 17] depend on too many hyperparameters. In practical clustering applications, where label information is lacking for tuning, this poses a significant challenge. Furthermore, many deep MVC methods suffer from shortcomings such as limited representation capability and high computational complexity, thereby constraining their performance when tackling large-scale data clustering tasks. To overcome the above issues, self-supervised multi-view learning methods have gradually appeared in a number of works [13; 19; 20; 21; 22], and good progress has been made in guiding the feature learning process through self-supervised signals.
Such a method guides all views to learn more discriminative features by extracting common representations from the view-specific representations while keeping each view space independent, thus minimizing the distance between positive samples and forming pseudo-labels. The pseudo-labels, as a self-supervised signal, can be used to lead all views to learn more discriminative features, which in turn produce clearer clustering structures, as shown in Fig. 1. Therefore, self-supervised multi-view clustering pre-trains sample data to obtain pseudo-labels and achieves good supervision and guidance for downstream clustering tasks through transfer and fine-tuning [23; 24; 25; 26; 27]. Additionally, the diverse learning methods and self-supervised signal representations bring both opportunities and difficulties for future research. Self-supervised MVC has thus attracted more and more attention in the past few years, which makes it necessary and beneficial to summarize the state of the art and delineate open problems to guide future advancement.

Figure 1: Process illustration of self-supervised MVC and traditional MVC. Images or videos are the source of multi-view data. Based on the data, traditional MVC methods will first do representation learning to get high-dimensional semantic features and then complete clustering by using clustering algorithms such as K-means [18]. Self-supervised MVC methods will generate self-supervised signals, e.g., pseudo-labels, through representation learning and then mine the consistency of views through self-supervised learning methods, such as contrastive learning, and thus have stronger generalization ability and robustness.

The prevailing paradigm is to perform linear clustering following representation learning. This is achieved by sampling from multi-view data distributions in an auto-encoding reconstruction process to extract self-supervised signals [7; 28; 29]. Additionally, it is important to note that self-supervised signals are not limited to pseudo-labels; they extend to various abstract models or mathematical functions used in representation learning, which we will describe in detail below. In essence, self-supervised learning is rooted in transfer learning [30], implying that this approach transfers features from the lower levels of multi-view data rather than semantic features from the top. Thus, we can better preserve spatial location relations by using artificially designed proxy tasks [31], leading to correct representation learning during training. In this paper, we categorize existing work from three perspectives: data challenges, representation learning, and self-supervised learning: a) Missing and unaligned data are the primary problem and major challenge for self-supervised MVC. b) Representation learning determines the features of self-supervised signals and needs to be revisited. c) Self-supervised learning methods are the main means of consistency learning. For example, some generative methods use Generative Adversarial Networks (GAN) [32] to deal with missing data, while contrastive methods based on the idea of instance discrimination are more widely used. On one hand, previous works [33; 8; 34] primarily focused on reviewing existing shallow and deep model-based MVC without delving into self-supervised learning. In our survey, we give self-supervised MVC a more focused exploration, considering it the future mainstream.
On the other hand, by analyzing and comparing their technical details, we discuss the current challenges and future directions. Understanding their commonalities will benefit researchers in developing a more unified and collaborative framework, ultimately approaching the complexity of real human intelligence systems. Several comprehensive reviews have been conducted in the domains of multi-view clustering [35; 36], incomplete multi-view clustering [37; 38], and representation learning within multi-view clustering [39]. However, none of these reviews have focused on the intriguing concept of self-supervised learning itself. To the best of our knowledge, there has been no comprehensive survey and summary of the field of self-supervised multi-view clustering concerning image and video data. This gap in the literature makes our work valuable to researchers interested in entering this field. In this endeavor, we undertake a comprehensive survey of MVC methods, viewing them through the lens of self-supervised learning. Our investigation covers commonly used datasets for images and videos, self-supervised signals, and learning paradigms. Our contributions can be summarized as follows: * We examine various data patterns with distinct properties, presenting relevant data formats and the image and video datasets in extensive use by researchers. * We focus on multi-view clustering in self-supervised learning scenarios and review the problems and challenges arising from data incompleteness over the past few years. * We delve into the self-supervised signal forms derived from representation learning methods and examine the consistency learning paradigm from the perspective of self-supervised learning techniques. This chosen presentation format aids readers in comprehending the distinctions among existing methods. * Our work offers researchers insights into prospective research directions and associated challenges. The overall structure of the paper is as follows: In Section 1, we introduce the background and principles of self-supervised MVC and provide an overview of this paper. In Section 2, we introduce some publicly available multi-view datasets, including image and video representations, and summarize the directions and challenges derived from the incompleteness of the data. Next, in Section 3, we present the representation learning methods that can acquire self-supervised signals, sub-categorized mainly by their learning methods. In Section 4, we present concrete self-supervised MVC methods with generative and contrastive proxy tasks, respectively. Finally, in Section 5 and Section 6, we conclude this paper and discuss challenges and future trends for self-supervised MVC. The main structure is also illustrated in Fig. 2.

Figure 2: Structure of this paper.

## 2 Multi-view Data and Data-related Issues

Vision serves as a crucial channel for humans to gather external information, and different types of visual data possess distinct properties and roles, all of which are vital for achieving self-supervised MVC. In this section, as illustrated in Table 1, we compile publicly available image and video datasets within the MVC domain, providing essential information such as their scope, format, and other relevant details. It is worth noting that a significant portion of image multi-view datasets are derived from original single-view datasets. Therefore, many researchers generate various new datasets on top of these datasets on their own for further research.
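As a concrete example of this practice, the sketch below builds a second view from a single-view labeled dataset in the style of NoisyMNIST (introduced below): each sample is paired with a random within-class image corrupted by white Gaussian noise. The noise level, seed, and the assumption that images are floats in \([0,1]\) are illustrative:

```python
import numpy as np

def make_two_view(images, labels, sigma=0.25, seed=0):
    """Synthesize a second view, NoisyMNIST-style: pair each sample with
    a random image of the same class and add white Gaussian noise.
    Assumes `images` are arrays normalized to [0, 1]."""
    rng = np.random.default_rng(seed)
    view2 = np.empty_like(images, dtype=float)
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        partners = rng.choice(idx, size=idx.size)   # within-class pairing
        noise = sigma * rng.standard_normal(images[partners].shape)
        view2[idx] = images[partners] + noise
    return images.astype(float), np.clip(view2, 0.0, 1.0)
```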
In addition, we discuss how these datasets introduce new assumptions to fit more complex realities by simulating missing and unaligned cases, which also motivates a division of MVC from the perspective of practical problems.

### Image Dataset

In the realm of MVC, images represent the most traditional and widely utilized form of visual data. With the advent of the big data era, there is a growing trend of employing numerous image datasets for training MVC models. In the subsequent sections, we will introduce some of the prominent image datasets individually. **Scene-15**[40] contains 4485 images from 15 different indoor and outdoor scene categories, with PHOG and GIST features. It is widely used for various computer vision research problems, such as multi-view clustering. In terms of multi-view properties, Scene-15 has a 20-dim GIST feature and a 59-dim PHOG feature that are utilized as two distinct views. **NoisyMNIST**[41] uses the original 70k MNIST images as view 1 and randomly chosen within-class images corrupted with white Gaussian noise as view 2. Since most baselines cannot handle such a large dataset in experiments, a 20k MNIST subset consisting of 10k validation images and 10k test images is usually used. **Caltech101**[42] consists of 9,144 images distributed over 102 categories, and two features, i.e., a 1,984-dim HOG feature and a 512-dim GIST feature, are extracted as two views. These images include animals, wheels, flowers, etc., and objects from the same category exhibit very large variations in shape. Furthermore, the Caltech101-20 [56] dataset, which includes 2,386 images of 20 subjects, is commonly used in experiments, with the two handcrafted HOG and GIST features serving as the two views. **LandUse-21**[43] is a 21-class land use image dataset meant for research purposes. There are 100 images for each class, and each image measures 256×256 pixels. The images were manually extracted from large images in the USGS National Map Urban Area Imagery collection for various urban areas around the country. The pixel resolution of this public domain imagery is 1 foot. It consists of 2100 satellite images from 21 categories, with PHOG and LBP features. **UWA** (UWA3D Multi-view Activity) [44] is collected by Kinect sensors with RGB and depth features. It consists of 660 action sequences, i.e., 11 actions carried out by 12 subjects five times each. **DHA** (Depth-included Human Action dataset) [45] consists of 660 action sequences, i.e., 11 actions carried out by 12 subjects five times each. **MNIST-USPS**[46] is a popular handwritten digit dataset, which contains 5,000 samples with two different styles of digit images. The USPS image is 256-dim, while the MNIST image is 784-dim. **BDGP**[47] was used for gene expression analysis by the Berkeley Drosophila Genome Project. It is composed of 5 categories and 2500 samples, where each class has 500 samples, each of which is represented by visual and textual features. **Handwritten**[48] contains five views and 2000 samples from ten numerals (i.e., 0–9), where the five views are obtained by Fourier coefficients, profile correlations, Karhunen–Loève coefficients, Zernike moments, and pixel-average extractors. **NUS-WIDE**[49] is a real-world web image dataset collected by researchers at the National University of Singapore.
In their experiments, the researchers typically use a multi-view subset containing 30,000 images and 31 classes, where each image is represented by five low-level features, i.e., color histogram, color correlogram, edge orientation histogram, wavelet texture, and block-wise color moments. **Fashion**[50] has a total of 10 categories. 60,000 images of clothing items at 28×28 pixels, with labels, are provided for training, and 10,000 such images with labels are provided for testing. **SUN RGB-D**[51] has 10,335 RGB-D images. The features are extracted from the original images using deep neural networks. **Wikipedia**[52] contains 2866 multimedia documents collected from Wikipedia. Each document contains two views, i.e., an image view and a text view. **COIL-20**[53] is composed of 1,440 images of 20 objects in which the background has been discarded. Each image is represented by three kinds of features, including a 1024-dimension intensity feature, a 3304-dimension local binary pattern feature, and a 6750-dimension Gabor feature.

### Video Dataset

Video content, as a sequence of images presented in a temporal framework, has gained increasing prominence in comparison to static images. Platforms like YouTube and TikTok are clear examples of the growing utilization of videos by people. Videos offer a richer source of information, capturing more detailed features than static images. Learning from multi-view videos allows the acquired representations to effectively decompose functional properties, even as viewpoints and agents remain consistent. This, in turn, facilitates the subsequent learning of new tasks. Consequently, within the MVC field, researchers have shifted their focus towards using video
[https://www.cs.columbia.edu/CAVE/](https://www.cs.columbia.edu/CAVE/) \\ COIL-20 [53] & Image & 1440 & 20 & [https://www.cs.nvidust.com/exdb/](https://www.cs.nvidust.com/exdb/) \\ Consumer Video(CCV) [54] & Video & 6773 & 20 & [https://www.ec.columbia.edu/In/dVmm/CCV/](https://www.ec.columbia.edu/In/dVmm/CCV/) \\ YouTube Video [55] & Video & 120000 & 31 & [https://research.google.com/youtube8m/](https://research.google.com/youtube8m/) \\ \hline \hline \end{tabular} \end{table} Table 1: Dataset summary.’ indicates that the division of the number of categories in reality will vary with the purpose of the research’s analysis. datasets to train MVC models, aiming to attain superior results. In the following, we will introduce some of the main video datasets separately. **CCV** (Consumer Video) [54] is a video dataset with 6,773 samples belonging to 20 classes and provides hand-crafted Bag-of-words representations of three views, such as STIP, SIFT, and MFCC. **YouTube Video**[55] is about 120,000 films' worth of feature values, and class labels make up the dataset. Up to 13 different feature kinds, including auditory, textual, and visual features, from three high-level feature families can be used to describe each video. There are 31 class labels, 30 of which correspond to well-known video games, and the remaining nine are for different games. The data's high-quality feature representation makes it extremely useful for grouping online videos. ### Data-related Issues Although images and videos are two different types of datasets, existing self-supervised MVC methods do not treat the two data forms any differently, as both essentially satisfy the multimodal input of the algorithm. The success of existing multi-view clustering methods [57, 58, 59, 60, 61] heavily relies on the assumption of view instance completeness and consistency, referred to as complete information. However, these two assumptions would be inevitably violated in data collection and transmission, thus leading to incomplete and unaligned MVC. Researchers have therefore begun to artificially make datasets missing or unaligned to simulate more complex real-world situations [62, 63, 28, 64]. This has greatly expanded the research space, and many excellent works promoting the use of self-supervised learning in MVC have emerged. Different views exhibit consistency and complementary properties of the same data, leading to extensive research on multi-view learning. As shown in Fig. 3, the presentation of data provides another way to divide the way we look at self-supervised MVC into the following four categories: **MVC** aims to harness information from multiple views to enhance clustering. In the literature, existing MVC methods can be broadly categorized into two groups: traditional methods and deep methods. Traditional MVC often relies on machine learning algorithms like matrix factorization, graph learning, and kernel learning. However, these methods face challenges when applied to large-scale datasets and exhibit limited generalization capabilities. In contrast, deep MVC methods have gained popularity recently due to their exceptional representation capabilities, as acknowledged within the community [19, 23, 65, 8, 17]. The majority of self-supervised MVC methods, which also fall under the deep MVC category, have emerged as the future research mainstream, largely propelled by the widespread adoption of contrastive learning in self-supervised learning. 
Irrespective of the specific method employed, MVC [57, 58, 64] typically requires access to complete and aligned multi-view data to uphold consistency and accuracy. For example, Xu et al. introduced a novel framework [58] for multi-level feature learning in contrastive multi-view clustering, addressing the challenge of balancing consistency and complementarity. Their Multi-VAE [57], capable of learning disentangled and interpretable visual representations while tackling large-scale data clustering tasks, has exhibited commendable performance; this is achieved by employing a generative model to regulate the mutual-information capacity. Consequently, generative and contrastive methods, both falling under the umbrella of self-supervision, constitute the primary means of learning consistent information and managing complementary information in contemporary MVC research.

**Incomplete MVC** stands as a significant unsupervised approach designed to cluster multi-view data in which certain views contain missing information. Initially, such challenges were addressed by simply populating the missing elements with mean feature values or other matrix completions. However, these basic padding strategies prove inadequate when the missing-data rate is high; moreover, they struggle to capture the latent information of the missing data, as the missing elements are not adequately recovered. More recently, as self-supervised learning has evolved, generative models such as the Generative Adversarial Network (GAN) [32] and the Variational Autoencoder (VAE) [66] have demonstrated superior performance, particularly when significant data are available for training. For instance, Wang et al. introduced a deep Incomplete Multi-View Clustering (IMC) method [67] based on GAN. Lin et al. proposed a novel objective that unifies representation learning and data recovery within a cohesive framework from the perspective of information theory; notably, they integrated generative and contrastive learning into a unified, consistent learning framework [28]. DIMVC [7], currently recognized as a state-of-the-art (SOTA) method, converts complementary information into a high-confidence supervised signal, with the primary objective of establishing multi-view clustering consistency for both complete and incomplete data.

**Unaligned MVC** represents a relatively new and less-explored direction within the field, with limited existing research and considerable room for development. In recent years, substantial effort has been directed towards incomplete MVC, primarily through the imputation of missing samples using various data-recovery methods [28, 29, 68]. In contrast, unaligned MVC is relatively uncharted territory, emerging only recently [62, 63]. A plausible approach to unaligned MVC begins by realigning the data using the Hungarian algorithm [69] and then performing MVC on the realigned data. However, this approach is unsuitable for large datasets because the Hungarian algorithm is non-differentiable. To address this limitation, PVC [62] proposes a differentiable surrogate of the Hungarian algorithm that can be recast as a pluggable module; it then constructs a distance matrix to supervise the alignment correspondence in the latent space.

Fig. 3: Illustrative examples of the related data issues.
Taking bi-view data as a showcase, we use two rows of polygons to denote two views, where each column of polygons represents a pair of instances that may be incomplete or unaligned. Polygons with the same shape belong to one category, and the same color marks a pair of aligned instances. The "\(\triangledown\)" symbol denotes that the view sample is missing. (a) MVC: multi-view clustering based on complete and aligned multi-view data. (b) Incomplete MVC: multi-view clustering based on incomplete and aligned multi-view data. (c) Unaligned MVC: multi-view clustering based on complete and unaligned multi-view data. (d) Incomplete & unaligned MVC: multi-view clustering based on incomplete and unaligned multi-view data.

Nevertheless, both the original Hungarian algorithm and PVC focus on achieving instance-level alignment, which may be insufficient for MVC. The core of clustering and classification lies in establishing a one-to-many mapping, rendering category-level alignment more advantageous. Subsequent work, MvCLN [63], reframes the view-alignment problem as an identification task and introduces a novel noise-robust contrastive loss designed to mitigate or even eliminate the impact of noisy labels that may arise during pair construction.

**Incomplete & unaligned MVC** represents a largely unexplored problem, emerging only recently [62]. This direction aligns more closely with real-world challenges. To date, the sole work addressing this issue is SURE [64], which strives to learn categorical similarities and establish correspondences across views; SURE achieves this by leveraging a novel noise-robust contrastive learning paradigm initially proposed by Yang et al.

## 3 Representation Learning Method

Representation learning plays a pivotal role in self-supervised MVC and in the process of generating and utilizing self-supervised signals. In this section, we discuss in depth the various forms in which self-supervised signals manifest and categorize them into five distinct categories. In practice, the concept of a self-supervised signal is exceptionally versatile, encompassing feature representations, mathematical functions, pseudo-labels, or even algorithmic designs. This versatility gives self-supervised MVC a vast research space and practical significance. Researchers inject prior information into representation learning algorithms or devise proxy tasks to guide the network's optimization, thereby generating self-supervised signals. Appropriately designed self-supervised signals have the capacity to harness the complementary and consensus information present across multiple views, facilitating the clustering of objects into distinct partitions [36]. The consensus principle strives to maximize consistency among different views, while the complementarity principle recognizes that each view contains unique information not found in the others. Consequently, a deeper understanding of self-supervised signaling can offer valuable insights into representation learning and drive progress in future research.

### Graph-based Representation

Graphs are widely used to represent the relationships between samples, where nodes represent data samples and edges represent the relationships between them. This graphical representation offers a comprehensive means of encapsulating the richness of multi-view data. To unearth the underlying clustering structure more effectively, it is crucial to efficiently integrate these graphs in a mutually reinforcing manner.
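Before formalizing this, the following minimal sketch (the k-NN construction, Gaussian weighting, and simple averaging fusion are illustrative assumptions rather than any particular published method) builds one affinity graph per view and fuses them into a consensus similarity matrix:

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def view_affinity(X, k=10):
    """Row-stochastic k-NN affinity A^(v) for one view."""
    A = kneighbors_graph(X, n_neighbors=k, mode="distance").toarray()
    A[A > 0] = np.exp(-A[A > 0] ** 2)       # turn distances into similarities
    return A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)  # rows sum to 1

def fuse(affinities):
    """A naive fusion Psi: average the per-view graphs into a consensus S."""
    return np.mean(affinities, axis=0)

views = [np.random.randn(100, 20), np.random.randn(100, 59)]  # toy bi-view data
S = fuse([view_affinity(X) for X in views])                   # consensus graph
```

Real methods learn the per-view graphs and the consensus jointly rather than averaging fixed graphs, as formalized next.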
In essence, self-supervised signals based on graph representations are derived through the fusion of multi-view data, and the generalized process aligns with the illustration in Fig. 4. Specifically, we can formulate the generic problem of multi-view graph clustering as follows:

\[\begin{split}\min_{\mathbf{A}^{(v)},\mathbf{S}}\sum_{v=1}^{V}\sum_{i\neq j}^{N}&\mathrm{Dis}\left(\mathbf{x}_{i}^{(v)},\mathbf{x}_{j}^{(v)}\right)\mathbf{A}_{ij}^{(v)}+\lambda\Psi\left(\mathbf{S},\mathbf{A}^{(v)}\right)\\ &\text{s.t.}\quad\mathbf{A}^{(v)}\geqslant 0,\ \mathbf{a}_{i}^{(v)}\mathbf{1}=1,\ \mathbf{S}\geqslant 0,\ \mathbf{s}_{i}\mathbf{1}=1,\end{split}\tag{1}\]

where \(\mathbf{x}_{i}^{(v)}\) represents the \(i\)-th feature vector of the \(v\)-th view, \(\mathbf{A}^{(v)}\) is the affinity graph of that view, and \(\mathbf{S}\) is the consensus similarity matrix obtained by fusing the multiple affinity graphs. \(\mathrm{Dis}(\cdot,\cdot)\) denotes a similarity or distance metric (e.g., the Euclidean distance), and \(\Psi(\cdot,\cdot)\) is a fusion function that combines the multiple \(\mathbf{A}^{(v)}\) to obtain the final consensus cluster assignment \(\mathbf{S}\).

Graph convolutional network (GCN)-based models [61, 70, 71, 72, 73] employ deep embeddings for MVC and incorporate graph-structure information into the model, allowing both the graph structure and the node features to be exploited. Furthermore, self-supervised GCNs construct a new view descriptor tailored for graph-structured data, while the generated self-supervised signals guide the learning of latent representations and coefficient matrices; these learned representations and matrices are subsequently used to perform node clustering. Among the early works in this domain, O2MA [70] introduced graph autoencoders for learning node embeddings based on a single informative graph, employing the captured view consistency in low-dimensional feature representations as a fine-tuning stage driven by self-supervised signals. In the following years, numerous works [61, 71, 72, 73] on self-supervised MVC based on graph representations emerged: 1) Cheng et al. developed a model that employs two-pathway encoders to map graph-embedding features and learn view-consistency information, exploring graph embeddings and consistent embeddings of high-dimensional samples [71]. 2) MDGRL [72] builds upon the graph autoencoder for local feature learning and a variant of the variational graph autoencoder for global deep graph representation learning. 3) Cai et al. combined global and partial GCN autoencoders to create a self-training clustering module with adaptively weighted fusion, using this module to simultaneously mine the global and view-unique structures from various viewpoints [61]. 4) Xia et al. imposed a diagonal constraint on the consensus representation generated by multiple GCN autoencoders, endowing the self-supervised clustering scheme with better clustering capability [73]. They utilize the produced clustering labels to supervise the self-expressive coefficient matrix \(\mathbf{C}\), specifically,

\[\min_{\mathbf{C}}\sum_{i,j=0}^{n}\left|\mathbf{c}_{ij}\right|\frac{\left\|\widehat{\mathbf{l}}_{i}-\widehat{\mathbf{l}}_{j}\right\|_{2}^{2}}{2},\tag{2}\]

where \(\widehat{\mathbf{l}}_{i},\widehat{\mathbf{l}}_{j}\in\widehat{\mathbf{L}}\) represent the label vectors corresponding to the \(i\)-th and \(j\)-th nodes, respectively.
5) In the latest work, DMVCJ [15] utilizes latent graphs to promote the performance of deep embedded MVC models from two aspects: the learned global weights act as self-supervised signals and also mitigate the noise problem.

As contrastive learning continues to evolve, several methods [13, 14, 15, 75] have emerged for learning node- and graph-level representations through multi-view contrastive graph clustering. Hassani et al. introduced a self-supervised method for learning node- and graph-level representations by contrasting structural views of graphs [14]. However, they overlooked the fact that the original graph data may contain noise or be incomplete, rendering their method less directly applicable. To address this challenge, two subsequent works were proposed. First, Pan et al. employ graph filtering to remove undesirable high-frequency noise while preserving the essential geometric features of the graph; this yields a smoother node representation, which is then used to learn a consensus graph regularized by a graph contrastive loss [75]. Second, SGCMC [13] utilizes clustering labels to guide the learning of latent representations and coefficient matrices, which are subsequently employed for node clustering. SGCMC constructs a new view descriptor for graph-structured data by mapping the original node contents to a complex space using the Euler transform, which not only suppresses outliers but also unveils the nonlinear patterns within the embedded data.

Figure 4: General procedure of graph-based multi-view clustering representation learning. \(V\) is the number of views.

Beyond sample noise, the problem of missing samples is even more practical. ACTIVE [74] is designed with both intra-view graph contrastive learning and cross-view graph consistency learning to maximize the mutual information across different views within a cluster. Since graphs are a discrete data structure that exhibits tight correlations in common graph-learning tasks, designing graph contrastive learning algorithms tailored to these properties, and understanding how contrastive learning can most effectively enhance graph and node representations, remain active areas of exploration.

### Subspace-based Representation

Multi-view subspace clustering methods usually either learn a shared, unified subspace representation from multiple view-specific subspaces or discover a latent space for high-dimensional multi-view data to reduce the dimensionality, after which subspace learning is conducted. To illustrate, the general process of multi-view subspace representation learning is shown in Fig. 5, where the subspace representation itself usually serves as the self-supervised signal. The objective of multi-view subspace clustering can be written as

\[\begin{split}\min_{\mathbf{z}^{(v)}}\sum_{v=1}^{m}\left\|\mathbf{x}^{(v)}-\mathbf{x}^{(v)}\mathbf{z}^{(v)}\right\|_{F}^{2}+\lambda\Omega\left(\mathbf{z}^{(v)}\right)+\gamma\Psi\left(\mathbf{z},\mathbf{z}^{(v)}\right)\\ \text{s.t.}\quad\mathbf{z}^{(v)}\geqslant 0,\ \mathbf{z}^{(v)}\mathbf{1}=1,\end{split}\tag{3}\]

where \(\mathbf{x}^{(v)}\) denotes the data of the \(v\)-th view, \(\mathbf{z}^{(v)}\) denotes the view-specific subspace representation, and \(\mathbf{z}\) is the learned common representation. \(\Omega(\cdot)\) stands for a regularization term on \(\mathbf{z}^{(v)}\).
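As a concrete instance of the self-expressiveness term above, the following sketch computes a closed-form \(\mathbf{z}^{(v)}\) per view under a Frobenius-norm regularizer, with the nonnegativity and row-sum constraints dropped for brevity (an illustrative simplification, not a published solver):

```python
import numpy as np

def self_expressive(X, lam=0.1):
    """Closed-form Z minimizing ||X - X Z||_F^2 + lam ||Z||_F^2,
    i.e., Z = (X^T X + lam I)^(-1) X^T X; X is d x N, one sample per column."""
    G = X.T @ X
    return np.linalg.solve(G + lam * np.eye(X.shape[1]), G)

# One Z^(v) per view; a naive consensus z is their average. Spectral
# clustering would then run on the affinity |Z| + |Z|^T.
views = [np.random.randn(20, 50), np.random.randn(30, 50)]
Z_consensus = np.mean([self_expressive(X) for X in views], axis=0)
```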
To discover the latent space, another objective function can be formulated as

\[\begin{split}\min_{\mathbf{z}}\sum_{v=1}^{m}\mathcal{F}\left(\mathbf{x}^{(v)},\mathbf{H}\right)+\lambda\|\mathbf{H}-\mathbf{H}\mathbf{z}\|_{F}^{2},\\ \text{s.t.}\quad\mathbf{z}\geqslant 0,\ \mathbf{z}\mathbf{1}=1,\end{split}\tag{4}\]

where \(\mathbf{H}\) denotes the latent space learned from the multiple views; subspace learning is then performed on the basis of the consensus \(\mathbf{H}\).

In recent times, deep learning-based methods for multi-view subspace clustering have emerged in growing numbers, aiming to enhance the learned representation of each view and to uncover common latent subspaces [8, 19, 20, 65, 76, 77]. Drawing inspiration from [25], Abavisani et al. introduced a self-expressive layer designed to enforce the self-expressiveness property, contributing to advancements in subspace reconstruction [78]. Following this, Cui et al. proposed SG-DMSC [8], which introduced a novel loss term called spectral supervision; this addition simplifies the consensus clustering process and improves clustering performance. However, prior work often treated spectral clustering and affinity learning as separate entities. Sun et al. therefore introduced S2DMVSC [19], a framework that seamlessly integrates spectral clustering and affinity learning in a deep learning context, leveraging clustering results to guide latent-representation learning for each view and common latent-subspace learning across views. DASIMSC [20] proposed a dual-aligned self-supervised incomplete multi-view subspace clustering network, which maintains semantic consistency between the inherent local structure within a view and the incomplete view. In addition, there is work that combines subspace and contrastive learning: the SCMC method [76] utilizes view-specific autoencoders to map raw multi-view data into compact features that capture their nonlinear structure, after which subspace learning unifies the multi-view data into a shared semantic space. As research in this field deepens, the information-bottleneck problem has become increasingly prominent, and addressing it is crucial for performance enhancement. SIB-MSC [65], the pioneering work exploring information bottlenecks in multi-view subspace clustering, learns minimal sufficient latent representations for each view guided by a self-supervised information-bottleneck principle, helping to uncover the common information shared across views. Recently, unified latent-space representations have been fed into off-the-shelf deep clustering models to produce the final clustering results. Owing to the powerful representation capability of deep learning, multi-view subspace clustering has achieved good performance with deep multi-view subspace clustering networks, self-supervised multi-view deep subspace clustering networks, generalized latent multi-view subspace clustering, and other methods.

### Matrix-based Representation

A fundamental assumption in multi-view data analysis is the existence of a consistent label distribution across different views, often referred to as multi-view semantic consistency. When employing matrix factorization for multi-view clustering, the primary goal is to decompose the multi-view data into consensus representations shared by all views. This decomposition aims to generate self-supervised signals capable of guiding and training the subsequent learning stages.
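As a toy illustration of the hierarchical factorization formalized in Eq. (5) below, the following alternating least-squares sketch (layer sizes and iteration count are arbitrary assumptions, and the semi-nonnegativity constraints of semi-NMF are ignored) decomposes a data matrix into two mapping layers and a final common representation:

```python
import numpy as np

def deep_mf(X, dims=(64, 32), iters=50):
    """Alternating updates for X ~= F1 F2 R, each step solving an exact
    least-squares subproblem with the other two factors held fixed."""
    rng = np.random.default_rng(0)
    d, n = X.shape
    F1 = rng.normal(size=(d, dims[0]))
    F2 = rng.normal(size=(dims[0], dims[1]))
    R = rng.normal(size=(dims[1], n))
    for _ in range(iters):
        R = np.linalg.pinv(F1 @ F2) @ X
        F2 = np.linalg.pinv(F1) @ X @ np.linalg.pinv(R)
        F1 = X @ np.linalg.pinv(F2 @ R)
    return F1, F2, R  # R plays the role of the common latent representation

F1, F2, R = deep_mf(np.random.randn(128, 300))
```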
The hierarchical decomposition of the dataset \(X\) by a deep model can be expressed as

\[X\approx F_{1}F_{2}\cdots F_{m}R_{m},\tag{5}\]

where \(F_{1},F_{2},\dots,F_{m}\) denote a series of mapping matrices, \(m\) denotes the total number of layers, and \(R_{m}\) denotes the final common latent representation. A generic representation learning process for matrix factorization-based decomposition is shown in Fig. 6.

Figure 5: General procedure of subspace-based multi-view clustering representation learning.

Figure 6: General procedure of matrix factorization-based multi-view clustering representation learning.

Many studies [16, 17, 26, 79] focusing on multi-view matrix factorization have yielded impressive results. Within this framework, less important factors are systematically broken down layer by layer, ultimately generating an effective consensus representation in the final layer of the MVC process, which has significantly enhanced clustering performance. Zhao et al. [16] introduced a deep matrix factorization framework tailored for MVC, employing semi-nonnegative matrix factorization to hierarchically learn common feature representations with greater consistency across views; this ensures that the consensus representation retains most of the shared structural information across multiple graphs. Notably, this was the pioneering attempt to apply semi-nonnegative matrix factorization to self-supervised MVC. The SMDMF method [17] automatically assigns suitable weights to each view for information fusion, enabling the creation of a shared matrix representation without introducing additional hyperparameters. Recently, Wei et al. introduced DMClusts [79], a method designed to identify multiple clusterings within multi-view data. DMClusts progressively decomposes the multi-view data matrix into a layer-by-layer representation subspace, generating one clustering at each layer; it couples deep matrix factorization with deep learning techniques and introduces a novel metric, balanced diversity, to uncover multiple distinct, high-quality clusterings. In summary, while matrix factorization is a well-established machine learning technique, its integration with deep learning has significantly enhanced algorithmic efficiency. Nevertheless, owing to the complexity of algorithm design and the computational resources required, its application in self-supervised MVC remains somewhat limited compared to other representation learning methods.

### Pseudo label-based Representation

Researchers have made a significant discovery indicating that learning a unified latent representation of multi-view data through pseudo-labeling can greatly enhance clustering performance [80, 81, 82]. Pseudo-label constraints help retain the view-specific characteristics of multi-view data while learning a unified latent representation, simultaneously preserving intra-view similarities and inter-view relationships. As illustrated in Fig. 7, the typical process extracts latent representations with view-specific encoders and then combines the latent representations from all views to generate pseudo-labels through the K-means algorithm.
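A minimal sketch of this pipeline (the per-view latents below are random stand-ins for the outputs of trained view-specific encoders, and the cluster count is an arbitrary assumption):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical latent representations produced by two view-specific encoders.
z1 = np.random.randn(500, 16)
z2 = np.random.randn(500, 16)

# Concatenate latents across views and run K-means; the resulting pseudo-labels
# then act as the self-supervised signal for subsequent training stages.
z_global = np.concatenate([z1, z2], axis=1)
pseudo_labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(z_global)
```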
Generating accurate pseudo-labels is of paramount importance, given the high confidence placed in them. SG-DMSC [8], for instance, utilizes spectral clustering to generate pseudo-labels for a self-guided multi-view encoder fusion layer, thereby exploring clustering-friendly features and obtaining improved latent representations; the authors also developed a theoretically derived alternating iterative optimization algorithm to refine the pseudo-labels effectively. To harness the learned pseudo-labels to their fullest potential, Kheirandishfard et al. [25] adopted a self-supervised strategy to construct the objective function. Furthermore, methods that use pseudo-labeling as a self-supervised signal often combine it with other representation learning techniques. For example, L-MSC [83] and PLCMF [26] integrate matrix factorization to enhance the consistency between the affinity matrix and the learned assignment matrix; both are designed with self-iterating modules that impose pseudo-label constraints on top. However, some of the aforementioned work overlooks the issue of highly ambiguous clustering structures. SDMVC [23], on the other hand, exploits complementary information to construct global features, which leads to more accurate pseudo-labels; these labels are then employed to learn more discriminative features and achieve consistent predictions across multiple views. Overall, pseudo-label-guided multi-view clustering methods have demonstrated excellent performance [8, 25, 77, 84]. This line of work addresses the scalability challenges posed by existing methods on large-scale datasets and ensures a more integrated and correlated learning process across the two stages.

### Complementary information-based Representation

As illustrated in Fig. 8, different views often contain complementary information. To describe data objects more comprehensively and accurately, it becomes necessary to leverage this complementary information for enhanced clustering across multiple views, thereby providing deeper supervisory signals [7]. The complementary information present in multiple views can be harnessed to improve the performance of multi-view clustering: by combining information from multiple views, a more comprehensive representation of the target object can be obtained. In essence, multi-view clustering integrates the acquired representation features by combining view-specific deep encoders and graph-embedding strategies within a unified framework. This captures both the high-level features and the local structures of each view, effectively amalgamating the strengths of each view while mitigating their respective shortcomings. SG-DMSC [8] introduces a view-fusion layer to exploit complementarity across multiple views, but its integration is relatively straightforward and lacks depth in information comprehensiveness and structural representation. CDIMC-net [9], on the other hand, incorporates the high-level features and local structures of each view by combining view-specific deep encoders and graph embedding within its framework, resulting in a more effective fused representation. Furthermore, the concept of complementarity has been extended to address data deficiencies in methods like GP-MVC [10] and DIMVC [7]. GP-MVC [10] implements a weighted adaptive fusion to leverage complementary information among different views, with the learned common representation aiding in data imputation.
These methods either mine complementarity by fusing multiple similarity matrices or employ fusion layers. While fusion layers can be effective, some views may negatively impact the fusion process due to their inherently low quality or inaccurate estimations [9, 10]. DIMVC [7], however, takes a different approach, implementing a high-dimensional mapping that transforms linearly separable clustering information into complementary information. This information is used as high-confidence supervision to ensure consistent clustering assignments across all views, even for incomplete data. By concatenating the embedded features of all views into global features, DIMVC overcomes the negative impact of the unclear clustering structures of certain views. In summary, existing multi-view clustering methods typically explore the complementarity of multi-view data through a fusion process [7, 8, 9, 10, 11].

Fig. 8: General procedure of complementary information-based multi-view clustering representation learning.

Fig. 7: General procedure of pseudo label-based multi-view clustering representation learning.

## 4 Self-supervised Learning Method

Generative and contrastive methods are two of the most crucial techniques in self-supervised learning and are extensively applied in the field of multi-view clustering, where they have enabled significant advances. Generative methods aim to grasp the underlying data distribution and employ generative models to represent the data. Contrastive methods, on the other hand, directly optimize an objective function involving pairwise similarities so as to maximize the average similarity within clusters and minimize the average similarity between clusters. More specifically, self-supervised MVC leverages the input data itself as supervision, effectively extracting transformation and relational information from the data across various perspectives. In this section, we categorize self-supervised MVC models into four groups: generative, contrastive, generative-contrastive, and others, with specific subcategories within each.

### Generative Methods

In the context of multi-view clustering, self-supervised generative methods utilize the inherent data structure across multiple views to learn representations that improve clustering performance without relying on external labels. Three types of generative models are common in MVC: 1) the autoencoder (AE), used to directly synthesize decoded data; 2) the GAN, which employs a competitive process to generate missing data while obtaining backpropagation signals to refine the generation; and 3) the VAE, used to learn interpretable representations. These generative processes are typically embedded within the data-reconstruction stage of multi-view learning, making autoencoders an essential component. In the following subsections, we describe multi-view learning works for clustering tasks in the form of "AE+X," where X denotes the generation method.

#### 4.1.1 Autoencoder

An autoencoder consists of two main components: an encoder and a decoder [85], as shown in Fig. 9. The encoder network captures the most salient features of the high-dimensional data, and the decoder network aims to recover the data from the encoded features; accordingly, most deep MVC methods are extensions of the autoencoder.
Specifically, for a sample \(\mathbf{x}\), the activity of the intermediate hidden layer of the autoencoder is the encoding of \(\mathbf{x}\); mathematically,

\[\mathbf{z}=f\left(\mathbf{w}^{(1)}\mathbf{x}+\mathbf{b}^{(1)}\right).\tag{6}\]

The output of the autoencoder is the reconstructed data:

\[\hat{\mathbf{x}}=g\left(\mathbf{w}^{(2)}\mathbf{z}+\mathbf{b}^{(2)}\right),\tag{7}\]

where \(\mathbf{w}^{(1)}\) and \(\mathbf{b}^{(1)}\) are the parameters of the encoder \(f\), and \(\mathbf{w}^{(2)}\) and \(\mathbf{b}^{(2)}\) are the parameters of the decoder \(g\); these parameters are obtained by gradient-descent training. \(\hat{\mathbf{x}}\) is the reconstruction produced when \(\mathbf{x}\) is fed to the autoencoder. The autoencoder not only projects multi-view data into a latent space but can also serve as a generator to produce missing views and more. The AE was initially introduced in [85] for pre-training artificial neural networks; subsequently, several AE-based methods were introduced into self-supervised MVC to enhance the reconstruction of inputs from corrupted data [25, 28, 33, 34, 59, 62, 78, 86, 87]. Building on these developments, Abavisani et al. introduced the AE into deep multi-view subspace clustering for the first time in [78]. Beyond employing individual autoencoders, Xu et al. implemented a collaborative training scheme involving multiple AE networks to effectively extract complementary and consistent information from each view, enriching the latent representation of each view with comprehensive information [86]. Compared to the method of [86], Yang et al. further incorporated heterogeneous graph learning, fusing the latent representations from different views with adaptive weights [33]. Cui et al. utilized spectral clustering to generate pseudo-labels, serving as self-guidance for the multi-view encoder fusion layer [59]. However, this method overlooks the diversity of the representations the autoencoder generates for each view. To address this concern, Zhu et al. introduced a multi-view deep subspace clustering network (MvDSCN) [34], which learns multi-view self-representations in an end-to-end manner by integrating convolutional AEs and self-representation. MvDSCN comprises two sub-networks: the diversity network (Dnet) and the universality network (Unet). The latent space is constructed using a deep convolutional AE, with fully connected layers developing a self-representation matrix within the latent space; the deep convolutional AE takes hand-crafted features or raw data as input and learns a latent space for each view. Dnet learns a view-specific self-representation matrix, while Unet learns a self-representation matrix shared by all views, and alignment between each view's self-representation matrix and the common one is achieved through a universality regularization. The AE reconstruction loss of MvDSCN is expressed as

\[\min\ \left\|\mathbf{x}^{(1)}-\hat{\mathbf{x}}^{(1)}\right\|^{2}+\left\|\mathbf{x}^{(2)}-\hat{\mathbf{x}}^{(2)}\right\|^{2}+\cdots+\left\|\mathbf{x}^{(V)}-\hat{\mathbf{x}}^{(V)}\right\|^{2},\tag{8}\]

where \(\mathbf{x}^{(1)},\dots,\mathbf{x}^{(V)}\) represent the inputs of the multiple views and \(V\) denotes the number of views.
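A minimal PyTorch sketch of Eqs. (6)-(8) for a bi-view batch (the single-layer encoders, layer widths, and toy inputs are illustrative assumptions, not MvDSCN itself):

```python
import torch
import torch.nn as nn

class ViewAE(nn.Module):
    """One view's autoencoder: z = f(x) as in Eq. (6), x_hat = g(z) as in Eq. (7)."""
    def __init__(self, d_in, d_z=32):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(d_in, d_z), nn.ReLU())  # encoder
        self.g = nn.Linear(d_z, d_in)                            # decoder

    def forward(self, x):
        z = self.f(x)
        return z, self.g(z)

views = [torch.randn(64, 784), torch.randn(64, 256)]   # toy bi-view batch
aes = nn.ModuleList([ViewAE(x.shape[1]) for x in views])

# Summed per-view reconstruction loss, as in Eq. (8).
loss = sum(((x - ae(x)[1]) ** 2).sum(dim=1).mean() for x, ae in zip(views, aes))
loss.backward()
```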
Recently, AEs have also gained significant attention in addressing unaligned MVC challenges. Huang et al. introduced PVC [62], which learns a specific latent space for each view using autoencoders that minimize reconstruction error; this notably enhances clustering performance when dealing with partially aligned data. These methods, however, do not address missing data. In contrast, Lin et al. proposed COMPLETER [28], which integrates representation learning and data recovery within a unified framework from an information-theoretic perspective. COMPLETER includes two training modules: view-specific autoencoders and cross-view prediction networks. For each view, COMPLETER utilizes an autoencoder to extract the latent representation \(\mathbf{z}^{(v)}\) by minimizing the reconstruction loss \(\mathcal{L}_{rec}\); it is worth noting that the AE structure is instrumental in preventing trivial solutions. In summary, autoencoders offer several advantages, including the ability to learn shared low-dimensional representations from multiple views, to extract latent data features, and to generate new views that enhance data richness and completeness. However, they are susceptible to overfitting and, compared to other generative methods, may face challenges in capturing complex relationships between views, which can lead to suboptimal accuracy in the generated representations.

Figure 9: Self-supervised multi-view clustering based on autoencoders. \(f\) and \(g\) denote the encoder and decoder, respectively; \(\mathbf{x}\) is the data of a particular view and \(\hat{\mathbf{x}}\) is the decoded representation. \(\hat{\mathbf{x}}\) and \(\mathbf{x}\) must be as similar as possible, so the loss \(\mathcal{L}_{rec}=\left\|\mathbf{x}-\hat{\mathbf{x}}\right\|_{2}^{2}\) is minimized. Finally, clustering is accomplished based on the latent representation \(\mathbf{z}\) with self-supervised signals.

#### 4.1.2 Autoencoder + GAN

The GAN is a fundamental generative model. Its core idea lies in a Generator, which learns the characteristics of the data distribution from random noise, and a Discriminator, which distinguishes real data from generated data, as illustrated in Fig. 10; high-quality data generation is achieved through this adversarial process. Similar in spirit to the autoencoder but a more innovative extension of it, the GAN holds a crucial role in the field of data generation. The classic GAN loss is the minimax objective

\[\min_{G}\max_{D}\ \mathbb{E}_{\mathbf{x}\sim p_{\text{data}}}[\log D(\mathbf{x})]+\mathbb{E}_{\mathbf{z}\sim p_{\mathbf{z}}}[\log(1-D(G(\mathbf{z})))].\]

Recent research demonstrates the widespread use of GANs and their substantial impact on clustering performance in self-supervised MVC [7, 87, 88, 89, 90, 91]. In the context of MVC, two common types of GAN utilization have emerged: 1) employing generative adversarial networks to generate missing data [10, 67, 92, 93]; 2) harnessing GANs to capture the data distribution and unlock latent spaces through adversarial multi-view clustering networks [87, 94]. Integrating the GAN into the autoencoding process not only maximizes the utility of the new features extracted from partial data but also enhances the model's robustness in scenarios with high missing-data rates. GANs have made substantial advancements in data generation, and their application in partial MVC has been extensively investigated [91, 95]. Wang et al.
introduced a novel consistent Generative Adversarial Network for partial multi-view clustering [93]. This method is designed to learn a shared low-dimensional representation and employs a combination of GAN and deep embedded clustering layers to capture the common embedding structure of partial multi-view data. It serves a dual purpose: generating missing view data and enhancing the capture of common structures for clustering. In contrast to other methods, it uses a common representation encoded from each view to generate the missing data of the corresponding view through an adversarial network, employing both an encoder and a clustering network in the process. The design is intuitive and meaningful, as encoding the common representation and generating the missing data within one model mutually benefit each other. This consistent GAN not only captures a more robust clustering structure but also infers missing views effectively; moreover, the model fully exploits the complementary information in multi-view data, distinguishing it from GAN models that generate data from pure random noise. Building upon this line of research, an increasing number of MVC studies have integrated GANs for multi-view data generation. For instance, AIMC [67] combines reconstruction and GAN techniques to infer missing data. Additionally, Wang et al. introduced GP-MVC [92], which utilizes a GAN model to learn local view representations, capture shared clustering structures, and address missing-data issues. In GP-MVC, a multi-view encoder network encodes latent representations shared among multiple views, view-specific generative adversarial networks predict the missing view data conditioned on the latent representations of the other views, and adversarial training is used to explore information consistent across all views. In this setup, the generator of GP-MVC aims to complete the missing data, while the discriminator's role is to differentiate between false and true data for each view. Beyond generating missing data, substantial research in self-supervised MVC has used GANs to capture underlying data distributions and features. Li et al. introduced the DAMC network [94], which employs a GAN as a regularizer to guide the encoder's training; the encoder captures the data distribution of each individual view and subsequently reveals the common latent space. However, DAMC does not address the challenge of preserving low-dimensional information embeddings in multi-view networks. In response, Sun et al. proposed a novel GAN framework for multi-view network embedding, named MEGAN [87], which aims to retain the information of each individual network view while considering the consistency and complementarity between different views. MEGAN tackles the key challenge of modeling not only the connections between different views but also the intricate associations among them, learning low-dimensional embeddings that are typically nonlinear while preserving the information of the given multi-view network. This yields significant performance enhancements in the realm of self-supervised MVC.
The generator models multi-view connectivity to produce synthetic samples capable of deceiving the discriminator, which in turn distinguishes true pairs of nodes from counterfeit pairs. Through the adversarial competition between generator and discriminator, a GAN can acquire the latent distributional features of the data and generate missing data from random noise. While the GAN excels at generating authentic samples, its training process is intricate, demanding a delicate balance between the performance of the generator and that of the discriminator. Compared with alternative generative methods, the GAN places a strong emphasis on data generation and offers distinct advantages in data learning for self-supervised MVC.

#### 4.1.3 Autoencoder + VAE

The fusion of variational inference and autoencoder techniques gave rise to the Variational Autoencoder (VAE) [66]. In contrast to the GAN, the VAE entails concurrently learning both the generator and an inference network, known as the encoder. Its primary objective is to map data into a latent space and subsequently generate data samples from this latent space, as depicted in Fig. 11. The objective function of the VAE (the evidence lower bound) can be expressed as

\[\mathcal{L}\left(\theta,\phi;\mathbf{x}_{i}\right)=\mathbb{E}_{q_{\phi}(\mathbf{z}\mid\mathbf{x}_{i})}\left[\log p_{\theta}\left(\mathbf{x}_{i}\mid\mathbf{z}\right)\right]-D_{\mathrm{KL}}\left(q_{\phi}\left(\mathbf{z}\mid\mathbf{x}_{i}\right)\|\,p_{\theta}(\mathbf{z})\right),\tag{9}\]

where \(\mathbf{x}_{i}\) denotes the \(i\)-th sample of the input dataset, \(\mathbf{z}\) denotes the latent variable, \(p_{\theta}(\mathbf{z})\) denotes the prior distribution, \(q_{\phi}(\mathbf{z}\mid\mathbf{x}_{i})\) denotes the posterior distribution of the latent variable produced by the encoder, and \(p_{\theta}(\mathbf{x}_{i}\mid\mathbf{z})\) denotes the generative model. Given the outstanding advantages of the VAE in feature learning and data generation, extensive research progress has been made on VAE-based self-supervised multi-view clustering and multi-modal learning [66, 96, 97, 98, 99, 100, 101, 102, 103]. A pioneering self-supervised generative clustering method within the VAE framework was introduced by Jiang et al. [104]. However, existing methods often grapple with large-scale datasets and suboptimal sample reconstruction. In response, Yin et al. presented DMVCVAE [101], a novel multi-view clustering method that learns a shared generative latent representation following a mixture of Gaussians. In a similar vein, Xu et al. proposed Multi-VAE [57], capable of acquiring disentangled and interpretable visual representations and thereby addressing large-scale data clustering. Unlike existing multi-view clustering methods, they introduce a view-common variable \(\mathbf{y}\) and multiple view-peculiar variables \(\left\{\mathbf{z}^{(1)},\mathbf{z}^{(2)},\ldots,\mathbf{z}^{(V)}\right\}\) within a multiple-VAE architecture. The model can disentangle the cluster representation common to all views from the peculiar visual representation of each view; in this way, the interference of the superfluous information of the multiple views is reduced when mining their complementary information for clustering.

Figure 10: GAN-based self-supervised multi-view clustering. \(D(\mathbf{x})\) denotes the discriminator and \(G(\mathbf{z})\) denotes the generator. \(\mathbf{x}\) is the view-specific data, \(\hat{\mathbf{x}}\) is the generated representation, and \(\mathbf{x}\) and \(\hat{\mathbf{x}}\) must be as similar as possible. Finally, clustering is completed based on the latent representation \(\mathbf{z}\) with self-supervised signals.
The generative model of Multi-VAE can be expressed as follows:

\[\begin{split}p\left(\mathbf{x}^{(v)},\mathbf{z}^{(v)},\mathbf{y}\right)&=p\left(\mathbf{x}^{(v)}\mid\mathbf{z}^{(v)},\mathbf{y}\right)p\left(\mathbf{z}^{(v)},\mathbf{y}\right)\\ &=p\left(\mathbf{x}^{(v)}\mid\mathbf{z}^{(v)},\mathbf{y}\right)p\left(\mathbf{z}^{(v)}\right)p(\mathbf{y}),\end{split}\tag{10}\]

where \(\mathbf{x}^{(v)}\) denotes the data of the \(v\)-th view and \(\mathbf{y}\) denotes the view-common variable; \(\mathbf{z}^{(v)}\) denotes the visual information peculiar to the \(v\)-th view. The posteriors of \(\mathbf{y}\) and \(\mathbf{z}^{(v)}\) are written as \(p\left(\mathbf{y}\mid\mathbf{x}^{(v)}\right)\) and \(p\left(\mathbf{z}^{(v)}\mid\mathbf{x}^{(v)}\right)\).

The VAE has not only made significant strides in self-supervised MVC but has also yielded impressive results in multi-modal representation learning. DMMVAE [99], for instance, introduces a generative variational model capable of learning both private and shared latent spaces for each modality, with each latent variable corresponding to a disentangled representational factor. The model enhances inter-modal compatibility by introducing a cross-VAE task aiming for cross-modal reconstruction through a shared latent space. In summary, the VAE maps data into a latent space through the collaborative learning of an encoder and a generator and can generate high-quality samples from the latent distribution. Its primary focus lies in capturing the structure of the data distribution, maximizing the data likelihood while minimizing the KL divergence of the latent distribution during training, all of which benefits feature learning and data generation. Differing from the GAN, the VAE places more emphasis on the continuity of the data distribution and on feature learning, which confers advantages in fields such as multi-view clustering and multi-modal learning.

### Contrastive Methods

To address the challenges posed by heterogeneity, noise, and dimensional inconsistency in multi-view clustering, integrating contrastive learning into multi-view clustering is recognized as an effective strategy. Its primary objective is to enhance clustering performance by bolstering the consistency among different views. Contrastive learning, as a paradigm, endeavors to acquire meaningful feature representations by assessing the distinctions between samples, maximizing the similarity of corresponding samples across different views while minimizing the similarity of dissimilar samples. This paradigm finds widespread use in self-supervised learning, exemplified by techniques such as CMC [105], MoCo [106], and SimCLR [107]. As illustrated in Fig. 12, contrastive methods can be categorized into two main groups: instance-instance and context-instance.

#### 4.2.1 Instance-Instance

Instance-instance contrastive learning centers on the features of individual samples and assesses the similarities and differences among samples to improve feature distinctiveness. Given the inherent inconsistency of the data representations of different views, i.e., heterogeneity, instance-instance contrastive learning is particularly valuable for capturing the correlation information between samples. It reinforces cross-view consistency by bringing similar samples closer in the feature space through pairs of samples from different views.
The instance-instance contrastive loss can be defined as

\[\mathcal{L}_{\text{Instance-Instance}}=-\log\frac{\exp\left(\operatorname{sim}\left(\mathbf{z}_{i},\mathbf{z}_{j}\right)/\tau\right)}{\sum_{k=1}^{N}\mathbb{1}_{[k\neq i]}\exp\left(\operatorname{sim}\left(\mathbf{z}_{i},\mathbf{z}_{k}\right)/\tau\right)},\tag{11}\]

where \(\mathbf{z}_{i}\) and \(\mathbf{z}_{j}\) are the feature representations of a positive pair of instances after feature extraction, \(\operatorname{sim}(\cdot,\cdot)\) denotes a similarity measure, \(N\) is the total number of samples, \(\tau\) is the temperature parameter controlling the concentration of the distribution, and \(\mathbb{1}_{[k\neq i]}\) is the indicator function, equal to 1 when \(k\neq i\) and 0 otherwise.

Numerous studies have underscored the significance of instance-instance contrastive learning in multi-view clustering [14, 22, 28, 68, 74, 75, 108, 109, 110], with InstDisc [111] serving as a prominent early example. Building upon InstDisc, CMC [105] considers multiple views of an image as positive samples while designating another image's view as the negative counterpart; it seeks to bring the multiple views of an image together in the embedding space while distancing them from other samples, selecting only one negative sample per positive sample. MoCo [106] further advances this idea with momentum contrast, significantly increasing the number of negative samples. However, MoCo's positive sampling is somewhat simplistic: each positive pair originates from the same source without any transformation or augmentation, making positives easy to distinguish. In contrast, SimCLR [107] underscores the importance of a challenging positive-sampling strategy, introducing up to ten forms of data augmentation; this augmentation strategy shares similarities with CMC, which leverages multiple views to enrich positive pairs. On a different note, BYOL [112] takes a more radical route and forgoes negative sampling altogether in self-supervised learning, achieving superior results.

The advancement of these contrastive learning techniques has spurred extensive research within multi-view clustering. Fang et al. introduced an inductive multi-view image clustering framework with self-supervised contrastive heterogeneous graph co-learning [21]. The framework incorporates two contrastive objectives, aiming to merge multiple views, achieve comprehensive local feature-propagation embedding, and maximize the mutual information between local feature propagation and influence-aware feature propagation.

Figure 11: VAE-based self-supervised multi-view clustering. The input sample \(\mathbf{x}\) is passed through the encoder \(q(\mathbf{z}\mid\mathbf{x})\) to obtain mean and standard-deviation vectors, from which the latent vector \(\mathbf{z}\) is sampled; the decoder \(p(\mathbf{x}\mid\mathbf{z})\) then produces the output \(\hat{\mathbf{x}}\), which must be as similar as possible to \(\mathbf{x}\). Finally, clustering is accomplished based on the latent variable \(\mathbf{z}\) with self-supervised signals.
Figure 12: Each encoder-decoder pair denotes the processing of one view; the red dashed box denotes the contrastive learning module, where the purple portion denotes instance-instance contrast and the orange portion denotes context-instance contrast. Red flags denote the self-supervised signals required for the clustering process.

Nonetheless, the method falls short of fully harnessing the latent complementary information inherent in different views. In a related vein, SCMC [76], proposed by Zhang et al., leverages view-specific autoencoders to transform raw multi-view data into compact features representing perceptually nonlinear structures. Recognizing the challenges posed by missing views and view inconsistency in real-world scenarios, Wang et al. integrated multi-view information to align data and acquire latent representations, unveiling a novel cross-view graph contrastive learning framework [113]. To address the noise issues of previous contrastive methods, Yang et al. identified the noise inherent in those methods and proposed using known correspondences to construct positive pairs and random sampling to construct negative pairs [63]. However, while previous studies addressed multiple objectives within the same feature space, they often overlooked the conflict between learning consistent common semantics and reconstructing the inconsistent, view-private information. Subsequently, Xu et al. introduced a new framework for multi-level feature learning in contrastive multi-view clustering [58]. The framework learns different levels of features, encompassing low-level features, high-level features, and semantic labels/features, from the original features without fusion, effectively achieving both the reconstruction and the consistency objectives in different feature spaces. Moreover, in the high-level feature space, the framework strengthens the consistency objective through contrastive learning, enabling the high-level features to concentrate on learning the common semantics shared among all views.
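For reference, a minimal PyTorch sketch of the instance-instance loss of Eq. (11) in its symmetric cross-view form (batch size, feature width, and temperature are arbitrary choices):

```python
import torch
import torch.nn.functional as F

def instance_contrastive_loss(z1, z2, tau=0.5):
    """Row i of z1 and row i of z2 form the positive pair; every other row
    in either view acts as a negative, matching the sum over k != i."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                    # 2N x d stacked views
    sim = z @ z.t() / tau                             # cosine similarity / tau
    n = z1.shape[0]
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)              # -log softmax at positives

loss = instance_contrastive_loss(torch.randn(128, 64), torch.randn(128, 64))
```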
#### 4.2.2 Context-Instance

Context-instance contrastive learning places the emphasis on understanding the attribution relationship between a sample's local features and its global context [21, 58, 63, 64, 76, 113, 114]. One such proposal constructs in-depth feature learning and clustering models for each view independently, effectively
leveraging the complementary information within multi-view data. CMVC [116] takes a different approach by normalizing the views to enhance the features learned from each view, ultimately improving clustering quality through optimized training strategies.

## 5 Future Work And Discussion

In this section, we delve into the advantages of self-supervised multi-view clustering while also addressing several open issues and suggesting potential directions for future research. Real-world data is diverse, presenting itself in various modalities and views. Self-supervised multi-view clustering endeavors to enhance clustering performance by amalgamating information from distinct data modalities. This approach offers several notable advantages:

* Self-supervised multi-view clustering excels at capturing richer data features and uncovering correlations among different views by integrating information from multiple data sources, thereby enhancing the quality of clustering results.
* In real-world scenarios with large amounts of missing data, self-supervised multi-view clustering can mitigate the impact of data incompleteness to a certain extent. Whether by inferring the missing information or by making judgments based on the available data, it can still yield meaningful clustering results in situations of data scarcity.
* Self-supervised multi-view clustering is adept at exploiting the consistency and alignment present among multi-view data, leading to more precise identification and analysis of the underlying patterns and structures within the data. This, in turn, enables a deeper understanding and provides valuable insights.

However, despite the remarkable achievements of self-supervised multi-view clustering in enhancing clustering performance, several challenges persist, leaving room for potential improvements:

* Current research frequently relies on experiments conducted with artificially constructed incomplete datasets, which to some extent limits its applicability and reliability in real-world scenarios. Conducting more empirical studies using real-world data can provide better validation of a method's effectiveness.
* Self-supervised multi-view clustering still grapples with the challenge of accurately handling incomplete data. Existing methods may be sensitive to missing-data situations, necessitating the development of more robust missing-data recovery and imputation strategies to bolster clustering resilience.
* Inherent data biases and noise-related challenges continue to impact the quality of clustering results, especially in the presence of noisy data. Future research can explore methods for addressing these issues more effectively to achieve more accurate and robust clustering outcomes.

In summary, self-supervised multi-view clustering holds significant promise for a wide range of applications in multi-modal data analysis. However, several technical and practical challenges must be addressed to further enhance its performance and applicability.

## 6 Conclusion

The self-supervised learning problem presents a significant challenge within the realm of MVC, and its investigation holds paramount importance for practical applications. To provide readers with a comprehensive grasp of self-supervised MVC, we acquaint them with the primary research materials within related fields.
This includes commonly employed self-supervised MVC datasets and related problems, offering insights from both image and video perspectives. Subsequently, this paper introduces a novel classification scheme for categorizing existing self-supervised MVC methods: representation learning and self-supervised learning. Representation learning plays an integral role within self-supervised MVC, underpinning the process of generating and utilizing self-supervised signals. We focus on the identification and categorization of self-supervised signals into five distinct categories based on their specific methodologies, providing detailed descriptions of each. Self-supervised learning methods encompass two distinct learning models that leverage the input data itself as supervision: generation-based and contrast-based. Generative methods strive to learn the underlying data distribution and represent the data using generative models. In contrast, contrastive methods directly optimize an objective function that involves pairwise similarity, aiming to maximize the average similarity within clusters while minimizing the average similarity between clusters. Finally, it is imperative to highlight several open and challenging problems, encouraging researchers to delve deeper into further research and make substantial progress in this domain.

## Acknowledgments

This work was supported by the National Science Foundation of China (61962045, 62062055, 61902382, 61972381), the Program for Young Talents of Science and Technology in Universities of Inner Mongolia Autonomous Region (NJYT23104), and the Science and Technology Planning Project of Inner Mongolia Autonomous Region (2023YFSH0066).
Multi-view clustering (MVC) has in recent years had a significant impact on cross-modal representation learning and data-driven decision-making. It achieves this by clustering samples into distinct groups using the consistency and complementary information among multiple views. However, as contrastive learning continues to advance in the field of computer vision, self-supervised learning has developed rapidly and come to play a dominant role in MVC methods as well. It guides the acquisition of supervisory information by designing proxy tasks over representations of image and video data. Despite the rapid development of self-supervised MVC, there is as yet no comprehensive survey analyzing and summarizing the current state of research progress. This paper therefore examines the reasons for and advantages of the emergence of self-supervised MVC, together with commonly used datasets, data problems, representation learning methods, and the internal connections and classification of self-supervised learning methods.
2309.06740
Fourier coefficient of parameterized quantum circuits and barren plateau problem
We show the relationship between the Fourier coefficients and the barren plateau problem emerging in parameterized quantum circuits. In particular, the sum of squares of the Fourier coefficients is exponentially suppressed in the number of qubits under the barren plateau condition. Through theory and numerical experiments, we show that this property leads to the vanishing of the probabilities and expectation values formed by parameterized quantum circuits. The traditional barren plateau analysis requires the variance of the gradient, whereas our idea does not explicitly need such a statistic. Therefore, it is not necessary to specify the kind of initial probability distribution.
Shun Okumura, Masayuki Ohzeki
2023-09-13T06:16:27
http://arxiv.org/abs/2309.06740v1
# Fourier coefficient of parameterized quantum circuits and barren plateau problem

###### Abstract

We show the relationship between the Fourier coefficients and the barren plateau problem emerging in parameterized quantum circuits. In particular, the sum of squares of the Fourier coefficients is exponentially suppressed in the number of qubits under the barren plateau condition. Through theory and numerical experiments, we show that this property leads to the vanishing of the probabilities and expectation values formed by parameterized quantum circuits. The traditional barren plateau analysis requires the variance of the gradient, whereas our idea does not explicitly need such a statistic. Therefore, it is not necessary to specify the kind of initial probability distribution.

_Introduction._ Quantum computers are anticipated in many fields because they can speed up calculations. For instance, they can exponentially speed up the prime factorization algorithm [1], an algorithm for solving linear systems of equations [2], and the full configuration interaction method for quantum chemical calculations [3]. Implementing these algorithms, however, requires quantum error correction and huge computational resources, both of which will take time to achieve. For these reasons, a class of quantum computers called noisy intermediate-scale quantum (NISQ) devices [4], which do not use quantum error correction and only use a small amount of computational resources, has attracted much attention. NISQ algorithms are often executed as a hybrid of a classical computer and a quantum computer due to the constraints on computational resources. The role of the quantum computer in the hybrid algorithm is to generate a parameterized quantum state. Measuring this quantum state yields a classical output that depends on the parameters. We often use this output to define the cost function of the problem we want to solve. On the classical computer, the optimal parameters are explored by calculating the gradient of the cost function. The variational quantum eigensolver (VQE) [5], the quantum approximate optimization algorithm (QAOA) [6], and quantum neural networks (QNNs) [7; 8] are typical examples of hybrid algorithms. These algorithms have been applied to many real-world problems, including finance [9; 10] and quantum chemistry [11; 12]. Parameterized quantum states are obtained by using parameterized quantum circuits (PQCs). There are many design strategies for PQCs, and choosing among them is a difficult question. Intuitively, an ansatz that can search Hilbert space widely seems preferable: since we manipulate parameterized quantum states through PQCs, we must be able to reach the quantum state we need in this space. However, if the search range is wide enough, a problem called the barren plateau (BP) problem arises, where the variance of the gradient of the cost function vanishes exponentially in the number of qubits [13; 14]. It causes the optimization on the classical computer to fail. Unfortunately, some research has shown that it is impossible to avoid this phenomenon, even with higher-order derivatives [15] or gradient-independent optimization algorithms [16]. Many methods to avoid the BP problem are still being proposed, and such efforts may be advanced by providing new tools. We focus on the Fourier series, a tool used in signal processing, because PQCs have a periodic structure. The Fourier series is often used to understand QNNs through their periodicity.
For example, universality [17], generalization [18], learning capability [19], and benign overfitting [20] are typical cases. Nevertheless, the relationship between the BP problem and the Fourier series has not been investigated. This paper investigates the relationship between the Fourier coefficients and the BP problem. Our results are: (i) the sum of squares of the Fourier coefficients decreases exponentially in the number of qubits under the conditions of the BP problem; (ii) if the BP problem occurs after initialization with a uniform distribution, it cannot be avoided by initializing with other distributions either. These results will provide new interpretations and tools for the BP problem. _Parameterized Quantum Circuits._ Let us consider a parameterized unitary gate \(V(\theta)\) (or unitary operator) defined as follows, \[V(\theta):=e^{-i\theta H}=I\cos\theta-iH\sin\theta, \tag{1}\] where \(I\) is the identity operator, \(H\) is a Hermitian operator (satisfying \(H^{2}=I\), so that the second equality holds), and \(\theta\) is a parameter. From this definition, we can confirm that it is periodic: \[V(\theta)=V(\theta+2\pi). \tag{2}\] We define the ansatz \(U(\mathbf{\theta})\) as a product of parameterized unitary gates: \[U(\mathbf{\theta}):=\prod_{i=1}^{D}V_{i}(\theta_{i})W_{i}, \tag{3}\] where \(V_{i}\) is the \(i\)-th parameterized unitary gate and \(W_{i}\) is the \(i\)-th parameter-independent unitary gate. Examples of \(W_{i}\) are the Hadamard gate and the CNOT gate. There are two types of output of PQCs containing the ansatz \(U(\mathbf{\theta})\): the expectation-value type and the probability type. For the expectation-value type, the output \(f(\mathbf{\theta})\) is calculated as follows, \[f(\mathbf{\theta})=\mathrm{Tr}\big[U(\mathbf{\theta})\rho U(\mathbf{\theta})^{\dagger}O\big], \tag{4}\] where \(\rho:=|0\rangle\!\langle 0|^{\otimes n}\) is the initial state of \(n\) qubits and \(O\) is a Pauli operator. This function has the properties \(|f(\mathbf{\theta})|\leq 1\) and \(f(\mathbf{\theta}+2\pi\mathbf{e_{i}})=f(\mathbf{\theta})\) (periodicity), where \(\mathbf{e_{i}}\) is the \(i\)-th unit vector. This type is used in VQE. If the expectation-value type is used in a QNN, a unitary gate \(E(\mathbf{x})\) is added to encode the data \(\mathbf{x}\): \[f(\mathbf{x},\mathbf{\theta})=\mathrm{Tr}\big[U(\mathbf{\theta})E(\mathbf{x})\rho E(\mathbf{x})^{\dagger}U(\mathbf{\theta})^{\dagger}O\big]. \tag{5}\] This embedding corresponds to a feature map. Next, we consider the probability type. We define the probability \(q_{\mathbf{b}}(\mathbf{\theta})\) of obtaining a bitstring \(\mathbf{b}\) as \[q_{\mathbf{b}}(\mathbf{\theta})=\mathrm{Tr}\big[U(\mathbf{\theta})\rho U(\mathbf{\theta})^{\dagger}P_{\mathbf{b}}\big], \tag{6}\] where \(P_{\mathbf{b}}:=|\mathbf{b}\rangle\!\langle\mathbf{b}|\) is a projection operator. This type is used in the quantum circuit Born machine (QCBM) [21] and in quantum kernel functions [22]. Finally, we introduce how the parameters are updated. Various optimizers, such as standard gradient descent, update the parameters, and in most cases we need to calculate the gradient. The parameter-shift rule [7; 23] is a good technique for computing gradients: \[\frac{\partial f(\mathbf{\theta})}{\partial\theta_{i}}=\frac{1}{2}\Big(f\Big(\mathbf{\theta}+\frac{\pi}{2}\mathbf{e_{i}}\Big)-f\Big(\mathbf{\theta}-\frac{\pi}{2}\mathbf{e_{i}}\Big)\Big). \tag{7}\] This method is more robust to errors than the finite-difference method.
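As a concrete illustration of the expectation-value output in Eq. (4) and the parameter-shift rule in Eq. (7), the following minimal sketch uses the PennyLane simulator (which the paper also uses for its experiments); the two-qubit circuit, the observable, and the parameter values are our own toy choices. Note that PennyLane's rotation gates follow the \(e^{-i\theta P/2}\) convention, for which the \(\pi/2\) shift in Eq. (7) is exact.

```python
import numpy as np
import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def f(theta):
    """Expectation-value-type PQC output, cf. Eq. (4), for a toy ansatz."""
    qml.RY(theta[0], wires=0)
    qml.CNOT(wires=[0, 1])       # parameter-independent gate W_i
    qml.RY(theta[1], wires=1)
    return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))

def parameter_shift_grad(theta):
    """Gradient of f via the parameter-shift rule, cf. Eq. (7)."""
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        shift = np.zeros_like(theta)
        shift[i] = np.pi / 2
        grad[i] = 0.5 * (f(theta + shift) - f(theta - shift))
    return grad

print(parameter_shift_grad(np.array([0.3, 1.2])))
```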
_Expressibility._ There are many choices for the ansatz \(U(\mathbf{\theta})\) [24]. To find the most desirable of these, it is necessary to quantify the search range in Hilbert space. For this purpose, "expressibility" has been proposed [25]. To define it, consider the set \(\mathbb{U}:=\{U(\mathbf{\theta})|\mathbf{\theta}\sim P\}\), where \(P\) is the distribution that determines the initial values. The expressibility \(\epsilon_{\mathbb{U}}^{(t)}(\rho)\) is defined as follows, \[\epsilon_{\mathbb{U}}^{(t)}(\rho):=\left\|\int_{V\in\mathrm{Haar}}d\mu(V)\,V^{\otimes t}\rho^{\otimes t}V^{\dagger\otimes t}-\int_{U\in\mathbb{U}}dU\,U^{\otimes t}\rho^{\otimes t}U^{\dagger\otimes t}\right\|_{1}, \tag{8}\] where \(d\mu(V)\) and \(dU\) are the volume elements corresponding to the Haar measure and the uniform distribution on \(\mathbb{U}\), respectively, and \(\left\|\cdot\right\|_{1}\) is the trace norm. Here \(\epsilon_{\mathbb{U}}^{(t)}(\rho)\) quantifies how close the \(t\)-th order moments of the distribution generated by \(\mathbb{U}\) are to those of the Haar distribution. If \(\epsilon_{\mathbb{U}}^{(t)}(\rho)=0\), then \(U(\mathbf{\theta})\) is called a unitary \(t\)-design [26]; in this case, the distribution of \(\mathbb{U}\) matches the Haar distribution up to \(t\)-th order moments. The closer it is to the Haar distribution, the more of Hilbert space the ansatz \(U(\mathbf{\theta})\) can access. _Barren Plateau Problem._ Forming a unitary 2-design, however, is not desirable: if the ansatz \(U(\mathbf{\theta})\) forms a unitary 2-design, the BP problem arises. This problem is defined as follows, \[\mathrm{Var}_{\mathbf{\theta}}\!\left[\frac{\partial f(\mathbf{\theta})}{\partial\theta_{i}}\right]\in\mathcal{O}\big(b^{-n}\big), \tag{9}\] where \(b\) is a positive constant. Many ansatzes are known to cause this phenomenon. To mitigate or avoid it, making the measurement local or changing the initialization strategy has been considered [27; 28]. Phenomena similar to the BP problem also arise, for example in quantum kernel functions [29]: if the embedding \(E(\mathbf{x})\) that encodes the data forms a 2-design, the kernel concentrates exponentially at a certain value. _Main results._ The functions defined by Eqs. (4) and (6) are periodic. Hence, we can use Fourier series expansions. The Fourier series expansion of Eq. (4) is as follows: \[f(\mathbf{\theta})=\sum_{|\mathbf{k}|\leq\infty}c_{\mathbf{k}}e^{-i\mathbf{k}\cdot\mathbf{\theta}}, \tag{10}\] where \(c_{\mathbf{k}}\) is a Fourier coefficient. Parseval's equality determines the sum of squares of the Fourier coefficients: \[\sum_{|\mathbf{k}|\leq\infty}|c_{\mathbf{k}}|^{2}=\frac{1}{(2\pi)^{d}}\int_{[0,2\pi]^{d}}|f(\mathbf{\theta})|^{2}d\mathbf{\theta}, \tag{11}\] where \(d\) is the dimension of the parameters \(\mathbf{\theta}\).
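Parseval's equality (11) can be checked numerically for a trivial single-parameter example. The sketch below is a toy of our own construction: a one-qubit \(R_{Y}\) rotation measured in \(Z\) gives \(f(\theta)=\cos\theta\), whose only nonzero coefficients are \(c_{\pm 1}=1/2\), so both sides of Eq. (11) equal \(1/2\).

```python
import numpy as np

N = 256
theta = 2 * np.pi * np.arange(N) / N
f = np.cos(theta)                    # <0| RY(θ)† Z RY(θ) |0> = cos(θ)

c = np.fft.fft(f) / N                # trigonometric Fourier coefficients c_k
lhs = np.sum(np.abs(c) ** 2)         # Σ_k |c_k|²
rhs = np.mean(np.abs(f) ** 2)        # grid version of (1/2π) ∫ |f|² dθ
print(lhs, rhs)                      # both ≈ 0.5, as Eq. (11) requires
```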
We now consider more general functions created by multiple ansatzes. If \(U_{1}(\mathbf{\theta}_{1})\), \(U_{2}(\mathbf{\theta}_{2})\), ..., \(U_{L}(\mathbf{\theta}_{L})\) are ansatzes, the output \(f(\mathbf{\theta}_{1},\mathbf{\theta}_{2},...,\mathbf{\theta}_{L})\) is calculated as follows, \[f(\mathbf{\theta}_{1},\mathbf{\theta}_{2},...,\mathbf{\theta}_{L})=\mathrm{Tr}\!\left[\prod_{l=1}^{L}U_{l}(\mathbf{\theta}_{l})\rho\!\left(\prod_{l=1}^{L}U_{l}(\mathbf{\theta}_{l})\right)^{\dagger}O\right]\!. \tag{12}\] For convenience, we transform Eq. (12) as follows: \[f(\mathbf{\theta}_{1},\mathbf{\theta}_{2},...,\mathbf{\theta}_{L})=\mathrm{Tr}\big[U_{i}(\mathbf{\theta}_{i})\rho(\mathbf{\theta}_{1},...,\mathbf{\theta}_{i-1})U_{i}(\mathbf{\theta}_{i})^{\dagger}O(\mathbf{\theta}_{i+1},...,\mathbf{\theta}_{L})\big], \tag{13}\] where we define \[\rho(\mathbf{\theta}_{1},...,\mathbf{\theta}_{i-1}):=\prod_{l=1}^{i-1}U_{l}(\mathbf{\theta}_{l})\rho\!\left(\prod_{l=1}^{i-1}U_{l}(\mathbf{\theta}_{l})\right)^{\dagger}, \tag{14}\] \[O(\mathbf{\theta}_{i+1},...,\mathbf{\theta}_{L}):=\left(\prod_{l=i+1}^{L}U_{l}(\mathbf{\theta}_{l})\right)^{\dagger}O\prod_{l=i+1}^{L}U_{l}(\mathbf{\theta}_{l}). \tag{15}\] We can consider the Fourier series expansion with respect to a single ansatz \(U_{i}(\mathbf{\theta}_{i})\): \[f(\mathbf{\theta}_{1},\mathbf{\theta}_{2},...,\mathbf{\theta}_{L})=\sum_{|\mathbf{k}|\leq\infty}c_{\mathbf{k}}(\mathbf{\theta}_{1},...,\mathbf{\theta}_{i-1},\mathbf{\theta}_{i+1},...,\mathbf{\theta}_{L})e^{-i\mathbf{k}\cdot\mathbf{\theta}_{i}}. \tag{16}\] Then, Parseval's equality holds: \[\sum_{|\mathbf{k}|\leq\infty}|c_{\mathbf{k}}(\mathbf{\theta}_{1},...,\mathbf{\theta}_{i-1},\mathbf{\theta}_{i+1},...,\mathbf{\theta}_{L})|^{2}=\frac{1}{(2\pi)^{d_{i}}}\int_{[0,2\pi]^{d_{i}}}|f(\mathbf{\theta}_{1},\mathbf{\theta}_{2},...,\mathbf{\theta}_{L})|^{2}d\mathbf{\theta}_{i}, \tag{17}\] where \(d_{i}\) is the dimension of the parameters \(\mathbf{\theta}_{i}\). A similar discussion applies to the probability type, and the result remains valid if some parameters are replaced by data \(\mathbf{x}\). We can obtain the following relationship between Eq. (17) and the expressibility \(\epsilon_{\mathbb{U}}^{(2)}(\rho)\): \[\left|\sum_{|\mathbf{k}|\leq\infty}|c_{\mathbf{k}}(\mathbf{\theta}_{1},...,\mathbf{\theta}_{i-1},\mathbf{\theta}_{i+1},...,\mathbf{\theta}_{L})|^{2}-\frac{1}{2^{n}+1}\right|\leq\epsilon_{\mathbb{U}_{i}}^{(2)}(\rho(\mathbf{\theta}_{1},...,\mathbf{\theta}_{i-1})), \tag{18}\] where we define \(\mathbb{U}_{i}:=\left\{U_{i}(\mathbf{\theta}_{i})\,\middle|\,\mathbf{\theta}_{i}\in[-\pi,\pi]^{d_{i}}\right\}\). If \(U_{i}(\mathbf{\theta}_{i})\) is a unitary 2-design, then \(\epsilon_{\mathbb{U}_{i}}^{(2)}(\rho(\mathbf{\theta}_{1},...,\mathbf{\theta}_{i-1}))=0\), and we obtain the following equality: \[\sum_{|\mathbf{k}|\leq\infty}|c_{\mathbf{k}}(\mathbf{\theta}_{1},...,\mathbf{\theta}_{i-1},\mathbf{\theta}_{i+1},...,\mathbf{\theta}_{L})|^{2}=\frac{1}{2^{n}+1}. \tag{19}\] This implies that the values the Fourier coefficients can take are exponentially restricted in the number of qubits \(n\). Therefore, if \(n\gg 1\), then \(|c_{\mathbf{k}}|\to 0\) for all \(\mathbf{k}\). In this case, we reach \[f(\mathbf{\theta}_{1},\mathbf{\theta}_{2},...,\mathbf{\theta}_{L})\to 0. \tag{20}\] This result shows that the gradient also vanishes when the parameter-shift method is used. For the probability type, the results are slightly different. In this case, the following inequality holds: \[\left|\sum_{|\mathbf{k}|\leq\infty}|c_{\mathbf{k}}(\mathbf{\theta}_{1},...,\mathbf{\theta}_{i-1},\mathbf{\theta}_{i+1},...,\mathbf{\theta}_{L})|^{2}-\frac{1}{2^{n-1}(2^{n}+1)}\right|\leq\epsilon_{\mathbb{U}_{i}}^{(2)}(\rho(\mathbf{\theta}_{1},...,\mathbf{\theta}_{i-1})). \tag{21}\]
If \(U_{i}(\mathbf{\theta}_{i})\) forms a 2-design, the following equality holds: \[\sum_{|\mathbf{k}|\leq\infty}|c_{\mathbf{k}}(\mathbf{\theta}_{1},...,\mathbf{\theta}_{i-1},\mathbf{\theta}_{i+1},...,\mathbf{\theta}_{L})|^{2}=\frac{1}{2^{n-1}(2^{n}+1)}. \tag{22}\] The discussion then proceeds as before. These relations are obtained by regarding Parseval's equality as the second-order moment of a uniform distribution. Therefore, as long as the function is periodic, only the uniform distribution needs to be considered. Thus, if the BP problem occurs under initialization with the uniform distribution, the function and its gradient will still vanish when the initialization strategy is changed. _Numerical experiments._ We confirm our results through numerical experiments. For convenience, we use a one-variable QNN. Hardware-efficient embedding (HEE) encodes the data \(x\); HEE replaces the parameter part of the hardware-efficient ansatz (HEA) with data [11]. HEE and HEA are known to cause the BP problem as the number of layers increases. Therefore, we can confirm that Eq. (19) is valid by increasing the number of layers of the ansatz. We use \(L=5,10,15,20,25,30,35,40,45,50\) layers and \(n=2,4,6,8\) qubits. All qubits are measured in the \(Z\) basis. We use the PennyLane simulator in this numerical experiment [30]. The PQCs used in this numerical experiment are illustrated in Fig. 1 and Fig. 2.

Figure 1: Illustration of the \(L\)-layer hardware-efficient embedding for four qubits. When \(L\) is large, the feature map becomes more complex.

Figure 2: Illustration of the QNN's quantum circuit for four qubits. \(E^{L}(x)\) is the \(L\)-layer hardware-efficient embedding, and the number of parameters equals the number of qubits. The expected value of this circuit is \(f(x,\mathbf{\theta})=\mathrm{Tr}\big[U(\mathbf{\theta})E^{L}(x)\rho E^{L}(x)^{\dagger}U(\mathbf{\theta})^{\dagger}Z^{\otimes 4}\big]\).

If \(E(x)\) does not form a 2-design, Eq. (19) does not hold in general. Parameters are drawn from the uniform distribution on \([0,2\pi)\). The parameters are sampled 300 times to calculate the sum of squares of the respective Fourier coefficients. We then evaluate whether Eq. (19) holds using the mean and variance of the sampled results. The results of our numerical experiments are shown in Fig. 3. They show that for \(L\geq 15\) the variance is close enough to zero, and the sum of squares of the Fourier coefficients follows Eq. (19). This indicates that \(E^{L}(x)\) is close to Haar random in the neighborhood of \(L=15\). Furthermore, for \(L=15\), the coefficients obtained when increasing the number of qubits, \(n=2,4,6\), are shown in Fig. 4. The magnitude of the attainable coefficients decreases as the number of qubits increases. This phenomenon implies that, under Eq. (19), Eq. (20) holds.
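The concentration implied by Eq. (20) can be reproduced qualitatively with a few lines of simulation. The sketch below is our own rough cross-check, not the paper's exact setup: instead of the HEE/HEA circuits of Figs. 1 and 2 it uses PennyLane's built-in strongly entangling ansatz, uniformly sampled parameters, and a single-qubit \(Z\) observable, and simply watches the average \(|f(\mathbf{\theta})|\) shrink with the qubit count.

```python
import numpy as np
import pennylane as qml

def mean_abs_expectation(n, layers=15, samples=50, seed=0):
    """Average |f(θ)| over uniformly drawn parameters for a deep ansatz."""
    rng = np.random.default_rng(seed)
    dev = qml.device("default.qubit", wires=n)

    @qml.qnode(dev)
    def f(weights):
        qml.StronglyEntanglingLayers(weights, wires=range(n))
        return qml.expval(qml.PauliZ(0))

    shape = qml.StronglyEntanglingLayers.shape(n_layers=layers, n_wires=n)
    return np.mean([abs(f(rng.uniform(0.0, 2 * np.pi, shape)))
                    for _ in range(samples)])

for n in (2, 4, 6, 8):
    print(n, mean_abs_expectation(n))   # decays rapidly with n
```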
_Conclusion._ We have shown that there is a very close relationship between the Fourier series and the BP problem. In particular, the sum of squares of the Fourier coefficients becomes exponentially small in the number of qubits. This result offers one way of understanding the vanishing of functions and gradients. Since Parseval's equality can be regarded as the second moment of a uniform distribution, we do not need to consider the distribution of initial values; to obtain this result, only periodicity is required. The small number of assumptions suggests that it can be used in combination with a variety of theories. However, care should be taken when the parameters are sufficiently small: in that case the ansatz can be considered effectively absent, and if the ansatz does not exist, we cannot perform the Fourier series expansion for that ansatz as in our results. This knowledge could be used to avoid or mitigate the BP problem. In other cases, the BP problem is caused by entanglement [31], noise [32], or global measurements [27]. The relationship between these mechanisms and the Fourier series is unknown; studying these issues with this technique is left for future work. _Acknowledgement._ This work is supported by JSPS KAKENHI Grant No. 23H01432. Our study receives financial support from the MEXT-Quantum Leap Flagship Program Grant No. JPMXS0120352009, as well as the Public Private R&D Investment Strategic Expansion PrograM (PRISM) and the programs for Bridging the gap between R&D and the IDeal society (society 5.0) and Generating Economic and social value (BRIDGE) from the Cabinet Office.
We show the relationship between the Fourier coefficients and the barren plateau problem in parameterized quantum circuits. In particular, under the barren plateau condition, the sum of squares of the Fourier coefficients is exponentially suppressed in the number of qubits. Through theory and numerical experiments, we show that this property leads to the vanishing of the probabilities and expectation values formed by parameterized quantum circuits. The traditional barren plateau problem requires the variance of the gradient, whereas our idea does not explicitly require such a statistic; therefore, there is no need to specify the kind of initial probability distribution.
2309.05249
Evaluating Visual Odometry Methods for Autonomous Driving in Rain
The increasing demand for autonomous vehicles has created a need for robust navigation systems that can also operate effectively in adverse weather conditions. Visual odometry is a technique used in these navigation systems, enabling the estimation of vehicle position and motion using input from onboard cameras. However, visual odometry accuracy can be significantly impacted in challenging weather conditions, such as heavy rain, snow, or fog. In this paper, we evaluate a range of visual odometry methods, including our DROID-SLAM based heuristic approach. Specifically, these algorithms are tested on both clear and rainy weather urban driving data to evaluate their robustness. We compiled a dataset comprising a range of rainy weather conditions from different cities. This includes the Oxford Robotcar dataset from Oxford, the 4Seasons dataset from Munich and an internal dataset collected in Singapore. We evaluated different visual odometry algorithms for both monocular and stereo camera setups using the Absolute Trajectory Error (ATE). From the range of approaches evaluated, our findings suggest that the Depth and Flow for Visual Odometry (DF-VO) algorithm with the monocular setup performed the best for short-range distances (< 500m) and our proposed DROID-SLAM based heuristic approach for the stereo setup performed relatively well for long-term localization. Both VO algorithms suggested a need for a more robust sensor fusion based approach for localization in rain.
Yu Xiang Tan, Marcel Bartholomeus Prasetyo, Mohammad Alif Daffa, Deshpande Sunny Nitin, Malika Meghjani
2023-09-11T05:55:01
http://arxiv.org/abs/2309.05249v3
# Evaluating Visual Odometry Methods for Autonomous Driving in Rain

###### Abstract

The increasing demand for autonomous vehicles has created a need for robust navigation systems that can also operate effectively in adverse weather conditions. Visual odometry is a technique used in these navigation systems, enabling the estimation of vehicle position and motion using input from onboard cameras. However, visual odometry accuracy can be significantly impacted in challenging weather conditions, such as heavy rain, snow, or fog. In this paper, we evaluate a range of visual odometry methods, including our DROID-SLAM based heuristic approach. Specifically, these algorithms are tested on both clear and rainy weather urban driving data to evaluate their robustness. We compiled a dataset comprising a range of rainy weather conditions from different cities. This includes the Oxford Robotcar dataset from Oxford, the 4Seasons dataset from Munich and an internal dataset collected in Singapore. We evaluated different visual odometry algorithms for both monocular and stereo camera setups using the Absolute Trajectory Error (ATE). From the range of approaches evaluated, our findings suggest that the Depth and Flow for Visual Odometry (DF-VO) algorithm with the monocular setup performed the best for short-range distances (\(<500m\)) and our proposed DROID-SLAM based heuristic approach for the stereo setup performed relatively well for long-term localization. Both VO algorithms suggested a need for a more robust sensor fusion based approach for localization in rain.

## I Introduction

Visual Odometry (VO) is a cost-effective localization solution for autonomous urban driving. However, visual data can be easily compromised in adverse weather conditions such as rain, fog or snow. In rain, images are occluded by raindrops on the camera lenses, and rain streaks reduce the visibility of the background objects [1]. Lens flare also appears due to rain, which further reduces the visibility of the scene [2], as shown in Fig. 1. These adverse weather effects could negatively impact visual odometry algorithms designed and trained on clear weather conditions [3]. This calls for a robust localization algorithm to enable autonomous vehicles to operate in all-weather conditions. In this paper, we evaluate a range of VO algorithms, including our DROID-SLAM based heuristic approach [4], for urban driving in rain. Our aim is to identify the VO algorithm that performs relatively well for robust localization in rainy weather. We compiled the available open-source rain datasets and augmented them with our internal rain dataset to create a comprehensive suite of datasets for evaluation. The open-source datasets comprise the Oxford Robotcar [5] and the 4Seasons [6] datasets. Our internal dataset was collected in Singapore, while the Oxford Robotcar and 4Seasons datasets are from Oxford and Munich respectively. Thus, the resultant combined dataset used in our study contains a wide range of road and rain conditions from climatically and geographically different cities. Sample images from the three datasets are presented in Fig. 1. We analyze the strengths and limitations of various approaches and provide insights into future research directions that can improve the robustness and reliability of visual odometry based algorithms in these scenarios.
Our contributions are: (a) a comprehensive evaluation of existing VO algorithms for both clear and rain conditions and (b) an analysis of the strengths and limitations of different VO algorithms for rain conditions.

## II Related Work

There are three parts of related work for this paper: (a) robust sensors and sensor fusion based localization algorithms in adverse weather, (b) robust visual feature extraction in adverse weather scenarios and (c) visual odometry algorithms.

### _Localization in Adverse Weather_

The diverse range of noise artefacts caused by adverse weather makes localization a challenging problem. The state-of-the-art approaches for localization in adverse weather either localize based on robust sensors or perform sensor fusion. Zhang et al. discussed the impacts of adverse weather on autonomous driving across multiple weather conditions and sensors [7]. Meanwhile, our paper performs an in-depth analysis of the effects of rain conditions and evaluates a wide range of visual odometry methods on real-world rain datasets.

Fig. 1: Sample images from the three datasets.

#### II-A1 Utilizing Robust Sensors

Robust sensors such as radar and LiDAR are less affected by adverse weather conditions when compared to vision data. Thus, recent approaches [8, 9, 10] opt to utilize such sensors for adverse weather conditions. However, such sensors are more expensive compared to cameras and have higher computational requirements.

#### II-A2 Sensor Fusion Approaches

Sensor fusion utilizes multiple sensors together to perform localization in difficult conditions. An example would be Visual-Inertial Odometry, where camera images are used together with the Inertial Measurement Unit (IMU) to improve the robustness of the localization algorithm [11, 12]. Other examples include GPS-SLAM [13], which combines GPS data with VO, while Brubaker et al. [14] proposed a map-based approach which combines map data with VO. In this paper, we focus our evaluation on pure VO algorithms to find a suitable VO component for any sensor fusion approach for localization in rainy weather.

### _Vision in Adverse Weather_

Apart from localization, many other applications suffer from poor visual data due to adverse weather conditions. There are two main ideas to improve the robustness of vision algorithms in adverse weather: (a) improving the robustness of visual features and (b) removing the noise caused by adverse weather.

#### II-B1 Robust Feature Detectors

This category of approaches focuses on improving the reliability of visual odometry by improving the feature detectors. Feature detectors form the basis of VO methods, where poorly detected features directly affect localization accuracy. Thus, if the feature detectors are robust, this would in turn improve the robustness of VO in adverse weather. Algorithms such as R2D2 [15] and D2-Net [16] are designed to perform accurate feature detection in the presence of illumination and viewpoint changes. However, such methods are not directly trained and tested on adverse weather data.

#### II-B2 Removing Noise Artefacts

Noise artefacts such as raindrops on the camera lens or lens flare can be removed using a neural network. Methods such as [17, 18] utilize pairs of adverse weather and clear weather data to train the model to remove the artefacts caused by adverse weather. This allows the algorithm to run in adverse weather the way it does in clear weather.
However, it is difficult to collect pairs of adverse weather and clear weather data for a new environment, and such noise removal algorithms have not been proven to generalize well to out-of-distribution data. In this paper, we aim to quantitatively evaluate the readiness of existing VO methods to run in adverse weather without such noise removal algorithms.

### _Visual Odometry_

Yousif et al. [19] provide an overview of the techniques involved in visual odometry and visual SLAM, while Kazerouni et al. [20] discuss state-of-the-art visual SLAM approaches. However, neither survey analyzes and evaluates in detail the capability of visual odometry in challenging scenarios. Agostinho et al. [21] evaluate visual odometry methods in challenging scenarios such as the presence of vegetation, tunnels and dynamic objects, but not in adverse weather conditions. The authors highlight that visual odometry suffers in these challenging conditions due to the lack of good visual features. Similarly, our contribution lies in the evaluation and analysis of visual odometry methods in challenging conditions, but specific to rain. In the following sections we discuss different categories of VO approaches.

#### II-C1 Direct vs Indirect Approaches

Direct approaches [22, 23, 24] minimize the photometric loss while indirect approaches [11, 25, 26, 27, 28] minimize the reprojection error. Though direct approaches are resistant to photometric noise, they are computationally expensive as they solve a more complex optimization problem. Indirect approaches are more resistant to geometric noise but are more susceptible to scenes with less texture [22].

#### II-C2 Dense vs Sparse Approaches

Dense approaches use the entire image while sparse approaches use a subset of the image. Sparse methods identify keypoints [22] or edges [28, 27] to be used in their optimization formulation. Although dense methods provide more information, the trade-off is that they have a higher computational cost. Sparse methods provide the option of having a lower computational cost for less information. In rainy weather, occlusions caused by raindrops significantly reduce the number of features identified if the raindrops land in a feature-rich region. Thus, sparse methods might delocalize due to a lack of features, and dense methods might not work if they use the raindrop-occluded regions as part of their set of features. Our findings suggest that adopting a dense approach provides more robustness as long as the identified features are reliable.

#### II-C3 Learning-based vs Classical Approaches

Recent efforts use machine learning models [26, 23] to identify features learnt from large amounts of data. Such methods are ideal for algorithms operating within the same distribution that they are trained on, and achieve localization accuracy much higher than classical methods. However, it is unknown whether a learning algorithm is able to generalize to different environments or different weather conditions. We implement learning-based localization algorithms trained on either urban driving or aerial datasets, and evaluate them on urban driving datasets in different cities and different weather conditions to test their generalizability. Classical methods use handcrafted feature detection algorithms [11, 22], which we test to compare against the learning-based methods. Our findings suggest that the learning-based algorithms delocalize less compared to classical methods in rain conditions.
#### II-C4 Mixed Approaches

Mixed approaches use both methods from the binary categories above. SVO uses both direct and indirect methods [29], while DF-VO uses both dense and sparse methods [23]. CNN-SVO combines learning-based and classical methods, where a learning-based model is used to predict depth information which is fed into the classical VO model for optimizing pose [30]. Although mixed approaches are designed to bring out the best of both methods, they might also inherit the limitations of both, such as the assumptions made in both the direct and indirect formulations. Our findings suggest that combining dense and sparse approaches performs relatively well compared to a purely dense or sparse approach in rain conditions. Also, introducing a learning-based approach for estimating depth improves the localization accuracy in rain conditions as well.

## III Evaluated Visual Odometry Algorithms

We chose seven open-source localization algorithms along with our proposed approach to perform evaluation in both clear and rain conditions. These algorithms were specifically chosen across three categories: (a) dense vs sparse, (b) classical vs learning, and (c) direct vs indirect. In this section, we give a brief overview of each of the algorithms implemented and highlight parts of their design that are affected by rain. In particular, we consider the feature extraction and matching strategy alongside methods of tackling failures.

### _Direct Sparse Odometry (DSO)_

DSO is a classical, sparse and direct method that uses keyframes to perform a joint optimization of camera pose and 3D world model. It was tested on three datasets, TUM monoVO, EuRoC MAV [31] and ICL-NUIM [32], where it performed robustly with accurate localization results. DSO is able to handle low-textured environments using its novel feature selection strategy and has multiple methods of detecting outliers or tracking failures. The feature selection strategy used in DSO aims to find an even spread of features across the image while at the same time selecting unique features in each region of the image. To create an even spread, the image is first split into square grids. The pixel with the largest gradient within each square grid is selected as a possible feature. To ensure that each feature is unique, the pixel must have a gradient larger than the gradient threshold before it is selected as a feature. This gradient threshold is determined by the median gradient in each 32x32 pixel block. This allows the feature selection strategy to automatically find features that are unique even in low-textured environments. This might be good for rain conditions, where a large number of occlusions occur due to raindrops but good features could still be found in the regions that are not occluded. However, this design does not account for scenarios where the majority of the image is compromised and no good features can be found. Such a scenario is likely to occur in a sequence of over-exposed images, commonly seen in rain conditions. This would result in a lack of features, leading to tracking failure.
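A simplified sketch of this adaptive grid selection is shown below; it is our own illustration of the idea described above (one candidate per cell, kept only if it clears a median-based threshold) and omits details of DSO's actual implementation, such as multi-scale selection and its specific threshold constants.

```python
import numpy as np

def select_grid_features(grad_mag, cell=32, margin=1.5):
    """Pick at most one feature per cell: the max-gradient pixel, kept
    only if it exceeds the cell's median gradient by a margin."""
    feats = []
    H, W = grad_mag.shape
    for y in range(0, H - cell + 1, cell):
        for x in range(0, W - cell + 1, cell):
            block = grad_mag[y:y + cell, x:x + cell]
            iy, ix = np.unravel_index(np.argmax(block), block.shape)
            if block[iy, ix] > margin * np.median(block):
                feats.append((y + iy, x + ix))
    return feats

# Toy usage on a random "gradient image"; in practice grad_mag would be
# the gradient magnitude of a camera frame.
print(len(select_grid_features(np.random.rand(480, 640))))
```

On a heavily over-exposed or raindrop-covered frame, most cells fail the median test and the returned list shrinks, which is exactly the failure mode discussed above.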
DSO adopts outlier detection, where matched points with errors above a threshold are discarded. The threshold depends on the median residual of the image, which accounts for segments where the image is of lower quality by allowing matches with higher errors to pass the threshold. This is useful in rain conditions, where image quality changes randomly and an adaptive threshold is required to account for this change. When the photometric error of the current frame exceeds double the error of the previous frame, it is considered a tracking failure and the algorithm tries to recover the localization. This allows the algorithm to recover when the quality of the images improves, at the cost of losing localization for the segment of poor quality. Such a design could be good for sensor fusion methods, where other sensors can support the localization task when the vision component has failed. However, it is not ideal for pure VO in rain conditions, as there will be a loss of localization for that segment.

### _Semi-direct Visual Odometry (SVO)_

SVO [33] is a classical, sparse, mixed direct and indirect method that allows for both monocular and stereo setups. It was designed to strike a good balance between precision and speed for onboard computers of Micro Aerial Vehicles (MAVs), and was tested on the TUM RGB-D [34], EuRoC MAV [31] and ICL-NUIM [32] datasets. It achieved similar performance to DSO on the EuRoC and ICL-NUIM datasets. SVO uses additional edgelet features to supplement its feature extraction strategy and uses an affine illumination model to handle sudden exposure changes in the scene. The feature extraction strategy involves dividing the image into 32x32 pixel grids; within each grid, the FAST [35] corner feature with the highest score is selected. If the grid has no corner features, the pixel with the highest gradient magnitude is selected as an edge feature. Although such a design ensures that a fixed number of features is found, it might result in the tracking of poor features. This allows SVO to continue tracking in segments of poor quality (over-exposure or large occlusions by raindrops) at the cost of localization accuracy. It might also be more susceptible to segments where only a small region of the image is distorted (a small raindrop), but where features in that region reduce the localization accuracy. SVO uses a robust model for depth prediction to minimize the impact of outliers. It also uses an affine illumination model to handle illumination change across a longer time frame. Outliers and illumination changes are more likely to occur in rain conditions, so such a design could improve localization accuracy in rain.

### _CNN-SVO_

CNN-SVO [30] is a mixed learning and classical method which builds upon SVO by applying single-image depth prediction via a convolutional neural network. This network initializes the Bayesian depth filter with a mean and variance rather than a large, uncertain value range, so that the filter converges to the true depth value faster, resulting in better robustness and motion estimation. As a result, CNN-SVO performs significantly better for autonomous driving applications than SVO, which is designed for MAVs. CNN-SVO uses the same feature extraction and matching strategy as SVO.

### _Depth and Flow for Visual Odometry (DF-VO)_

DF-VO [23] is a learning-based, direct monocular VO method that uses mixed dense and sparse features. It uses deep learning models for optical flow and depth prediction while using classical methods to perform pose estimation and scale recovery. DF-VO is trained using data sequences 00-08 from the KITTI dataset [36] and was evaluated on both the KITTI and Oxford Robotcar [5] datasets.
DF-VO outperformed ORB-SLAM without loop closing [37], as well as DSO [22] and SVO [33], on the Oxford Robotcar Dataset. Although deep learning models were used, sparse features are extracted from the dense optical flow predictions to reduce the computational cost. The sparse features are extracted by dividing the image into 100 regions (10x10). Then, to get an even spread of features, K optical flow features are extracted within each region. K is determined by the number of features that pass a given threshold or by the number required to extract 2000 features in the image, whichever is lower. The threshold is determined by their proposed flow consistency metric, which checks the accuracy of the predicted flow. In light rain conditions, this design helps improve localization accuracy as it removes outliers, but in heavy rain conditions it could result in tracking failure. DF-VO also accounts for extreme scenarios: a constant-velocity motion model is used to replace the tracking when insufficient features are found. This would be useful in rain conditions, where it is common to find a series of over-exposed images that have minimal useful visual features.

### _TartanVO_

TartanVO [26] is a learning-based, indirect, monocular method that uses dense features. It was designed to be generalizable, performing well on unseen datasets without the need for fine-tuning. It was trained on the TartanAir dataset [38], an all-weather drone dataset, and tested on both urban driving and aerial datasets. It performed well when compared to ORB-SLAM [37] on the KITTI dataset [36], as well as against ORB-SLAM, DSO and SVO on the EuRoC dataset [31]. It does not have an explicit feature extraction and matching strategy, as it uses deep learning models for both optical flow and pose predictions. It also does not detect localization failures or outliers. This might make it susceptible to extreme distortions caused by rain despite being trained on all-weather data.

### _ORB-SLAM3_

ORB-SLAM3 is a classical, indirect VSLAM algorithm that uses sparse features and supports both monocular and stereo setups. It uses ORB features [39], which are fast to detect and resistant to noise. ORB-SLAM3 was tested on the EuRoC [31] dataset. It outperformed DSO and SVO on average for the monocular camera setup while outperforming SVO for the stereo setup. Its feature extraction and matching strategy is described in [37]: features are extracted for every frame whereas, in contrast, DSO and SVO only extract features in the keyframes. ORB-SLAM3 also aims to spread the features evenly across the image, where the image is divided into a grid to search for corner features. A threshold is employed to find the best features within each cell, and this threshold is reduced if insufficient features are found. Such a design ensures that sufficient features are found even for images of poor quality, at the cost of identifying lower-quality features. Outliers are detected using an orientation consistency test, and the RANSAC procedure is used when computing the homography and fundamental matrices. Tracking also stops when insufficient correspondences are found. Thus, when the image suffers from large amounts of distortion under rain conditions, the algorithm will stop localizing and perform relocalization.

### _DROID-SLAM_

DROID-SLAM [25] is a learning-based, indirect VSLAM method that uses dense features and supports both monocular and stereo camera setups.
It was trained on the TartanAir [38] dataset, an all-weather synthetic drone dataset. Different from TartanVO, DROID-SLAM uses classical methods to perform bundle adjustment for pose estimation while keeping the optical flow model for feature matching. DROID-SLAM was evaluated on the EuRoC and TartanAir datasets and outperformed ORB-SLAM3 and TartanVO respectively. The optical flow model used in DROID-SLAM predicts both the optical flow and a confidence score for each pixel. Every pixel match is weighted by its confidence score and used in the bundle adjustment optimization step, thereby maximizing the information used. This design ensures that tracking will not be lost even when images are compromised, as the confidence value prevents erroneous matches from worsening the pose estimation. However, this depends on the accuracy of the model's confidence prediction and would cause errors in pose estimation if a high confidence is given to a poor match. DROID-SLAM does not detect whether tracking has failed and thus might produce poor localization in extreme conditions where no matches can be found.

### _DROID-SLAM based Heuristic Approach_

We propose a variant of the DROID-SLAM algorithm that includes additional map information and heuristics for DROID-SLAM to detect and improve upon poor localization in rain conditions for the stereo camera setup [4]. The map information can be easily obtained from any online routing service and is used to provide a conservative global reference path (CGRP). The heuristics (H) are designed to dynamically modify the keyframe selection criteria depending on the confidence of the feature matching: the lower the confidence, the more keyframes should be taken to reduce the inaccuracies caused by a lower-confidence estimation. Such a design improves localization robustness and accuracy.

## IV Datasets

We used three datasets from different cities, as discussed below. The first dataset is the Oxford Robotcar Dataset, which provides multiple long-distance sequences taken along the same route at different times in an urban environment. Nine sequences were recorded in rain conditions, out of which four with sufficiently long ground truth were selected for evaluation. One sequence in clear weather along the same route was also included, where the route is roughly 9km long. Out of these five sequences, the 2015-05-29 sequence has misaligned ground truth near the end while the 2014-11-21 sequence has an incomplete ground truth. Thus, these two sequences have a shorter route compared to the other three. The Oxford Robotcar Dataset uses the Bumblebee XB3 trinocular stereo camera, which has a 24cm stereo baseline and provides an image stream at 16fps. The dataset provides a real-time kinematic (RTK) ground truth that is obtained by post-processing raw GPS, IMU, and static GNSS base station recordings [40]. The second dataset is the 4Seasons Dataset, which provides two rain sequences taken along different routes, namely the 10-07 sequence from a suburban environment and the 12-22 sequence from an urban environment, both in the city of Munich. Two monochrome cameras were placed in a stereo setup with a 30cm stereo baseline, providing an image stream at 30fps. The provided RTK-GNSS data was used as ground truth. An internal dataset recorded in Singapore was used for heavy rain evaluation in an urban environment.
Two monocular NIR cameras were set up in a stereo format with a 40cm baseline, while a GNSS+IMU system was used as the ground truth. To quantify the rain intensity of each sequence, a blur index was used as an approximation. The algorithm described in [41], based on the Haar wavelet transform, was used to measure blurriness. We report the "BlurExtent" ratio as described in [41] in each of the tables under Section VI. Fig. 2 shows the blur value for the same scene taken at a different time and weather condition.

## V Experimental Results

### _Setup_

For each algorithm evaluated, we used the default configuration parameters provided by the respective open-source repository for each of the datasets. The default configuration refers to the unique parameters of each method and also includes the image pre-processing steps. Only the intrinsic parameters of the cameras were modified to match the image data from each dataset. For learning-based methods, the model was also taken as-is, and no fine-tuning or additional training was conducted. We note that some methods are designed for an aerial context and that the default configuration might not be ideal for the urban driving context. For each dataset, the left image from the stereo camera setup was used for monocular evaluation, and the images were undistorted before being input into the VO method. The ground truth for each sequence was interpolated such that each ground-truth pose corresponds to an image frame. It was also cleaned to remove any erroneous points. For the Oxford Robotcar Dataset, a later start point was chosen where the vehicle is already on the main road, so that it is consistent with the start points of the other two datasets. In the following list, we provide algorithm-specific changes that were made to run long-term localization in rainy weather.

* DROID-SLAM is unable to run on long routes due to memory constraints, as it saves every keyframe image and its dense features for Global Bundle Adjustment (GBA). Thus, it was modified by removing the GBA, and keyframes are forgotten after a 6GB GPU memory capacity is reached. This modified version of DROID-SLAM is shortened to MDS. (MDS + CGRP + H) adds the Conservative Global Reference Path (CGRP) and heuristics to MDS.
* SVO has the option to use its exposure compensation algorithm [42]. Given the challenging task of localization in rain, we enabled this algorithm to improve localization accuracy.
* For DF-VO, we used the monodepth2 model trained on the stereo KITTI dataset.
* For CNN-SVO, the default monodepth model is city2kitti, while we used the kitti_resnet50 model trained on KITTI, as it worked best for the Oxford Robotcar sequences that we used for evaluation.
* DSO tended to crash easily when running on the Oxford Robotcar sequences that we used for evaluation. Therefore, two open-source patches to the code were used in this experiment (pull requests 234 and 81). The first pull request fixes a code implementation of the Schur complement, while the second fixes a segmentation fault bug which causes a crash when no positive IDepth is available.
* The stereo DSO implementation uses an open-source modification [43] of DSO inspired by techniques used in LSD-SLAM [24]. This is not to be confused with [44].
* TartanVO was used without any changes.

### _Evaluation Metric_

The Absolute Trajectory Error (ATE) [45] is used for our evaluation. The output poses are scaled and aligned (7DOF) with the ground truth for each sequence.
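A minimal sketch of this alignment-and-error computation is given below; we assume the standard Umeyama method for the 7DOF (scale, rotation, translation) alignment, since the paper does not name its specific implementation, and the array shapes are illustrative.

```python
import numpy as np

def umeyama(src, dst):
    """7DOF alignment of src onto dst. src, dst: (3, N) positions."""
    mu_s = src.mean(axis=1, keepdims=True)
    mu_d = dst.mean(axis=1, keepdims=True)
    sc, dc = src - mu_s, dst - mu_d
    cov = dc @ sc.T / src.shape[1]
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                      # avoid reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / sc.var(axis=1).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

def ate_rmse_2d(est, gt):
    """ATE RMSE after 7DOF alignment, on the 2D (x, y) projection."""
    s, R, t = umeyama(est, gt)
    err = (s * R @ est + t - gt)[:2]        # drop the vertical axis
    return np.sqrt(np.mean(np.sum(err ** 2, axis=0)))
```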
The aligned trajectories are also projected onto the 2D plane before evaluation.

## VI Results and Analysis

### _Quantitative Analysis_

Monocular VO is unable to localize well over long distances, and the ATE is high across all methods and datasets when evaluating the entire route. We consider a vehicle to be delocalized when (a) there is a tracking failure in the middle of the route, (b) the initialization of the localization algorithm is delayed for more than 10m, or (c) there is a prolonged static localization even though the vehicle is moving. In order to make meaningful comparisons across different blur values in rain, we evaluate the first 500m of each data sequence and report the results in Table I. The classical methods (first three methods) tend to delocalize for the heavy rain sequences, while the learning-based methods (last four methods) were able to continue tracking. This is due to the intentional design of the classical methods, which stop localization when the number of good features to track falls below a certain threshold. Such a design makes them less robust to challenging sequences for a pure visual approach, but could be useful in a sensor fusion approach that switches between modalities.

Fig. 2: Comparison of the blur value for the same scene taken from the 11-21 and 12-09 sequences, respectively, from the Oxford Robotcar Dataset.

Among the classical methods, DSO and ORB-SLAM3 outperform the learning-based methods for the 12-09 clear weather sequence. SVO delocalized for the clear weather sequence, likely because it used irrelevant features selected from image regions containing sky, resulting in erroneous localization output. However, for the heavier rain sequences, SVO is able to prevent delocalization, likely due to its mixed direct and indirect model together with its affine illumination model, which is used to handle exposure change and improve its robustness. For the light rain sequences from the 4Seasons dataset, all the classical methods suffer from much higher errors despite a minimal change in blurriness. This is likely due to a lack of generalizability across datasets, which indicates the need for manual tuning of parameters despite all the datasets belonging to the same category of urban driving scenarios. Among the learning-based methods, TartanVO suffers from high localization errors throughout all data sequences due to a drifting issue described in more detail in the qualitative analysis section. CNN-SVO outperforms SVO for the Oxford Robotcar and Singapore datasets, showing the usefulness of having a depth prediction model. For the 4Seasons dataset, CNN-SVO was unable to localize fully because of initialization errors. MDS performs consistently well across the data sequences except for the high error obtained on the 12-22 sequence. This is caused by tracking the combined raindrop and dynamic object in the scene, which indicates that more training is required to generalize MDS to specific rain conditions. Overall, DF-VO performs consistently well across both clear and rain conditions. This robustness could be due to its outlier detection module together with its depth prediction module. We also ran the stereo camera setup for all datasets to evaluate long-term localization in both clear and rainy weather, and present the results in Table II. The classical methods delocalize for almost all datasets. This is likely due to the build-up of errors across longer data sequences.
The MDS method suffers from high errors on the rain sequences but performs significantly better with our proposed modifications (MDS + CGRP + H) [4]. Comparing monocular and stereo methods, the stereo MDS fixes the scale-inconsistency problem present in the monocular MDS. There is no clear correlation between localization error and average blur, as there are too many confounding variables, such as different dynamic objects or varying traffic conditions. Future work could evaluate visual odometry with synthetic raindrops to isolate the effect of raindrops on the scene.

### _Qualitative Analysis_

Methods such as DF-VO and CNN-SVO, which employ a depth prediction model, are able to maintain a consistent scale even on rain sequences, while the other methods suffer from scale-inconsistency problems, as shown in Fig. 3. TartanVO suffers from drift when the vehicle is at rest, as shown in Fig. 4. This might be due to a training bias from the constant motion experienced by an aerial vehicle, which moves even when hovering in place. DSO, SVO and ORB-SLAM3 delocalize easily when encountering large exposure changes, likely due to the lack of matching features. SVO is more resistant to such changes thanks to its exposure compensation algorithm and is thus able to localize on more rain sequences.

The first 500 m of the 12-22 sequence are particularly challenging due to large dynamic objects, which explains the high errors seen in Table I. In the monocular camera setup, DF-VO is able to identify the outliers caused by the dynamic objects, which reduces the errors significantly. On the entire route of the 12-22 sequence in the 4Seasons dataset, the stereo camera setup fails due to a challenging tunnel segment. The best localization results in the stereo camera setup were obtained with our proposed variant of modified DROID-SLAM, the (MDS + CGRP + H) algorithm.

In general, the occlusion caused by adherent raindrops does not significantly impact visual odometry as long as sufficiently many good visual features can still be found in the scene. This becomes difficult for the rain + night sequence, as the unoccluded regions of the images do not provide good visual features owing to the night-time imaging. Lens flare and rain streaks may also produce undesirable visual features, which calls for additional filters to ignore or remove these effects. Further issues, such as over-exposure when making a turn, can cause the visual odometry to lose all visual features for a short period of time, which again points to the need for a sensor fusion approach.

## VII Conclusion

We evaluated a wide range of VO methods on both clear and rain datasets. We found that the VO methods that employ a depth prediction model are able to maintain a consistent scale even on rain sequences. The stereo setup is also able to provide scale information, but would require additional map information to perform well in long-term localization. Classical methods tend to delocalize easily in rain and are not recommended for rain conditions unless paired with other sensors in a sensor fusion approach. Among the monocular methods, no single method performed best across all three datasets.
However, DF-VO performed the most consistently of all the evaluated approaches and could be adopted in a sensor fusion approach for short-range localization in rain. For longer localization sequences, the stereo methods could be considered, among which our proposed approach (MDS + CGRP + H) performs best across the three datasets. In conclusion, all evaluated VO approaches are, on their own, insufficient for localization in rain. Hence, a more robust sensor-fusion-based approach is required for autonomous urban driving in rain.
With the increasing demand for autonomous vehicles, robust navigation systems that can operate effectively in adverse weather are needed. Visual odometry is one technique used in such navigation systems; it estimates the vehicle's pose and motion from on-board camera input. However, adverse weather conditions such as heavy rain, snow, or fog can significantly degrade the accuracy of visual odometry. In this paper, we evaluate a range of visual odometry methods, including our proposed approach based on DROID-SLAM. Specifically, these algorithms were tested on both clear- and rainy-weather urban driving data. We assembled an evaluation suite covering diverse rain conditions from different cities: the Oxford Robotcar dataset, the Munich 4Seasons dataset, and the Singapore dataset.